The differences between BLOB and TEXT

This article describes the characteristics of the BLOB and TEXT column types in MySQL and the differences between them. The BLOB family comprises TINYBLOB, BLOB, MEDIUMBLOB, and LONGBLOB, used to store binary large objects; the TEXT family comprises TINYTEXT, TEXT, MEDIUMTEXT, and LONGTEXT, used mainly to store text. The biggest difference between the two is that BLOB columns have no character set while TEXT columns do. The article also lists the maximum data length each type can store.

A BLOB is a binary large object that can hold a variable amount of data. The four BLOB types are TINYBLOB, BLOB, MEDIUMBLOB, and LONGBLOB. These differ only in the maximum length of the values they can hold. The four TEXT types are TINYTEXT, TEXT, MEDIUMTEXT, and LONGTEXT. These correspond to the four BLOB types and have the same maximum lengths and storage requirements.

A BLOB has no character set: its values are treated as raw byte strings and are sorted and compared byte by byte. A TEXT column has a character set, and its values are sorted and compared according to the collation of that character set.
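The practical consequence of that difference can be sketched without a database: a BLOB holds raw bytes with no notion of characters, while a TEXT value is bytes interpreted through a character set (here UTF-8 stands in for a charset such as utf8mb4):

```python
# What a BLOB column would hold: the encoded bytes themselves.
blob_value = "Grüße".encode("utf-8")
# What a TEXT column presents: characters, decoded via its character set.
text_value = "Grüße"

# Byte length and character length differ for multi-byte characters:
# ü and ß each occupy 2 bytes in UTF-8.
print(len(blob_value))  # 7 bytes
print(len(text_value))  # 5 characters
```

This also explains why only TEXT supports charset-aware behavior such as case-insensitive collations: with plain bytes there is no character set to define what "case" means.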

Keeping in mind that max_allowed_packet and available RAM limit the values you can actually send and receive, here are the maximum lengths of each of these types:
TINYBLOB, TINYTEXT:
2^8 − 1, or 255 bytes
BLOB, TEXT:
2^16 − 1, or 65,535 bytes (64 KiB − 1)
MEDIUMBLOB, MEDIUMTEXT:
2^24 − 1, or 16,777,215 bytes (16 MiB − 1)
LONGBLOB, LONGTEXT:
2^32 − 1, or 4,294,967,295 bytes (4 GiB − 1)
Note that these limits are byte lengths, not character counts, so for TEXT columns a multi-byte character set reduces the number of characters that fit.
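The limits above follow directly from the width of the length prefix each type stores alongside the value (1, 2, 3, or 4 bytes), which is why each maximum is 2^N − 1 rather than a round power of two. A quick sketch that derives them:

```python
# Each type records the value's length in a fixed-width prefix of 1-4 bytes,
# so the maximum value length is 2^(8 * prefix_bytes) - 1 bytes.
length_prefix_bytes = {
    "TINYBLOB/TINYTEXT": 1,
    "BLOB/TEXT": 2,
    "MEDIUMBLOB/MEDIUMTEXT": 3,
    "LONGBLOB/LONGTEXT": 4,
}

for name, prefix in length_prefix_bytes.items():
    max_len = 2 ** (8 * prefix) - 1
    print(f"{name}: {max_len:,} bytes")
```

Running this prints 255, 65,535, 16,777,215, and 4,294,967,295 bytes, matching the table above.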

 
