C# - Blocking Queue Implementation

This article walks through the concept of a blocking queue, ways to implement one, and its use in multithreaded environments. Using the BlockingCollection<T> and ConcurrentQueue<T> classes provided by the .NET Framework, it shows how to build a queue with blocking behavior and how to safely move data between threads.

A blocking queue is something we constantly need across many kinds of applications, where one thread takes data from the queue while other threads put data into it. Unlike other thread-safe queues, a blocking queue has one unique requirement: whether a thread tries to take an item from the queue or put an item into it, the calling thread may be blocked until a certain condition is met. For the putting thread, the condition is that the queue is not full; for the taking thread, the condition is that the queue is not empty. A blocked thread is unblocked immediately once its blocking condition is lifted.

There are many ways to implement a blocking queue. You could use a WaitHandle together with a test on the number of items in a concurrent collection, which would tell you when to block and when to notify waiting threads.

But the .NET Framework provides a very neat, off-the-shelf way to implement a blocking queue: the System.Collections.Concurrent namespace.

That namespace contains BlockingCollection<T> and ConcurrentQueue<T>, but there is no BlockingQueue<T>. Did Microsoft forget to provide such an important blocking FIFO class, which could be applied in so many places?


Actually, Microsoft has done something smarter here. If you take a look at the signature of BlockingCollection<T>, it is something like this:

    [DebuggerDisplay("Count = {Count}, Type = {m_collection}")]
    [DebuggerTypeProxy(typeof(SystemThreadingCollections_BlockingCollectionDebugView<>))]
    [ComVisible(false)]
    public class BlockingCollection<T> : IEnumerable<T>, ICollection, IEnumerable, IDisposable
    {
        public BlockingCollection();
        public BlockingCollection(int boundedCapacity);
        public BlockingCollection(IProducerConsumerCollection<T> collection);
        public BlockingCollection(IProducerConsumerCollection<T> collection, int boundedCapacity);
        // ...
    }

As you can see, some overloads of the constructor take an IProducerConsumerCollection<T> as a parameter. So where do these IProducerConsumerCollection<T> implementations come from?

Take a look at the signature of ConcurrentQueue<T>:

    public class ConcurrentQueue<T> : IProducerConsumerCollection<T>, IEnumerable<T>, ICollection, IEnumerable
    {
        // ...
    }

So ConcurrentQueue<T> is an implementation of IProducerConsumerCollection<T>. For those of you interested in what IProducerConsumerCollection<T> actually is, here is its signature:

    public interface IProducerConsumerCollection<T> : IEnumerable<T>, ICollection, IEnumerable
    {
        void CopyTo(T[] array, int index);
        T[] ToArray();
        bool TryAdd(T item);
        bool TryTake(out T item);
    }

So, basically, IProducerConsumerCollection<T> provides a TryAdd/TryTake pair, plus additional methods to support copying to and from an array.
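To see the interface in action, here is a minimal sketch (a hypothetical console snippet) that exercises TryAdd/TryTake on a ConcurrentQueue<T> through the interface. Note that these calls are non-blocking; the blocking behavior comes later, from BlockingCollection<T>:

```csharp
using System;
using System.Collections.Concurrent;

class InterfaceDemo
{
    static void Main()
    {
        // Consume a ConcurrentQueue<T> through IProducerConsumerCollection<T>.
        IProducerConsumerCollection<int> coll = new ConcurrentQueue<int>();

        coll.TryAdd(1); // always succeeds for ConcurrentQueue (it is unbounded)
        coll.TryAdd(2);

        int item;
        if (coll.TryTake(out item))
            Console.WriteLine(item); // prints 1 - FIFO order comes from the queue
    }
}
```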

So, back to our question. BlockingCollection<T> behaves like this: if you don't provide an underlying producer-consumer collection, it uses a default one (per the MSDN documentation, a ConcurrentQueue<T>, so the default order is FIFO); if you do provide one, it wraps the collection you passed in, which means take/put inherit the characteristics of the underlying collection. Pass in a FIFO collection, and take/put work in a FIFO manner. In short, BlockingCollection<T> serves as a container/wrapper around a concurrent collection, adding blocking ability on top.
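This "the wrapper inherits the underlying collection's ordering" point is easy to demonstrate. A small sketch, comparing the default (queue-backed) ordering with an explicit ConcurrentStack<T> backing store:

```csharp
using System;
using System.Collections.Concurrent;

class OrderingDemo
{
    static void Main()
    {
        // Default backing store is ConcurrentQueue<T>, so Take is FIFO.
        var fifo = new BlockingCollection<int>();
        // An explicit ConcurrentStack<T> backing store makes Take LIFO instead.
        var lifo = new BlockingCollection<int>(new ConcurrentStack<int>());

        foreach (var n in new[] { 1, 2, 3 }) { fifo.Add(n); lifo.Add(n); }

        Console.WriteLine(fifo.Take()); // 1 - first in, first out
        Console.WriteLine(lifo.Take()); // 3 - last in, first out
    }
}
```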

Knowing that, the work sounds almost trivial: put a ConcurrentQueue<T> inside a BlockingCollection<T> and you get a blocking queue. But let's make a class for this.

public class BlockingQueue<T> : BlockingCollection<T>
{
    #region ctor(s)
    public BlockingQueue() : base(new ConcurrentQueue<T>())
    {
    }

    public BlockingQueue(int maxSize) : base(new ConcurrentQueue<T>(), maxSize)
    {
    }
    #endregion ctor(s)

    #region Methods
    /// <summary>
    /// Enqueue an item
    /// </summary>
    /// <param name="item">Item to enqueue</param>
    /// <remarks>Blocks if the blocking queue is full</remarks>
    public void Enqueue(T item)
    {
        Add(item);
    }

    /// <summary>
    /// Dequeue an item
    /// </summary>
    /// <returns>Item dequeued</returns>
    /// <remarks>Blocks if the blocking queue is empty</remarks>
    public T Dequeue()
    {
        return Take();
    }
    #endregion Methods
}

 
We explicitly add Enqueue and Dequeue to mimic queue operations (you can call the underlying Add and Take methods directly if you prefer).
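As a quick sanity check, here is a minimal producer/consumer sketch using the class above (the capacity and item counts are arbitrary). With a capacity of 2, Enqueue blocks whenever two items are pending, and Dequeue blocks whenever the queue is empty:

```csharp
using System;
using System.Threading;

class ProducerConsumerDemo
{
    static void Main()
    {
        // Bounded to 2 items, so the producer blocks when the queue is full.
        var queue = new BlockingQueue<int>(2);

        var consumer = new Thread(() =>
        {
            for (int i = 0; i < 5; i++)
            {
                // Blocks until an item is available.
                Console.WriteLine("took " + queue.Dequeue());
            }
        });
        consumer.Start();

        for (int i = 0; i < 5; i++)
        {
            queue.Enqueue(i); // blocks whenever the queue is full
        }
        consumer.Join();
    }
}
```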

Let's put this into a multiple thread environment and give it a spin.

First, we declare the blocking queues, one per table, alongside the callback channel lists.

        public List<HashSet<ITabularPushCallback>> callbackChannelLists = new List<HashSet<ITabularPushCallback>>();
        // this is the Blocking Queue List.
        public List<BlockingQueue<string[][]>> messageQueues = new List<BlockingQueue<string[][]>>();

 

When data comes in, the notifying code dispatches a thread-pool work item to do the Enqueue:

public void NotifyMessage(string[][] messages, int tableId)
{
    if (IsChannelRegistered(tableId))
    {
        HashSet<ITabularPushCallback> callbackChannelList = null;
        BlockingQueue<string[][]> queue = null;

        lock (SyncObj)
        {
            callbackChannelList = new HashSet<ITabularPushCallback>(callbackChannelLists[tableId]);
            queue = messageQueues[tableId];
        }

        if (callbackChannelList.Count > 0 && queue != null)
        {
            // Offload the (potentially blocking) Enqueue to the thread pool.
            ThreadPool.QueueUserWorkItem(o => queue.Enqueue(messages), null);
        }
    }
    else
    {
        // throw or swallow?
        //throw new ArgumentOutOfRangeException("tableId", tableId, "Invalid callback channel");
    }
}

 

And there is a dedicated background thread, parameterized on each tableId, which pulls data from the queue and processes it. Here is the main code that does the Dequeue and the processing:

 

private void NotifyMessageThunk(int tableId)
{
    HashSet<ITabularPushCallback> callbackChannelList = null;
    BlockingQueue<string[][]> queue = null;
    lock (SyncObj)
    {
        if (tableId < 0 || tableId >= callbackChannelLists.Count)
            throw new ArgumentOutOfRangeException("tableId", tableId, "Expected a nonnegative, existing tableId!");
        if (IsChannelRegistered(tableId))
        {
            callbackChannelList = GetCallbackChannel(tableId);
            queue = messageQueues[tableId];
        }
    }

    if (callbackChannelList == null || queue == null)
    {
        // Back off outside the lock - a CPU-efficient way to wait for registration.
        Thread.Sleep(100);
        return;
    }

    string[][] message = queue.Dequeue();
    if (message != null)
    {
        // Snapshot the channel set so it can be modified while we iterate.
        var channelCopy = new HashSet<ITabularPushCallback>(callbackChannelList);

        foreach (var channel in channelCopy)
        {
            try
            {
                channel.NotifyMessage(message, tableId);
            }
            catch
            {
                // The channel is broken; drop it from the list.
                if (!TryRemoveCallbackChannel(channel, tableId))
                {
                    // Logs
                }
            }
        }
    }
}

 

Pretty easy, isn't it?

Actually, you can build your own blocking queue from synchronization primitives, as shown in reference [0], where you do something similar to this:

class SizeQueue<T>
{
    private readonly Queue<T> queue = new Queue<T>();
    private readonly int maxSize;
    public SizeQueue(int maxSize) { this.maxSize = maxSize; }

    public void Enqueue(T item)
    {
        lock (queue)
        {
            while (queue.Count >= maxSize)
            {
                Monitor.Wait(queue);
            }
            queue.Enqueue(item);
            if (queue.Count == 1)
            {
                // wake up any blocked dequeue
                Monitor.PulseAll(queue);
            }
        }
    }
    public T Dequeue()
    {
        lock (queue)
        {
            while (queue.Count == 0)
            {
                Monitor.Wait(queue);
            }
            T item = queue.Dequeue();
            if (queue.Count == maxSize - 1)
            {
                // wake up any blocked enqueue
                Monitor.PulseAll(queue);
            }
            return item;
        }
    }
}

 

The author, Marc Gravell, actually proposed an improved version that enables exiting cleanly. Below is his idea.

bool closing;
public void Close()
{
    lock(queue)
    {
        closing = true;
        Monitor.PulseAll(queue);
    }
}
public bool TryDequeue(out T value)
{
    lock (queue)
    {
        while (queue.Count == 0)
        {
            if (closing)
            {
                value = default(T);
                return false;
            }
            Monitor.Wait(queue);
        }
        value = queue.Dequeue();
        if (queue.Count == maxSize - 1)
        {
            // wake up any blocked enqueue
            Monitor.PulseAll(queue);
        }
        return true;
    }
}
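For what it's worth, BlockingCollection<T> already ships with an equivalent clean-shutdown idiom: CompleteAdding marks the collection as closed, and GetConsumingEnumerable blocks for items and then exits once the collection is complete and drained. A sketch:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ShutdownDemo
{
    static void Main()
    {
        var queue = new BlockingCollection<string>(new ConcurrentQueue<string>());

        var consumer = Task.Run(() =>
        {
            // Blocks for items, and exits the loop once the collection
            // is marked complete and has been drained.
            foreach (var msg in queue.GetConsumingEnumerable())
                Console.WriteLine(msg);
        });

        queue.Add("hello");
        queue.Add("world");
        queue.CompleteAdding(); // the built-in equivalent of Close()
        consumer.Wait();
    }
}
```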

 
And someone has ingeniously knocked one up with Reactive Extensions, something like this:

public class BlockingQueue<T>
{
    private readonly Subject<T> _queue;
    private readonly IEnumerator<T> _enumerator;
    private readonly object _sync = new object();

    public BlockingQueue()
    {
        _queue = new Subject<T>();
        _enumerator = _queue.GetEnumerator();
    }

    public void Enqueue(T item)
    {
        lock (_sync)
        {
            _queue.OnNext(item);
        }
    }

    public T Dequeue()
    {
        _enumerator.MoveNext();
        return _enumerator.Current;
    }
}

 
However, the synchronization here still needs more work; for instance, Dequeue takes no lock, so concurrent consumers would race on the shared enumerator.

References:

[0] Creating a blocking Queue<T> in .NET
