Advanced Architecture for ASP.NET Core Web API

This article describes an architecture for building Web APIs with .NET Core and ASP.NET Core. ASP.NET Core sheds legacy technology, offering better performance and cross-platform support. The article covers the benefits of dependency injection, introduces the Ports and Adapters pattern, and walks through the responsibilities and implementation of the Domain and Data layers: the Domain layer defines entities and view models, while the Data layer uses Entity Framework Core.

Reposted from:

https://www.infoq.com/articles/advanced-architecture-aspnet-core

  • ASP.NET Core's new architecture offers several benefits as compared to the legacy ASP.NET technology
  • ASP.NET Core benefits from incorporating support for dependency injection from the start
  • Single Responsibility Principle simplifies implementation and design
  • The Ports and Adapter Pattern decouples business logic from other dependencies
  • Decoupled architecture makes testing much easier and more robust


With the release of .NET Core 2.0, Microsoft has the next major version of the general purpose, modular, cross-platform and open source platform that was initially released in 2016. .NET Core has been created to have many of the APIs that are available in the current release of .NET Framework. It was initially created to allow for the next generation of ASP.NET solutions but now drives and is the basis for many other scenarios including IoT, cloud and next generation mobile solutions. In this series, we will explore some of the benefits of .NET Core and how it can benefit not only traditional .NET developers but all technologists that need to bring robust, performant and economical solutions to market.

This InfoQ article is part of the series ".NET Core". You can subscribe to receive notifications via RSS.

 

The Internet is a very different place from five years ago, let alone 20 years ago, when I first started as a professional developer. Today, Web APIs connect the modern internet and drive both web applications and mobile apps. The skill of creating robust Web APIs that other developers can consume is in high demand. The APIs that drive most modern web and mobile apps need the stability and reliability to keep serving requests even when traffic pushes them to their performance limits.

The purpose of this article is to describe the architecture of an ASP.NET Core 2.0 Web API solution using the Hexagonal Architecture and Ports and Adapters Pattern. First, we will look at the new features of .NET Core and ASP.NET Core that benefit modern Web APIs.

The solution and all code from this article’s examples can be found in my GitHub repository ChinookASPNETCoreAPIHex.

.NET Core and ASP.NET Core for Web API

ASP.NET Core is a new web framework that Microsoft built on top of .NET Core to shed the legacy technology that has been around since .NET 1.0. By comparison, ASP.NET 4.6 still uses the System.Web assembly that contains all the WebForms libraries and is, as a result, still brought into more recent ASP.NET MVC 5 solutions. By shedding these legacy dependencies and developing the framework from scratch, ASP.NET Core 2.0 gives the developer much better performance and is architected for cross-platform execution. With ASP.NET Core 2.0, your solutions will work as well on Linux as they do on Windows.

You can read more about the benefits of .NET Core and ASP.NET Core from the three other articles in this series. The first is Performance is a .NET Core Feature by Maarten Balliauw, ASP.NET Core - The Power of Simplicity by Chris Klug and finally, Azure and .NET Core Are Beautiful Together by Eric Boyd.

Architecture

Building a great API depends on great architecture. We will be looking at many aspects of our API design and development, from the built-in functionality of ASP.NET Core to architecture philosophy and finally design patterns. There is much planning and thought behind this architecture, so let’s get started.

Dependency Injection

Before we dig into the architecture of our ASP.NET Core Web API solution, I want to discuss what I believe is a single benefit which makes .NET Core developers’ lives so much better; that is, Dependency Injection (DI). Now, I know you will say that we had DI in .NET Framework and ASP.NET solutions. I will agree, but the DI we used in the past came from third-party commercial providers or maybe open source libraries. They did a good job, but for a good portion of .NET developers there was a big learning curve, and all DI libraries had their unique way of handling things. Today with .NET Core, we have DI built right into the framework from the start. Moreover, it is quite simple to work with, and you get it out of the box.

The reason we need to use DI in our API is that it allows us to decouple our architecture layers cleanly, and it also allows us to mock the data layer or build multiple data sources for our API.

To use the .NET Core DI framework, just make sure your project references the Microsoft.AspNetCore.All NuGet package (which contains a dependency on the Microsoft.Extensions.DependencyInjection.Abstractions package). This package gives access to the IServiceCollection interface, which exposes a System.IServiceProvider on which you can call GetService<TService>. To get the services you need out of the IServiceCollection interface, you first need to register the services your project needs.
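As a minimal sketch of how the built-in container is used (IMessageService and UpperCaseMessageService are placeholder names for illustration only, not types from the solution):

public interface IMessageService
{
    string Format(string text);
}

public class UpperCaseMessageService : IMessageService
{
    public string Format(string text) => text.ToUpperInvariant();
}

public class Startup
{
    // Register the implementation against its interface; any consumer that
    // asks for IMessageService receives UpperCaseMessageService at runtime.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddScoped<IMessageService, UpperCaseMessageService>();
        services.AddMvc();
    }
}

Anything registered this way can then be requested through a constructor parameter, or resolved manually via the IServiceProvider’s GetService<TService> method.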

To learn more about .NET Core Dependency Injection, I suggest you review the following document on MSDN: Introduction to Dependency Injection in ASP.NET Core

We will now look at the philosophy behind why I architected the API the way I did. Designing any architecture depends on two ideas: allowing deep maintainability and utilizing proven patterns and architectures in your solutions.

Maintainability of the API

Maintainability for any engineering process is the ease with which a product can be maintained: finding defects, correcting found defects, repairing or replacing defective components without having to replace still-working parts, preventing unexpected malfunctions, maximizing a product's useful life, having the ability to meet new requirements, making future maintenance easier, and coping with a changing environment. This can be a difficult road to go down without a well planned and executed architecture.

Maintainability is a long-term issue and should be looked at with a vision of your API in the far distance. With that in mind, you need to make decisions that lead to this future vision and not to short-term shortcuts that seem to make life easier right now. Making hard decisions at the start will allow your project to have a long life and provide benefits that users demand.

What makes a software architecture have high maintainability? How do we evaluate if our API can be maintained?

  • Does our architecture allow for changes that have minimal, if not zero, impact on other areas of the system?
  • Debugging the API should be easy and should not require a difficult setup. It should follow established patterns and be done through common methods (such as browser debugging tools).
  • Testing should be as automated as possible, and the tests should be clear and uncomplicated.

Interfaces and Implementations

The key to my API architecture is the use of C# interfaces to allow for alternative implementations. If you have written .NET code with C#, you have probably used interfaces. I use interfaces in my solution to build out a contract in my Domain layer that guarantees that any Data layer I develop for my API adheres to the contract for data repositories. It also allows the Controllers in my API project to adhere to another established contract for getting the correct methods to process the API methods in the domain project’s Supervisor. Interfaces are very important to .NET Core, and if you need a refresher, go here for more information.
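To illustrate how this plays out in the API project, a controller can depend only on the Domain layer’s contract. The sketch below assumes an interface named IChinookSupervisor exposing the Supervisor methods discussed later (the interface and controller names are assumptions for illustration, not taken from the repository):

[Route("api/[controller]")]
public class AlbumController : Controller
{
    // The controller only knows the contract, never the concrete Supervisor
    // or any data access details; DI supplies the implementation at runtime.
    private readonly IChinookSupervisor _supervisor;

    public AlbumController(IChinookSupervisor supervisor)
    {
        _supervisor = supervisor;
    }

    [HttpGet("{id}")]
    public async Task<IActionResult> Get(int id, CancellationToken ct)
    {
        var album = await _supervisor.GetAlbumByIdAsync(id, ct);
        if (album == null)
            return NotFound();

        return Ok(album);
    }
}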

Ports and Adapter Pattern

We want our objects throughout our API solution to have single responsibilities. This keeps our objects simple and easy to change when we need to fix bugs or enhance our code. If you find these kinds of “code smells” in your code, then you might be violating the single responsibility principle. As a general rule, I look at the implementations of the interface contracts for length and complexity. I do not have a hard limit on the lines of code in my methods, but if a method extends past a single view in your IDE, it might be too long. Also, check the cyclomatic complexity of your methods to determine the complexity of your project’s methods and functions.

The Ports and Adapter Pattern (aka Hexagonal Architecture) is a way to fix this problem of having business logic coupled too tightly to other dependencies such as data access or API frameworks. Using this pattern will allow your API solution to have clear boundaries and well-named objects with single responsibilities, and finally will allow easier development and maintainability.

We can see the pattern best visually, like an onion, with ports located on the outside of the hexagon and the adapters and business logic located closer to the core. I see the external connections of the architecture as the ports. The API endpoints that are consumed or the database connection used by Entity Framework Core 2.0 would be examples of ports, while the internal data repositories would be the adapters.

[Figure: Ports and Adapters (Hexagonal Architecture) diagram]

Next, let’s look at the logical segments of our architecture and some demo code examples.


Domain Layer

Before we look at the API and Domain layers, we need to explain how we build out the contracts through interfaces and the implementations for our API business logic. Let’s look at the Domain layer. The Domain layer has the following functions:

  • Defines the Entity objects that will be used throughout the solution. These models will represent the Data layer’s DataModels.
  • Defines the ViewModels which will be used by the API layer for HTTP requests and responses as single objects or sets of objects.
  • Defines the interfaces through which our Data layer can implement the data access logic
  • Implements the Supervisor that will contain methods called from the API layer. Each method will represent an API call and will convert data from the injected Data layer to ViewModels to be returned.

Our Domain Entity objects are a representation of the database that we are using to store and retrieve data for the API business logic. Each Entity object contains the properties represented, in our case, in the corresponding SQL table. Take the Album entity as an example.

public sealed class Album
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public int ArtistId { get; set; }
    public ICollection<Track> Tracks { get; set; } = new HashSet<Track>();
    public Artist Artist { get; set; }
}

The Album table in the SQL database has three columns: AlbumId, Title, and ArtistId. These three properties are part of the Album entity, along with navigation properties for a collection of associated Tracks and the associated Artist. As we will see in the other layers of the API architecture, we will build upon this entity object’s definition for the ViewModels in the project.

The ViewModels are extensions of the Entities and help provide more information to the consumer of the APIs. Let’s look at the Album ViewModel. It is very similar to the Album Entity but with an additional property. In the design of my API, I determined that each Album should have the name of the Artist in the payload passed back from the API. This will allow the API consumer to have that crucial piece of information about the Album without having to have the Artist ViewModel passed back in the payload (especially when we are sending back a large set of Albums). An example of our AlbumViewModel is below.

public class AlbumViewModel
{
    public int AlbumId { get; set; }
    public string Title { get; set; }
    public int ArtistId { get; set; }
    public string ArtistName { get; set; }
    public ArtistViewModel Artist { get; set; }
    public IList<TrackViewModel> Tracks { get; set; }
}

The other area developed in the Domain layer is the set of contracts, defined via interfaces, for each of the Entities in the layer. Again, we will use the Album entity to show the interface that is defined.

public interface IAlbumRepository : IDisposable
{
    Task<List<Album>> GetAllAsync(CancellationToken ct = default(CancellationToken));
    Task<Album> GetByIdAsync(int id, CancellationToken ct = default(CancellationToken));
    Task<List<Album>> GetByArtistIdAsync(int id, CancellationToken ct = default(CancellationToken));
    Task<Album> AddAsync(Album newAlbum, CancellationToken ct = default(CancellationToken));
    Task<bool> UpdateAsync(Album album, CancellationToken ct = default(CancellationToken));
    Task<bool> DeleteAsync(int id, CancellationToken ct = default(CancellationToken));
}

As shown in the above example, the interface defines the methods needed to implement the data access methods for the Album entity. Each entity object and interface is well defined and simple, which allows the next layer to be well defined as well.

Finally, the core of the Domain project is the Supervisor class. Its purpose is to translate between Entities and ViewModels and to perform business logic away from both the API endpoints and the data access logic. Having the Supervisor handle this also isolates the logic, allowing unit testing of the translations and business logic.

Looking at the Supervisor method for acquiring and passing a single Album to the API endpoint, we can see the logic that connects the API front end to the data access injected into the Supervisor, while still keeping each isolated.

public async Task<AlbumViewModel> GetAlbumByIdAsync(int id, CancellationToken ct = default(CancellationToken))
{
    var albumViewModel = AlbumCoverter.Convert(await _albumRepository.GetByIdAsync(id, ct));
    albumViewModel.Artist = await GetArtistByIdAsync(albumViewModel.ArtistId, ct);
    albumViewModel.Tracks = await GetTrackByAlbumIdAsync(albumViewModel.AlbumId, ct);
    albumViewModel.ArtistName = albumViewModel.Artist.Name;
    return albumViewModel;
}
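The AlbumCoverter used above is not shown in this excerpt; a minimal sketch of what its Convert method might look like, based only on the Album entity and AlbumViewModel defined earlier, is:

public static class AlbumCoverter
{
    public static AlbumViewModel Convert(Album album)
    {
        // Map the simple properties one-to-one; the Supervisor fills in
        // Artist, ArtistName and Tracks after loading the related data.
        return new AlbumViewModel
        {
            AlbumId = album.AlbumId,
            Title = album.Title,
            ArtistId = album.ArtistId
        };
    }
}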

Keeping most of the code and logic in the Domain project will allow every project to keep and adhere to the single responsibility principle. 

Data Layer

The next layer of the API architecture we will look at is the Data Layer. In our example solution, we are using Entity Framework Core 2.0. This means that we have the Entity Framework Core DbContext defined, but also the Data Models generated for each entity in the SQL database. If we look at the data model for the Album entity as an example, we will see that we have three properties that are stored in the database, along with a property that contains a list of tracks associated with the album and a property that contains the artist object.

While you can have a multitude of Data Layer implementations, just remember that each must adhere to the requirements documented in the Domain Layer; each Data Layer implementation must work with the ViewModels and repository interfaces detailed in the Domain Layer. The architecture we are developing for the API uses the Repository Pattern for connecting the API Layer to the Data Layer. This is done using Dependency Injection (as we discussed earlier) for each of the repository objects we implement. We will discuss how we use Dependency Injection and the code when we look at the API Layer. The key to the Data Layer is the implementation of each entity repository using the interfaces developed in the Domain Layer. Looking at the Data Layer’s Album repository as an example shows that it implements the IAlbumRepository interface. Each repository will inject the DbContext that allows access to the SQL database using Entity Framework Core.

public class AlbumRepository : IAlbumRepository
{
    private readonly ChinookContext _context;

    public AlbumRepository(ChinookContext context)
    {
        _context = context;
    }

    private async Task<bool> AlbumExists(int id, CancellationToken ct = default(CancellationToken))
    {
        return await GetByIdAsync(id, ct) != null;
    }

    public void Dispose()
    {
        _context.Dispose();
    }

    public async Task<List<Album>> GetAllAsync(CancellationToken ct = default(CancellationToken))
    {
        return await _context.Album.ToListAsync(ct);
    }

    public async Task<Album> GetByIdAsync(int id, CancellationToken ct = default(CancellationToken))
    {
        return await _context.Album.FindAsync(id);
    }

    public async Task<Album> AddAsync(Album newAlbum, CancellationToken ct = default(CancellationToken))
    {
        _context.Album.Add(newAlbum);
        await _context.SaveChangesAsync(ct);
        return newAlbum;
    }

    public async Task<bool> UpdateAsync(Album album, CancellationToken ct = default(CancellationToken))
    {
        if (!await AlbumExists(album.AlbumId, ct))
            return false;

        _context.Album.Update(album);
        await _context.SaveChangesAsync(ct);
        return true;
    }

    public async Task<bool> DeleteAsync(int id, CancellationToken ct = default(CancellationToken))
    {
        if (!await AlbumExists(id, ct))
            return false;

        var toRemove = _context.Album.Find(id);
        _context.Album.Remove(toRemove);
        await _context.SaveChangesAsync(ct);
        return true;
    }

    public async Task<List<Album>> GetByArtistIdAsync(int id, CancellationToken ct = default(CancellationToken))
    {
        // Assumed implementation based on the IAlbumRepository contract:
        // return all albums whose ArtistId matches the given id.
        return await _context.Album.Where(a => a.ArtistId == id).ToListAsync(ct);
    }
}
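The Data Layer is wired into the API through the dependency injection container discussed earlier. A minimal sketch of that registration in Startup.ConfigureServices follows; the connection string name "ChinookDb", the use of SQL Server, and the injected Configuration property of the standard Startup template are assumptions for illustration, not code from the repository.

public void ConfigureServices(IServiceCollection services)
{
    // Register the EF Core DbContext so it can be injected into each repository.
    services.AddDbContext<ChinookContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("ChinookDb")));

    // Register each repository against its Domain Layer interface.
    services.AddScoped<IAlbumRepository, AlbumRepository>();

    services.AddMvc();
}

With this in place, the container constructs an AlbumRepository, supplies its ChinookContext, and hands it to whatever requested an IAlbumRepository, once per HTTP request because of the scoped lifetimes.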

Reposted from: https://www.cnblogs.com/tylertang/p/10800818.html
