Introducing the 4+1 view model

This article introduces the 4+1 view model, a method for understanding software architecture from different perspectives. It covers the logical, development, process, physical, and use case views, helping architects gain a complete picture of a system.

From: http://www-128.ibm.com/developerworks/wireless/library/wi-arch11/ 

Modified a little by Newweapon.

Level: Introductory

Mikko Kontio (mikko.kontio@softera.fi), Production Manager, Softera

02 Feb 2005

The designing software architectures series continues with a look at the 4+1 view model. The 4+1 view model lets you separate a system into four essential views: logical, process, physical, and development. Then, for good measure, it gives you one more: a use case view that describes the functional aspects of the system as a whole.

This month you're being introduced to the 4+1 view model, an architect's tool for viewing and documenting application software. The 4+1 view model was originally introduced by Philippe Kruchten in 1995 (see Resources). Kruchten's approach gave architects a way to examine different parts of an architecture separately, reducing the complexity of taking in the architecture as a whole.

Each of the five views in the 4+1 view model highlights some elements of the system, while intentionally suppressing others. The 4+1 view model is an excellent way for both architects and other team members to learn about a system's architecture. Architects use it to understand and document the many layers of an application in a systematic, standardized way. Documents created using the 4+1 view process are easily used by all members of the development team.

The first four views represent the logical, process, physical, and development aspects of the architecture. The fifth view consists of use cases and scenarios that might further describe or consolidate the other views.

Figure 1 shows the five views of the 4+1 view model.


Figure 1. The 4+1 view model
Image of the 4+1 views: the logical view (end user), development view (programmer), process view (system integrator), and physical view (system engineer), tied together by the central use case view

Note that, like some of the other articles in this series, this article assumes you are familiar with UML diagramming.

The logical view

The 4+1's logical view supports behavioral requirements and shows how the system is decomposed into a set of abstractions. Classes and objects are the main elements studied in this view. You can use class diagrams, collaboration diagrams, and sequence diagrams, among others, to show the relationship of these elements from a logical view.

Class diagrams show classes and their attributes, methods, and associations to other classes in the system. The class diagram in Figure 2 shows a simple use case of an ordering system (or part of one). The customer can have from zero to several orders, and an order can have from one to several items.


Figure 2. A class diagram of an ordering system
A simple class diagram of an ordering system
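To make the multiplicities concrete, here is a minimal Java sketch of the classes behind Figure 2. The method names and the price attribute are illustrative assumptions; the diagram itself doesn't dictate them.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of Figure 2: a Customer holds zero or more
// Orders, and an Order holds one or more Items.
class Item {
    private final String name;
    private final double price;   // assumed attribute, for illustration
    Item(String name, double price) { this.name = name; this.price = price; }
    double getPrice() { return price; }
}

class Order {
    private final List<Item> items = new ArrayList<>();
    void addItem(Item item) { items.add(item); }
    int getItemCount() { return items.size(); }
}

class Customer {
    private final List<Order> orders = new ArrayList<>();
    void addOrder(Order order) { orders.add(order); }
    int getOrderCount() { return orders.size(); }
}
```

As the article notes, such a static sketch says nothing about runtime behavior; that is what the interaction diagrams below add.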

While useful, the class diagram hardly gives you a complete picture of the system. For one thing, class diagrams are static, so they tell you nothing about how the system will react to user input. For another, class diagrams are often too detailed to offer a useful overview of the system. You can only learn so much from studying a class diagram for a system composed of thousands of classes.

You can use collaboration diagrams (or communication diagrams) and sequence diagrams to see how objects in the system interact. A collaboration diagram is a simple way to show system objects and the messages and calls that pass between them. Figure 3 is a simple collaboration diagram. Note that each message is assigned a number that indicates its order in the sequence.


Figure 3. A collaboration diagram of an ordering system
A simple collaboration diagram of the ordering action

Collaboration diagrams are very practical for showing a bird's-eye view of collaborating objects in the system. If you want a more detailed window into the system's logic, you might want to try drawing a sequence diagram. Sequence diagrams provide more detail than collaboration diagrams, but still let you study the system from a distance. Architects and designers often use sequence diagrams to fine-tune system design. For example, studying the sequence diagram in Figure 4 might lead you to restructure the interaction to reduce the number of method calls. Alternatively, you might change the design by creating a vector (or similar collection) of all the Items, then passing the vector and a Customer id to the Order constructor. (Note that doing this would change the roles of the Customer and Order classes completely.)


Figure 4. A sequence diagram of an ordering system
A sequence diagram shows in more detail how the objects interact
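The constructor-based redesign discussed above might look like the following Java sketch. The names (customerId, the List of Items) are illustrative assumptions, not taken from the article's diagrams.

```java
import java.util.List;

// Hypothetical sketch of the redesign: rather than the Order being
// populated through many separate calls, the caller collects the Items
// first and hands them, with a customer id, to the constructor.
class Item {
    private final String name;
    Item(String name) { this.name = name; }
}

class Order {
    private final String customerId;
    private final List<Item> items;

    Order(String customerId, List<Item> items) {
        this.customerId = customerId;
        this.items = items;   // one constructor call replaces many method calls
    }
    int getItemCount() { return items.size(); }
    String getCustomerId() { return customerId; }
}
```

A single constructor call replaces the chattier interaction the sequence diagram exposed, which is exactly the kind of fine-tuning the diagram is meant to prompt.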




The development view

The development view is used to describe the modules of the system. Modules are bigger building blocks than classes and objects and vary according to the development environment. Packages, subsystems, and class libraries are all considered modules. Figure 5 is a package diagram showing how packages are nested in the system.


Figure 5. A package diagram shows how packages are nested in the system
The package diagram shows  the packages and how they are nested

You can also use the development view to study the placement of actual files in the system and development environment. Alternatively, it is a good way to view the layers of a system in a layered architecture. A typical layered architecture might contain a UI layer, a Presentation layer, an Application Logic layer, a Business Logic layer, and a Persistence layer.
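As a rough illustration of such layering, each layer can be coded against an interface to the layer directly below it. This is a minimal sketch with hypothetical names and only two of the layers, not a prescription from the 4+1 model itself:

```java
// Hypothetical two-layer sketch: the business logic layer depends only
// on an interface to the persistence layer, never on its implementation.
interface PersistenceLayer {
    String load(int orderId);
}

interface BusinessLogicLayer {
    String describeOrder(int orderId);
}

class SimplePersistence implements PersistenceLayer {
    public String load(int orderId) { return "order-" + orderId; }
}

class SimpleBusinessLogic implements BusinessLogicLayer {
    private final PersistenceLayer persistence;
    SimpleBusinessLogic(PersistenceLayer persistence) { this.persistence = persistence; }
    public String describeOrder(int orderId) {
        return "Details of " + persistence.load(orderId);
    }
}
```

Keeping each layer behind an interface is what lets a package diagram like Figure 5 meaningfully show layer boundaries as module boundaries.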





The process view

The process view lets you describe and study the system's processes and how they communicate, if they communicate with each other at all. An overview of the processes and their communication can help you avert unintentional errors. This view is helpful when you have multiple, simultaneous processes or threads in your software.

For example, a servlet container typically serves concurrent requests with multiple threads sharing a single servlet instance. Without a process view, a developer might unintentionally store per-request state in the servlet's instance fields, which could lead to subtle errors when other threads do the same. The process view can reduce this type of problem by describing clearly how processes and threads communicate and what state they share.
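The pitfall can be sketched without the servlet API itself: one shared instance handling work for many threads. The class below is a hypothetical stand-in for a servlet; the unsafe variant keeps per-request state in an instance field, the safe one keeps it on the stack.

```java
// Hypothetical stand-in for a servlet: one instance, many threads.
class RequestHandler {
    private int lastInput;   // instance field: shared by ALL threads

    // Unsafe: another thread can overwrite lastInput between the
    // assignment and the read, corrupting this request's result.
    int handleUnsafe(int input) {
        lastInput = input;
        return lastInput * 2;
    }

    // Safe: the local variable lives on each thread's own stack,
    // so concurrent requests cannot interfere with each other.
    int handleSafe(int input) {
        int local = input;
        return local * 2;
    }
}
```

A process view that records "one handler instance, N worker threads" makes it obvious which fields are shared and therefore off-limits for per-request data.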

The process view can be described from several levels of abstraction, starting from independently executing logical networks of communicating programs. The process view takes into account many of the nonfunctional requirements or quality requirements (which last month's column talked about) like performance, availability, etc. Activity diagrams are quite often used to describe this view.





The physical view

The physical view describes how the application is installed and how it executes in a network of computers. This view takes into account nonfunctional requirements like availability, reliability, performance, and scalability.

Figure 6 is a deployment diagram of the example ordering system. It has one node for users who run the Web browser on their own computers. The ordering system and database are on their own nodes. The nodes contain one or more components, which can be either larger entities or smaller actual components.


Figure 6. A deployment diagram of an ordering system
A deployment diagram of the  ordering system




The 'plus-one' view

The "plus-one" view of the 4+1 view model consists of use cases and scenarios that further describe or consolidate the other views. As discussed in previous columns in this series, use cases represent the functional side of the system. In the case of the 4+1 model they are used to explain the functionality and structures described by the other views. Some of the other views also utilize use cases, like the sequence diagram shown in Figure 4. The use case view consists of use case diagrams and specifications detailing the actions and conditions inside each use case. See Resources for more about use cases.





In conclusion

The 4+1 view model is a useful, standardized method for studying and documenting a software system from an architectural perspective. In this fifth article in the "Designing software architectures" series, I've introduced you to the five views of the 4+1 model. Each view offers a window into a different layer of the system. Taken together they ensure that all of the important aspects of the system are studied and documented.

Let's review the views:

  • The logical view describes the system in terms of object-oriented abstractions, such as classes and objects. The logical view typically contains class diagrams, sequence diagrams, and collaboration diagrams. Other types of diagrams can be used where applicable.

  • The development view describes the structure of modules, files, and/or packages in the system. The package diagram can be used to describe this view.

  • The process view describes the processes of the system and how they communicate with each other.

  • The physical view describes how the system is installed and how it executes in a network of computers. Deployment diagrams are often used to describe this view.

  • The use case view describes the functionality of the system. This view can be described using use case diagrams and use case specifications.

 

Resources offers more links where you can learn about the 4+1 view model, as well as other architectural views. In next month's Architectural manifesto I will discuss the evaluation of architecture prototypes.



Resources



About the author

 

Mikko Kontio works as a Production Manager for the leading-edge Finnish software company, Softera. He holds a Masters degree in Computer Science and is the author and co-author of several books, the latest being Professional Mobile Java with J2ME, published by IT Press. Mikko can be reached at mikko.kontio@softera.fi.

 