Automatic Account Assignment (MM)

Table T030 - G/L account assignments in MM

Transaction CKM9 - displays the MM configuration / G/L account assignments

Configure Automatic Postings

In this step, you enter the system settings for Inventory Management and Invoice Verification transactions for automatic postings to G/L accounts.

You can then check your settings using a simulation function.

Under Further information there is a list of transactions in Materials Management and their definitions.

What are automatic postings?

Postings are made to G/L accounts automatically in the case of Invoice Verification and Inventory Management transactions relevant to Financial and Cost Accounting.

Example:
Posting lines are created in the following accounts in the case of a goods issue for a cost center:

  • Stock account
  • Consumption account
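
To make the example concrete, here is what those two posting lines might look like for a goods issue valued at 100.00, sketched in Python. The account names, numbers, and cost center are invented placeholders for illustration, not SAP standard assignments:

```python
# Hypothetical posting lines for a goods issue of value 100.00 to a cost center.
# Accounts and the cost center are invented placeholders for illustration.
posting_lines = [
    {"account": "400000 Consumption, raw materials", "side": "debit",
     "amount": 100.00, "cost_center": "4711"},
    {"account": "300000 Stock, raw materials", "side": "credit",
     "amount": 100.00},
]

# An accounting document must balance: total debits equal total credits.
debits = sum(l["amount"] for l in posting_lines if l["side"] == "debit")
credits = sum(l["amount"] for l in posting_lines if l["side"] == "credit")
assert debits == credits
```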

How does the system find the relevant accounts?

When entering the goods movement, the user does not have to enter a G/L account, since the R/3 System automatically finds the accounts to which postings are to be made using the following data:

  • Chart of accounts of the company code
    If the user enters a company code or a plant when entering a transaction, the R/3 System determines the chart of accounts which is valid for the company code.
    You must define the automatic account determination individually for each chart of accounts.
  • Valuation grouping code of the valuation area
    If the automatic account determination within a chart of accounts is to run differently for certain company codes or plants (valuation areas), assign different valuation grouping codes to these valuation areas.
    You must define the automatic account determination individually for every valuation grouping code within a chart of accounts. It applies to all valuation areas which are assigned to this valuation grouping code.
    If the user enters a company code or a plant when entering a transaction, the system determines the valuation area and the valuation grouping code.
  • Transaction/event key (internal processing key)
    Posting transactions are predefined for those inventory management and invoice verification transactions relevant to accounting. Posting records, which are generalized in the value string, are assigned to each relevant movement type in inventory management and each transaction in invoice verification. These contain keys for the relevant posting transaction (for example, inventory posting and consumption posting) instead of actual G/L account numbers.
    You do not have to define these transaction keys; they are determined automatically from the transaction (invoice verification) or the movement type (inventory management). All you have to do is assign the relevant G/L account to each posting transaction.
  • Account grouping (only for offsetting entries, consignment liabilities, and price differences)
    Since the posting transaction "Offsetting entry for inventory posting" is used for different transactions (for example, goods issue, scrapping, physical inventory), which are assigned to different accounts (for example, consumption account, scrapping, expense/income from inventory differences), it is necessary to divide the posting transaction according to a further key: the account grouping code.
    An account grouping is assigned to each movement type in inventory management which uses the posting transaction "Offsetting entry for inventory posting".
    Under the posting transaction "Offsetting entry for inventory posting", you must therefore assign G/L accounts for every account grouping.
    If you wish to post price differences to different price difference accounts in the case of goods receipts for purchase orders, goods receipts for orders, or other movements, you can define different account grouping codes for the transaction key.
    Using the account grouping, you can also have different accounts for consignment liabilities and pipeline liabilities.
  • Valuation class of the material or (in the case of split valuation) the valuation type
    The valuation class allows you to define automatic account determination that is dependent on the material. For example, you can post a goods receipt of a raw material to a different stock account than a goods receipt of trading goods, even though the user enters the same transaction for both materials.
    You achieve this by assigning different valuation classes to the materials and by assigning different G/L accounts to the posting transaction for every valuation class.
    If you do not want to differentiate according to valuation classes, you do not have to maintain a valuation class for a transaction.
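
Conceptually, this lookup behaves like a keyed table: chart of accounts, transaction/event key, valuation grouping code, account grouping, and valuation class together determine one G/L account. The following Python sketch models that behavior; the entries, account numbers, and helper function are invented for illustration and do not reproduce SAP's actual T030 layout:

```python
# Toy model of automatic account determination (a T030-style lookup).
# Key: (chart of accounts, transaction/event key, valuation grouping code,
#       account grouping, valuation class) -> G/L account.
ACCOUNT_TABLE = {
    # BSX (stock posting): differentiated by valuation class only
    ("INT", "BSX", "0001", None,  "3000"): "300000",  # raw materials
    ("INT", "BSX", "0001", None,  "3100"): "310000",  # trading goods
    # GBB (offsetting entry for stock posting): also needs the account grouping
    ("INT", "GBB", "0001", "VBR", "3000"): "400000",  # internal issue (cost center)
    ("INT", "GBB", "0001", "VNG", "3000"): "895000",  # scrapping
}

def find_account(chart, key, val_grouping, acct_grouping, val_class):
    """All five factors must match an entry, mirroring the text above."""
    try:
        return ACCOUNT_TABLE[(chart, key, val_grouping, acct_grouping, val_class)]
    except KeyError:
        raise LookupError("no account assignment maintained for this combination")

# Goods issue to a cost center (movement type 201) for a raw material:
print(find_account("INT", "GBB", "0001", "VBR", "3000"))  # debit consumption 400000
print(find_account("INT", "BSX", "0001", None,  "3000"))  # credit stock 300000
```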

Requirements

Before you maintain automatic postings, you must obtain the following information:

  1. Valuation level (plant or company code)
     Establish whether the materials are valuated at plant level or at company code level:
     • When valuation is at plant level, the valuation area corresponds to a plant.
     • When valuation is at company code level, the valuation area corresponds to a company code.
  2. Chart of accounts and valuation grouping code per valuation area
     Find out whether the valuation grouping code is active:
     • If it is not active, determine the chart of accounts assigned to each valuation area (via the company code).
     • If it is active, determine the chart of accounts and the valuation grouping code assigned to each valuation area.
     You must define a separate account determination for each chart of accounts and each valuation grouping code.
  3. Valuation class per material type
     If you wish to differentiate the account determination for specific transactions according to valuation classes, find out which valuation classes are possible for each material type.
  4. Account grouping for offsetting entries to stock accounts
     Under Define account grouping for movement types, determine for which movement types an account grouping is defined for the transaction/event keys GBB (offsetting entry for stock posting), KON (consignment liabilities), and PRD (price differences).

Default settings

G/L account assignments for the chart of accounts INT and the valuation grouping code 0001 are delivered as SAP standard.

Activities

  1. Create account keys for each chart of accounts and each valuation grouping code for the individual posting transactions. To do so, proceed as follows:
     a) Call up the activity Configure Automatic Postings.
        The R/3 System first checks whether the valuation areas are correctly maintained. If, for example, a plant is not assigned to a company code, a dialog box and an error message appear.
        From this box, choose Continue (next entry) to continue the check.
        Choose Cancel to end the check.
        The configuration menu Automatic postings appears.
     b) Choose Goto -> Account assignment.
        A list of posting transactions in Materials Management appears. For further details of the individual transactions, see Further information.
        The Account determination indicator shows whether automatic account determination is defined for a transaction.
     c) Choose a posting transaction.
        A box appears for the first posting transaction. Here you can enter a chart of accounts.
        You can enter the following data for each transaction:
        • Rules for account number assignments
          With Goto -> Rules you can enter the factors on which the account number assignments depend:
          - debit/credit indicator
          - general grouping (= account grouping)
          - valuation grouping
          - valuation class
        • Posting keys for the posting lines
          Normally you do not have to change the posting keys. If you wish to use new posting keys, you have to define them in the Customizing system of Financial Accounting.
        • Account number assignments
          You must assign G/L accounts for each transaction/event key (except KBS). You can assign these accounts manually or copy them from another chart of accounts via Edit -> Copy.
          If you want to differentiate posting transactions (e.g. inventory postings) according to valuation classes, you must make an account assignment for each valuation class.
          For the posting transaction "Offsetting entry for inventory posting", you have to make an account assignment for each account grouping.
          If the transaction PRD (price differences) is also dependent on the account grouping, you must create three account assignments (illustrated in the sketch after step 1):
          - an account assignment without account grouping
          - an account assignment with account grouping PRF
          - an account assignment with account grouping PRA
          If the transaction KON (consignment and pipeline liabilities) is also dependent on the account grouping, you must create two account assignments:
          - an account assignment without account grouping (consignment)
          - an account assignment with account grouping PIP (pipeline)
     d) Save your settings.
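
As a concrete illustration of the PRD and KON assignments just described, the sketch below lists the required combinations; the G/L account numbers are invented placeholders, not SAP standard settings:

```python
# Account assignments required when PRD and KON depend on the account grouping.
# The account numbers are placeholders chosen for this example.
ASSIGNMENTS = {
    ("PRD", ""):    "281000",  # goods/invoice receipts against purchase orders
    ("PRD", "PRF"): "281500",  # goods receipts against production orders, order settlement
    ("PRD", "PRA"): "282000",  # goods issues and other movements
    ("KON", ""):    "196300",  # consignment liabilities
    ("KON", "PIP"): "196400",  # pipeline liabilities
}
for (key, grouping), account in ASSIGNMENTS.items():
    print(f"{key} {grouping or '(none)':6} -> {account}")
```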
  2. Then check your settings with the simulation function.
     With the simulation function, you can simulate the following:
     • Inventory Management transactions
     • Invoice Verification transactions
     When you enter a material or valuation class, the R/3 System determines the G/L accounts which are assigned to the corresponding posting transactions. Depending on the configuration, the system also checks whether the G/L account exists.
     In the simulation you can compare the field selection of the movement type with that of the individual accounts and make any corrections.
     If you want to print the simulation, choose Simulation -> Report.
     To carry out the simulation, proceed as follows:
     a) Choose Settings to check the simulation defaults for
        - the application area (Invoice Verification or Inventory Management)
        - the input mode (material or valuation class)
        - the account assignment
     b) Choose Goto -> Simulation.
        The screen for entering simulation data appears.
     c) Depending on the valuation level, enter a plant or a company code on the screen.
     d) When you simulate Inventory Management transactions, goods movements are simulated. The R/3 System suggests the first movement type for simulation. If several movements are possible with this movement type, you can select a line.
        When you simulate Invoice Verification transactions, a list of the possible transaction types appears on the screen. Select a line.
     e) Then choose Goto -> Account assignments.
        A list appears of the posting lines which can be created by the selected transaction. For each posting line, the G/L account for the debit posting and the G/L account for the credit posting are displayed.
     f) From this screen, choose Goto -> Movement+ to get a list of the posting lines for the next movement type or transaction type.
        If you work with valuation classes, choose Goto -> Valuation class+ to receive the simulation for the next valuation class. This function is not possible when simulating with material numbers.
        Choose Goto -> Check screen layout to compare the movement type with the G/L accounts determined by the system and make any necessary corrections.
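
As a purely illustrative analogue of this simulation, the following standalone sketch lists the debit and credit accounts a toy lookup would determine for one movement type (all keys and account numbers are invented):

```python
# Toy "simulation" of account determination for movement type 201 (goods issue
# to a cost center, raw material). Invented keys and accounts for illustration.
ACCOUNT_TABLE = {
    ("INT", "GBB", "0001", "VBR", "3000"): "400000",  # consumption account
    ("INT", "BSX", "0001", None,  "3000"): "300000",  # stock account
}
for side, key, grouping in (("debit", "GBB", "VBR"), ("credit", "BSX", None)):
    account = ACCOUNT_TABLE[("INT", key, "0001", grouping, "3000")]
    print(f"{side:6} {key} -> {account}")
```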

Note

The simulation function does NOT obviate the need for a trial posting!

Further notes

The following list shows the individual transactions with examples of how they are used. The transaction/event key is specified in brackets.

  • Agency business: income (AG1)
  • This transaction can be used in agency business for income deriving from commission (e.g. del credere commission). The account key is used in the calculation schemas for agency business to determine the associated revenue accounts.
  • Agency business: turnover (AG2)
  • This transaction can be used in agency business if turnover (business volume) postings are activated in Customizing for the payment types. The account key is specified in Customizing for the billing type.
  • Agency business: expense (AG3)
  • This transaction can be used in agency business for commission expenses. The account key is used in the calculation schemas for agency business to determine the associated expense accounts.
  • Expense/revenue from consumption of consignment material (AKO)
  • This transaction is used in Inventory Management in the case of withdrawals from consignment stock or when consignment stock is transferred to own stock if the material is subject to standard price control and the consignment price differs from the standard price.
  • Expenditure/income from transfer posting (AUM)
  • This transaction is used for transfer postings from one material to another if the complete value of the issuing material cannot be posted to the value of the receiving material. This applies both to materials with standard price control and to materials with moving average price control. Price differences can arise for materials with moving average price if stock levels are negative and the stock value becomes unrealistic as a result of the posting. Transaction AUM can be used irrespective of whether the transfer posting involves a transfer between plants. The expenditure/income is added to the receiving material.
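
A small worked example of this value flow (a sketch with invented figures, assuming the full issuing value cannot be carried over to the receiving material):

```python
# Worked example for AUM (invented figures): transfer posting from material A
# to material B, where B is valuated at standard price.
issuing_value   = 100.00  # credited to the stock account of material A
receiving_value = 90.00   # debited to the stock account of material B
aum_amount = issuing_value - receiving_value  # 10.00 expenditure posted via AUM
print(f"AUM expenditure from transfer posting: {aum_amount:.2f}")
```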
  • Provisions for subsequent (end-of-period rebate) settlement (BO1)
  • If you use the "subsequent settlement" function with regard to conditions (e.g. for period-end volume-based rebates), provisions for accrued income are set up when goods receipts are recorded against purchase orders if this is defined for the condition type.
  • Income from subsequent settlement (BO2)
  • The rebate income generated in the course of "subsequent settlement" (end-of-period rebate settlement) is posted via this transaction.
  • Income from subsequent settlement after actual settlement (BO3)
  • If a goods receipt occurs after settlement accounting has been effected for a rebate arrangement, no further provisions for accrued rebate income can be managed by the "subsequent settlement" facility. No postings should be made to the account normally used for such provisions. As an alternative, you can use this transaction to post provisions for accrued rebate income to a separate account in cases such as the one described.
  • Supplementary entry for stock (BSD)
  • This account is posted when closing entries are made for a cumulation run. It is a supplementary account to the stock account; that is, the stock account is added to it to determine the stock value that was calculated via the cumulation. In the process, the various valuation approaches (for example, commercial, tax) that are used in the balance sheet are treated separately.
  • Change in stock (BSV)
  • Changes in stocks are posted in Inventory Management at the time goods receipts are recorded or subsequent adjustments made with regard to subcontract orders.
  • If the account assigned here is defined as a cost element, you must specify a preliminary account assignment for the account in the table of automatic account assignment specification (Customizing for Controlling) in order to be able to post goods receipts against subcontract orders. In the standard system, cost center SC-1 is defined for this purpose.
  • Stock posting (BSX)
  • This transaction is used for all postings to stock accounts. Such postings are effected, for example:
    • In inventory management in the case of goods receipts to own stock and goods issues from own stock
    • In invoice verification, if price differences occur in connection with incoming invoices for materials valuated at moving average price and there is adequate stock coverage
    • In order settlement, if the order is assigned to a material with moving average price and the actual costs at the time of settlement vary from the actual costs at the time of goods receipt
  • Because this transaction is dependent on the valuation class, it is possible to manage materials with different valuation classes in separate stock accounts.
  •   Caution
  • Take care to ensure that:
    • A stock account is not used for any transaction other than BSX
    • Postings are not made to the account manually
    • The account is not changed in the productive system before all stock has been booked out of it
  • Otherwise differences would arise between the total stock value of the material master records and the balance on the stock account.
  • Account determination of valuated sales order stock and project stock
  • Note that for valuated sales order stock and project stock (special stock E and Q) and for the transaction/event keys BSX and GBB, you must maintain an account determination to avoid receiving warning messages when entering data (purchase order or transfer posting) for valuated stock.
    During data entry, the system attempts to execute a provisional account determination for GBB for valuated stock. The system will only replace the provisional account determination for GBB with the correct account determination for the stock account (BSX), in the background, if you enter the data for valuated stock at a later point in time.
  • Revaluation of other consumption (COC)
  • This transaction/event key is required for the revaluation of consumption in Actual Costing/Material Ledger.
  • Revaluation of consumption valuates single-level consumption using the actual prices determined in the Actual Costing/Material Ledger application. This revaluation can either take place in the account where the original postings were made, or in a header account.
  • The header account is determined using the transaction/event key COC.
  • Del credere (DEL)
  • Transaction/event key for the payment/invoice list documents in Purchasing. The account key is needed in the calculation schema for payment/settlement processing to determine the associated revenue accounts.
  • Small differences, Materials Management (DIF)
  • This transaction is used in Invoice Verification if you define a tolerance for minor differences and the balance of an invoice does not exceed the tolerance.
  • Purchase account (EIN), purchase offsetting account (EKG), freight purchase account (FRE)
  • Note
  • Due to special legal requirements, this function was developed specially for certain countries (Belgium, Spain, Portugal, France, Italy, and Finland).
  • Before you use this function, check whether you need to use it in your country.
  • Freight clearing (FR1), provision for freight charges (FR2), customs duty clearing (FR3), provision for customs duty (FR4)
  • These transactions are used to post delivery costs (incidental procurement costs) in the case of goods receipts against purchase orders and incoming invoices. Which transaction is used for which delivery costs depends on the condition types defined in the purchase order.
  • You can also enter your own transactions for delivery costs in condition types.
  • External service (FRL)
  • The transaction is used for goods and invoice receipts in connection with subcontract orders.
  • If the account assigned here is defined as a cost element, you must specify a preliminary account assignment for the account in the table of automatic account assignment specification (Customizing for Controlling) in order to be able to post goods receipts against subcontract orders. In the standard system, cost center SC-1 is defined for this purpose.
  • External service, delivery costs (FRN)
  • This transaction is used for delivery costs (incidental costs of procurement) in connection with subcontract orders.
  • If the account assigned here is defined as a cost element, you must specify a preliminary account assignment for the account in the table of automatic account assignment specification (Customizing for Controlling) in order to be able to post goods receipts against subcontract orders. In the standard system, cost center SC-1 is defined for this purpose.
  • Offsetting entry for stock posting (GBB)
  • Offsetting entries for stock postings are used in Inventory Management. They are dependent on the account grouping to which each movement type is assigned. The following account groupings are defined in the standard system:
    • AUA: for order settlement
    • AUF: for goods receipts for orders (without account assignment)
      and for order settlement if AUA is not maintained
    • AUI: Subsequent adjustment of actual price from cost center directly
      to material (with account assignment)
    • BSA: for initial entry of stock balances
    • INV: for expenditure/income from inventory differences
    • VAX: for goods issues for sales orders without
      account assignment object (the account is not a cost element)
    • VAY: for goods issues for sales orders with
      account assignment object (account is a cost element)
    • VBO: for consumption from stock of material provided to vendor
    • VBR: for internal goods issues (for example, for cost center)
    • VKA: for sales order account assignment
      (for example, for individual purchase order)
    • VKP: for project account assignment (for example, for individual PO)
    • VNG: for scrapping/destruction
    • VQP: for sample withdrawals without account assignment
    • VQY: for sample withdrawals with account assignment
    • ZOB: for goods receipts without purchase orders (mvt type 501)
    • ZOF: for goods receipts without production orders
      (mvt types 521 and 531)
  • You can also define your own account groupings. If you intend to post goods issues for cost centers (mvt type 201) and goods issues for orders (mvt type 261) to separate consumption accounts, you can assign account grouping ZZZ to movement type 201 and account grouping YYY to movement type 261 (see the sketch at the end of this GBB section).
  •   Caution
  • If you use goods receipts without a purchase order in your system (movement type 501), you have to check to which accounts the account grouping ZOB is assigned.
  • If you expect invoices for the goods receipts, and these invoices can only be posted in Accounting, you can enter a clearing account (similar to a GR/IR clearing account though without open item management), which is cleared in Accounting when you post the vendor invoice.
  • Note that the goods movement is valuated with the valuation price of the material if no external amount has been entered.
  • As no account assignment has been entered in the standard system, the assigned account is not defined as a cost element. If you assign a cost element, you have to enter an account assignment via the field selection or maintain an automatic account assignment for the cost element.
  • Account determination of valuated sales order stock and project stock
  • Note that for valuated sales order stock and project stock (special stock E and Q) and for the transaction/event keys BSX and GBB, you must maintain an account determination to avoid receiving warning messages when entering data (purchase order or transfer posting) for valuated stock.
    During data entry, the system attempts to execute a provisional account determination for GBB for valuated stock. The system will only replace the provisional account determination for GBB with the correct account determination for the stock account (BSX), in the background, if you enter the data for valuated stock at a later point in time.
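
To illustrate the movement type -> account grouping -> account chain for GBB, including the custom ZZZ/YYY split suggested above, here is a schematic sketch. The movement types shown are examples, the groupings follow the standard list above, and the account numbers are invented placeholders:

```python
# Schematic chain for GBB: movement type -> account grouping -> G/L account.
MVT_TO_GROUPING = {
    "201": "VBR",  # goods issue to cost center (standard assignment)
    "261": "VBR",  # goods issue to order (standard assignment)
    "501": "ZOB",  # goods receipt without purchase order
    "551": "VNG",  # scrapping
}
# Custom split: separate consumption accounts for cost centers and orders.
CUSTOM = {**MVT_TO_GROUPING, "201": "ZZZ", "261": "YYY"}

GROUPING_TO_ACCOUNT = {
    "VBR": "400000", "ZOB": "191099", "VNG": "895000",
    "ZZZ": "400010", "YYY": "400020",
}
for mvt in ("201", "261", "501"):
    grouping = CUSTOM[mvt]
    print(f"mvt {mvt} -> {grouping} -> {GROUPING_TO_ACCOUNT[grouping]}")
```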
  • Purchase order with account assignment (KBS)
  • You cannot assign this transaction/event key to an account. It means that the account assignment is adopted from the purchase order and is used for the purpose of determining the posting keys for the goods receipt.
  • Exchange rate differences, Materials Management (AVR) (KDG)
  • When you carry out a revaluation of single-level consumption in the material ledger for an alternative valuation run, the exchange rate difference accounts of the materials are credited with the exchange rate differences that are to be assigned to the consumption.
  • Exchange rate differences in the case of open items (KDM)
  • Exchange rate differences in the case of open items arise when an invoice relating to a purchase order is posted with a different exchange rate to that of the goods receipt and the material cannot be debited or credited due to standard price control or stock undercoverage/shortage.
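
A small worked example (figures, currencies, and rates invented for illustration):

```python
# Worked example for KDM (invented figures): PO currency USD, local currency
# EUR, material with standard price control, so the stock value cannot absorb
# the gap between the two translations.
amount_usd = 1000.00
rate_at_gr = 0.90          # EUR/USD when the goods receipt was posted
rate_at_ir = 0.95          # EUR/USD when the invoice was posted

grir_cleared = amount_usd * rate_at_gr   # GR/IR cleared at 900.00 EUR
vendor_line  = amount_usd * rate_at_ir   # vendor credited with 950.00 EUR
kdm_amount   = vendor_line - grir_cleared
print(f"Exchange rate difference posted via KDM: {kdm_amount:.2f} EUR")
```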
  • Differences due to exchange rate rounding, Materials Management (KDR)
  • An exchange rate rounding difference can arise in the case of an invoice made out in a foreign currency. If a difference arises when the posting lines are translated into local currency (as a result of rounding), the system automatically generates a posting line for this rounding difference.
  • Exchange Rate Differences from Lower Levels (KDV)
  • In multi-level periodic settlement in the material ledger, some of the exchange rate differences that have been posted during the period in respect of the raw materials, semifinished products and cost centers performing the activity used in the manufacture of a semifinished or finished product are debited or credited to that semifinished or finished product.
  • Consignment liabilities (KON)
  • Consignment liabilities arise in the case of withdrawals from consignment stock or from a pipeline or when consignment stock is transferred to own stock.
  • Depending on the settings for the posting rules for the transaction/event key KON, it is possible to work with or without account modification. If you work with account modification, the following modifications are available in the standard system:
    • None for consignment liabilities
    • PIP for pipeline liabilities
  • Offsetting entry for price differences in cost object hierarchies (KTR)
  • The contra entry for price difference postings (transaction PRK) arising through settlement via material account determination is carried out with transaction KTR.
  • Accruals and deferrals account (material ledger) (LKW)
  • If the process of material price determination in the material ledger is not accompanied by revaluation of closing stock, the price and exchange rate differences that should actually be applied to the stock value are contra-posted to accounts with the transaction/event key LKW.
  • If, on the other hand, price determination in the material ledger is accompanied by revaluation of the closing stock, the price and exchange rate differences are posted to the stock account (i.e. the stock is revalued).
  • Price Difference from Exploded WIP (Lar.) (PRA)
  • If you use the WIP revaluation of the material ledger, the price variances of the exploded WIP stock of an activity type or a business process are posted to the price differences account with transaction/event key PRA.
  • Differences (AVR Price) (PRC)
  • In the alternative valuation run in the material ledger, some of the variances that accumulate in the cost centers are transfer posted to the semifinished or finished product.
  • Price differences (PRD)
  • Price differences arise for materials valuated at standard price in the case of all movements and invoices with a value that differs from the standard price. Examples: goods receipts against purchase orders (if the PO price differs from the standard price), goods issues in respect of which an external amount is entered, invoices (if the invoice price differs from the PO price and the standard price). A worked example follows the list of account modifications below.
  • Price differences can also arise in the case of materials with moving average price if there is not enough stock to cover the invoiced quantity. In the case of goods movements in the negative range, the moving average price is not changed. Instead, any price differences arising are posted to a price difference account.
  • Depending on the settings for the posting rules for transaction/event key PRD, it is possible to work with or without account modification. If you use account modification, the following modifications are available in the standard system:
    • None for goods and invoice receipts against purchase orders
    • PRF for goods receipts against production orders and
      order settlement
    • PRA for goods issues and other movements
    • PRU for transfer postings (price differences in the case
      of external amounts)
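
The promised worked example, with invented figures: a goods receipt against a purchase order for a material with standard price control, where the PO price exceeds the standard price.

```python
# Worked example for PRD (invented figures): goods receipt of 100 pc against a
# purchase order; the material is valuated at standard price.
quantity  = 100
po_price  = 12.00   # purchase order price per piece
std_price = 10.00   # standard price per piece

stock_debit = quantity * std_price        # BSX: 1000.00 at standard price
grir_credit = quantity * po_price         # WRX: 1200.00 at PO price
price_diff  = grir_credit - stock_debit   # PRD: 200.00 expense from price diff.
print(f"BSX debit {stock_debit:.2f}, WRX credit {grir_credit:.2f}, "
      f"PRD debit {price_diff:.2f}")
```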
  • Price Differences (Material Ledger, AVR) (PRG)
  • When you carry out a revaluation of single-level consumption in the material ledger during the alternative valuation run, the price difference accounts of the materials are credited with the price differences that are to be assigned to the consumption.
  • Price differences in cost object hierarchies (PRK)
  • In cost object hierarchies, price differences occur both for the assigned materials with standard price and for the accounts of the cost object hierarchy. In the course of settlement for cost object hierarchies after settlement via material account determination, the price differences are posted via the transaction PRK.
  • Price Difference from Exploded WIP (Mat.) (PRM)
  • If you use the WIP revaluation of the material ledger, the price and exchange rate differences of the exploded WIP stock of a material are posted to the price difference account with transaction/event key PRM.
  • Price differences, product cost collector (PRP)
  • During settlement accounting with regard to a product cost collector in repetitive manufacturing, price differences are posted with the transaction PRP in the case of the valuated sales order stock.
  • This transaction is currently used in the following instances only:
  • - Production cost collector in Release 4.0
  • - Product cost collector in IS Automotive Release 2.0 (product cost collector in connection with APO)
  • Offsetting entry: price differences, product cost collector (PRQ)
  • The offsetting (contra) entry to price difference postings (transaction PRP) in the course of settlement accounting with respect to a product cost collector in repetitive manufacturing in the case of the valuated sales order stock is carried out via transaction PRQ.
  • This transaction is currently used in the following instances only:
  • - Production cost collector in Release 4.0
  • - Product cost collector in IS Automotive Release 2.0 (product cost collector in connection with APO)
  • Price Differences from Lower Levels (PRV)
  • In multi-level periodic settlement in the material ledger, some of the price differences posted during the period in respect of the raw materials, semifinished products, and cost centers performing the activity used in a semifinished or finished product, are transfer posted to that semifinished or finished product.
  • Price differences for material ledger (PRY)
  • In the course of settlement in the material ledger, price differences from the material ledger are posted with the transaction PRY.
  • Expense and revenue from revaluation (retroactive pricing, RAP)
  • This transaction/event key is used in Invoice Verification within the framework of the revaluation of goods and services supplied for which settlement has already taken place. Any difference amounts determined are posted to the accounts assigned to the transaction/event key RAP (retroactive pricing) as expense or revenue.
  • At the time of the revaluation, the amounts determined (or portions thereof) are posted neither to material stock accounts nor to price difference accounts. The full amount is always posted to the "Expense from Revaluation" or "Revenue from Revaluation" account. The offsetting (contra) entry is made to the relevant vendor account.
  • Invoice reductions in Logistics Invoice Verification (RKA)
  • This transaction/event key is used in Logistics Invoice Verification for the interim posting of price differences in the case of invoice reductions.
  • If a vendor invoice is reduced, two accounting documents are automatically created for the invoice document. With the first accounting document, the amount invoiced is posted in the vendor line. An additional line is generated on the invoice reduction account to partially offset this amount. With the second accounting document, the invoice reduction is posted in the form of a credit memo from the vendor. The offsetting entry to the vendor line is the invoice reduction account. Hence the invoice reduction account is always balanced off by two accounting documents within one transaction.
  • Provision for delivery costs (RUE)
  • Provisions are created for accrued delivery costs if a condition type for provisions is entered in the purchase order. They must be cleared manually at the time of invoice verification.
  • Taxes in case of transfer posting GI/GR (TXO)
  • This transaction/event key is only relevant to Brazil (nota fiscal).
  • Revenue/expense from revaluation (UMB)
  • This transaction/event key is used both in Inventory Management and in Invoice Verification if the standard price of a material has been changed and a movement or an invoice is posted to the previous period (at the previous price).
  • Expenditure/income from revaluation (UMD)
  • This account is the offsetting account for the BSD account. It is posted during the closing entries for the cumulation run of the material ledger and has to be defined for the same valuation areas.
  • Unplanned delivery costs (UPF)
  • Unplanned delivery costs are delivery costs (incidental procurement costs) that were not planned in a purchase order (e.g. freight, customs duty). In the SAP posting transaction in Logistics Invoice Verification, instead of distributing these unplanned delivery costs among all invoice items as hitherto, you have the option of posting them to a special account. A separate tax code can be used for this account.
  • Input tax, Purchasing (VST)
  • Transaction/event key for tax account determination within the "subsequent settlement" facility for debit-side settlement types. The key is needed in the settlement schema for tax conditions.
  • Inflation posting (WGB)
  • Transaction/event key that posts inflation postings to a different account, within the handling of inflation process for the period-end closing.
  • Goods issue, revaluation (inflation) (WGI)
  • This transaction/event key is used if already-posted goods issues have to be revaluated following the determination of a new market price within the framework of inflation handling.
  • Goods receipt, revaluation (inflation) (WGR)
  • This transaction/event key is used if already-effected transfer postings have to be revaluated following the determination of a new market price within the framework of inflation handling. This transaction is used for the receiving plant, whereas transaction WGI (goods issue, revaluation (inflation)) is used for the plant at which the goods are issued.
  • WIP from Price Differences (Internal Activity) (WPA)
  • When you use the WIP revaluation of the material ledger, the price variances from the actual price calculation that are to be assigned to the WIP stock of an activity type or a business process are posted to the WIP account for activities.
  • WIP from Price Differences (Material) (WPM)
  • When you use the WIP revaluation of the material ledger, the price and exchange rate differences that are to be assigned to the WIP stock of a material are posted to the WIP account for material.
  • GR/IR clearing (WRX)
  • Postings to the GR/IR clearing account occur in the case of goods and invoice receipts against purchase orders. For more on the GR/IR clearing account, refer to the SAP Library (documentation MM Material Valuation).
  • Caution
  • You must set the "Balances in local currency only" indicator for the GR/IR clearing account to enable the open items to be cleared. For more on this topic, see the field documentation.
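
A sketch of the GR/IR clearing flow for one purchase order item (invented figures): the goods receipt credits the WRX account, the invoice receipt debits it, so the account balances to zero and the open items can be cleared.

```python
# GR/IR clearing flow for a single purchase order item (invented figures).
entries = [
    ("goods receipt",   "credit", 500.00),  # offsetting debit goes to stock (BSX)
    ("invoice receipt", "debit",  500.00),  # offsetting credit goes to the vendor
]
balance = sum(amt if side == "debit" else -amt for _, side, amt in entries)
print(f"GR/IR balance after both postings: {balance:.2f}")  # 0.00
```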
  • GR/IR clearing for material ledger (WRY)
  • This transaction/event key is not used from Release 4.0 onwards.
  • Prior to 4.0, it was used for postings to the GR/IR clearing account if the material ledger was active. As of Release 4.0, the transaction is no longer necessary, since postings to the GR/IR account in parallel currencies are possible.
  • Customers who used the transaction WRY prior to Release 4.0 must make a transfer posting from the WRY account to the WRX account in order to ensure that the final balance on the WRY account is zero.
 
<template> <div class="page-container"> <el-row class="page-search"> <el-form :model="form" @submit.native.prevent label-position="left"> <el-row > <el-col :xs="24" :sm="12" :md="8" :lg="6"> <el-form-item label="部门名称:" label-width="80px" class="form-item"> <el-input v-model="form.orgName" size="small" class="input" clearable/> </el-form-item> </el-col> <el-col :xs="24" :sm="12" :md="8" :lg="6"> <el-form-item label="部门编码:" label-width="80px" class="form-item"> <el-input v-model="form.orgCode" size="small" class="input" clearable/> </el-form-item> </el-col> <el-col :xs="24" :sm="12" :md="8" :lg="6"> <el-form-item label="开始时间:" label-width="80px" class="form-item"> <el-date-picker class="input" v-model="form.startTime" type="date" placeholder="请选择开始时间" format="yyyy-MM-dd" value-format="yyyy-MM-dd" size="small" /> </el-form-item> </el-col> <el-col :xs="24" :sm="12" :md="8" :lg="6"> <el-form-item label="结束时间:" label-width="80px" class="form-item"> <el-date-picker class="input" v-model="form.endTime" type="date" placeholder="请选择结束时间" format="yyyy-MM-dd" value-format="yyyy-MM-dd" size="small" /> </el-form-item> </el-col> </el-row> </el-form> </el-row> <div class="page-btn"> <el-button-group> <el-button type="primary" size="mini" @click="search">查询</el-button> </el-button-group> </div> <div class="page-table"> <dl-table v-loading="table.loading" :data="table.data" :key="table.key" row-key="id" :border="true" height="100%" highlight-current-row @current-change="handleCurrentChange"> <el-table-column label="序号" width="80" align="center" fixed> <template v-slot="scope"> <span> {{ (page.current - 1 )*page.size+(scope.$index + 1) }}</span> </template> </el-table-column> <el-table-column prop="orgName" label="单位名称" show-overflow-tooltip min-width="300px" align="left"/> <el-table-column prop="orgCode" label="单位编码" min-width="200px" align="left"/> <el-table-column prop="personNum" label="违规人数" min-width="200px" align="right"> <template v-slot="scope"> <el-link type="primary" @click="toPerson(scope.row.orgCode)">{{scope.row.personNum}}</el-link> </template> </el-table-column> <el-table-column prop="num" label="违规次数" min-width="200px" align="right"> <template v-slot="scope"> <el-link type="primary" @click="toPerson(scope.row.orgCode)">{{scope.row.num}}</el-link> </template> </el-table-column> <el-table-column prop="comeLateNum" label="晚归次数" min-width="200px" align="right"> <template v-slot="scope"> <el-link type="primary" @click="toPerson(scope.row.orgCode,1)">{{scope.row.comeLateNum}}</el-link> </template> </el-table-column> <el-table-column prop="nightRiseNum" label="夜出次数" min-width="200px" align="right"> <template v-slot="scope"> <el-link type="primary" @click="toPerson(scope.row.orgCode,2)">{{scope.row.nightRiseNum}}</el-link> </template> </el-table-column> </dl-table> </div> <div class="page-pagination"> <el-pagination background class="pagination" :current-page="page.current" :page-sizes="[10, 20, 50, 100]" :page-size="page.size" layout="total, sizes, prev, pager, next, jumper" :total="page.total" @size-change="pageSizeChange" @current-change="pageCurrentChange" /> </div> </div> </template> <script> import useHome from '@/hooks/index.js' const { getFun, urlConfig:{ APARTMENT_ORG_VIOLATION_COUNT, } } = useHome() export default { name: 'ApartmentViolationOrgCount', data () { return { page: { total: 0, current :1, size: 20 }, form: { startTime: '', endTime: '' }, table: { loading: false, data: [], key: 1 }, currentRow: {}, parentName: '', } }, methods: { pageSizeChange(val) { this.page.size = val 
this.searchPage() }, pageCurrentChange(val) { this.page.current = val this.searchPage() }, //分页查询 search(){ this.page.current = 1; this.searchPage() }, searchPage() { this.table.loading = true const param = Object.assign({ current: this.page.current, size: this.page.size },{ ...this.form }) const filteredParam = Object.keys(param).reduce((acc, key) => { if (param[key] !== null && param[key] !== undefined && param[key] !== '') { acc[key] = param[key] } return acc }, {}); getFun(APARTMENT_ORG_VIOLATION_COUNT, filteredParam).then(res => { this.table.data = res.records this.page.total = res.total this.table.key += 1 }).finally(() => { this.table.loading = false; }); }, handleCurrentChange(val){ this.currentRow = val }, toPerson(orgCode,queryType){ this.$router.push({ name: 'ApartmentViolationPersonCount', query: { orgCode: orgCode, queryType: queryType } }) } }, created() { this.searchPage() } } </script> <style scoped lang="scss"> .page-container{ height:100%; display:flex; flex-direction:column; .page-search{ background-color:white; padding:5px; .form-item{ margin-bottom:0px; .input { width: 95%; } } .button { text-align: center; } } .page-btn{ margin-top:5px; background-color:white; padding:5px; } .page-table{ overflow: auto; flex:auto; margin-top:5px; background-color:white; padding:5px; } .page-pagination{ margin-top:5px; background-color:white; padding:5px; .pagination{ float:right; } } .custom-dialog { height: 200px; display: flex; flex-direction: column; .form-item{ margin-bottom:20px; .input { width: 90%; } } .el-dialog__body { flex: 1; overflow: auto; } } } </style> 初始化开始时间和结束时间为本月第一天和当前时间
最新发布
11-01
<template> <div> <div> <c-table :columns="materialColumns" :data-source="materialData" :sortConfig="{ showIcon: false }" :pagination="false" ref="materialTableRef" height="150" > </c-table> </div> <div class="middle-section"> <div class="action-buttons"> <a-button v-if="action != 'detail'" type="primary" @click="addPlanCompilation" :icon="h(PlusOutlined)" > 添加 </a-button> <a-button v-if="action != 'detail'" @click="autoAddMaterial" :icon="h(SyncOutlined)" > 自动添加物量 </a-button> <a-button v-if="action != 'detail'" type="primary" @click="getPlanAddTempList" :icon="h(SearchOutlined)" > 查询物量 </a-button> <!-- <a-button--> <!-- v-if="action != 'detail'"--> <!-- type="primary"--> <!-- danger--> <!-- @click="deleteSelectedMaterials"--> <!-- :icon="h(DeleteOutlined)"--> <!-- >--> <!-- 删除--> <!-- </a-button>--> <span style="margin-left: 16px; margin-top: 5px">计划时间:</span> <a-range-picker v-model:value="selectedDateRange" :disabled-date="disabledDate" :placeholder="['开始日期', '结束日期']" value-format="YYYY-MM-DD" /> <span v-if="action == 'add'" style="margin-left: 16px; margin-top: 5px">项目:</span> <c-project-select v-if="action == 'add'" v-model:value="projectNo" :fieldNames="fieldNames" /> <span style="margin-left: 16px; margin-top: 5px">小票号:</span> <a-input style="width: 200px" v-model:value="pipeNo"/> </div> <div class="summary-row"> <div class="summary-item"> <span>管数量:</span> <span>{{ totalPipes }}</span> </div> <div class="summary-item"> <span>重量:</span> <span>{{ totalWeight }} KG</span> </div> <div class="summary-item"> <span>寸口数:</span> <span>{{ totalInches }}</span> </div> </div> </div> <div> <c-table :columns="executionState.columns" :proxy-config="executionState.proxyConfig" :sortConfig="{ showIcon: false }" ref="executionTableRef" :row-selection="rowSelection" :rowKey="record => `${record.projectNo}_${record.pipeNo}_${record.pipeVersion}`" height="300" > <template #action="{ record }"> <a-button type="link" @click="editRow(record)">编辑</a-button> </template> </c-table> </div> <div> <c-table :columns="quotaWorkhourColumns" :data-source="quotaWorkhourData" :pagination="false" :sortConfig="{ showIcon: false }" ref="hoursTableRef" height="100" /> </div> <div class="footer-buttons"> <a-button @click="handleCancel">取消</a-button> <a-button v-if="action != 'detail'" type="primary" @click="generatePlan">生成计划</a-button> </div> </div> <c-modal v-model:open="modalVisible" title="添加" width="1600px" :footer="null"> <c-search-panel ref="searchPanelRef" :columns="addPlanTableState.columns.concat(extraColumns)" @search="onSearch"></c-search-panel> <c-table :columns="addPlanTableState.columns" :toolbar="addPlanTableState.toolbar" :sortConfig="{ showIcon: false }" :proxy-config="addPlanTableState.proxyConfig" :rowSelection="{ selectedRowKeys: addPlanTableState.selectedRowKeys, onChange: onPlanSelectChange }" @toolbar-button-click="onToolbarClick" ref="tableRef" height="400" > </c-table> </c-modal> <c-form-modal :isSubmitClose="false" ref="formModal" title="编辑" :columns="executionState.columns" @save="submitModal" > </c-form-modal> </template> <script setup> import {computed, h, nextTick, onActivated, reactive, ref, watch} from 'vue' import * as server from "@/packages/piping/api/cppf" import {useRoute, useRouter} from 'vue-router' import {PlusOutlined, SearchOutlined, SyncOutlined} from '@ant-design/icons-vue' import dayjs from "dayjs" import {message} from "ant-design-vue" import {getProjects} from "@/api/core"; import {getWorkSpaceList} from "@/packages/piping/api/basic"; import {user} from "@/utils"; const 
materialData = ref([]) const fieldNames = ref({value: 'projId', optionLabelProp: 'projId'}) const executionTableRef = ref(null) const cExecutionTable = computed(() => executionTableRef.value?.getTable()) const modalVisible = ref(false) const route = useRoute() const formModal = ref() const orgNo = computed(() => route.query.orgNo) const planNo = computed(() => route.query.planNo) const planVersion = computed(() => route.query.planVersion) const planDate = computed(() => route.query.planDate) const planStartDate = computed(() => route.query.planStartDate) const planEndDate = computed(() => route.query.planEndDate) const projectNo = ref(null) const pipeNo = ref(null) const projectNoList = computed(() => { const projectNos = route.query.projectNoList; return Array.isArray(projectNos) ? projectNos : (projectNos ? [projectNos] : []); }) const projectList = ref([]) const action = computed(() => route.query.action) const dynamicColumns = computed(() => { const columns = [ // 第一列保持不变 { title: '', dataIndex: 'matDefinition', width: 80, sorter: false } ] if (planDate.value) { const startDate = dayjs(planDate.value) const monthNames = ['one', 'two', 'three', 'four', 'five', 'six'] for (let i = 0; i < 6; i++) { const currentDate = startDate.add(i, 'month') const month = currentDate.month() + 1 columns.push({ title: `${month}月`, dataIndex: `${monthNames[i]}MonthData`, width: 100, }) } } return columns }) const materialColumns = dynamicColumns const executionState = reactive({ selectedRowKeys: [], selectedRows: [], proxyConfig: { autoLoad: false, ajax: { query: (pagination) => { if (action.value === 'edit' || action.value === 'detail') { return server.getPipePlanDespList({...pagination, ...executionConditionData.value}); } else if (action.value === 'add') { return server.getPlanAddTempList({...pagination, ...executionConditionData.value}); } }, }, }, columns: [ { title: "项目", dataIndex: "projectNo", width: 80, fixed: "left", type: "project", disabled: true, options: { options: [], fieldNames: {label: "projId", value: "projId"} }, }, { title: '小票号', fixed: "left", disabled: true, dataIndex: 'pipeNo', width: 150, }, { title: '版本', disabled: true, dataIndex: 'pipeVersion', width: 60, }, { title: "最新版", dataIndex: "isTopVersion", width: 100, disabled: true, type: "select", options: { options: [ {label: "Y", value: "Y"}, {label: "N", value: "N"} ] } }, { title: "是否删除", dataIndex: "isDelete", width: 100, disabled: true, type: "select", options: { options: [ {label: "Y", value: "Y"}, {label: "N", value: "N"} ] } }, { title: "是否暂停", dataIndex: "isPause", width: 100, disabled: true, type: "select", options: { options: [ {label: "Y", value: "Y"}, {label: "N", value: "N"} ] } }, { title: '工艺路线类型', dataIndex: 'craftLineType', formInvisible: true, width: 100, }, { title: '小票预制周期(天)', dataIndex: 'pipePreCyc', formInvisible: true, width: 130, }, { title: "四级基准计划", children: [ { title: "开始时间", dataIndex: "targetPlanStart", type: "date", formInvisible: true, width: 70, }, { title: "结束时间", dataIndex: "targetPlanEnd", type: "date", formInvisible: true, width: 70, } ], }, { title: "四级执行计划", children: [ { title: "开始时间", dataIndex: "correctBeginDate", type: "date", formInvisible: true, width: 70 }, { title: "结束时间", dataIndex: "lastPlanEnd", type: "date", formInvisible: true, width: 70 } ], }, { title: "下料", children: [ { title: "计划日期", dataIndex: "cutPlanDate", type: "date", width: 90 }, { title: "工位", dataIndex: "cutWorkSpaceId", width: 80, type: "select", formatter: ({row}) => row.cutPlanWorkSpaceNo, options: { options: 
[], fieldNames: {label: "workSpaceNo", value: "id"}, ajax: getWorkSpaceList({status: 'Y', workSpaceType: '下料'}).then((res) => { return res }) }, } ], }, { title: "装配", children: [ { title: "计划日期", dataIndex: "assyPlanDate", type: "date", width: 90 }, { title: "工位", dataIndex: "assyWorkSpaceId", width: 80, type: "select", formatter: ({row}) => row.assyPlanWorkSpaceNo, options: { options: [], fieldNames: {label: "workSpaceNo", value: "id"}, ajax: getWorkSpaceList({status: 'Y', workSpaceType: '一次组对,成品组对'}).then((res) => { return res }) }, } ], }, { title: '焊前报验计划日期', dataIndex: 'weldingPreInspPlanDate', type: "date", width: 130, }, { title: "焊接", children: [ { title: "计划日期", dataIndex: "weldingPlanDate", type: "date", width: 90 }, { title: "工位", dataIndex: "weldingWorkSpaceId", width: 80, type: "select", formatter: ({row}) => row.weldingPlanWorkSpaceNo, options: { options: [], fieldNames: {label: "workSpaceNo", value: "id"}, ajax: getWorkSpaceList({status: 'Y', workSpaceType: '一次焊接,成品焊接'}).then((res) => { return res }) }, } ], }, { title: '焊后报验计划日期', dataIndex: 'weldingPostInspPlanDate', type: "date", width: 130, }, { title: '装筐计划日期', dataIndex: 'inBasketPlanDate', type: "date", width: 110, }, { title: '安装计划日期', dataIndex: 'instNeedDate', type: "date", width: 110, formInvisible: true, }, { title: "操作", key: "action", scopedSlots: {customRender: "action"}, width: 60, fixed: "right", }, ] }) const quotaWorkhourColumns = [ {title: '', dataIndex: 'calculateType', width: 100, sorter: false}, {title: '下料(H)', dataIndex: 'cutQuotaWorkhour', width: 100, sorter: false}, {title: '装配(H)', dataIndex: 'assyQuotaWorkhour', width: 100, sorter: false}, {title: '焊接(H)', dataIndex: 'weldingQuotaWorkhour', width: 100, sorter: false}, {title: '集配(H)', dataIndex: 'handoverQuotaWorkhour', width: 100, sorter: false}, {title: '报验(H)', dataIndex: 'inspectionQuotaWorkhour', width: 100, sorter: false} ] const isDisabledComputed = computed(() => { return projectList.value.length == 1 }) const initialProjectNo = ref(null) const executionConditionData = ref({}) watch(projectList, (newList) => { if (newList.length == 1) { initialProjectNo.value = newList[0].projId } }, {immediate: true}) watch( () => [projectNo.value, pipeNo.value], ([newProjectNo, newPipeNo]) => { nextTick().then(() => { const [startPlanDate] = selectedDateRange.value Object.assign(executionConditionData.value, { orgNo: orgNo.value, planDate: startPlanDate, projectNoList: newProjectNo || '', pipeNo: newPipeNo || '' }); cExecutionTable.value?.commitProxy("reload"); }); }, {immediate: true} ); const addPlanTableState = reactive({ selectedRowKeys: [], toolbar: { buttons: [ { code: "addPlan", status: 'primary', icon: 'PlusOutlined', name: "添加" } ], }, proxyConfig: { autoLoad: false, ajax: { query: (pagination) => server.pipeDesignPlanPage({...pagination, ...conditionData.value}), }, }, columns: [ { title: "项目", dataIndex: "projectNo", width: 80, condition: true, sorter: false, type: "project", options: { projectList: projectList, options: [], fieldNames: {label: "projId", value: "projId"} }, disabled: isDisabledComputed, decorator: {initialValue: initialProjectNo, rules: [{required: true, message: '请选择项目!'}]}, }, { title: "基地", dataIndex: "orgNo", type: "buildCase", width: 100, disabled: true, decorator: {initialValue: orgNo}, condition: true, sorter: false, }, { title: "小票号", dataIndex: "pipeNo", width: 100, condition: true, conditionNotice: "%匹配,逗号相连", sorter: false, }, { title: '版本', disabled: true, dataIndex: 'pipeVersion', width: 60, }, { title: "预制周期", 
dataIndex: "preCycle", width: 80, sorter: false, }, { title: "安装需求时间", dataIndex: "instNeedDate", width: 100, type: "date", sorter: false, }, { title: "最新版", dataIndex: "isTopVersion", width: 100, type: "select", sorter: false, visible: false, options: { options: [ {label: "Y", value: "Y"}, {label: "N", value: "N"} ] } }, { title: "是否删除", dataIndex: "isDelete", width: 100, type: "select", sorter: false, visible: false, options: { options: [ {label: "Y", value: "Y"}, {label: "N", value: "N"} ] } }, { title: "是否暂停", dataIndex: "isPause", width: 100, type: "select", sorter: false, visible: false, options: { options: [ {label: "Y", value: "Y"}, {label: "N", value: "N"} ] } }, { title: "四级基准计划", children: [ { title: "开始时间", dataIndex: "targetPlanStart", type: "date", width: 70, sorter: false }, { title: "结束时间", dataIndex: "targetPlanEnd", type: "date", width: 70, sorter: false } ], sorter: false }, { title: "四级执行计划", children: [ { title: "开始时间", dataIndex: "correctBeginDate", type: "date", sorter: false, width: 70 }, { title: "结束时间", dataIndex: "lastPlanEnd", type: "date", sorter: false, width: 70 } ], sorter: false } ], }) const extraColumns = ref([ { title: "分段", dataIndex: "block", condition: true }, { title: "生产大区域", dataIndex: "bigArea", condition: true }, { title: "生产中区域", dataIndex: "middleArea", condition: true }, { title: "生成小区域", dataIndex: "smallArea", condition: true } ]) const totalPipes = computed(() => { return cExecutionTable.value?.getData()?.length || 0 }) const totalWeight = computed(() => { const data = cExecutionTable.value?.getData() || [] const sum = data.reduce((acc, item) => acc + (parseFloat(item.weight) || 0), 0) return sum.toFixed(2) }) const totalInches = computed(() => { const data = cExecutionTable.value?.getData() || [] const sum = data.reduce((acc, item) => acc + (parseFloat(item.inch) || 0), 0) return sum.toFixed(2) }) let conditionData = {} const tableRef = ref(null) const ctable = computed(() => tableRef.value?.getTable()) const selectedRow = ref([]) const executionData = ref([]) const quotaWorkhourData = ref([]) const rowSelection = computed(() => { return { selectedRowKeys: executionState.selectedRowKeys, onChange: (selectedRowKeys, selectedRows) => { executionState.selectedRowKeys = selectedRowKeys executionState.selectedRows = selectedRows } } }) const router = useRouter() const selectedDateRange = ref([ dayjs(planDate.value).startOf('month').format('YYYY-MM-DD'), dayjs(planDate.value).endOf('month').format('YYYY-MM-DD') ]) const disabledDate = (current) => { const today = dayjs().startOf('day') const currentMonth = dayjs(planDate.value).month() const currentYear = dayjs(planDate.value).year() return ( current && ( current.year() != currentYear || current.month() != currentMonth || current < today ) ); } // 搜索 const onSearch = (values) => { conditionData.value = values ctable.value.commitProxy("query", values) } const onToolbarClick = (target) => { switch (target.code) { case "addPlan": funAddPlan() break default: break } } const onPlanSelectChange = (selectedRowKeys, rows) => { addPlanTableState.selectedRowKeys = selectedRowKeys selectedRow.value = rows } onActivated(() => { executionData.value = [] quotaWorkhourData.value = [] executionConditionData.value = {} pipeNo.value = "" server.statisticsMatAttrMonth({ orgNo: orgNo.value, planDate: planDate.value, projectNoList: projectNoList.value.join(',') }).then((res) => { materialData.value = res.data }) if (action.value == "edit" || action.value == "detail") { selectedDateRange.value = 
      [dayjs(planStartDate.value).format('YYYY-MM-DD'), dayjs(planEndDate.value).format('YYYY-MM-DD')]
    executionConditionData.value = {
      planNo: planNo.value,
      planVersion: planVersion.value,
      projectNo: projectNoList.value.join(','),
      orgNo: orgNo.value
    }
    cExecutionTable.value?.commitProxy("query", executionConditionData.value);
  } else {
    const today = dayjs().startOf('day');
    const startOfMonth = dayjs(planDate.value).startOf('month');
    selectedDateRange.value = [
      startOfMonth < today ? today.format('YYYY-MM-DD') : startOfMonth.format('YYYY-MM-DD'),
      dayjs(planDate.value).endOf('month').format('YYYY-MM-DD')
    ];
    cExecutionTable.value?.commitProxy("reload");
  }
  calculateQuotaWorkhour()
})

const calculateQuotaWorkhour = () => {
  const [startPlanDate, endPlanDate] = selectedDateRange.value
  server.calculateQuotaWorkhour({
    orgNo: orgNo.value,
    planDate: planDate.value,
    projectNoList: projectNoList.value.join(','),
    projectNo: projectNoList.value.join(','),
    startPlanDate,
    endPlanDate,
    planNo: planNo.value,
    planVersion: planVersion.value,
    isAdd: action.value == "add" ? true : false
  }).then((res) => {
    console.log("res", res)
    if (res.data) {
      quotaWorkhourData.value = res.data.map(item => ({
        ...item,
        handoverQuotaWorkhour: item.handoverQuotaWorkhour ?? '-',
        inspectionQuotaWorkhour: item.inspectionQuotaWorkhour ?? '-'
      }));
    }
  })
}

const addPlanCompilation = () => {
  getProjects().then((res) => {
    projectList.value = res.data.filter(project =>
      projectNoList.value.includes(project.projId)
    )
  }).finally(() => {
    modalVisible.value = true
  });
}

const autoAddMaterial = () => {
  const [startPlanDate, endPlanDate] = selectedDateRange.value
  server.autoAddPlanPipe({
    orgNo: orgNo.value,
    planDate: planDate.value,
    projectNoList: projectNoList.value.join(','),
    startPlanDate,
    endPlanDate
  }).then(() => {
    message.warn("自动添加物量计算中...")
  })
}

const getPlanAddTempList = () => {
  const [startPlanDate] = selectedDateRange.value
  executionConditionData.value = {
    orgNo: orgNo.value,
    planDate: dayjs(startPlanDate).format('YYYY-MM-DD'),
    projectNoList: projectNoList.value.join(','),
    projectNo: projectNoList.value.join(','),
    planNo: planNo.value,
    planVersion: planVersion.value
  }
  cExecutionTable.value.commitProxy("query", executionConditionData.value)
  calculateQuotaWorkhour()
}

// const deleteSelectedMaterials = () => {
//   if (executionState.selectedRowKeys.length == 0) {
//     message.error("请至少选择一条数据")
//     return
//   }
//   executionState.selectedRowKeys.forEach((rowKey) => {
//     const row = cExecutionTable.value.getRowById(rowKey)
//     if (row) {
//       cExecutionTable.value.deleteRow(row)
//     }
//   })
//   executionState.selectedRowKeys = []
// }

const generatePlan = () => {
  const allRecords = cExecutionTable.value.getData()
  if (allRecords.length == 0) {
    message.error("请添加物量")
    return
  }
  const selectedPlanMonth = planDate.value;
  const [startPlanDate, endPlanDate] = selectedDateRange.value
  let planNoPrefix = '';
  switch (orgNo.value) {
    case 'C000030': planNoPrefix = 'Y'; break;
    case 'C000031': planNoPrefix = 'H'; break;
    case 'C000032': planNoPrefix = 'L'; break;
    default: planNoPrefix = '';
  }
  const paramsList = projectNoList.value.map(projectNo => {
    const newPlanNo = action.value == "add"
      ? planNoPrefix + 'P' + projectNo + dayjs(selectedPlanMonth).format('YYYYMM')
      : planNo.value;
    return {
      orgNo: orgNo.value,
      projectNo: projectNo,
      planStartDate: startPlanDate,
      planEndDate: endPlanDate,
      planNo: newPlanNo,
      isAdd: action.value == "add" ? true : false,
    };
  });
  cExecutionTable.value.validateEditFields().then(() => {
    server.generatePrePlan(paramsList).then(() => {
      message.success("保存成功")
      materialData.value = []
      executionData.value = []
      quotaWorkhourData.value = []
      router.push({ path: '/piping/cppf/cpPipePlan' })
    })
  })
}

const handleCancel = () => {
  emit('cancel')
}

const emit = defineEmits(['generate', 'cancel'])

const funAddPlan = () => {
  if (addPlanTableState.selectedRowKeys.length == 0) {
    message.error("请选择至少一条数据")
    return
  }
  executionConditionData.value = {
    orgNo: orgNo.value,
    planDate: dayjs(planDate.value).format('YYYY-MM-DD'),
    projectNoList: projectNoList.value.join(','),
    projectNo: projectNoList.value.join(','),
    planNo: planNo.value,
    planVersion: planVersion.value
  }
  selectedRow.value.map(item => {
    const selectedPlanMonth = planDate.value;
    let planNoPrefix = '';
    switch (orgNo.value) {
      case 'C000030': planNoPrefix = 'Y'; break;
      case 'C000031': planNoPrefix = 'H'; break;
      case 'C000032': planNoPrefix = 'L'; break;
      default: planNoPrefix = '';
    }
    item.planNo = action.value == "add"
      ? planNoPrefix + 'P' + item.projectNo + dayjs(selectedPlanMonth).format('YYYYMM')
      : planNo.value;
    item.tempKey = orgNo.value + "-" + user().number + "-" + dayjs(selectedPlanMonth).format('YYYYMM')
    item.planVersion = planVersion.value
    item.orgNo = orgNo.value
    item.planDate = dayjs(selectedPlanMonth).format('YYYY-MM-DD')
    return item
  })
  if (action.value == "add") {
    server.pipePlanItemTempSave(selectedRow.value).then(() => {
      message.success("保存成功")
    })
  } else if (action.value == "edit") {
    server.pipePlanEditItemTempSave(selectedRow.value).then(() => {
      message.success("保存成功")
    })
  }
  addPlanTableState.selectedRowKeys = []
  cExecutionTable.value.commitProxy("query", executionConditionData.value)
  modalVisible.value = false
}

const editRow = (record) => {
  formModal.value.show(record)
}

const submitModal = (record) => {
  const selectedPlanMonth = planDate.value;
  record.tempKey = orgNo.value + "-" + user().number + "-" + dayjs(selectedPlanMonth).format('YYYYMM')
  record.orgNo = orgNo.value
  record.planDate = dayjs(selectedPlanMonth).format('YYYY-MM-DD')
  if (action.value == "add") {
    let planNoPrefix = '';
    switch (orgNo.value) {
      case 'C000030': planNoPrefix = 'Y'; break;
      case 'C000031': planNoPrefix = 'H'; break;
      case 'C000032': planNoPrefix = 'L'; break;
      default: planNoPrefix = '';
    }
    record.planNo = planNoPrefix + 'P' + record.projectNo + dayjs(selectedPlanMonth).format('YYYYMM')
    server.pipePlanItemTempSave([record]).then(() => {
      message.success("修改成功")
      formModal.value.cancel()
      cExecutionTable.value.commitProxy("query", executionConditionData.value)
    })
  } else if (action.value == "edit") {
    record.planNo = planNo.value
    record.planVersion = planVersion.value
    record.planDate = dayjs(selectedPlanMonth).format('YYYY-MM-DD')
    server.savePipePlanEdit(record).then(() => {
      message.success("修改成功")
      formModal.value.cancel()
      cExecutionTable.value.commitProxy("query", executionConditionData.value)
    })
  }
}
</script>

<style lang="less" scoped>
.plan-compilation-container {
  display: flex;
  flex-direction: column;
  height: 100vh
}

.tables-container {
  flex: 1;
  display: flex;
  flex-direction: column;
  overflow: auto
}

.first-table { flex: 0 0 30% }
.second-table { flex: 0 0 40% }
.third-table { flex: 0 0 20% }

.middle-section {
  flex: 0 0 5%;
  display: flex;
  justify-content: space-between;
  align-items: center;
  padding: 10px;
  background-color: #fafafa;
  border-radius: 4px
}

.action-buttons {
  display: flex;
  gap: 10px
}

.summary-row {
  display: flex;
  gap: 30px
}

.summary-item {
  display: flex;
  gap: 5px;
  font-weight: bold
}

.footer-buttons {
  display: flex;
  justify-content: flex-end;
  gap: 10px;
  padding: 10px;
  border-top: 1px solid #f0f0f0;
  flex: 0 0 auto
}
</style>

Two changes are needed. First, add double-click protection to generatePlan so that repeated clicks cannot submit the same plan twice. Second, the following three summary statistics must change: they read cExecutionTable.value?.getData(), which with server-side paging only returns the rows of the current page, so with, for example, 5,122 rows in total, totalPipes shows only the page count. All three should report figures over the complete result set:

const totalPipes = computed(() => {
  return cExecutionTable.value?.getData()?.length || 0
})
const totalWeight = computed(() => {
  const data = cExecutionTable.value?.getData() || []
  const sum = data.reduce((acc, item) => acc + (parseFloat(item.weight) || 0), 0)
  return sum.toFixed(2)
})
const totalInches = computed(() => {
  const data = cExecutionTable.value?.getData() || []
  const sum = data.reduce((acc, item) => acc + (parseFloat(item.inch) || 0), 0)
  return sum.toFixed(2)
})
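A minimal sketch of both fixes follows, assuming the Vue 3 `<script setup>` context above (it reuses the component's existing `cExecutionTable`, `server`, `message`, `router` and `executionConditionData` bindings). The names `isGenerating`, `summary`, `loadSummary` and the endpoint `server.queryPlanPipeSummary` are hypothetical and would need to be adapted to the real backend API:

import { ref, computed } from 'vue'

// --- Fix 1: re-entrancy guard so a double click cannot submit twice ---
const isGenerating = ref(false)

const generatePlan = async () => {
  if (isGenerating.value) return            // ignore clicks while a submit is in flight
  isGenerating.value = true
  try {
    const allRecords = cExecutionTable.value.getData()
    if (allRecords.length == 0) { message.error("请添加物量"); return }
    // ...build paramsList exactly as in the original generatePlan...
    await cExecutionTable.value.validateEditFields()
    await server.generatePrePlan(paramsList)
    message.success("保存成功")
    router.push({ path: '/piping/cppf/cpPipePlan' })
  } finally {
    isGenerating.value = false              // release the lock even if validation or save fails
  }
}

// --- Fix 2: totals over the full result set instead of the current page ---
// `summary` holds aggregates for ALL matching rows; refresh it whenever the
// query conditions change (e.g. from onSearch and getPlanAddTempList).
const summary = ref({ total: 0, weightSum: 0, inchSum: 0 })

const loadSummary = () => {
  // hypothetical endpoint: the server computes the three aggregates itself
  server.queryPlanPipeSummary(executionConditionData.value).then((res) => {
    summary.value = res.data || { total: 0, weightSum: 0, inchSum: 0 }
  })
}

const totalPipes  = computed(() => summary.value.total)
const totalWeight = computed(() => (summary.value.weightSum || 0).toFixed(2))
const totalInches = computed(() => (summary.value.inchSum || 0).toFixed(2))

Binding `:loading="isGenerating"` (or `:disabled="isGenerating"`) on the save button gives visual feedback on top of the guard. If no summary endpoint can be added, the alternative is an unpaged query that returns all rows and a client-side reduce over them, exactly as the original computeds do over `getData()`; with thousands of rows, letting the server return the three sums is the cheaper option.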
# -*- coding: utf-8 -*-
# Re-added three measures that make the gates converge faster:
# 1. a larger beta_l0; 2. the learning rate of log_alpha raised to 2.0x the joint rate;
# 3. an entropy regulariser.
from __future__ import annotations

import math
import os
import random
import time
from collections import deque
from pathlib import Path
from typing import Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.cluster import KMeans
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import (
    silhouette_score,
    silhouette_samples,
    calinski_harabasz_score,
    davies_bouldin_score,
)
from sklearn.manifold import TSNE

try:
    import umap  # only umap-learn ships the UMAP class
    HAS_UMAP = hasattr(umap, "UMAP") or hasattr(umap, "umap_")
except ImportError:
    HAS_UMAP = False

from datetime import datetime
from matplotlib.patches import Rectangle
import warnings

# -------------------------- Global configuration -------------------------- #
class CFG:
    # Paths
    data_root: str = r"D:\dataset\TILDA_8class_73"
    save_root: str = r"D:\SCI_exp\7_29\exp_file"

    # Dataset & DL
    batch_size: int = 128
    num_workers: int = 0  # tune to your CPU
    img_size: int = 224   # FER2013 images are 48x48; we upscale for ResNet-18

    # Model dimensions (§3.5.1)
    d_backbone: int = 512
    d_proj: int = 128
    K_max: int = 3
    mem_size: int = 4096

    # Optimisation (§3.5.1)
    lr_warmup: float = 1e-3
    lr_joint: float = 3e-4
    lr_ft: float = 1e-4
    weight_decay: float = 5e-4
    n_epochs_warmup: int = 15  # 5
    n_epochs_joint: int = 150  # 20
    n_epochs_ft: int = 25      # 15

    # Loss hyper-params
    lambda1: float = 0.5       # push-pull
    alpha_proto: float = 0.1
    scale_ce: float = 30.0
    gamma_se: float = 20       # self-expression weight (previously 0.5)

    # ---------- Hard-Concrete ----------
    tau0_hc: float = 1.5       # initial temperature
    tau_min_hc: float = 0.15   # minimum temperature
    anneal_epochs_hc: int = 5
    gamma_hc: float = -0.1     # stretch lower bound
    zeta_hc: float = 1.1       # stretch upper bound
    beta_l0: float = 5e-2      # L0 regularisation coefficient
    hc_threshold: float = 0.35

    # Misc
    seed: int = 42
    device: str = "cuda" if torch.cuda.is_available() else "cpu"

# ---------- datetime ---------- #
def get_timestamp():
    """Current timestamp, formatted YYYYMMDD_HHMMSS."""
    return datetime.now().strftime("%Y%m%d_%H%M%S")

# ---------- diagnostics ---------- #
MAX_SAMPLED = 5_000  # None -> use all samples
timestamp = get_timestamp()  # current timestamp
DIAG_DIR = Path(CFG.save_root) / f"diagnostics_{timestamp}"  # folder name carries the timestamp
DIAG_DIR.mkdir(parents=True, exist_ok=True)

# -------------------------- Reproducibility -------------------------- #
torch.manual_seed(CFG.seed)
random.seed(CFG.seed)

# -------------------------- Utility functions -------------------------- #
def L2_normalise(t: torch.Tensor, dim: int = 1, eps: float = 1e-12) -> torch.Tensor:
    return F.normalize(t, p=2, dim=dim, eps=eps)

def pairwise_cosine(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Compute cosine similarity between all pairs in *x* and *y*."""
    x = L2_normalise(x)
    y = L2_normalise(y)
    return x @ y.T  # (N, M)

# -------------------------- Memory bank (FIFO queue) -------------------------- #
class MemoryBank:
    """Fixed-size FIFO queue storing (p, q, y_c).
All tensors are detached.""" def __init__(self, dim: int, size: int): self.size = size self.dim = dim self.ptr = 0 self.is_full = False # pre‑allocate self.p_bank = torch.zeros(size, dim, device=CFG.device) self.q_bank = torch.zeros_like(self.p_bank) self.y_bank = torch.zeros(size, dtype=torch.long, device=CFG.device) @torch.no_grad() def enqueue(self, p: torch.Tensor, q: torch.Tensor, y: torch.Tensor): b = p.size(0) if b > self.size: p, q, y = p[-self.size:], q[-self.size:], y[-self.size:] b = self.size idx = (torch.arange(b, device=CFG.device) + self.ptr) % self.size self.p_bank[idx] = p.detach() self.q_bank[idx] = q.detach() self.y_bank[idx] = y.detach() self.ptr = (self.ptr + b) % self.size if self.ptr == 0: self.is_full = True def get(self) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: valid = self.size if self.is_full else self.ptr return ( self.p_bank[:valid].detach(), self.q_bank[:valid].detach(), self.y_bank[:valid].detach(), ) # -------------------------- Projection heads -------------------------- # class MLPHead(nn.Module): def __init__(self, in_dim: int, out_dim: int): super().__init__() self.mlp = nn.Sequential( nn.Linear(in_dim, out_dim//2, bias=False), nn.BatchNorm1d(out_dim//2), nn.ReLU(inplace=True), nn.Linear(out_dim//2, out_dim, bias=True), ) def forward(self, x: torch.Tensor): return self.mlp(x) # -------------------------- Cosine classifier -------------------------- # class CosineLinear(nn.Module): """Cosine classifier with fixed scale *s* (Eq. CE).""" def __init__(self, in_dim: int, n_classes: int, s: float = CFG.scale_ce): super().__init__() self.s = s self.weight = nn.Parameter(torch.randn(n_classes, in_dim)) nn.init.xavier_uniform_(self.weight) def forward(self, x: torch.Tensor): # x ∈ ℝ^{B×d_p} x = L2_normalise(x) w = L2_normalise(self.weight) # logits = s * cos(θ) return self.s * (x @ w.T) # -------------------------- BaPSTO model -------------------------- # class BaPSTO(nn.Module): """Backbone + DASSER heads + BPGSNet prototypes & gates.""" def __init__(self, n_classes: int): super().__init__() # --- Backbone (ResNet‑18) ------------------------------------------------ resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1) pretrained_path = Path(CFG.save_root) / "resnet18_best_TILDA_8class_73_7446.pth" if pretrained_path.exists(): print(f"Loading pretrained weights from {pretrained_path}") pretrained = torch.load(pretrained_path, map_location=CFG.device, weights_only=True) # 创建临时模型来获取预训练权重的正确映射 temp_model = models.resnet18() temp_model.fc = nn.Linear(temp_model.fc.in_features, n_classes) temp_model.load_state_dict(pretrained["state_dict"], strict=False) # 复制预训练权重到我们的模型中(除了fc层) resnet_dict = resnet.state_dict() pretrained_dict = {k: v for k, v in temp_model.state_dict().items() if k in resnet_dict and 'fc' not in k} resnet_dict.update(pretrained_dict) resnet.load_state_dict(resnet_dict) print("✓ Successfully loaded pretrained backbone weights!") else: print(f"⚠️ Pretrained weights not found at {pretrained_path}. 
Using ImageNet weights.") # --- Backbone ------------------------------------------------ in_feat = resnet.fc.in_features # 512 resnet.fc = nn.Identity() self.backbone = resnet # project to d_backbone (512-64-128) #self.fc_backbone = nn.Linear(in_feat, CFG.d_backbone, bias=False) #nn.init.xavier_uniform_(self.fc_backbone.weight) # 这一句的 # --- Projection heads --------------------------------------------------- self.g_SA = MLPHead(CFG.d_backbone, CFG.d_proj) self.g_FV = MLPHead(CFG.d_backbone, CFG.d_proj) # Cosine classifier (coarse level) self.classifier = CosineLinear(CFG.d_proj, n_classes) # --- BPGSNet prototypes & gate logits ----------------------------------- self.prototypes = nn.Parameter( torch.randn(n_classes, CFG.K_max, CFG.d_proj) ) # (K_C, K_max, d_p) nn.init.xavier_uniform_(self.prototypes) self.log_alpha = nn.Parameter( torch.randn(n_classes, CFG.K_max) * 0.01 # 随机初始化 ) # (K_C, K_max) self.register_buffer("global_step", torch.tensor(0, dtype=torch.long)) # ---------------- Forward pass ---------------- # def forward(self, x: torch.Tensor, y_c: torch.Tensor, mem_bank: MemoryBank, use_bpgs: bool = True ) -> tuple[torch.Tensor, dict[str, float], torch.Tensor, torch.Tensor]: """Return full loss components (Section §3.3 & §3.4).""" B = x.size(0) # --- Backbone & projections ------------------------------------------- z = self.backbone(x) # (B, 512) p = L2_normalise(self.g_SA(z)) # (B, d_p) q = L2_normalise(self.g_FV(z)) # (B, d_p) bank_p, bank_q, bank_y = mem_bank.get() # ---------------- DASSER losses ---------------- # # L_SA, L_ortho, L_ce_dasser = self._dasser_losses( # p, q, y_c, bank_p, bank_q, bank_y # ) # total_loss = L_SA + L_ortho + L_ce_dasser # stats = { # "loss": total_loss.item(), # "L_SA": L_SA.item(), # "L_ortho": L_ortho.item(), # "L_ce_dasser": L_ce_dasser.item(), # } L_SA, L_ortho, L_ce_dasser, L_se = self._dasser_losses( p, q, y_c, bank_p, bank_q, bank_y ) total_loss = ( L_SA + L_ortho + L_ce_dasser + CFG.gamma_se * L_se # NEW ) stats = { "loss": total_loss.item(), "L_SA": L_SA.item(), "L_ortho": L_ortho.item(), "L_ce_dasser": L_ce_dasser.item(), "L_se": L_se.item(), # NEW } # ---------------- BPGSNet (conditional) -------- # if use_bpgs: L_ce_bpgs, L_proto, L_gate, coarse_logits = self._bpgs_losses(q, y_c) total_loss = total_loss + L_ce_bpgs + L_proto + L_gate stats.update({ "L_ce_bpgs": L_ce_bpgs.item(), "L_proto": L_proto.item(), "L_gate": L_gate.item(), }) else: coarse_logits = None return total_loss, stats, p.detach(), q.detach() # ---------------------- Internal helpers ---------------------- # def _dasser_losses( self, p: torch.Tensor, q: torch.Tensor, y_c: torch.Tensor, bank_p: torch.Tensor, bank_q: torch.Tensor, bank_y: torch.Tensor, ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: """ DASSER 损失: • 语义对齐 L_SA • 正交 L_ortho • 粗粒度 CE L_ce • 自表示 L_se (NEW) """ # ---------- 拼 batch + memory ---------- # p_all = torch.cat([p, bank_p], dim=0) if bank_p.numel() > 0 else p q_all = torch.cat([q, bank_q], dim=0) if bank_q.numel() > 0 else q y_all = torch.cat([y_c, bank_y], dim=0) if bank_y.numel() > 0 else y_c # ---------- 1) 语义对齐 (原有) ---------- # G = pairwise_cosine(p_all, p_all) # (N,N) :contentReference[oaicite:2]{index=2} G.fill_diagonal_(0.0) same = y_all.unsqueeze(0) == y_all.unsqueeze(1) diff = ~same L_SA = ((same * (1 - G)).sum() + CFG.lambda1 * (diff * G.clamp_min(0)).sum()) / (p_all.size(0) ** 2) # ---------- 2) 正交 (原有) --------------- # L_ortho = (1.0 / CFG.d_proj) * (p_all @ q_all.T).pow(2).sum() # ---------- 3) 自表示 (NEW) 
-------------- # C_logits = pairwise_cosine(p_all, p_all) # 再算一次以免受上一步改动 C_logits.fill_diagonal_(-1e4) # 置 −∞ → softmax≈0 C = F.softmax(C_logits, dim=1) # 行归一化 :contentReference[oaicite:3]{index=3} Q_recon = C @ q_all # 线性重构 L_se = F.mse_loss(Q_recon, q_all) # :contentReference[oaicite:4]{index=4} # ---------- 4) 粗粒度 CE (原有) ---------- # logits_coarse = self.classifier(p) L_ce = F.cross_entropy(logits_coarse, y_c) return L_SA, L_ortho, L_ce, L_se # ---------------------- 放到 BaPSTO 类里,直接替换原函数 ---------------------- # def _bpgs_losses( self, q: torch.Tensor, y_c: torch.Tensor ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: """ 计算 BPGSNet 损失(正确的 log-sum-exp 版) """ B = q.size(0) # q是batch*128的矩阵,获得批次大小 K_C, K_M = self.prototypes.size(0), self.prototypes.size(1) # K_C 是类别数,K_M 是每个类别的原型数 # (1) 欧氏距离 d = ((q.unsqueeze(1).unsqueeze(2) - self.prototypes.unsqueeze(0)) ** 2).sum(-1) # (B,K_C,K_M) s = 30.0 # ===== (2) 退火温度 τ ===== # τ 线性退火 epoch = self.global_step.item() / self.steps_per_epoch tau = max(CFG.tau_min_hc, CFG.tau0_hc - (CFG.tau0_hc - CFG.tau_min_hc) * min(1., epoch / CFG.anneal_epochs_hc)) # ----- (3) Hard- ----- log_alpha = self.log_alpha # (C,K) z, _s = self._sample_hardConcrete(log_alpha, tau) # z: (C,K) g = z.unsqueeze(0) # (1,C,K) 广播到 batch # (1,C,K) # ----- (4) coarse logits ----- mask_logits = -d * s + torch.log(g + 1e-12) # (B,C,K) coarse_logits = torch.logsumexp(mask_logits, dim=2) # (B,C) # ----- (5) losses ----- L_ce = F.cross_entropy(coarse_logits, y_c) y_hat = torch.softmax(mask_logits.detach(), dim=2) # stop-grad L_proto = CFG.alpha_proto * (y_hat * d).mean() # ---------- Hard-Concrete 的 L0 正则 ---------- temp = (log_alpha - tau * math.log(-CFG.gamma_hc / CFG.zeta_hc)) # (C,K) p_active = torch.sigmoid(temp) # 激活概率 p_active 是解析期望 pa(z大于0) # 新增加得loss pa = torch.sigmoid(log_alpha) entropy_penalty = 0.05 * (pa * torch.log(pa + 1e-8) + (1-pa) * torch.log(1-pa + 1e-8)).mean() # 新增加得loss,控制全局稀疏率 L_gate = CFG.beta_l0 * p_active.mean() - entropy_penalty # L0 正则 beta_l0 控控制全局稀疏率 return L_ce, L_proto, L_gate, coarse_logits def _sample_hardConcrete(self, log_alpha, tau): """return z ~ HardConcrete, and its stretched unclipped \tilde z""" u = torch.rand_like(log_alpha).clamp_(1e-6, 1-1e-6) s = torch.sigmoid((log_alpha + torch.log(u) - torch.log(1-u)) / tau) s = s * (CFG.zeta_hc - CFG.gamma_hc) + CFG.gamma_hc # stretch z_hard = s.clamp(0.0, 1.0) z = z_hard + (s - s.detach()) # ST estimator,让梯度穿过 return z, s # z用于前向, s用于梯度 # -------------------------- K-means++ initialisation -------------------------- # @torch.no_grad() def kmeans_init(model: BaPSTO, loader: DataLoader): """Use q‑features to initialise prototypes with K‑means++ (§3.4.1).""" print("[Init] Running K‑means++ for prototype initialisation...") model.eval() all_q, all_y = [], [] for x, y in loader: x = x.to(CFG.device) z = L2_normalise(model.g_FV(model.backbone(x))) all_q.append(z.cpu()) all_y.append(y) all_q = torch.cat(all_q) # (N, d_p) all_y = torch.cat(all_y) # (N,) for c in range(model.prototypes.size(0)): feats = all_q[all_y == c] kmeans = KMeans( n_clusters=CFG.K_max, init="k-means++", n_init=10, max_iter=100, random_state=CFG.seed, ).fit(feats.numpy()) centroids = torch.from_numpy(kmeans.cluster_centers_).to(CFG.device) centroids = L2_normalise(centroids) # (K_max, d_p) model.prototypes.data[c] = centroids print("[Init] Prototype initialisation done.") # -------------------------- Training utilities -------------------------- # def accuracy(output: torch.Tensor, target: torch.Tensor) -> float: 
"""Compute top‑1 accuracy (coarse).""" with torch.no_grad(): pred = output.argmax(dim=1) correct = pred.eq(target).sum().item() return correct / target.size(0) @torch.no_grad() def _collect_Q_labels(model: BaPSTO, loader: DataLoader): """遍历 *loader*,返回 (Q features, coarse-ID, proto-ID);采样上限 MAX_SAMPLED.""" model.eval() qs, cls, subs = [], [], [] for x, y in loader: x = x.to(CFG.device) q = L2_normalise(model.g_FV(model.backbone(x))) # (B,d) # —— 预测最近原型 idx —— # d = ((q.unsqueeze(1).unsqueeze(2) - model.prototypes.unsqueeze(0))**2).sum(-1) # (B,C,K) proto_id = d.view(d.size(0), -1).argmin(dim=1) # flatten idx = C*K + k qs.append(q.cpu()) cls.append(y) subs.append(proto_id.cpu()) if MAX_SAMPLED and (sum(len(t) for t in qs) >= MAX_SAMPLED): break Q = torch.cat(qs)[:MAX_SAMPLED] # (N,d) Yc = torch.cat(cls)[:MAX_SAMPLED] # coarse Ysub = torch.cat(subs)[:MAX_SAMPLED] # pseudo-fine return Q.numpy(), Yc.numpy(), Ysub.numpy() def _plot_heatmap(mat: np.ndarray, title: str, path: Path, boxes: list[tuple[int,int]] | None = None): """ mat : 排好序的相似度矩阵 boxes : [(row_start,row_end), ...];坐标在排序后的索引系中 """ plt.figure(figsize=(6, 5)) ax = plt.gca() im = ax.imshow(mat, cmap="viridis", aspect="auto") plt.colorbar(im) if boxes: # 逐个 coarse-class 画框 for s, e in boxes: w = e - s rect = Rectangle((s - .5, s - .5), w, w, linewidth=1.5, edgecolor="white", facecolor="none") ax.add_patch(rect) plt.title(title) plt.tight_layout() plt.savefig(path, dpi=300) plt.close() def compute_and_save_diagnostics(model: BaPSTO, loader: DataLoader, tag: str): """ • 计算三个内部指标并保存 csv • 绘制五张图 (C heatmap, t-SNE / UMAP, Laplacian spectrum, Silhouette bars, Gate heatmap(opt)) """ print(f"[Diag] computing metrics ({tag}) ...") timestamp = get_timestamp() Q, Yc, Ysub = _collect_Q_labels(model, loader) # ========== 1) 聚类指标 ========== # sil = silhouette_score(Q, Ysub, metric="cosine") ch = calinski_harabasz_score(Q, Ysub) db = davies_bouldin_score(Q, Ysub) pd.DataFrame( {"tag":[tag], "silhouette":[sil], "calinski":[ch], "davies":[db]} ).to_csv(DIAG_DIR / f"cluster_metrics_{tag}_{timestamp}.csv", index=False) # ========== 2) C heatmap & Laplacian ========== # GRAPH_LEVEL = 'coarse' # ← 这里换 'sub' 就看细粒度--------------------------------------------------- # ① —— 相似度矩阵(始终基于所有样本,用来画热力图) —— # P_all = Q @ Q.T / np.linalg.norm(Q, axis=1, keepdims=True) / np.linalg.norm(Q, axis=1)[:, None] np.fill_diagonal(P_all, -1e4) # 取消自环 C_heat = torch.softmax(torch.tensor(P_all), dim=1).cpu().numpy() # —— 画热力图:完全沿用旧逻辑,不受 GRAPH_LEVEL 影响 —— # order = np.lexsort((Ysub, Yc)) # 先 coarse 再 sub #order = np.argsort(Yc) # 只按粗类别拍平---------------------- # —— 计算每个 coarse-class 的起止行() —— # coarse_sorted = Yc[order] bounds = [] # [(start,end),...] 
start = 0 for i in range(1, len(coarse_sorted)): if coarse_sorted[i] != coarse_sorted[i-1]: bounds.append((start, i)) # [start, end) start = i bounds.append((start, len(coarse_sorted))) # —— 绘图,并把边界传给 boxes 参数 —— # _plot_heatmap(C_heat[order][:, order], f"C heatmap ({tag})", DIAG_DIR / f"C_heatmap_{tag}_{timestamp}.png", boxes=bounds) # ② —— 针对 Laplacian 的图,可选按 coarse/sub 屏蔽 —— # P_graph = P_all.copy() # 从全局矩阵复制一份 if GRAPH_LEVEL == 'coarse': P_graph[Yc[:, None] != Yc[None, :]] = -1e4 # 只留同 coarse 的边 elif GRAPH_LEVEL == 'sub': P_graph[Ysub[:, None] != Ysub[None, :]] = -1e4 # 只留同子簇的边 C_graph = torch.softmax(torch.tensor(P_graph), dim=1).cpu().numpy() D = np.diag(C_graph.sum(1)) L = D - (C_graph + C_graph.T) / 2 eigs = np.sort(np.linalg.eigvalsh(L))[:30] plt.figure(); plt.plot(eigs, marker='o') plt.title(f"Laplacian spectrum ({GRAPH_LEVEL or 'global'} | {tag})") plt.tight_layout() plt.savefig(DIAG_DIR / f"laplacian_{tag}_{timestamp}.png", dpi=300); plt.close() # ========== 3) t-SNE / UMAP (带图例 & 色彩 ≤20) ========== # warnings.filterwarnings("ignore", message="n_jobs value 1") focus_cls = 1#None # ← 若只看 coarse ID=3,把它改成 3 sel = slice(None) if focus_cls is None else (Yc == focus_cls) Q_sel, Ysub_sel = Q[sel], Ysub[sel] # -- 选 UMAP 或 t-SNE -- if HAS_UMAP: # :contentReference[oaicite:2]{index=2} reducer_cls = umap.UMAP if hasattr(umap, "UMAP") else umap.umap_.UMAP reducer = reducer_cls(n_neighbors=30, min_dist=0.1, random_state=CFG.seed) method = "UMAP" else: reducer = TSNE(perplexity=30, init="pca", random_state=CFG.seed) method = "t-SNE" emb = reducer.fit_transform(Q_sel) # (N,2) # ---------- scatter ---------- # unique_sub = np.unique(Ysub_sel) try: # 新版 Matplotlib (≥3.7) cmap = plt.get_cmap("tab20", min(len(unique_sub), 20)) except TypeError: # 旧版 Matplotlib (<3.7) cmap = plt.cm.get_cmap("tab20", min(len(unique_sub), 20)) plt.figure(figsize=(5, 5)) for i, s_id in enumerate(unique_sub): pts = Ysub_sel == s_id plt.scatter(emb[pts, 0], emb[pts, 1], color=cmap(i % 20), s=6, alpha=0.7, label=str(s_id) if len(unique_sub) <= 20 else None) if len(unique_sub) <= 20: plt.legend(markerscale=2, bbox_to_anchor=(1.02, 1), borderaxespad=0.) 
title = f"{method} ({tag})" if focus_cls is None else f"{method} cls={focus_cls} ({tag})" plt.title(title) plt.tight_layout() plt.savefig(DIAG_DIR / f"embed_{tag}_{timestamp}.png", dpi=300) plt.close() # ========== 4) Silhouette bars ========== # sil_samples = silhouette_samples(Q, Ysub, metric="cosine") order = np.argsort(Ysub) plt.figure(figsize=(6,4)) plt.barh(np.arange(len(sil_samples)), sil_samples[order], color="steelblue") plt.title(f"Silhouette per sample ({tag})"); plt.xlabel("coefficient") plt.tight_layout(); plt.savefig(DIAG_DIR / f"silhouette_bar_{tag}_{timestamp}.png", dpi=300); plt.close() print(f"[Diag] saved to {DIAG_DIR}") def create_dataloaders() -> Tuple[DataLoader, DataLoader, int]: """Load train/val as ImageFolder and return dataloaders + K_C.""" train_dir = Path(CFG.data_root) / "train" val_dir = Path(CFG.data_root) / "test" classes = sorted([d.name for d in train_dir.iterdir() if d.is_dir()]) K_C = len(classes) transform_train = transforms.Compose( [ transforms.Grayscale(num_output_channels=3), transforms.Resize((CFG.img_size, CFG.img_size)), transforms.RandomHorizontalFlip(), transforms.RandomRotation(10), transforms.RandomResizedCrop(CFG.img_size, scale=(0.8, 1.0)), transforms.ToTensor(), transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3), ] ) transform_val = transforms.Compose( [ transforms.Grayscale(num_output_channels=3), transforms.Resize((CFG.img_size, CFG.img_size)), transforms.ToTensor(), transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3), ] ) train_ds = datasets.ImageFolder(str(train_dir), transform=transform_train) val_ds = datasets.ImageFolder(str(val_dir), transform=transform_val) train_loader = DataLoader( train_ds, batch_size=CFG.batch_size, shuffle=True, num_workers=CFG.num_workers, pin_memory=True, drop_last=True, ) val_loader = DataLoader( val_ds, batch_size=CFG.batch_size, shuffle=False, num_workers=CFG.num_workers, pin_memory=True, ) return train_loader, val_loader, K_C # -------------------------- Main training routine -------------------------- # def train(): best_ckpt_path = None # 记录最佳 joint 权重的完整文件名 best_acc = 0.0 best_epoch = -1 train_loader, val_loader, K_C = create_dataloaders() model = BaPSTO(K_C).to(CFG.device) model.steps_per_epoch = len(train_loader) #print(model) mb = MemoryBank(dim=CFG.d_proj, size=CFG.mem_size) warmup_weights_path = Path(CFG.save_root) / "bapsto_warmup_complete.pth" # 检查是否存在预保存的warm-up权重 if warmup_weights_path.exists(): print(f"找到预训练的warm-up权重,正在加载: {warmup_weights_path}") checkpoint = torch.load(warmup_weights_path, map_location=CFG.device,weights_only=True) model.load_state_dict(checkpoint["state_dict"]) print("✓ 成功加载warm-up权重,跳过warm-up阶段!") else: # ---------- Phase 1: DASSER warm‑up (backbone frozen) ---------- # print("\n==== Phase 1 | DASSER warm‑up ====") for p in model.backbone.parameters(): p.requires_grad = False # —— 冻结 prototypes 和 gate_logits —— # model.prototypes.requires_grad = False model.log_alpha.requires_grad = False # —— 冻结 prototypes 和 gate_logits —— # optimizer = optim.AdamW( filter(lambda p: p.requires_grad, model.parameters()), lr=CFG.lr_warmup, weight_decay=CFG.weight_decay, betas=(0.9, 0.95), ) scheduler = CosineAnnealingLR(optimizer, T_max=len(train_loader) * CFG.n_epochs_warmup) for epoch in range(CFG.n_epochs_warmup): run_epoch(train_loader, model, mb, optimizer, scheduler, epoch, phase="warmup") # 保存warm-up完成后的权重 torch.save( {"epoch": CFG.n_epochs_warmup, "state_dict": model.state_dict()}, warmup_weights_path ) print(f"✓ Warm-up完成,模型权重已保存至: {warmup_weights_path}") # after warm‑up loop, 
before Phase 2 header kmeans_init(model, train_loader) # <─ 新增 print("K‑means initialisation done. Prototypes are now ready.") compute_and_save_diagnostics(model, train_loader, tag="after_kmeans") # ---------- Phase 2: Joint optimisation (all params trainable) ---------- # print("\n==== Phase 2 | Joint optimisation ====") for p in model.backbone.parameters(): p.requires_grad = True # —— 解冻 prototypes 和 gate logits —— # model.prototypes.requires_grad = True model.log_alpha.requires_grad = True # —— 解冻 prototypes 和 gate logits —— # param_groups = [ {"params": [p for n,p in model.named_parameters() if n!='log_alpha'], "lr": CFG.lr_joint}, {"params": [model.log_alpha], "lr": CFG.lr_joint * 2.0} ] optimizer = optim.AdamW( param_groups, weight_decay=CFG.weight_decay, betas=(0.9, 0.95), ) scheduler = CosineAnnealingLR(optimizer, T_max=len(train_loader) * CFG.n_epochs_joint) best_acc = 0.0 best_epoch = -1 epochs_no_improve = 0 for epoch in range(CFG.n_epochs_joint): stats = run_epoch(train_loader, model, mb, optimizer, scheduler, epoch, phase="joint") # ─────────────────────────────────────────── if (epoch + 1) % 1 == 0: # 每个 epoch 都跑验证 # —— 每 5 个 epoch 额外保存 Gate & 聚类诊断 —— # if (epoch + 1) % 5 == 0: timestamp = get_timestamp() gate_prob = torch.sigmoid(model.log_alpha.detach().cpu()) _plot_heatmap( gate_prob, f"Gate prob (ep{epoch+1})", DIAG_DIR / f"gate_ep{epoch+1}_{timestamp}.png", ) compute_and_save_diagnostics( model, train_loader, tag=f"joint_ep{epoch+1}" ) # ---------- 统计指标 ---------- val_loss, val_acc, per_cls_acc, auc = metrics_on_loader(val_loader, model) train_acc = metrics_on_loader (train_loader, model)[1] # 只取整体训练准确率 print(f"[Val] ep {epoch+1:02d} | loss {val_loss:.3f} | " f"acc {val_acc:.3f} | train-acc {train_acc:.3f} |\n" f" per-cls-acc {np.round(per_cls_acc, 2)} |\n" f" AUC {np.round(auc, 2)}") # —— checkpoint —— # if val_acc > best_acc: best_acc = val_acc best_epoch = epoch epochs_no_improve = 0 best_ckpt_path = save_ckpt(model, epoch, tag="best_joint", acc=val_acc, optimizer=optimizer, scheduler=scheduler) # ← 传进去 else: epochs_no_improve += 1 # —— gate 修剪 —— # if epoch+1 >= 10: # 先训练 10 个 epoch 再剪 prune_gates(model, threshold=0.25, min_keep=1, hc_threshold=CFG.hc_threshold) # —— early stopping —— # if epochs_no_improve >= 50: print("Early stopping triggered in joint phase.") break # ─────────────────────────────────────────── model.global_step += 1 print(model.prototypes.grad.norm()) # 非零即可证明 L_proto 对原型确实有更新压力 model.global_step.zero_() # Joint训练结束后,重命名最佳模型文件,添加准确率 best_acc_int = round(best_acc * 1e4) # 将0.7068转换为7068 joint_ckpt_path = Path(CFG.save_root) / "bapsto_best_joint.pth" renamed_path = Path(CFG.save_root) / f"bapsto_best_joint_{best_acc_int}.pth" if joint_ckpt_path.exists(): joint_ckpt_path.rename(renamed_path) best_ckpt_path = renamed_path # ★ 同步路径,供 fine-tune 使用 print(f"✓ 最优联合训练模型已重命名: {renamed_path.name} " f"(epoch {best_epoch+1}, ACC: {best_acc:.4f})") # ---------- Phase 3: Fine‑tune (prototypes & gates frozen) ---------- # print("\n==== Phase 3 | Fine‑tuning ====") best_ft_acc = 0.0 best_ft_epoch = -1 # 若有最佳 joint 权重则加载 if best_ckpt_path is not None and Path(best_ckpt_path).exists(): ckpt = torch.load(best_ckpt_path, map_location=CFG.device, weights_only=True) model.load_state_dict(ckpt["state_dict"]) epoch_loaded = ckpt["epoch"] + 1 # 以 1 为起点的人类可读轮次 acc_loaded = ckpt.get("acc", -1) # 若早期代码没存 acc,给个占位 print(f"✓ loaded best joint ckpt (epoch {epoch_loaded}, ACC {acc_loaded:.4f})") else: print("⚠️ best_ckpt_path 未找到,继续沿用上一轮权重。") for param in [model.prototypes, 
                  model.log_alpha]:
        param.requires_grad = False

    for p in model.parameters():
        if p.requires_grad:
            p.grad = None  # clear any stale gradients

    optimizer = optim.AdamW(
        filter(lambda p: p.requires_grad, model.parameters()),
        lr=CFG.lr_ft,
        weight_decay=CFG.weight_decay,
        betas=(0.9, 0.95),
    )
    scheduler = CosineAnnealingLR(optimizer, T_max=len(train_loader) * CFG.n_epochs_ft)

    for epoch in range(CFG.n_epochs_ft):
        run_epoch(train_loader, model, mb, optimizer, scheduler, epoch, phase="finetune")
        if (epoch + 1) % 1 == 0:  # evaluate every epoch
            val_acc = evaluate(val_loader, model)
            print(f"[FT] ep {epoch+1:02d} | acc {val_acc:.4f}")
            # 1) per-epoch snapshot (optional)
            save_ckpt(model, epoch, tag="ft")
            # 2) keep track of the fine-tune best
            if val_acc > best_ft_acc:
                best_ft_acc = val_acc
                best_ft_epoch = epoch
                best_ft_acc_int = round(best_ft_acc * 1e4)  # e.g. 0.7068 -> 7068
                best_ft_ckpt_path = Path(CFG.save_root) / f"bapsto_best_ft_{best_ft_acc_int}.pth"
                # save_ckpt returns the (timestamped) file it wrote; move it onto the
                # stable, accuracy-tagged name so only one best-ft checkpoint is kept
                saved_path = save_ckpt(model, epoch, tag="best_ft", acc=val_acc)
                if best_ft_ckpt_path.exists():
                    best_ft_ckpt_path.unlink()
                Path(saved_path).rename(best_ft_ckpt_path)
                print(f"✓ best fine-tune model kept as: {best_ft_ckpt_path.name} "
                      f"(epoch {best_ft_epoch+1}, ACC: {best_ft_acc:.4f})")

    print(f"Training completed. Best FT ACC {best_ft_acc:.4f}")

# -------------------------- Helper functions -------------------------- #
def run_epoch(loader, model, mem_bank: MemoryBank, optimizer, scheduler, epoch, phase: str):
    model.train()
    running = {"loss": 0.0}
    use_bpgs = (phase != "warmup")
    for step, (x, y) in enumerate(loader):
        x, y = x.to(CFG.device), y.to(CFG.device)
        optimizer.zero_grad()
        loss, stats, p_det, q_det = model(x, y, mem_bank, use_bpgs=use_bpgs)
        loss.backward()
        optimizer.step()
        scheduler.step()
        mem_bank.enqueue(p_det, q_det, y.detach())

        # accumulate
        for k, v in stats.items():
            running[k] = running.get(k, 0.0) + v

        # ***** Hard-Concrete gradient health check *****
        if phase == "joint" and step % 100 == 0:
            # --- Hard-Concrete monitoring ---
            tau_now = max(
                CFG.tau_min_hc,
                CFG.tau0_hc - (CFG.tau0_hc - CFG.tau_min_hc)
                * min(1.0, model.global_step.item() / (model.steps_per_epoch * CFG.anneal_epochs_hc))
            )
            pa = torch.sigmoid(model.log_alpha)  # (C,K)
            p_act = pa.mean().item()
            alive = (pa > 0.4).float().sum().item()  # 0.4 matches the prune threshold
            total = pa.numel()                       # = C x K
            grad_nm = (model.log_alpha.grad.detach().norm().item()
                       if model.log_alpha.grad is not None else 0.0)
            print(f"[DBG] τ={tau_now:.3f} p̄={pa.mean():.3f} "
                  f"min={pa.min():.2f} max={pa.max():.2f} "
                  f"alive={(pa>0.25).sum().item()}/{pa.numel()} "
                  f"‖∇α‖={grad_nm:.2e}")
        # ***** end of monitoring block *****

        if (step + 1) % 50 == 0:
            avg_loss = running["loss"] / (step + 1)
            print(
                f"Epoch[{phase} {epoch+1}] Step {step+1}/{len(loader)} | "
                f"loss: {avg_loss:.4f}",
                end="\r",
            )

    # epoch summary
    print(f"Epoch [{phase} {epoch+1}]: " + ', '.join(f"{k}: {running[k]:.4f}" for k in running))
    return running

@torch.no_grad()
def evaluate(loader, model):
    model.eval()
    total_correct, total_samples = 0, 0
    K_C, K_M = model.prototypes.size(0), model.prototypes.size(1)
    gate_hard = (model.log_alpha > 0).float()  # (K_C,K_M)
    for x, y in loader:
        x, y = x.to(CFG.device), y.to(CFG.device)
        b = x.size(0)
        # --- features & distances ---
        q = L2_normalise(model.g_FV(model.backbone(x)))  # (b,d_p)
        d = ((q.unsqueeze(1).unsqueeze(2) - model.prototypes.unsqueeze(0))**2).sum(-1)  # (b,K_C,K_M)
        s = 30.0  # scale for logits
        # --- sub-cluster logits & coarse logits ---
        mask_logits = -d * s + torch.log(gate_hard + 1e-12)  # (b,K_C,K_M); in log space the two terms add
        coarse_logits = torch.logsumexp(mask_logits, dim=2)  # (b,K_C)
        # --- accuracy bookkeeping ---
        total_correct +=
coarse_logits.argmax(1).eq(y).sum().item() total_samples += b return total_correct / total_samples @torch.no_grad() def metrics_on_loader(loader, model): """ 返回: loss_avg – 均值交叉熵 acc – overall top-1 per_cls_acc (C,) – 每个 coarse 类别准确率 auc (C,) – 每类 one-vs-rest ROC-AUC """ model.eval() n_cls = model.prototypes.size(0) total_loss, total_correct, total_samples = 0., 0, 0 # —— 用来存储全量 logits / labels —— # logits_all, labels_all = [], [] ce_fn = nn.CrossEntropyLoss(reduction="sum") # 累加再除 for x, y in loader: x, y = x.to(CFG.device), y.to(CFG.device) # 前向 with torch.no_grad(): q = L2_normalise(model.g_FV(model.backbone(x))) d = ((q.unsqueeze(1).unsqueeze(2) - model.prototypes.unsqueeze(0))**2).sum(-1) logits = torch.logsumexp(-d*30 + torch.log((model.log_alpha>0).float()+1e-12), dim=2) total_loss += ce_fn(logits, y).item() total_correct += logits.argmax(1).eq(y).sum().item() total_samples += y.size(0) logits_all.append(logits.cpu()) labels_all.append(y.cpu()) # —— overall —— # loss_avg = total_loss / total_samples acc = total_correct / total_samples # —— 拼接 & 转 numpy —— # logits_all = torch.cat(logits_all).numpy() labels_all = torch.cat(labels_all).numpy() # —— per-class ACC —— # per_cls_acc = np.zeros(n_cls) for c in range(n_cls): mask = labels_all == c if mask.any(): per_cls_acc[c] = (logits_all[mask].argmax(1) == c).mean() # —— per-class AUC —— # try: from sklearn.metrics import roc_auc_score prob = torch.softmax(torch.from_numpy(logits_all), dim=1).numpy() auc = roc_auc_score(labels_all, prob, multi_class="ovr", average=None) except Exception: # 组数太少或只有 1 类样本时会报错 auc = np.full(n_cls, np.nan) return loss_avg, acc, per_cls_acc, auc def save_ckpt(model, epoch:int, tag:str, acc:float|None=None, optimizer=None, scheduler=None): """ 通用保存函数 • 返回 ckpt 文件完整路径,方便上层记录 • 可选把 opt / sched state_dict 一起存进去,便于 resume """ save_dir = Path(CFG.save_root) save_dir.mkdir(parents=True, exist_ok=True) # -------- 路径策略 -------- # if tag == "best_joint": # 只保留一个最新最优 joint ckpt_path = save_dir / "bapsto_best_joint.pth" else: # 其他阶段带时间戳 ckpt_path = save_dir / f"bapsto_{tag}_epoch{epoch+1}_{get_timestamp()}.pth" # -------- 组装 payload -------- # # • vars(CFG) 可以拿到用户自己在 CFG 里写的字段 # • 再过滤掉 __ 开头的内部键、防止把 Python meta-data 也 dump 进去 cfg_dict = {k: v for k, v in vars(CFG).items() if not k.startswith("__")} payload = { "epoch": epoch, "state_dict": model.state_dict(), "cfg": cfg_dict, # ← 改在这里 } if acc is not None: payload["acc"] = acc if optimizer is not None: payload["optimizer"] = optimizer.state_dict() if scheduler is not None: payload["scheduler"] = scheduler.state_dict() torch.save(payload, ckpt_path) print(f"✓ checkpoint saved to {ckpt_path}") return ckpt_path @torch.no_grad() def prune_gates(model: BaPSTO, threshold=0.05, min_keep=2, hc_threshold=0.35): """ Disable sub-clusters whose mean gate probability < threshold. After setting them to -10, we do another **row normalization**: Each coarse class row is subtracted by the max logit of that row, ensuring the maximum logit for active clusters is 0 and inactive clusters ≈ -10 → softmax(-10) ≈ 0. Also check for Hard-Concrete (HC) weights below a threshold (e.g., 0.35) to disable sub-clusters. 
""" # softmax probabilities (K_C, K_max) p_active = torch.sigmoid(model.log_alpha) # Activation probability mask = (p_active < threshold) # Check HC thresholds and disable low weight clusters low_weight_mask = (p_active < hc_threshold) # Find sub-clusters with low HC weight mask = mask | low_weight_mask # Combine with existing mask # Ensure at least `min_keep` sub-clusters are kept per coarse class keep_mask = (mask.cumsum(1) >= (CFG.K_max - min_keep)) mask = mask & ~keep_mask pruned = mask.sum().item() if pruned == 0: return model.log_alpha.data[mask] = -10.0 # Set log_alpha of pruned sub-clusters to a very low value print(f"Pruned {pruned} sub-clusters (ḡ<{threshold}, keep≥{min_keep}/class)") # Reassign samples from pruned sub-clusters to active sub-clusters if pruned > 0: # Find the indices of the pruned sub-clusters pruned_clusters = mask.sum(dim=1) > 0 # (K_C,) for c in range(model.prototypes.size(0)): # Loop through each coarse class if pruned_clusters[c]: pruned_indices = mask[c] # Get indices of pruned sub-clusters for class `c` active_indices = ~pruned_indices # Get indices of active sub-clusters active_prototypes = model.prototypes[c][active_indices] # Get active prototypes q = model.q # Get features # Reassign samples from pruned clusters to active clusters d_active = pairwise_cosine(q, active_prototypes) # Compute distance to active prototypes best_active = d_active.argmin(dim=1) # Assign samples to the nearest active sub-cluster # Update the model with reallocated samples (you can implement reallocation logic here) print(f"Reassigning samples from pruned sub-clusters of class {c} to active clusters.") # -------------------------- Entrypoint -------------------------- # if __name__ == "__main__": os.makedirs(CFG.save_root, exist_ok=True) start = time.time() train() print(f"Total runtime: {(time.time() - start) / 3600:.2f} h") 逐行详细解释代码