Upload an Image Using Objective-C

5.8.2012

If you're like me, you learn by example. Surprisingly, there are not many tutorials out there that make it clear exactly how to upload an image, or any file for that matter, from an iOS device to a server using iOS's preferred native language, Objective-C. I have no idea why this is, so here I am, your humble author, attempting to rectify this nasty situation.

Before we get underway: this walkthrough utilizes MKNetworkKit, a lightweight ARC framework that wraps primarily around the Core Services network framework. Alright, you're probably asking why not just use the Core Services network framework directly. Here's why: MKNetworkKit provides a single shared network queue application-wide, accurately fires the network activity indicator (yes, this is something you would otherwise have to manage manually), manages the number of concurrent connections based on Wi-Fi/3G/EDGE, auto-caches GET requests (if desired), and freezes operations if something happens to the connection. Sure, you can write all of this code yourself, just like you could build your own jet fighter. I don't think you should go down that path. Let's begin.

Install MKNetworkKit

Clone the latest and greatest version of MKNetworkKit over at GitHub. It is important to grab the latest version, as there was a small bug with POSTs prior to my commit. Drag the MKNetworkKit directory into your project within Xcode, making sure to copy the items over if needed. Next, add the following built-in frameworks to your target: CFNetwork.framework, SystemConfiguration.framework, and Security.framework. Then add #import "MKNetworkKit.h" to the PCH (precompiled header) file. MKNetworkKit supports both iOS and Mac builds; since we're building an iOS app, remove the NSAlert+MKNetworkKitAdditions.h and NSAlert+MKNetworkKitAdditions.m files from the Categories directory. MKNetworkKit should now be installed! Do a build to ensure you're good to go. You'll get some developer-triggered warnings, and that's OK. Carry on.
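For reference, the PCH addition might look like this (the file name is project-specific; "MyApp-Prefix.pch" is illustrative):

```objc
// MyApp-Prefix.pch (excerpt) — file name varies by project
#ifdef __OBJC__
    #import <UIKit/UIKit.h>
    #import "MKNetworkKit.h"  // makes MKNetworkKit available to every file in the target
#endif
```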

Select the file to upload

For the sake of simplicity, let's upload a photo. First, add a UIButton to the view and give it a title, "Upload Photo". Next, insert an action by Control-dragging from the UIButton over to the implementation (.m) file. Give the user some options when the button is tapped; to accomplish this, use a UIActionSheet (be sure to add UIActionSheetDelegate to the header file). Within the UIActionSheet's delegate method, add a switch to determine which option was selected on the UIActionSheet. Within the case statements, create UIImagePickerController objects, and be sure to add the UIImagePickerControllerDelegate and UINavigationControllerDelegate protocols to the header file. For right now, just add the UIImagePickerController delegate method that responds to the didFinishPickingMediaWithInfo message; we'll fill it out in the next step. Oh! Before you go any further: if you're using the iOS simulator, use Safari within the simulator to download an image into the simulator's photo library.
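The steps above might be sketched like this. This is a minimal illustration, not the walkthrough's exact code: the button titles, method name, and case indices are my own choices, and checks such as camera availability are omitted for brevity.

```objc
// Action wired to the "Upload Photo" UIButton (names here are illustrative).
- (IBAction)uploadPhotoTapped:(id)sender
{
    UIActionSheet *sheet = [[UIActionSheet alloc] initWithTitle:@"Upload Photo"
                                                       delegate:self
                                              cancelButtonTitle:@"Cancel"
                                         destructiveButtonTitle:nil
                                              otherButtonTitles:@"Take Photo", @"Choose From Library", nil];
    [sheet showInView:self.view];
}

// UIActionSheetDelegate: switch on the tapped button to pick an image source.
- (void)actionSheet:(UIActionSheet *)actionSheet clickedButtonAtIndex:(NSInteger)buttonIndex
{
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.delegate = self; // requires UIImagePickerControllerDelegate and UINavigationControllerDelegate

    switch (buttonIndex) {
        case 0:
            picker.sourceType = UIImagePickerControllerSourceTypeCamera;
            break;
        case 1:
            picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
            break;
        default:
            return; // Cancel tapped
    }
    [self presentModalViewController:picker animated:YES];
}
```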

Create a network engine

The MKNetworkEngine object manages the queues, caching, and other connectivity goodness that we, for the most part, don't have to worry about. To create one, add a new Objective-C class whose superclass is MKNetworkEngine. Our engine will have just one defined method in it, postDataToServer.
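A minimal sketch of what that subclass might look like. The class and method names follow the walkthrough (fileUploadEngine, postDataToServer); the body assumes MKNetworkEngine's operationWithPath:params:httpMethod: factory, which builds an operation against the host name the engine was initialized with.

```objc
// fileUploadEngine.h
#import "MKNetworkKit.h"

@interface fileUploadEngine : MKNetworkEngine
- (MKNetworkOperation *)postDataToServer:(NSMutableDictionary *)params path:(NSString *)path;
@end

// fileUploadEngine.m
#import "fileUploadEngine.h"

@implementation fileUploadEngine
- (MKNetworkOperation *)postDataToServer:(NSMutableDictionary *)params path:(NSString *)path
{
    // Build a POST operation for the given path; the body params become form fields.
    return [self operationWithPath:path params:params httpMethod:@"POST"];
}
@end
```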

POST file to the server

The crescendo of the walkthrough: this is the step where we finally get to reach out and touch someone. Before sending the photo to the server, you'll definitely want to compress the image using the UIImageJPEGRepresentation function. On top of that, you'll more than likely want to resize/rotate/crop the image; for this walkthrough, we're leaving out the latter for brevity's sake. After the compression step, initialize the engine. (You'll want to use the engine as a singleton, initialized within the app delegate.) To show you how to also pass parameters within the body, I've created a dictionary object called postParams with one parameter, 'appID', set to the value 'testApp'. The network operation, flOperation, is initialized from our engine's postDataToServer method, which is passed the postParams dictionary and the POST path. Let's introduce the compressed image to our network operation: MKNetworkOperation has a method called addData that accepts, you guessed it, NSData, and lucky for us, our image is already in an NSData object. Define the success/error blocks and then add the request to the queue. You're all set!
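On the singleton point: the sample code in this walkthrough creates the engine directly in the view controller for simplicity, but a shared instance owned by the app delegate avoids recreating queues. A minimal sketch, assuming a standard AppDelegate and the fileUploadEngine class from the previous step (the property name uploadEngine is my own):

```objc
// AppDelegate.h (excerpt) — hypothetical shared-engine property
#import "fileUploadEngine.h"

@interface AppDelegate : UIResponder <UIApplicationDelegate>
@property (strong, nonatomic) UIWindow *window;
@property (strong, nonatomic) fileUploadEngine *uploadEngine;
@end

// AppDelegate.m (excerpt)
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // Create the engine once; view controllers reach it via the shared app delegate.
    self.uploadEngine = [[fileUploadEngine alloc] initWithHostName:@"posttestserver.com"
                                                customHeaderFields:nil];
    return YES;
}
```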

Handle server response

If you're following along character by character, the code will upload a photo and one parameter to posttestserver.com, a server managed by Henry Cipolla. In response to the POST, his server returns a URL that provides more information about what we just POSTed. Our success block is very basic, outputting the response to the log (viewed within the right section of Xcode's bottom pane). If there's an error with the request, a UIAlertView will appear.

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    [self dismissModalViewControllerAnimated:YES];

    // Compress the picked image into JPEG data.
    NSData *image = UIImageJPEGRepresentation([info objectForKey:UIImagePickerControllerOriginalImage], 0.1);

    self.flUploadEngine = [[fileUploadEngine alloc] initWithHostName:@"posttestserver.com"
                                                  customHeaderFields:nil];

    NSMutableDictionary *postParams = [NSMutableDictionary dictionaryWithObjectsAndKeys:
                                       @"testApp", @"appID",
                                       nil];

    self.flOperation = [self.flUploadEngine postDataToServer:postParams path:@"/post.php"];
    [self.flOperation addData:image forKey:@"userfl" mimeType:@"image/jpeg" fileName:@"upload.jpg"];

    [self.flOperation addCompletionHandler:^(MKNetworkOperation *operation) {
        NSLog(@"%@", [operation responseString]);
        /*
         This is where you handle a successful 200 response
         */
    }
    errorHandler:^(MKNetworkOperation *errorOp, NSError *error) {
        NSLog(@"%@", error);
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error"
                                                        message:[error localizedDescription]
                                                       delegate:nil
                                              cancelButtonTitle:@"Dismiss"
                                              otherButtonTitles:nil];
        [alert show];
    }];

    [self.flUploadEngine enqueueOperation:self.flOperation];
}

That's all, folks. You can find my walkthrough code over at GitHub. If you have any questions or comments about the walkthrough, drop a comment or contact me.


Source: http://www.michaelroling.com/post/upload-an-image-using-objective-c
