Apple Window Programming Guide (Apple manual)

Download the PDF from:

https://developer.apple.com/library/mac/documentation//Cocoa/Conceptual/WinPanel/WinPanel.pdf


See also other Apple guides and documentation:

Apple-TV_2nd_gen_Setup_Guide.pdf-manuel

Apple-Archives-and-Serializations-Programming-manuel

Apple-SafariWebContent.pdf-Guide-manuel

Apple-iTunes_ExtrasandiTunes_LPTestGuide1.1.pdf-manuel

Apple-Text-System-User-Interface-Layer-Programming-manuel

Apple-CocoaTextArchitecture.pdf-manuel

Apple-Key-Value-Observing-Programming-Guide-manuel

Apple-Location-Awareness-Programming-Guide-manuel

Apple-SharkUserGuide.pdf-manuel

Apple-drawingprintingios.pdf-manuel

Apple-QuickTime7_User_Guide.pdf-manuel

Apple-Event-Handling-Guide-for-iOS-manuel

Apple-ipod_nano_3rd_gen_features_guide.pdf-manuel

Apple-iTunes_VideoandAudio_Asset_Guide5.0.pdf-manuel

Apple-ARD3_AdminGuide.pdf-manuel

Apple-iphone_3gs_finger_tips.pdf-manuel

Apple-InstrumentsUserGuide.pdf-manuel

Apple-Logic-Pro-9-TDM-Guide-manuel

Apple-macbook_air_users_guide.pdf-manuel

Apple-macbook_air-13-inch_mid-2012-qs_ta.pdf-manuel

Apple-AppStoreMarketingGuidelines-JP.pdf-Japon-manuel

Apple-macbook_pro_retina_qs_ta.pdf-manuel

Apple-ipad_user_guide_tu.pdf-manuel

Apple-ipad_user_guide_th.pdf-manuel

Apple-iphone_user_guide_gr.pdf-manuel

Apple-Nike_Plus_iPod_Sensor_UG_2A.pdf-manuel

Apple-ipad_manual_del_usuario.pdf-manuel

Apple-ipad_uzivatelska_prirucka.pdf-manuel

Apple-ipad_wifi_informations_importantes.pdf-manuel

Apple-Xsan_2_Admin_Guide_v2.3.pdf-manuel

Apple-macbook_pro-13-inch-late-2012-quick_start.pdf-manuel

Apple-CocoaDrawingGuide.pdf-manuel

Apple-Cryptographic-Services-Guide-manuel

Apple-Resource-Programming-Guide-manuel

AppleSafariVisualEffectsProgGuide.pdf-manuel

Apple-WorkingWithUSB.pdf-manuel

Apple-macbook_pro-retina-mid-2012-important_product_info_f.pdf-manuel

Apple-iOS_Security_May12.pdf-manuel

Apple-Mac-Pro-2008-Performance-and-Productivity-for-Creative-Pros

Apple-iPod_shuffle_4thgen_Manuale_utente.pdf-Italie-Manuel

Apple-KernelProgramming.pdf-manuel

Apple-Core-Data-Model-Versioning-and-Data-Migration-Programming-Guide-manuel

Apple-RED_Workflows_with_Final_Cut_Pro_X.pdf-manuel

Apple-Transitioning-to-ARC-Release-Notes-manuel

Apple-iTunes-Connect-Sales-and-Trends-Guide-manuel

Apple-App-Sandbox-Design-Guide-manuel

Apple-String-Programming-Guide-manuel

Apple-Secure-Coding-Guide-manuel

Apple_AirPort_Networks_Early2009.pdf-manuel

Apple-TimeCapsule_SetupGuide_TA.pdf-manuel

Apple-time_capsule_4th_gen_setup.pdf-manuel

Apple-TimeCapsule_SetupGuide.pdf-manuel

Apple-TimeCapsule_SetupGuide_CH.pdf-Chinois-manuel

Apple-CodeSigningGuide.pdf-manuel

Apple-ViewControllerPGforiOS.pdf-manuel

Apple-KeyValueObserving.pdf-manuel

Apple-mac_mini-late-2012-quick_start.pdf-manuel

Apple-OS-X-Mountain-Lion-Core-Technologies-Overview-June-2012-manuel

Apple-OS-X-Server-Product-Overview-June-2012-manuel

Apple-Apple_Server_Diagnostics_UG_109.pdf-manuel

Apple-PackageMaker_UserGuide.pdf-manuel

Apple-Instrumentos_y_efectos_de_Logic_Studio.pdf-Manuel

Apple-ipod_nano_kayttoopas.pdf-Finlande-Manuel

Apple_ProRes_White_Paper_October_2012.pdf-Manuel

Apple-wp_osx_configuration_profiles.pdf-Manuel

Apple-UsingiTunesProducerFreeBooks.pdf-Manuel

Apple-ipad_manual_do_usuario.pdf-Portugais-Manuel

Apple-Instruments_et_effets_Logic_Studio.pdf-Manuel

Apple-ipod_touch_gebruikershandleiding.pdf-Neerlandais-Manuel

AppleiPod_shuffle_4thgen_Manual_del_usuario.pdf-Espagnol-Manuel

Apple-Premiers-contacts-avec-votre-PowerBook-G4-Manuel

Apple_Composite_AV_Cable.pdf-Manuel

Apple-iPod_shuffle_3rdGen_UG_DK.pdf-Danemark-Manuel

Apple-iPod_classic_160GB_Benutzerhandbuch.pdf-Allemand-Manuel

Apple-VoiceOver_GettingStarted-Manuel

Apple-iPod_touch_2.2_Benutzerhandbuch.pdf-Allemand-Manuel

Apple-Apple_TV_Opstillingsvejledning.pdf-Allemand-Manuel

Apple-iPod_shuffle_4thgen_Manuale_utente.pdf-Italie-Manuel

Apple-iphone_prirucka_uzivatela.pdf-Manuel

Apple-Aan-de-slag-Neerlandais-Manuel

Apple-airmac_express-80211n-2nd-gen_setup_guide.pdf-Thailande-Manuel

Apple-ipod_nano_benutzerhandbuch.pdf-Allemand-Manuel

Apple-aperture3.4_101.pdf-Manuel

Apple-Pages09_Anvandarhandbok.pdf-Manuel

Apple-nike_plus_ipod_sensor_ug_la.pdf-Mexique-Manuel

Apple-ResEdit-Reference-For-ResEdit02.1-Manuel

Apple-ipad_guide_de_l_utilisateur.pdf-Manuel

Apple-Compressor-4-Benutzerhandbuch-Allemand-Manuel

Apple-AirPort_Networks_Early2009_DK.pdf-Danemark-Manuel

Apple-MacBook_Pro_Mid2007_2.4_2.2GHz_F.pdf-Manuel

Apple-MacBook_13inch_Mid2010_UG_F.pdf-Manuel

Apple-Xserve-RAID-Presentation-technologique-Janvier-2004-Manuel

Apple-MacBook_Pro_15inch_Mid2010_F.pdf-Manuel

Apple-AirPort_Express-opstillingsvejledning.pdf-Danemark-Manuel

Apple-DEiPod_photo_Benutzerhandbuch_DE0190269.pdf-Allemand-Manuel

Apple-Final-Cut-Pro-X-Logic-Effects-Reference-Manuel

Apple-iPod_touch_2.1_Brugerhandbog.pdf-Danemark-Manuel

Apple-Remote-Desktop-Administratorhandbuch-Version-3.1-Allemand-Manuel

Apple-Qmaster-4-User-Manual-Manuel

Apple-Server_Administration_v10.5.pdf-Manuel

Apple-ipod_classic_features_guide.pdf-Manuel

Apple-Lecteur-Optique-Manuel

Apple-Carte-AirPort-Manuel

Apple-iPhone_Finger_Tips_Guide.pdf-Anglais-Manuel

Apple-Couvercle-Manuel

Apple-battery.cube.pdf-Manuel

Apple-Boitier-de-l-ordinateur-Manuel

Apple-Pile-Interne-Manuel

Apple-atacable.pdf-Manuel

Apple-videocard.pdf-Manuel

Apple-Guide_de_configuration_de_l_Airport_Express_5.1.pdf-Manuel

Apple-iMac_Mid2010_UG_F.pdf-Manuel

Apple-MacBook_13inch_Mid2009_F.pdf-Manuel

Apple-MacBook_Mid2007_UserGuide.F.pdf-Manuel

Apple-Designing_AirPort_Networks_10.5-Windows_F.pdf-Manuel

Apple-Administration_de_QuickTime_Streaming_et_Broadcasting_10.5.pdf-Manuel

Apple-Opstillingsvejledning_til_TimeCapsule.pdf-Danemark-Manuel

Apple-iPod_nano_5th_gen_Benutzerhandbuch.pdf-Manuel

Apple-iOS_Business.pdf-Manuel

Apple-AirPort_Extreme_Installationshandbuch.pdf-Manuel

Apple-Final_Cut_Express_4_Installation_de_votre_logiciel.pdf-Manuel

Apple-MacBook_Pro_15inch_2.53GHz_Mid2009.pdf-Manuel

Apple-Network_Services.pdf-Manuel

Apple-Aperture_Performing_Adjustments_f.pdf-Manuel

Apple-Supplement_au_guide_Premiers_contacts.pdf-Manuel

Apple-Administration_des_images_systeme_et_de_la_mise_a_jour_de_logiciels_10.5.pdf-Manuel

Apple-Mac_OSX_Server_v10.6_Premiers_contacts.pdf-Francais-Manuel

Apple-Designing_AirPort_Networks_10.5-Windows_F.pdf-Manuel

Apple-Mise_a_niveau_et_migration_v10.5.pdf-Manuel

Apple-MacBookPro_Late_2007_2.4_2.2GHz_F.pdf-Manuel

Apple-Mac_mini_Late2009_SL_Server_F.pdf-Manuel

Apple-Mac_OS_X_Server_10.5_Premiers_contacts.pdf-Manuel

Apple-iPod_touch_2.0_Guide_de_l_utilisateur_CA.pdf-Manuel

Apple-MacBook_Pro_17inch_Mid2010_F.pdf-Manuel

Apple-Comment_demarrer_Leopard.pdf-Manuel

Apple-iPod_2ndGen_USB_Power_Adapter-FR.pdf-Manuel

Apple-Feuille_de_operations_10.4.pdf-Manuel

Apple-Time_Capsule_Installationshandbuch.pdf-Allemand-Manuel

Apple-F034-2262AXerve-grappe.pdf-Manuel

Apple-Mac_Pro_Early2009_4707_UG_F

Apple-imacg5_17inch_Power_Supply

Apple-Logic_Studio_Installieren_Ihrer_Software_Retail

Apple-IntroductionXserve1.0.1

Apple-Aperture_Getting_Started_d.pdf-Allemand

Apple-getting_started_with_passbook

Apple-iPod_mini_2nd_Gen_UserGuide.pdf-Anglais

Apple-Deploiement-d-iPhone-et-d-iPad-Reseaux-prives-virtuels

Apple-F034-2262AXerve-grappe

Apple-Mac_OS_X_Server_Glossaire_10.5

Apple-FRLogic_Pro_7_Guide_TDM

Apple-iphone_bluetooth_headset_userguide

Apple-Administration_des_services_reseau_10.5

Apple-imacg5_17inch_harddrive

Apple-iPod_nano_4th_gen_Manuale_utente

Apple-iBook-G4-Getting-Started

Apple-XsanGettingStarted

Apple-Mac_mini_UG-Early2006

Apple-Guide_des_fonctionnalites_de_l_iPod_classic

Apple-Guide_de_configuration_d_Xsan_2

Apple-MacBook_Late2006_UsersGuide

Apple-Mac_mini_Mid2010_User_Guide_F.pdf-Francais

Apple-PowerBookG3UserManual.PDF.Anglais

Apple-Installation_de_votre_logiciel_Logic_Studio_Retail

Apple-Pages-Guide-de-l-utilisateur

Apple-MacBook_Pro_13inch_Mid2009.pdf.Anglais

Apple-MacBook_Pro_15inch_Mid2009

Apple-Installation_de_votre_logiciel_Logic_Studio_Upgrade

Apple-airportextreme_802.11n_userguide

Apple-iPod_shuffle_3rdGen_UG

Apple-iPod_classic_160GB_User_Guide

Apple-iPod_nano_5th_gen_UserGuide

Apple-ipod_touch_features_guide

Apple-Wireless_Mighty_Mouse_UG

Apple-Advanced-Memory-Management-Programming-Guide

Apple-iOS-App-Programming-Guide

Apple-Concurrency-Programming-Guide

Apple-MainStage-2-User-Manual-Anglais

Apple-iMacG3_2002MultilingualUserGuide

Apple-iBookG3_DualUSBUserGuideMultilingual.PDF.Anglais

Apple-imacG5_20inch_AirPort

Apple-Guide_de_l_utilisateur_de_Mac_Pro_Early_2008

Apple-Installation_de_votre_logiciel_Logic_Express_8

Apple-iMac_Guide_de_l_utilisateur_Mid2007

Apple-imacg5_20inch_OpticalDrive

Apple-FCP6_Formats_de_diffusion_et_formats_HD

Apple-prise_en_charge_des_surfaces_de_controle_logic_pro_8

Apple-Aperture_Quick_Reference_f

Apple-Shake_4_User_Manual

Apple-aluminumAppleKeyboard_wireless2007_UserGuide

Apple-ipod_shuffle_features_guide

Apple-Color-User-Manual

Apple-Migration_10.4_2e_Ed

Apple-MacBook_Air_SuperDrive

Apple-MacBook_Late2007-f

ApplePowerMacG5_(Early_2005)_UserGuide

Apple-iSightUserGuide

Apple-MacBook_Pro_Early_2008_Guide_de_l_utilisateur

Apple-Nouvelles-fonctionnalites-aperture-1.5

Apple-premiers_contacts_2e_ed_10.4.pdf-Mac-OS-X-Server

Apple-premiers_contacts_2e_ed_10.4

Apple-eMac_2005UserGuide

Apple-imacg5_20inch_Inverter

Apple-Keynote2_UserGuide.pdf-Japon

Apple-Welcome_to_Tiger.pdf-Japon

Apple-XsanAdminGuide_j.pdf-Japon

Apple-PowerBookG4_UG_15GE.PDF-Japon

Apple-Xsan_Migration.pdf-Japon

Apple-Xserve_Intel_DIY_TopCover_JA.pdf-Japon

Apple-iPod_nano_6thgen_User_Guide_J.pdf-Japon

Apple-Aperture_Photography_Fundamentals.pdf-Japon

Apple-nikeipod_users_guide.pdf-Japon

Apple-QuickTime71_UsersGuide.pdf-Japon

Apple-iMacG5_iSight_UG.pdf-Japon

Apple-Aperture_Performing_Adjustments_j.pdf-Japon

Apple-iMacG5_17inch_HardDrive.pdf-Japon

Apple-iPod_shuffle_Features_Guide_J.pdf-Japon

Apple-MacBook_Air_User_Guide.pdf-Japon

Apple-MacBook_UsersGuide.pdf-Japon

Apple-iPad_iOS4_Brukerhandbok.pdf-Norge-Norvege

Apple-Apple_AirPort_Networks_Early2009_H.pd-Norge-Norvege

Apple-iPod_classic_120GB_no.pdf-Norge-Norvege

Apple-StoreKitGuide.pdf-Japon

Apple-Xserve_Intel_DIY_ExpansionCardRiser_JA.pdf-Japon

Apple-iMacG5_Battery.pdf-Japon

Apple-Logic_Pro_8_Getting_Started.pdf-Japon

Apple-PowerBook-handbok-Norge-Norvege

Apple-iWork09_formler_og_funksjoner.pdf-Norge-Norvege

Apple-MacBook_Pro_15inch_Mid2010_H.pdf-Norge-Norvege

Apple-MacPro_HardDrive_DIY.pdf-Japon

Apple-iPod_Fifth_Gen_Funksjonsoversikt.pdf-Norge-Norvege

Apple-MacBook_13inch_white_Early2009_H.pdf-Norge-Norvege

Apple-GarageBand_09_Komme_i_gang.pdf-Norge-Norvege

Apple-MacBook_Pro_15inch_Mid2009_H.pdf-Norge-Norvege

Apple-imac_mid2011_ug_h.pdf-Norge-Norvege

Apple-iDVD_08_Komme_i_gang.pdf-Norge-Norvege

Apple-MacBook_Air_11inch_Late2010_UG_H.pdf-Norge-Norvege

Apple-iMac_Mid2010_UG_H.pdf-Norge-Norvege

Apple-MacBook_13inch_Mid2009_H.pdf-Norge-Norvege

Apple-iPhone_3G_Viktig_produktinformasjon_H-Norge-Norvege

Apple-MacBook_13inch_Mid2010_UG_H.pdf-Norge-Norvege

Apple-macbook_air_13inch_mid2011_ug_no.pdf-Norge-Norvege

Apple-Mac_mini_Early2009_UG_H.pdf-Norge-Norvege

Apple-ipad2_brukerhandbok.pdf-Norge-Norvege

Apple-iPhoto_08_Komme_i_gang.pdf-Norge-Norvege

Apple-MacBook_Air_Brukerhandbok_Late2008.pdf-Norge-Norvege

Apple-Pages09_Brukerhandbok.pdf-Norge-Norvege

Apple-MacBook_13inch_Late2009_UG_H.pdf-Norge-Norvege

Apple-iPhone_3GS_Viktig_produktinformasjon.pdf-Norge-Norvege

Apple-MacBook_13inch_Aluminum_Late2008_H.pdf-Norge-Norvege

Apple-Wireless_Keyboard_Aluminum_2007_H-Norge-Norvege

Apple-NiPod_photo_Brukerhandbok_N0190269.pdf-Norge-Norvege

Apple-MacBook_Pro_13inch_Mid2010_H.pdf-Norge-Norvege

Apple-MacBook_Pro_17inch_Mid2010_H.pdf-Norge-Norvege

Apple-Velkommen_til_Snow_Leopard.pdf-Norge-Norvege

Apple-TimeCapsule_Klargjoringsoversikt.pdf-Norge-Norvege

Apple-iPhone_3GS_Hurtigstart.pdf-Norge-Norvege

Apple-Snow_Leopard_Installeringsinstruksjoner.pdf-Norge-Norvege

Apple-iMacG5_iSight_UG.pdf-Norge-Norvege

Apple-iPod_Handbok_S0342141.pdf-Norge-Norvege

Apple-ipad_brukerhandbok.pdf-Norge-Norvege

Apple-GE_Money_Bank_Handlekonto.pdf-Norge-Norvege

Apple-MacBook_Air_11inch_Late2010_UG_H.pdf-Norge-Norvege

Apple-iPod_nano_6thgen_Brukerhandbok.pdf-Norge-Norvege

Apple-iPod_touch_iOS4_Brukerhandbok.pdf-Norge-Norvege

Apple-MacBook_Air_13inch_Late2010_UG_H.pdf-Norge-Norvege

Apple-MacBook_Pro_15inch_Early2011_H.pdf-Norge-Norvege

Apple-Numbers09_Brukerhandbok.pdf-Norge-Norvege

Apple-Welcome_to_Leopard.pdf-Japon

Apple-PowerMacG5_UserGuide.pdf-Norge-Norvege

Apple-iPod_touch_2.1_Brukerhandbok.pdf-Norge-Norvege

Apple-Boot_Camp_Installering-klargjoring.pdf-Norge-Norvege

Apple-MacOSX10.3_Welcome.pdf-Norge-Norvege

Apple-iPod_shuffle_3rdGen_UG_H.pdf-Norge-Norvege

Apple-iPhone_4_Viktig_produktinformasjon.pdf-Norge-Norvege

Apple_TV_Klargjoringsoversikt.pdf-Norge-Norvege

Apple-iMovie_08_Komme_i_gang.pdf-Norge-Norvege

Apple-iPod_classic_160GB_Brukerhandbok.pdf-Norge-Norvege

Apple-Boot_Camp_Installering_10.6.pdf-Norge-Norvege

Apple-Network-Services-Location-Manager-Veiledning-for-nettverksadministratorer-Norge-Norvege

Apple-iOS_Business_Mar12_FR.pdf

Apple-PCIDualAttachedFDDICard.pdf

Apple-Aperture_Installing_Your_Software_f.pdf

Apple-User_Management_Admin_v10.4.pdf

Apple-Compressor-4-ユーザーズマニュアル Japon

Apple-Network_Services_v10.4.pdf

Apple-iPod_2ndGen_USB_Power_Adapter-DE

Apple-Mail_Service_v10.4.pdf

Apple-AirPort_Express_Opstillingsvejledning_5.1.pdf

Apple-MagSafe_Airline_Adapter.pdf

Apple-L-Apple-Multiple-Scan-20-Display

Apple-Administration_du_service_de_messagerie_10.5.pdf

Apple-System_Image_Admin.pdf

Apple-iMac_Intel-based_Late2006.pdf-Japon

Apple-iPhone_3GS_Finger_Tips_J.pdf-Japon

Apple-Power-Mac-G4-Mirrored-Drive-Doors-Japon

Apple-AirMac-カード取り付け手順-Japon

Apple-iPhone開発ガイド-Japon

Apple-atadrive_pmg4mdd.j.pdf-Japon

Apple-iPod_touch_2.2_User_Guide_J.pdf-Japon

Apple-Mac_OS_X_Server_v10.2.pdf

Apple-AppleCare_Protection_Plan_for_Apple_TV.pdf

Apple_Component_AV_Cable.pdf

Apple-DVD_Studio_Pro_4_Installation_de_votre_logiciel

Apple-Windows_Services

Apple-Motion_3_New_Features_F

Apple-g4mdd-fw800-lowerfan

Apple-MacOSX10.3_Welcome

Apple-Print_Service

Apple-Xserve_Setup_Guide_F

Apple-PowerBookG4_17inch1.67GHzUG

Apple-iMac_Intel-based_Late2006

Apple-Installation_de_votre_logiciel

Apple-guide_des_fonctions_de_l_iPod_nano

Apple-Administration_de_serveur_v10.5

Apple-Mac-OS-X-Server-Premiers-contacts-Pour-la-version-10.3-ou-ulterieure

Apple-boot_camp_install-setup

Apple-iBookG3_14inchUserGuideMultilingual

Apple-mac_pro_server_mid2010_ug_f

Apple-Motion_Supplemental_Documentation

Apple-imac_mid2011_ug_f

Apple-iphone_guide_de_l_utilisateur

Apple-macbook_air_11inch_mid2011_ug_fr

Apple-NouvellesfonctionnalitesdeLogicExpress7.2

Apple-QT_Streaming_Server

Apple-Web_Technologies_Admin

Apple-Mac_Pro_Early2009_4707_UG

Apple-guide_de_l_utilisateur_de_Numbers08

Apple-Decouverte_d_Aperture_2

Apple-Guide_de_configuration_et_d'administration

Apple-mac_integration_basics_fr_106.

Apple-iPod_shuffle_4thgen_Guide_de_l_utilisateur

Apple-ARA_Japan

Apple-081811_APP_iPhone_Japanese_v5.4.pdf-Japan

Apple-Recycle_Contract120919.pdf-Japan

Apple-World_Travel_Adapter_Kit_UG

Apple-iPod_nano_6thgen_User_Guide

Apple-RemoteSupportJP

Apple-Mac_mini_Early2009_UG_F.pdf-Manuel-de-l-utilisateur

Apple-Compressor_3_Batch_Monitor_User_Manual_F.pdf-Manuel-de-l-utilisateur

Apple-Premiers__contacts_avec_iDVD_08

Apple-Mac_mini_Intel_User_Guide.pdf

Apple-Prise_en_charge_des_surfaces_de_controle_Logic_Express_8

Apple-mac_integration_basics_fr_107.pdf

Apple-Final-Cut-Pro-7-Niveau-1-Guide-de-preparation-a-l-examen

Apple-Logic9-examen-prep-fr.pdf-Logic-Pro-9-Niveau-1-Guide-de-preparation-a-l-examen

Apple-aperture_photography_fundamentals.pdf-Manuel-de-l-utilisateur

Apple-emac-memory.pdf-Manuel-de-l-utilisateur

Apple-Apple-Installation-et-configuration-de-votre-Power-Mac-G4

Apple-Guide_de_l_administrateur_d_Xsan_2.pdf

Apple-premiers_contacts_avec_imovie6.pdf

Apple-Tiger_Guide_Installation_et_de_configuration.pdf

Apple-Final-Cut-Pro-7-Level-One-Exam-Preparation-Guide-and-Practice-Exam

Apple-Open_Directory.pdf

Apple-Nike_+_iPod_User_guide

Apple-ard_admin_guide_2.2_fr.pdf

Apple-systemoverviewj.pdf-Japon

Apple-Xserve_TO_J070411.pdf-Japon

Apple-Mac_Pro_User_Guide.pdf

Apple-iMacG5_iSight_UG.pdf

Apple-premiers_contacts_avec_iwork_08.pdf

Apple-services_de_collaboration_2e_ed_10.4.pdf

Apple-iPhone_Bluetooth_Headset_Benutzerhandbuch.pdf

Apple-Guide_de_l_utilisateur_de_Keynote08.pdf

Apple-Logic-Pro-9-Effectsrfr.pdf

Apple-iPod_shuffle_3rdGen_UG_F.pdf

Apple-iPod_classic_160Go_Guide_de_l_utilisateur.pdf

Apple-iBookG4GettingStarted.pdf

Apple-Administration_de_technologies_web_10.5.pdf

Apple-Compressor-4-User-Manual-fr

Apple-MainStage-User-Manual-fr.pdf

Apple-Logic_Pro_8.0_lbn_j.pdf

Apple-PowerBookG4_15inch1.67-1.5GHzUserGuide.pdf

Apple-MacBook_Pro_15inch_Mid2010_CH.pdf

Apple-LED_Cinema_Display_27-inch_UG.pdf

Apple-MacBook_Pro_15inch_Mid2009_RS.pdf

Apple-macbook_pro_13inch_early2011_f.pdf

Apple-iMac_Mid2010_UG_BR.pdf

Apple-iMac_Late2009_UG_J.pdf

Apple-iphone_user_guide-For-iOS-6-Software

Apple-iDVD5_Getting_Started.pdf

Apple-guide_des_fonctionnalites_de_l_ipod_touch.pdf

Apple_iPod_touch_User_Guide

Apple_macbook_pro_13inch_early2011_f

Apple_Guide_de_l_utilisateur_d_Utilitaire_RAID

Apple_Time_Capsule_Early2009_Setup_F

Apple_iphone_4s_finger_tips_guide_rs

Apple_iphone_upute_za_uporabu

Apple_ipad_user_guide_ta

apple_earpods_user_guide

apple_iphone_gebruikershandleiding

apple_iphone_5_info

apple_iphone_brukerhandbok

apple_apple_tv_3rd_gen_setup_tw

apple_macbook_pro-retina-mid-2012-important_product_info_ch

apple_Macintosh-User-s-Guide-for-Macintosh-PowerBook-145

Apple_ipod_touch_user_guide_ta

Apple_TV_2nd_gen_Setup_Guide_h

Apple_ipod_touch_manual_del_usuario

Apple_iphone_4s_finger_tips_guide_tu

Apple_macbook_pro_retina_qs_th

Apple-Manuel_de_l'utilisateur_de_Final_Cut_Server

Apple-iMac_G5_de_lutilisateur

Apple-Cinema_Tools_4.0_User_Manual_F

Apple-Personal-LaserWriter300-User-s-Guide

Apple-QuickTake-100-User-s-Guide-for-Macintosh

Apple-User-s-Guide-Macintosh-LC-630-DOS-Compatible

Apple-iPhone_iOS3.1_User_Guide

Apple-iphone_4s_important_product_information_guide

Apple-iPod_shuffle_Features_Guide_F

Liste-documentation-apple

Apple-Premiers_contacts_avec_iMovie_08

Apple-macbook_pro-retina-mid-2012-important_product_info_br

Apple-macbook_pro-13-inch-mid-2012-important_product_info

Apple-macbook_air-11-inch_mid-2012-qs_br

Apple-Manuel_de_l_utilisateur_de_MainStage

Apple-Compressor_3_User_Manual_F

Apple-Color_1.0_User_Manual_F

Apple-guide_de_configuration_airport_express_4.2

Apple-TimeCapsule_SetupGuide

Apple-Instruments_et_effets_Logic_Express_8

Apple-Manuel_de_l_utilisateur_de_WaveBurner

Apple-Macmini_Guide_de_l'utilisateur

Apple-PowerMacG5_UserGuide

Disque dur, ATA parallèle Instructions de remplacement

Apple-final_cut_pro_x_logic_effects_ref_f

Apple-Leopard_Installationshandbok

Manuale Utente PowerBookG4

Apple-thunderbolt_display_getting_started_1e

Apple-Compressor-4-Benutzerhandbuch

Apple-macbook_air_11inch_mid2011_ug

Apple-macbook_air-mid-2012-important_product_info_j

Apple-iPod-nano-Guide-des-fonctionnalites

Apple-iPod-nano-Guide-de-l-utilisateur-4eme-generation

Apple-Manuel_de_l_utilisateur_d_Utilitaire_de_reponse_d_impulsion

Apple-Aperture_2_Raccourcis_clavier

AppleTV_Setup-Guide

Apple-livetype_2_user_manual_f

Apple-imacG5_17inch_harddrive

Apple-macbook_air_guide_de_l_utilisateur

Apple-MacBook_Early_2008_Guide_de_l_utilisateur

Apple-Keynote-2-Guide-de-l-utilisateur

Apple-PowerBook-User-s-Guide-for-PowerBook-computers

Apple-Macintosh-Performa-User-s-Guide-5200CD-and-5300CD

Apple-Macintosh-Performa-User-s-Guide

Apple-Workgroup-Server-Guide

Apple-iPad-User-Guide-For-iOS-5-1-Software

Apple-Boot-Camp-Guide-d-installation-et-de-configuration

Power Mac G5 Guide de l’utilisateur APPLE

Guide de l'utilisateur PAGE '08 APPLE

Guide de l'utilisateur KEYNOTE '09 APPLE

Guide de l'Utilisateur KEYNOTE '3 APPLE

Guide de l'Utilisateur UTILITAIRE RAID

Guide de l'Utilisateur Logic Studio

Guide de l’utilisateur ipad Pour le logiciel iOS 5.1

PowerBook G4 Premiers Contacts APPLE

Guide de l'Utilisateur iphone pour le logiciel ios 5.1 APPLE

Guide de l’utilisateur ipad Pour le logiciel iOS 4,3

Guide de l’utilisateur iPod nano 5ème génération

Guide de l'utilisateur iPod Touch 2.2 APPLE

Guide de l’utilisateur QuickTime 7  Mac OS X 10.3.9 et ultérieur Windows XP et Windows 2000

Guide de l'utilisateur MacBook 13 pouces Mi 2010

Guide de l’utilisateur iPhone (Pour les logiciels iOS 4.2 et 4.3)

Guide-de-l-utilisateur-iPod-touch-pour-le-logiciel-ios-4-3-APPLE

Guide-de-l-utilisateur-iPad-2-pour-le-logiciel-ios-4-3-APPLE

Guide de déploiement en entreprise iPhone OS

Guide-de-l-administrateur-Apple-Remote-Desktop-3-1

Guide-de-l-utilisateur-Apple-Xserve-Diagnostics-Version-3X103

Guide-de-configuration-AirPort-Extreme-802.11n-5e-Generation

Guide-de-l-utilisateur-Capteur-Nike-iPod

Guide-de-l-utilisateur-iMac-21-5-pouces-et-27-pouces-mi-2011-APPLE

Guide-de-l-utilisateur-Apple-Qadministrator-4

Guide-d-installation-Apple-TV-3-eme-generation

User-Guide-iPad-For-ios-5-1-Software

Window Programming Guide

2009-11-27 | © 2002, 2009 Apple Inc. All Rights Reserved.

Contents

Introduction
    Organization of This Document
    See Also
How Windows Work
How a Window Is Displayed
How Modal Windows Work
How Panels Work
How Window Controllers Work
    Window Closing Behavior
Opening and Closing Windows
Window Layering and Types of Windows
    Window Layering
    Key and Main Windows
        The Key Window
        The Main Window
        Changing a Window’s Status
Window Layers and Levels
    Window Levels
    Setting Ordering and Level Programmatically
Setting Window Collection Behavior
    Spaces Collection Behavior
    Exposé Collection Behavior
    Window Cycling Behavior
Sizing and Placing Windows
    Setting a Window’s Size and Location
    Window Cascading
    Window Zooming
    Constraining a Window’s Size and Location
Saving a Window’s Position into the User’s Defaults
Minimizing Windows
Using the Window Menu
Setting a Window’s Appearance
    Setting a Window’s Style
    Setting a Window’s Color and Transparency
    Setting a Window’s Color Space
    Setting a Window’s Content Border Thickness
Setting a Window’s Title and Represented File
Setting Attributes for the Window’s Image
    Specifying How To Store the Window’s Image
    Specifying Where To Store the Window’s Image
    Specifying When the Window’s Image Is Created
    Specifying Whether the Window’s Image Persists When Offscreen
    Specifying the Depth Limit for the Window’s Image
    Specifying Whether the Depth Limit Changes to the Screen’s Capacity
    Specifying Whether Window Content Can Be Read or Written by Another Process
Handling Events in Windows
    Using Keyboard Interface Control in Windows
    Using the Window’s Field Editor
Using Window Notifications and Delegate Methods
Dragging Images to and from Windows
Updating the Cursor Image in a Window
Caching Window Images
Document Revision History

Figures and Listings

Figure 1: Main, key, and inactive windows (in “Window Layering and Types of Windows”)
Listing 1: Saving a window’s frame automatically (in “Saving a Window’s Position into the User’s Defaults”)

Introduction

An application displays windows on the screen, and these windows must be managed and coordinated. A window object corresponds to at most one on-screen window. The two principal functions of windows are to provide an area in which views can be placed and to accept and distribute events the user sends through actions with the mouse and keyboard. The term window sometimes refers to the Application Kit object and sometimes to the window server’s window device; which meaning is intended is made clear in context. Panels are a special kind of window, typically serving an auxiliary function in an application, such as utility windows.

This document is intended for Cocoa developers who need to work with windows and panels in their applications.

Organization of This Document

This programming topic describes how to use windows and panels. These articles give you basic information on the different types of windows and how they work:

● “How Windows Work” describes the classes that define objects that manage and coordinate the windows an application displays.
● “How a Window Is Displayed” describes how window drawing is accomplished.
● “How Modal Windows Work” describes the behavior of modal windows.
● “How Panels Work” describes the various uses of panels.
● “How Window Controllers Work” describes the relationship between a window and its controller.
● “Window Layering and Types of Windows” describes window layering and the concepts of key and main windows, and how a window can avoid becoming key or main.
● “Window Layers and Levels” describes window levels, and how to place a window in a specific level, such as the level for document windows, palettes, or tear-off menus.
● “Setting Window Collection Behavior” describes how to set a window’s behavior with Spaces, Exposé, and window cycling.

These articles describe how to use windows:

● “Opening and Closing Windows” describes how to open and close, or just show and hide, a window.
● “Sizing and Placing Windows” describes how to control a window’s size and position, including how to set its minimum and maximum size, how to constrain it to the screen, how to cascade it so its title bar remains visible, how to zoom it as though the user pressed the zoom button, and how to center it on the screen.
● “Saving a Window’s Position into the User’s Defaults” describes how to store a window’s position in the user defaults system, so that it appears in the same location the next time the user starts the application.
● “Minimizing Windows” describes how to replace a window with a smaller counterpart in the Dock.
● “Using the Window Menu” describes how to place a window’s name in the Window menu that appears in most Cocoa applications.

These articles describe how to change what a window looks like:

● “Setting a Window’s Appearance” describes how to choose whether to display a window’s peripheral elements, including its title bar, close box, zoom box, or size box. It also describes how to set a window’s background color and transparency.
● “Setting a Window’s Title and Represented File” describes how to set a window’s title with either a string or the filename of the window’s represented file.
● “Setting Attributes for the Window’s Image” describes how to set attributes for the window’s device, which stores the window’s image, including how the image is stored, when the image is created, and the image’s color depth.

These articles describe how to handle a window’s events:

● “Handling Events in Windows” gives basic information on how a window handles events.
● “Using Keyboard Interface Control in Windows” describes how to navigate between a window’s fields using the Tab key and how to use the Return and Escape keys to select default buttons.
● “Using the Window’s Field Editor” describes how to use the window’s text object, which is shared for light editing tasks.

These articles describe some advanced features of windows:

● “Using Window Notifications and Delegate Methods” describes the notifications and delegate methods used when a window gains or loses key or main window status, minimizes, moves or resizes, becomes exposed, or closes.
● “Dragging Images to and from Windows” describes what happens when the user wants to drag an object into or out of a window.
● “Updating the Cursor Image in a Window” directs you to information on how to change the cursor image when the cursor is over a specified area in a view.
● “Caching Window Images” describes how to temporarily cache a portion of a window’s image so that it can be restored later. This is useful when highly dynamic drawing must be done over an otherwise static image of the window.

See Also

For additional information on specific types of windows and panels, you can also see the following programming topics:

● Sheet Programming Topics describes a dialog attached to a specific window, ensuring that a user never loses track of which window the dialog belongs to.
● Drawer Programming Topics describes a type of view that slides out from one side of a window.
● Toolbar Programming Topics for Cocoa describes a standard way to display a toolbar for a titled window below its title bar and provide users with a way to customize toolbars and save those customizations.
● Dialogs and Special Panels describes alert panels and other specialized types of panels, such as Font, Save, and Print panels.
● Document-Based App Programming Guide for Mac describes how to use the architecture supplied by AppKit to create applications that can create, open, load, and save multiple document files.
● Cocoa Event Handling Guide discusses the variety of ways your application objects can handle the events they receive.

How Windows Work

The NSWindow class defines objects that manage and coordinate the windows an application displays on the screen. A single NSWindow object corresponds to at most one onscreen window. The two principal functions of an NSWindow object are to provide an area in which NSView objects can be placed and to accept and distribute, to the appropriate views, events the user instigates through actions with the mouse and keyboard. Note that the term window sometimes refers to the Application Kit object and sometimes to the window server’s display device; which meaning is intended is made clear in context. AppKit also defines an abstract subclass of NSWindow—NSPanel—that adds behavior more appropriate for auxiliary windows.

An NSWindow object is defined by a frame rectangle that encloses the entire window, including its title bar, border, and other peripheral elements (such as the resize control), and by a content rectangle that encloses just its content area. Both rectangles are specified in the screen coordinate system and are restricted to integer values. The frame rectangle establishes the window’s base coordinate system. This coordinate system is always aligned with and measured in the same increments as the screen coordinate system (in other words, the base coordinate system can’t be rotated or scaled). The origin of the base coordinate system is the bottom-left corner of the window’s frame rectangle.

Typically, you create windows using Interface Builder, which allows you to position them, set many of their attributes, and lay out their views. The programmatic work you do with windows more often involves bringing them on and off the screen; changing dynamic attributes such as the window’s title; running modal windows to restrict user input; and assigning a delegate that can monitor certain of the window’s actions, such as closing, zooming, and resizing. You can also create a window programmatically with one of its initializers by specifying, among other attributes, the size and location of its content rectangle. The frame rectangle is derived from the dimensions of the content rectangle.

When it’s created, a window automatically creates two views: an opaque frame view that fills the frame rectangle and draws the border, title bar, other peripheral elements, and background, and a transparent content view that fills the content rectangle. The frame view and its peripheral elements are private objects that your application can’t access directly. The content view is the “highest” accessible view in the window; you can replace the default content view with a view of your own creation using the setContentView: method. The window determines the placement of the content view; you can’t position it using the NSView methods that begin with setFrame; you must use the NSWindow class’s placement methods, as described in “Opening and Closing Windows”.

You add other views to the window as subviews of the content view or as subviews of any of the content view’s subviews, and so on, via the addSubview: method of NSView. This tree of views is called the window’s view hierarchy. When a window is told to display itself, it does so by sending display... messages to the top-level view in its view hierarchy. Because displaying is carried out in a determined order, the content view (which is drawn first) may be wholly or partially obscured by its subviews, and these subviews may be obscured by their subviews (and so on).

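For illustration, here is a minimal sketch of creating a window programmatically and populating its view hierarchy. The geometry, title, and button are placeholder values chosen for this example, not values taken from the guide:

    #import <Cocoa/Cocoa.h>

    // A minimal sketch: create a titled, closable, resizable window from a
    // content rectangle and add a view beneath the content view.
    NSWindow *MakeExampleWindow(void) {
        NSRect contentRect = NSMakeRect(200.0, 200.0, 480.0, 320.0);
        NSUInteger styleMask = NSTitledWindowMask | NSClosableWindowMask |
                               NSMiniaturizableWindowMask | NSResizableWindowMask;

        NSWindow *window = [[NSWindow alloc]
            initWithContentRect:contentRect
                      styleMask:styleMask
                        backing:NSBackingStoreBuffered
                          defer:NO];
        [window setTitle:@"Example Window"];

        // Views are added as subviews of the content view.
        NSButton *button =
            [[[NSButton alloc] initWithFrame:NSMakeRect(20.0, 20.0, 100.0, 32.0)] autorelease];
        [button setTitle:@"Click Me"];
        [[window contentView] addSubview:button];

        return window;   // the caller owns the window
    }
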
How a Window Is Displayed

Displaying an NSWindow object begins with the drawing performed by its view objects, which accumulates in the window’s display buffer or appears immediately on the screen. Windows, like NSView objects, can be displayed unconditionally or merely marked as needing display, using the display and setViewsNeedDisplay: methods, respectively. A displayIfNeeded message causes the window’s views to display only if they’ve been marked as needing display. Normally, any time a view is marked as needing display, the window makes note of this fact and automatically displays itself shortly thereafter. This automatic display is typically performed on each pass through the event loop, but can be turned off using the setAutodisplay: method. If you turn off autodisplay for a window, you’re then responsible for displaying it whenever necessary.

A window’s views can be drawn concurrently. You can use the methods allowsConcurrentViewDrawing and setAllowsConcurrentViewDrawing: to determine and set, respectively, whether or not a window draws its views concurrently. By default, a window’s views are drawn concurrently.

On each pass through the event loop, the application object invokes its updateWindows method, which sends an update message to each window. Subclasses of NSWindow can override this method to examine the state of the application and change their own state or appearance accordingly—enabling or disabling menus, buttons, and other controls based on the object that’s selected, for example.

In addition to displaying itself on the screen, a window can print itself in its entirety, just as a view can. The print: method runs the application’s Print panel and causes the window’s frame view to print itself. dataWithEPSInsideRect: behaves similarly. For additional information see Printing Programming Guide for OS X.

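A sketch of the display-control methods described above; the window and view variables are assumed to already exist:

    #import <Cocoa/Cocoa.h>

    // Sketch of the display machinery; `window` and `someView` are assumed.
    void RedrawExample(NSWindow *window, NSView *someView) {
        [someView setNeedsDisplay:YES];   // mark just this view as dirty

        // With autodisplay on (the default), the window redraws dirty views
        // on the next pass through the event loop. To redraw sooner:
        [window displayIfNeeded];         // draws only views marked as dirty
        // ...or, to draw every view unconditionally:
        [window display];

        // Take over responsibility for displaying the window yourself:
        [window setAutodisplay:NO];
    }
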
How Modal Windows Work

You can make a whole window or panel run in application-modal fashion, using the application’s normal event loop machinery but restricting input to the modal window or panel. Modal operation is useful for windows and panels that require the user’s attention before an action can proceed. Examples include error messages and warnings, as well as operations that require input, such as open dialogs, or dialogs that apply to multiple windows.

There are two mechanisms for operating an application-modal window or panel. The first, and simpler, is to invoke the runModalForWindow: method of NSApplication, which monopolizes events for the specified window until one of stopModal, abortModal, or stopModalWithCode: is invoked, typically by a button’s action method. The stopModal method ends the modal status of the window or panel from within the event loop. It doesn’t work if invoked from a method invoked by a timer or by a distributed object, because those mechanisms operate outside of the event loop. To terminate the modal loop in these situations, you can use abortModal. The stopModal method is typically invoked when the user clicks the OK button (or equivalent), abortModal when the user clicks the Cancel button (or presses the Escape key). These two methods are equivalent to stopModalWithCode: with the appropriate argument.

The second mechanism for operating a modal window or panel, called a modal session, allows the application to perform a long operation while it still sends events to the window or panel. Modal sessions are particularly useful for panels that allow the user to cancel or modify an operation. To begin a modal session, invoke beginModalSessionForWindow: on the application, which sets the window up for the session and returns an identifier used for other session-controlling methods. At this point, the application can run in a loop that performs the operation, invoking runModalSession: on the application object on each pass so that pending events can be dispatched to the modal window. This method returns a code indicating whether the operation should continue, stop, or abort, which is typically established by the methods described above for runModalForWindow:. After the loop concludes, you can remove the window from the screen and invoke endModalSession: on the application to restore the normal event loop.

Note: You can write a modal event loop for a view object so that the object has access to all events pertaining to a particular task, such as tracking the mouse in the view. For an example, see “Responding to User Events and Actions” in “Creating a Custom View”.

The normal behavior of a modal window or session is to exclude all other windows and panels from receiving events. For windows and panels that serve as general auxiliary controls, such as menus and the Font panel, this behavior is overly restrictive. The user must be able to use menu key equivalents (such as those for Cut and for Paste) and change the font of text in the modal window, and this requires that non-modal panels be able to receive events. To support this behavior, an NSWindow subclass overrides the worksWhenModal method to return YES. This allows the window to receive mouse and keyboard events even when a modal window is present. If a subclass needs to work when a modal window is present, it should generally be a subclass of NSPanel, not of NSWindow.

Modal windows and modal sessions provide different levels of control to the application and the user. Modal windows restrict all action to the window itself and any methods invoked from the window. Modal sessions allow the application to continue an operation while accepting input only through the modal session window. Beyond this, you can use distributed objects to perform background operations in a separate thread, while allowing the user to perform other actions with any part of the application. The background thread can communicate with the main thread, allowing the application to display the status of the operation in a non-modal panel, perhaps including controls to stop or affect the operation as it occurs. Note that because AppKit isn’t thread-safe, the background thread should communicate with a designated object in the main thread that in turn interacts with AppKit.

Before OS X version 10.6, if a modal window was open, application termination would be prevented if the user attempted to terminate that window’s application. Beginning in OS X version 10.6, you can call setPreventsApplicationTerminationWhenModal: with a value of NO, and the window will not prevent application termination when modal. The current value of this property may be accessed by calling preventsApplicationTerminationWhenModal. The default value is NO.

How Panels Work

A panel is a special kind of window, typically serving an auxiliary function in an application. The NSPanel subclass of NSWindow adds a few special behaviors to windows in support of the role panels play:

● By default, panels are not released when they’re closed, because they’re usually lightweight and often reused.
● Onscreen panels, except for alert dialogs, are removed from the screen when the application isn’t active and are restored when the application again becomes active. This reduces screen clutter. Specifically, the NSWindow implementation of the hidesOnDeactivate method returns NO, but the NSPanel implementation of the same method returns YES.
● Panels can become the key window, but they cannot become the main window.
● If a panel is the key window and has a close button, it closes itself when the user presses the Escape key.

In addition to these automatic behaviors, the NSPanel class allows you to configure certain other behaviors common to some kinds of panels, as shown in the sketch that follows this list:

● You can prevent a panel from becoming the key window unless the user clicks in a view that responds to typing. This prevents the key window from shifting to the panel unnecessarily. The setBecomesKeyOnlyIfNeeded: method controls this behavior.
● Palettes and similar panels can be made to float above standard windows and other panels. This prevents them from being covered and keeps them readily available to the user. The setFloatingPanel: method controls this behavior.
● A panel can be made to receive mouse and keyboard events even when another window or panel is being run modally or in a modal session. This permits actions in the panel to affect the modal window or panel. The setWorksWhenModal: method controls this behavior. See “How Modal Windows Work” for more information on modal windows and panels.

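A minimal sketch of configuring these optional behaviors; `palette` is an assumed NSPanel instance (for example, a tool palette loaded from a nib):

    #import <Cocoa/Cocoa.h>

    // Configure the optional NSPanel behaviors described above.
    void ConfigurePalette(NSPanel *palette) {
        [palette setFloatingPanel:YES];           // float above standard windows
        [palette setBecomesKeyOnlyIfNeeded:YES];  // take key status only for typing
        [palette setWorksWhenModal:YES];          // receive events during modal loops
    }
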
How Window Controllers Work

A controller object (in this case, an instance of the NSWindowController class) manages a window; this object is usually stored in a nib file. This management entails the following:

● Loading and displaying the window
● Closing the window when appropriate
● Customizing the window’s title
● Storing the window’s frame (size and location) in the defaults database
● Cascading the window in relation to other document windows of the application

A window controller can manage a window by itself or as a participant in AppKit’s document-based architecture, which also includes the NSDocument and NSDocumentController classes. In this architecture, a window controller is created and managed by a document (an instance of an NSDocument subclass) and, in turn, keeps a reference to the document. For a discussion of this architecture, see Document-Based App Programming Guide for Mac.

The relationship between a window controller and a nib file is important. Although a window controller can manage a programmatically created window, it usually manages a window in a nib file. The nib file can contain other top-level objects, including other windows, but the window controller’s responsibility is this primary window. The window controller is usually the owner of the nib file, even when it is part of a document-based application.

For simple documents—that is, documents with only one nib file containing a window—you need do little directly with NSWindowController objects. AppKit creates one for you. However, if the default window controller is not sufficient, you can create a custom subclass of NSWindowController. For documents with multiple windows or panels, your document must create separate instances of NSWindowController (or of custom subclasses of NSWindowController), one for each window or panel. An example is a CAD application that has different windows for side, top, and front views of drawn objects. What you do in your NSDocument subclass determines whether the default NSWindowController object or separately created and configured NSWindowController objects are used.

Window Closing Behavior

When a window is closed and it is part of a document-based application, the document removes the window’s window controller from its list of window controllers. This results in the system deallocating the window controller and the window, and possibly the NSDocument object itself. When a window controller is not part of a document-based application, closing the window does not by default result in the deallocation of the window or window controller. This is the desired behavior for a window controller that manages something like an inspector; you shouldn’t have to load the nib file again and re-create the objects the next time the user requests the inspector. If you want the closing of a window to make both window and window controller go away when it isn’t part of a document, your subclass of NSWindowController can observe the NSWindowWillCloseNotification notification or, as the window delegate, implement the windowWillClose: method.

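Returning to the multiple-window case described above, a document might create its controllers along these lines. The class and nib names here are hypothetical:

    #import <Cocoa/Cocoa.h>

    // Hypothetical document that manages separate top- and side-view windows,
    // one NSWindowController per nib.
    @interface DrawingDocument : NSDocument
    @end

    @implementation DrawingDocument

    - (void)makeWindowControllers {
        NSWindowController *topView =
            [[[NSWindowController alloc] initWithWindowNibName:@"TopView"] autorelease];
        NSWindowController *sideView =
            [[[NSWindowController alloc] initWithWindowNibName:@"SideView"] autorelease];

        // The document retains each controller and adds it to its list
        // of window controllers.
        [self addWindowController:topView];
        [self addWindowController:sideView];
    }

    @end
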
Opening and Closing Windows

This article describes how to open and close a window. Opening a window—that is, making a window visible—is normally accomplished by placing the window into the application’s window list, by invoking one of the NSWindow ordering methods such as makeKeyAndOrderFront: or orderFront:. Also, with certain attributes set in Interface Builder, the window is shown automatically when the nib file is loaded.

Closing a window involves explicit use of either the close method, which simply removes the window from the screen, or performClose:, which highlights the close button as though the user clicked it. Closing a window involves at least removing it from the screen but may include disposing of it altogether. The setReleasedWhenClosed: method specifies whether a window releases itself when it receives a close message. A window’s delegate is also notified when it’s about to close, as described in “Using Window Notifications and Delegate Methods”.

You can also hide a window without closing it. The orderOut: method removes a window from the screen, and setHidesOnDeactivate: sets a window to be removed from the screen automatically when its application isn’t active. The isVisible method returns whether a window is on or off the screen.

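In code, the common cases look roughly like this sketch; `window` is an assumed NSWindow that has already been created or loaded from a nib:

    #import <Cocoa/Cocoa.h>

    // Sketch of the common show/hide/close calls.
    void ShowAndHideExample(NSWindow *window) {
        [window makeKeyAndOrderFront:nil];   // bring onscreen and make key
        [window orderOut:nil];               // hide without closing
        [window setHidesOnDeactivate:YES];   // hide when the app deactivates
        [window setReleasedWhenClosed:NO];   // keep the object alive after a close
        [window performClose:nil];           // close as though the user clicked close
    }
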
Window Layering and Types of Windows

Each window is placed on the screen by a particular application, and each application typically owns a variety of windows. Windows have numerous characteristics. They can be located onscreen or offscreen. Onscreen windows are placed on the screen in levels managed by the window server. Windows onscreen are ordered from front to back. Like sheets of paper loosely stacked together, windows in front can overlap, or even completely cover, those behind them. Each window has a unique position in the order. When two windows are placed side by side, one is still technically in front of the other.

If any window could be in front of any other window, then small but important windows—like menus and tool palettes—might get lost behind larger ones. Windows that require user action, like attention panels and pop-up lists, might disappear behind another window and go unnoticed. To prevent this, all the windows onscreen are organized into levels. When two windows belong to the same level, either one can be in front. When two windows belong to different levels, however, the one in the higher level is always above the other.

Onscreen windows can also carry a status: main or key. Offscreen windows are hidden or minimized in the Dock, and do not carry either status. Onscreen windows that are neither main nor key are inactive.

Window Layering

Each application and document window exists in its own layer, so documents from different applications can be interleaved. Clicking a window to bring it to the front doesn’t disturb the layering order of any other window. A window’s depth in the layers is determined by when the window was last accessed. When a user clicks an inactive document or chooses it from the Window menu, only that document, and any open utility windows, should be brought to the front.

Users can bring all windows of an application forward by clicking its icon in the Dock or by choosing Bring All to Front in the application’s Window menu. These actions should bring forward all of the application’s open windows, maintaining their onscreen location, size, and layering order within the application. For more information, see “UI Element Guidelines: Menus” in OS X Human Interface Guidelines.

Utility windows are always in the same layer: the top layer. They are visible only when their application is active.

Key and Main Windows

Windows have different looks based on how the user is interacting with them. The foremost document or application window that is the focus of the user’s attention is referred to as the main window. Each application has only one main window at a given time. This main window often has key status as well.

The main window is the principal focus of user actions for an application. Often, user actions in a modal key window (typically a panel such as the Font window or an Info window) have a direct effect on the main window. Main and key windows are both active windows. Active windows are visually distinct from inactive windows in that their controls have color, while the controls in inactive windows do not. Inactive windows are windows the user has open but that are not in the foreground. Main and key windows are always in the foreground and their controls always have color. If the main and key window are different windows, they are distinguished from one another by the look of their title bars. Note the visual distinctions between main, key, and inactive windows in Figure 1.

[Figure 1: Main, key, and inactive windows; the panels are labeled “Inactive window,” “Main window,” and “Key window”]

A good example of the difference between key and main windows can be seen in most well-behaved Mac apps. Choosing “Save As...” in a text document, for example, displays a panel with a field in which to type the document’s name and a pull-down menu of locations to save it. The panel is the key window: it accepts your keyboard input (the file name) but directly affects the main window under it (by saving it to the location you specified). Once you save the document, the save panel disappears, and the main window becomes key again and accepts keyboard input once more.

The Key Window

The key window responds to user input, whether from the keyboard, mouse, or alternative input devices, for an application and is the primary recipient of messages from menus and panels. Usually, a window is made key when the user clicks it. Each application can have only one key window at a given time.

Users expect to see their actions on the keyboard and mouse take effect not only in a particular application, but also in a particular window of that application. Each user action is associated with a window by the window server and AppKit. Before acting, the user needs to know which window will be affected; there should be no surprises. Since the mouse controls the pointer, it’s quite easy for the user to determine which window a mouse action is associated with: it’s whatever window the pointer is over. But the keyboard doesn’t have a pointer, so there’s no natural way to determine where typed characters will appear. To mark the key window for users, AppKit highlights its title bar. You can think of the highlighting as a kind of pointer for the keyboard. It shifts from window to window as the key window changes. Key-window status also moves from application to application as the active application changes. Only one window on the screen is marked at a time, and it is in the active application. There’s just one key window on the desktop; even a system that has two screens, but only one keyboard, has at most one key window.

Note: A window doesn’t have to become the key window to receive, and act on, keyboard shortcuts. It does, however, have to be a window in the active application.

Since the key window belongs to the active application, its highlighted title bar has the secondary effect of helping to show which application is currently active. The key window is the most prominently marked window in the active application, making it “key” in a second sense: it’s the main focus of the user’s attention on the screen.

The Main Window

The main window is the standard window where the user is currently working. The main window is not always the key window. There are times when a window other than the main window takes the focus of the input device, while the main window still remains the focus of the user’s attention and of user actions carried out in panels and menus. For example, when a person is using an inspector, a Find dialog, or the Fonts or Colors windows, the document is the main window and the other window is the key window. The Find panel requires the user to supply information by typing it. Since the panel is the destination of the user’s keystrokes, it’s marked as the key window. But the panel is just an instrument through which users can do work in another window—the main window. In a document-based application, the main window is the window for the current document.

Whenever a standard window becomes the key window, it also becomes the main window. When key-window status shifts from a standard window to a panel, main-window status remains with the standard window. So that users can pick out the main window when it’s not the key window, the Application Kit highlights its title bar and colors the window buttons. If the main window is also the key window, it has only the highlighting of the key window.

A menu command might affect either the key window or the main window, depending on the command. For example, the Paste command can be used to enter text in a Find panel, but the Save command saves the document displayed in the main window, and the Bold command turns the current selection in the main window bold. For this reason, user actions in a panel or menu are associated with both the key window and the main window:

● An action is first associated with the key window.
● If the key window is a panel and it can’t handle the action, the action is next associated with the main window.

Note that this order of precedence is reflected in the way windows are highlighted: the key window is always marked, but the main window is marked only when it’s not the key window. The main window is always in the same application as the key window, the active application.

Changing a Window’s Status

Windows that are already onscreen automatically change their status as the key or main window based on the user’s actions with the mouse and on how clicked views handle those mouse events. You can also set the key and main windows programmatically by sending the relevant windows a makeKeyWindow or makeMainWindow message. Setting the key and main windows programmatically is particularly useful when creating a new window. Because making a window key is often combined with ordering the window to the front of the screen, the NSWindow class defines a convenience method, makeKeyAndOrderFront:, that performs both operations.

Not all windows are suitable as key or main windows. For example, a window that merely displays information and contains no objects that need to respond to events or action messages can completely forgo ever becoming the key window. Similarly, a window that acts as a floating palette of items that are only dragged out by mouse actions never needs to be the key window. Such a window can be defined as a subclass of NSWindow that overrides the methods canBecomeKeyWindow and canBecomeMainWindow to return NO instead of the default of YES. Defining a window this way prevents it from ever becoming the key or main window. Although the NSWindow class defines these methods, only subclasses of NSPanel typically refuse to accept key or main window status.

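Such a subclass is only a few lines; the class name in this sketch is hypothetical:

    #import <Cocoa/Cocoa.h>

    // A hypothetical display-only window that refuses key and main status,
    // as described above. (In practice this would usually subclass NSPanel.)
    @interface InfoDisplayWindow : NSWindow
    @end

    @implementation InfoDisplayWindow

    - (BOOL)canBecomeKeyWindow  { return NO; }
    - (BOOL)canBecomeMainWindow { return NO; }

    @end
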
Window Layers and Levels

Windows can be placed on the screen in three dimensions. Besides horizontal and vertical placement, windows are layered back-to-front within distinct levels. Each application and document window exists in its own layer, so documents from different applications can be interleaved. Clicking a window to bring it to the front doesn't disturb the layering order of any other window. A window's depth in the layers is determined by when the window was last accessed. When a user clicks an inactive document or chooses it from the Window menu, only that document and any open utility windows should be brought to the front.

Window Levels

Windows are ordered within several distinct levels. Window levels group windows of similar type and purpose so that the more "important" ones (such as alert panels) appear in front of those of lesser importance. A window's level serves as a high-order bit to determine its position with regard to other windows. Windows can be reordered with respect to each other within a given level; a given window, however, cannot be layered above other windows in a higher level.

There are a number of predefined window levels, specified by constants defined by the NSWindow class. The levels you typically use are: NSNormalWindowLevel, which specifies the default level; NSFloatingWindowLevel, which specifies the level for floating palettes; and NSScreenSaverWindowLevel, which specifies the level for a screen saver window. You might also use NSStatusWindowLevel for a status window, or NSModalPanelWindowLevel for a modal panel. If you need to implement your own popup menus, you use NSPopUpMenuWindowLevel. The remaining two levels, NSTornOffMenuWindowLevel and NSMainMenuWindowLevel, are reserved for system use.

Setting Ordering and Level Programmatically

You can use the orderWindow:relativeTo: method to order a window within its level in front of or in back of another window. You more typically use convenience methods to specify ordering, such as makeKeyAndOrderFront: (which also affects status), orderFront:, and orderBack:, as well as orderOut:, which removes a window from the screen. You use the isVisible method to determine whether a window is on or off the screen. You can also set a window to be removed from the screen automatically when its application isn't active using setHidesOnDeactivate:.
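As a sketch of these ordering calls (myWindow stands in for any NSWindow instance):

- (void)toggleUtilityWindow:(id)sender {
    if ([myWindow isVisible]) {
        // Remove the window from the screen without releasing it.
        [myWindow orderOut:sender];
    } else {
        // Bring it to the front of its level and make it key.
        [myWindow makeKeyAndOrderFront:sender];
    }
}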
Typically you should have no need to programmatically set the level of a window, since Cocoa automatically determines the appropriate level for a window based on its characteristics. A utility panel, for example, is automatically assigned to NSFloatingWindowLevel.

You can nevertheless set a window's level using the setLevel: method; for example, you can set the level of a standard window to NSFloatingWindowLevel if you want a utility window that looks like a standard window (for example, to act as an inspector). This has two disadvantages, however: first, it may violate the human interface guidelines; second, if you assign a window to a floating level, you must ensure that you also set it to hide on deactivation of your application or reset its level when your application is hidden. Cocoa automatically takes care of the latter aspect for you if you use default window configurations.

There is currently no level specified to allow you to place a window above a screen saver window. If you need to do this (for example, to show an alert while a screen saver is running), you can set the window's level to be greater than that of the screen saver, as shown in the following example.

[aWindow setLevel:NSScreenSaverWindowLevel + 1];

Other than this specific case, you are discouraged from setting windows in custom levels, since this may lead to unexpected behavior.

Setting Window Collection Behavior

There are a number of different options that can be set regarding the window collection behavior of a window. They include a window's behavior when using Spaces, Exposé, and the "Cycle Through Windows" command. These options can be set using the setCollectionBehavior: method of NSWindow, by passing in at most one constant from each group, combined using the bitwise OR operator. The current options may be accessed via the collectionBehavior method.

Spaces Collection Behavior

There are three options that can be set for a window's Spaces collection behavior. The default is NSWindowCollectionBehaviorDefault, which allows the window to be associated with one space at a time. The second option is NSWindowCollectionBehaviorCanJoinAllSpaces. This option causes the window to appear on all spaces, like the menu bar. The third option is NSWindowCollectionBehaviorMoveToActiveSpace. This causes the window to switch to the active space when it is made active. Only one of these options may be used at a time.

If a window is currently associated with the active space, isOnActiveSpace returns YES; otherwise, it returns NO. Additionally, you can get an array of the window numbers of windows on one or all spaces using the method windowNumbersWithOptions:, specifying your desired options. The possible options are specified by NSWindowNumberListOptions.

Exposé Collection Behavior

There are also three options that can be set for a window's Exposé collection behavior. If a window has a window level of NSNormalWindowLevel, the default behavior is NSWindowCollectionBehaviorManaged, which causes the window to participate in both Spaces and Exposé. NSWindowCollectionBehaviorTransient causes the window to float in Spaces and be hidden in Exposé. This is the default behavior if the window level is not NSNormalWindowLevel. The final option is NSWindowCollectionBehaviorStationary, which causes the window to be unaffected by Exposé; that is, it stays visible and does not move, like the desktop window. Only one of these options may be used at a time.

Window Cycling Behavior

There are two options: NSWindowCollectionBehaviorParticipatesInCycle and NSWindowCollectionBehaviorIgnoresCycle. These options cause the window to participate in the window cycle for the "Cycle Through Windows" menu option or not participate in it, respectively.
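For example, a sketch of combining one option from two of these groups (panel is a placeholder for an NSWindow instance):

// Appear on every space, and stay out of the "Cycle Through Windows" order.
[panel setCollectionBehavior:
    (NSWindowCollectionBehaviorCanJoinAllSpaces |
     NSWindowCollectionBehaviorIgnoresCycle)];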
Sizing and Placing Windows

This article describes how to control a window's size and position, including how to set a window's minimum and maximum size, how to constrain a window to the screen, how to cascade windows so their title bars remain visible, how to zoom a window as though the user pressed the zoom button, and how to center a window on the screen.

Setting a Window's Size and Location

The center method places a window in the most prominent location on the screen, one suitable for important messages and alert dialogs. You can resize or reposition a window using setFrame:display: or setFrame:display:animate:; the former is equivalent to the latter with the animate flag NO. You might use these methods in particular to expand or contract a window to show or hide a subview (such as a control that may be exposed by clicking a disclosure triangle). If the animate argument in setFrame:display:animate: is YES, the method performs a smooth resize of the window, where the total time for the resize can be obtained by calling animationResizeTime:.

The user can resize windows by clicking and dragging the bottom right corner of the window. While the user is resizing the window, inLiveResize returns YES; otherwise, it returns NO. The user can generally reposition windows by dragging only the title bar. If you want users to be able to drag your window by clicking elsewhere, you should override mouseDownCanMoveWindow so that it returns YES in any views that you want to be draggable window regions. The methods isMovable and setMovable: determine whether the user can move the window by clicking in its title bar or background.

To keep the window's top-left corner fixed when resizing, you must typically also reposition the origin, as illustrated in the following example.

- (IBAction)showAdditionalControls:sender {
    NSRect frame = [myWindow frame];
    if (frame.size.width <= MIN_WIDTH_WITH_ADDITIONS)
        frame.size.width = MIN_WIDTH_WITH_ADDITIONS;
    frame.size.height += ADDITIONS_HEIGHT;
    frame.origin.y -= ADDITIONS_HEIGHT;
    [myWindow setFrame:frame display:YES animate:YES];
    // implementation continues...

Note that the window's delegate does not receive windowWillResize:toSize: messages when the window is resized in this way. It is your responsibility to ensure that the window's new size is acceptable. The window's delegate does receive windowDidResize: messages. You can implement windowDidResize: to add or remove subviews at suitable junctures. There are no additional flags to denote that the window is performing an animated resize operation (as distinct from a user-initiated resize). It is therefore up to you to capture relevant state information so that you can update the window contents appropriately in windowDidResize:.

Window Cascading

If you use the Cocoa document architecture, you can use the setShouldCascadeWindows: method of NSWindowController to set whether the window, when it is displayed, should cascade in relation to other document windows (that is, have a slightly offset location so that the title bars of previously displayed windows are still visible). The default is YES, so typically you have no additional work to perform. If you are not using the document architecture, you can use the cascadeTopLeftFromPoint: method of NSWindow to cascade windows yourself, as sketched below. The method returns a point shifted from the top-left corner of the window that can be passed to a subsequent invocation of cascadeTopLeftFromPoint: to position the next window so that the title bars of both windows are fully visible.
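A minimal sketch of cascading each new window from the previous one (newWindow is a placeholder; the static variable keeps the running cascade point between calls):

// Passing NSZeroPoint leaves the window where it is and simply returns
// its shifted top-left point; subsequent calls offset each new window
// from the returned point.
static NSPoint cascadePoint = {0.0, 0.0};
cascadePoint = [newWindow cascadeTopLeftFromPoint:cascadePoint];
[newWindow makeKeyAndOrderFront:nil];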
Window Zooming

You use the zoom: method to toggle the size and location of a window between its standard state, as determined by the application, and its user state: a new size and location the user may have set by moving or resizing the window.

Constraining a Window's Size and Location

You can use setContentMinSize: and setContentMaxSize: to limit the user's ability to resize the window; note that you can still set it to any size programmatically. Similarly, you can use setContentAspectRatio: to keep a window's width and height at the same proportions as the user resizes it, and setContentResizeIncrements: to make the window resize in discrete amounts larger than a single pixel. (Aspect ratio and resize increments are mutually exclusive attributes.) In general, you should use the setContent... methods instead of those that affect the window's frame (setAspectRatio:, setMaxSize:, and so on). These are preferred because they avoid confusion for windows with toolbars, and are typically a better model since you control the content of the window but not the frame.

You can use the constrainFrameRect:toScreen: method to adjust a proposed frame rectangle so that it lies on the screen in such a way that the user can move and resize the window. However, you should make sure your window fits onscreen before display. Note that any NSWindow with a title bar automatically constrains itself to the screen. The cascadeTopLeftFromPoint: method shifts the top-left point by an amount that allows one window to be placed relative to another so that both their title bars are visible.

Additionally, when a window is about to be resized, the window's delegate is sent a windowWillResize:toSize: message. You can implement that method in your delegate to easily control your window's size.

Saving a Window's Position into the User's Defaults

A window can store its placement in the user defaults system, so that it appears in the same location the next time the user starts the application. The saveFrameUsingName: method stores the frame rectangle, and setFrameUsingName: sets it from the value in user defaults. You can also use the setFrameAutosaveName: method to have a window save the frame rectangle any time it changes. However, for the correct frame to be saved, you must ensure that the window controller for the window in question doesn't cascade the windows under its charge. You accomplish this task by sending setShouldCascadeWindows:NO to the controller, as shown in Listing 1.

Listing 1 Saving a window's frame automatically

NSWindow *window = <#The window in question#>;
// Tell the controller not to cascade its windows.
[[window windowController] setShouldCascadeWindows:NO];
// Specify the autosave name for the window.
[window setFrameAutosaveName:[window representedFilename]];

To expunge a frame rectangle from the defaults system, use the class method removeFrameUsingName:.
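A sketch of restoring a saved frame by name when a window is first shown (the autosave name string is illustrative):

// setFrameUsingName: returns NO if no frame was previously saved.
if (![window setFrameUsingName:@"MainDocumentWindow"]) {
    [window center];  // fall back to a default position
}
// Keep the frame saved automatically from now on.
[window setFrameAutosaveName:@"MainDocumentWindow"];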
Minimizing Windows

When a user minimizes a window, it's removed from the screen and replaced with a smaller counterpart in the Dock. The miniaturize: and deminiaturize: methods reduce and reconstitute a window, and performMiniaturize: simulates the user clicking the window's minimize button. You can also set the image and title displayed in a freestanding mini-window by sending setMiniwindowImage: and setMiniwindowTitle: messages to the NSWindow object.

Using the Window Menu

Most Cocoa applications include the Window menu, which displays the titles of various of the application's windows. When you change a window's title, this change is automatically reflected in the Window menu. This menu automatically lists windows that have a title bar, are resizable, and can become the main window (as described in "Window Layering and Types of Windows" (page 18)). Typically you can rely on the automatic updating provided by Cocoa. In rare circumstances, however, you might want to modify the default behavior. You can exclude a window that would otherwise be listed in the Window menu by sending it a setExcludedFromWindowsMenu:YES message. Since they cannot become main, NSPanel objects are excluded from the Window menu. Instances of subclasses of NSPanel can be included in the menu by returning NO from their isExcludedFromWindowsMenu method and YES from their canBecomeMainWindow method. If you change a window's configuration such that it should be added to or removed from the Window menu, you can update the Window menu by sending the shared application instance addWindowsItem:title:filename: or removeWindowsItem:.

Setting a Window's Appearance

You usually configure most aspects of a window's appearance in Interface Builder. Sometimes, however, you may need to create a window programmatically, or alter its appearance after it has been created.

Setting a Window's Style

The peripheral elements that a window displays define its style. Though you can't access and manipulate them directly, you can determine at initialization whether a window has them by providing a style mask to the initializer (a sketch appears at the end of this section). There are four possible style elements, specifiable by combining their mask values using the C bitwise OR operator:

Element                        Mask Value
A title bar                    NSTitledWindowMask
A close button                 NSClosableWindowMask
A minimize button              NSMiniaturizableWindowMask
A resize bar, border, or box   NSResizableWindowMask

You can also specify NSBorderlessWindowMask, in which case none of these style elements is used. Typically, you set a window's appearance once, when it is first created. Sometimes, however, you want to enable or disable a button in the title bar to reflect changed context. To do this, you first retrieve the button from the window using the standardWindowButton: method of NSWindow and then set its enabled state, as in the following example.

NSButton *closeButton = [window standardWindowButton:NSWindowCloseButton];
[closeButton setEnabled:NO];

The constants required to access standard title bar widgets are defined in the API reference for NSWindow.
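As a sketch, creating a titled, closable, miniaturizable, resizable window in code (the frame values are arbitrary):

NSUInteger styleMask = NSTitledWindowMask | NSClosableWindowMask |
                       NSMiniaturizableWindowMask | NSResizableWindowMask;
NSWindow *window = [[NSWindow alloc]
    initWithContentRect:NSMakeRect(200.0, 200.0, 400.0, 300.0)
              styleMask:styleMask
                backing:NSBackingStoreBuffered  // buffered backing store
                  defer:YES];                   // create the window device lazily
[window setTitle:@"Untitled"];
[window makeKeyAndOrderFront:nil];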
Setting a Window's Color and Transparency

You can set a window's background color and transparency using the methods setBackgroundColor: and setAlphaValue:, respectively. You can set a window's background color to a non-opaque color. This does not affect the window's title bar; it only makes the background itself transparent if the window is not opaque, as illustrated in the following example.

[myWindow setOpaque:NO]; // YES by default
NSColor *semiTransparentBlue =
    [NSColor colorWithDeviceRed:0.0 green:0.0 blue:1.0 alpha:0.5];
[myWindow setBackgroundColor:semiTransparentBlue];

Views placed on a non-opaque window with a transparent background color retain their own opacity. If you want to make the entire window (including the title bar and views placed on the window) transparent, you should use setAlphaValue:.

Setting a Window's Color Space

You can set a window's color space using setColorSpace: and retrieve the window's current color space using colorSpace. NSColorSpace objects for use with setColorSpace: may be obtained using the class methods documented in NSColorSpace Class Reference.

Setting a Window's Content Border Thickness

Beginning in OS X version 10.5, windows automatically have a textured gradient applied to their backgrounds. The area on which the gradient is drawn is determined automatically. At times, however, this may not work correctly. If your window does not look correct with automatic gradient calculation, disable it by calling setAutorecalculatesContentBorderThickness:forEdge: with a value of NO and the edge for which to disable automatic calculation. The value of this property may be accessed using the method autorecalculatesContentBorderThicknessForEdge:. You can also set and access the content border thickness manually using setContentBorderThickness:forEdge: and contentBorderThicknessForEdge:, respectively.

Setting a Window's Title and Represented File

A titled window can display an arbitrary title or one derived from a filename. The setTitle: method puts an arbitrary string in the title bar. The setTitleWithRepresentedFilename: method formats a filename in the title bar in a readable format and associates the window with that file. You can set the associated file without changing the title using setRepresentedFilename:. You can use the association between the window and the file in any way you see fit. One convenience offered by the NSWindow class is marking the file as having been changed, so that the user is prompted to save it on closing the window. The method for marking the document as having been changed is setDocumentEdited:. When the window closes, its delegate can use isDocumentEdited to check whether the file has been changed and the document therefore needs to be saved; a sketch follows this section.

Additionally, starting in OS X version 10.5, you can set a window's represented document by URL using the setRepresentedURL: method. You can get the URL of the document currently represented by a window using the representedURL method. The window automatically uses the known icon for the file type of the specified file, if one exists. To customize the document icon, you can use the following code segment:

[[window standardWindowButton:NSWindowDocumentIconButton] setImage:customImage];

By default, a Command-click or Control-click on the rectangle containing a window's document icon button and title shows a path popup. To customize this behavior, you can implement window:shouldPopUpDocumentPathMenu: in your window's delegate. You can return NO from this method to stop the window from showing the path popup. You can also customize the document icon's default drag behavior by implementing window:shouldDragDocumentWithEvent:from:withPasteboard: in the window's delegate. You can return NO to prohibit dragging the document icon.
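A minimal sketch of the edited-document flow (the save prompt itself is elided):

// After the user changes the document:
[window setDocumentEdited:YES];

// In the window's delegate:
- (BOOL)windowShouldClose:(id)sender {
    if ([sender isDocumentEdited]) {
        // Ask the user whether to save changes here;
        // return NO to cancel the close.
    }
    return YES;
}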
Setting Attributes for the Window's Image

Nearly every window has a corresponding window device in the window server. The window device holds the window's drawn image, and has two attributes determined by the window server and many attributes that the window controls. The window server assigns the window device a unique identifier (within an application). This is the window number, and it can be accessed using the windowNumber method. Each window also has a graphics state that most of its views share for drawing (views can create their own as well). The gState method returns its identifier. The attributes under direct window control are the following:

● Backing store type, described in "Specifying How To Store the Window's Image" (page 35)
● Backing location, described in "Specifying Where To Store the Window's Image" (page 36)
● Window device creation, described in "Specifying When the Window's Image Is Created" (page 36)
● One shot, described in "Specifying Whether the Window's Image Persists When Offscreen" (page 37)
● Depth limit, described in "Specifying the Depth Limit for the Window's Image" (page 37)
● Dynamic depth limit, described in "Specifying Whether the Depth Limit Changes to the Screen's Capacity" (page 37)
● Content sharing, described in "Specifying Whether Window Content Can Be Read or Written by Another Process" (page 37)

Specifying How To Store the Window's Image

A window device's backing store type determines how the window's image is stored. It's set when the window is initialized and can be one of three types. A buffered window device renders all drawing into a display buffer and then flushes it to the screen. Always drawing to the buffer produces very smooth display, but can require significant amounts of memory. Buffered windows are best for displaying material that must be redrawn often, such as text. You must also use buffered windows if you want your windows to support transparency. A retained window device also uses a buffer, but draws directly to the screen where possible and to the buffer for any portions that are obscured. A nonretained window device has no buffer at all, and must redraw portions as they're exposed. Further, this redrawing is suspended when the window's display mechanism is preempted. For example, if the user drags a window across a nonretained window, the nonretained window is "erased" and isn't redrawn until the user releases the mouse. Both retained and nonretained windows are also subject to a flashing effect as individual drawing operations are performed, but their results get to the screen more quickly than those of buffered windows. You can change the backing store type between buffered and retained after initialization using the setBackingType: method.

Specifying Where To Store the Window's Image

The window server chooses whether to place the backing store for a buffered window in main memory or video memory, choosing the location that provides the best overall performance. You can query the window server to determine where your window's backing store is located using the preferredBackingLocation method. You may choose to set a preferred location for a window's backing store using the setPreferredBackingLocation: method. While the window server is not required to respect this preferred backing location, it will attempt to do so. You should not change the preferred backing location without testing how it affects the performance of your application.
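For example, a sketch of requesting a backing location (the window server may still ignore the request):

// Ask for this window's backing store to live in video memory.
[myWindow setPreferredBackingLocation:NSWindowBackingLocationVideoMemory];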
Specifying When the Window's Image Is Created

The defer argument to the initializer specifies whether the window creates its window device immediately or only when it's moved on screen. Deferring creation of the window device can offer some performance gain for windows that aren't displayed immediately, because it reduces the amount of work that needs to be performed up front. Deferring creation of the window device is particularly useful when creation of the window itself can't be deferred or when a window is needed for purposes other than displaying content. Submenus with key equivalents, for example, must exist for the key equivalents to work, but may never actually be displayed.

Specifying Whether the Window's Image Persists When Offscreen

Memory can also be saved by destroying the window device when the window is removed from the screen. The setOneShot: method controls this behavior. One-shot window devices exist only when their windows are onscreen.

Specifying the Depth Limit for the Window's Image

Like the display hardware, a window device's buffer has a depth, or a limit to the memory allotted to each pixel. Buffered and retained windows start out with the same depth as the main display or 16 bits, whichever is deeper. These settings stay in effect unless changed using the setDepthLimit: method, which takes as an argument a window depth limit created using the NSBestDepth function.

Specifying Whether the Depth Limit Changes to the Screen's Capacity

Keeping a window's depth at its richest preserves the displayed image, but may incur unnecessary memory overhead when the window buffer depth is deeper than the screen depth. You can use the setDynamicDepthLimit: method to tell a window to match the depth of the screen it's on. When it's moved to a new screen, a window with a dynamic depth limit adjusts its buffer to the new depth before redrawing. Making a window's depth limit dynamic overrides the limit set using setDepthLimit:, and removing the dynamic limit reverts the window to the default limit.

Specifying Whether Window Content Can Be Read or Written by Another Process

The contents of your window can be made available to other processes. By default, the contents of your window can be read but not written to by other processes. This allows system services to work with your window's contents and also allows other applications to capture a snapshot of your window's contents. You can override the default behavior using the setSharingType: method. Changing the sharing type to NSWindowSharingNone prevents other processes from capturing your window's image data. If you do this, however, your window will not be able to participate in a number of system services, so this setting should be used with caution. If you set your window's sharing type to NSWindowSharingReadWrite, other processes can both read and modify the window's content.
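A one-line sketch of opting out of content sharing (note the system-services tradeoff described above):

// Prevent other processes from reading this window's contents.
[myWindow setSharingType:NSWindowSharingNone];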
Handling Events in Windows

As described in NSResponder Class Reference, most events coming into an application make their way to a window in a sendEvent: message. A key event is directed at the key window, while a mouse event is directed at whatever window lies under the pointer. If an event affects the window directly (resizing or moving it, for example), the window performs the appropriate operation itself and sends messages to its delegate informing it of its intentions, thus allowing your application to intercede. The window sends other events up its responder chain from the appropriate starting point: the first responder for a key event, the view under the pointer for a mouse event. These events are then typically handled by some view object in the window. See Cocoa Event Handling Guide for more information on how to intercept and handle events.

Using Keyboard Interface Control in Windows

A window's first responder is often a view object selected by the user clicking it. For text fields and other view objects (mainly subclasses of NSControl), the user can select the first responder with the keyboard using the Tab and Shift keys. The NSView class defines the methods for setting up and examining the loop of objects that the user can select in this manner. A view that's the first responder is called the key view, and the views that can become the key view in a window are linked together in the window's key view loop. You normally set up the key view loop using Interface Builder, establishing connections between the nextKeyView outlets of views in the window and setting the window's initialFirstResponder outlet to the view that you want selected when the window is first placed onscreen. If you do not set this outlet, the window sets a key loop (not necessarily the same as the one you would have specified) and picks a default initial first responder for you.

In addition to the key view loop, a window can have a default button cell, which uses the Return (or Enter) key as its key equivalent. The setDefaultButtonCell: method establishes this button cell; you can also set it in Interface Builder by setting a button cell's key equivalent to '\r'. The default button cell draws itself as a focal element for keyboard interface control unless another button cell is focused on, in which case it temporarily draws itself as normal and disables its key equivalent. Another default key established by the NSWindow class is the Escape key, which immediately aborts a modal loop (described in "How Modal Windows Work" (page 12)). See NSResponder Class Reference for more information on keyboard interface control.

Using the Window's Field Editor

Each window has a text object that is shared for light editing tasks. This object, the window's field editor, is inserted in the view hierarchy when an object needs to edit some text and removed when the object is finished. The field editor is used by NSTextField objects and other controls, for example, to edit the text that they display. The fieldEditor:forObject: method returns a window's field editor, after asking the delegate for a substitute using windowWillReturnFieldEditor:toObject:. You can override the fieldEditor:forObject: method of NSWindow in subclasses, or provide a delegate to substitute a class of text object different from the NSTextView default, thereby customizing text editing in your application.
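A sketch of substituting a custom field editor from the window's delegate (MyFieldEditor is a hypothetical NSTextView subclass):

- (id)windowWillReturnFieldEditor:(NSWindow *)sender toObject:(id)client {
    if ([client isKindOfClass:[NSTextField class]]) {
        MyFieldEditor *editor = [[MyFieldEditor alloc] init];
        [editor setFieldEditor:YES];  // mark the text view as a field editor
        return editor;
    }
    return nil;  // nil means "use the default field editor"
}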
Using Window Notifications and Delegate Methods

The NSWindow class offers observers a rich set of notifications, which it broadcasts on such occurrences as gaining or losing key or main window status, minimizing, moving or resizing, becoming exposed, and closing. Each notification is matched to a delegate method, so a window's delegate is automatically registered for all notifications that it has methods for. The NSWindow class also offers its delegate a few other methods, such as windowShouldClose:, which requests approval to close; windowWillResize:toSize:, which allows the delegate to constrain the window's size; windowWillUseStandardFrame:defaultFrame:, which allows the delegate to set the window frame for zooming; and windowWillReturnFieldEditor:toObject:, which gives the delegate a chance to modify the field editor or substitute a different editor. See the individual notification and delegate method descriptions for more information.

Dragging Images to and from Windows

The NSWindow class defines some methods for image dragging, in case the user wants to drag an object into or out of a window. Although most dragging operations are initiated by and occur between view objects, the NSWindow class also defines an image-dragging method, dragImage:at:offset:event:pasteboard:source:slideBack:. A window can also serve as the destination for dragging operations, registering the types it accepts with registerForDraggedTypes: and unregisterDraggedTypes.

Updating the Cursor Image in a Window

You can change the cursor image when the cursor is within a specified area of a view in a window. To do this, use the NSTrackingArea class, along with the cursorUpdate: method of the NSResponder class. For specifics, read "Using Tracking-Area Objects" in Cocoa Event Handling Guide. For details on the NSTrackingArea class itself, refer to NSTrackingArea Class Reference.

Caching Window Images

To support transitory drawing by views, the NSWindow class defines methods that temporarily cache a portion of its raster image so that it can be restored later. This feature is useful for situations where highly dynamic drawing must be done over the otherwise static image of the window. For example, in a drawing program where the user drags lines and other shapes directly onto a canvas, it's more efficient to restore the window's cached image and draw anew over that than to have all the views send display instructions to the window server. For more information, see the method descriptions for cacheImageInRect:, restoreCachedImage, and discardCachedImage.

Document Revision History

This table describes the changes to Window Programming Guide.

2009-11-27: Revised the article "Updating the Cursor Image in a Window" (page 43), previously titled "Setting Pointer Rectangles for Windows."
2009-05-15: Updated for OS X v10.6. Added information on the use of backing locations to improve performance.
2009-02-04: Provided links to delegate methods.
2008-10-15: Clarified the behavior of the setFrameAutosaveName: method in conjunction with a window's window controller.
2006-10-03: Added window-controller requirement for the NSWindow setFrameAutosaveName: method to "Saving a Window's Position into the User's Defaults" (page 29). Made correction to "Using the Window Menu" article.
2005-09-08: Changed title from "Windows and Panels." Updated "Setting a Window's Appearance" (page 32) to cover enabling and disabling buttons in the title bar, and to discuss setting a window's background color and transparency.
2004-08-31: "Setting a Window's Level" renamed "Window Layers and Levels" (page 22) and augmented. "Changing the Key and Main Windows" renamed "Window Layering and Types of Windows" (page 18) and augmented.
2003-06-05: Augmented "Sizing and Placing Windows" (page 26) to discuss animated resizing, window cascading, and constraining window size and position. Minor changes to "Using the Window Menu" (page 31). Clarified the concepts of key and main windows in "Window Layering and Types of Windows" (page 18).
2002-11-12: Revision history was added to existing topic.

© 2002, 2009 Apple Inc. All rights reserved.
AV Foundation Programming Guide

Contents

About the AV Foundation Framework
    At a Glance
        Representing and Using Media with AV Foundation
        Concurrent Programming with AV Foundation
    Prerequisites
Using Assets
    Creating an Asset Object
        Options for Initializing an Asset
    Accessing the User's Assets
    Preparing an Asset for Use
    Getting Still Images From a Video
        Generating a Single Image
        Generating a Sequence of Images
    Trimming and Transcoding a Movie
    Reading and Writing Assets
Playback
    Playing Assets
        Handling Different Types of Asset
        Playing an Item
        Changing the Playback Rate
        Seeking—Repositioning the Playhead
        Playing Multiple Items
    Monitoring Playback
        Responding to a Change in Status
        Tracking Readiness for Visual Display
        Tracking Time
        Reaching the End of an Item
    Putting It All Together: Playing a Video File Using AVPlayerLayer
        The Player View
        A Simple View Controller
        Creating the Asset
        Responding to the Player Item's Status Change
        Playing the Item
Media Capture
    Use a Capture Session to Coordinate Data Flow
        Configuring a Session
        Monitoring Capture Session State
    An AVCaptureDevice Object Represents an Input Device
        Device Characteristics
        Device Capture Settings
        Configuring a Device
        Switching Between Devices
    Use Capture Inputs to Add a Capture Device to a Session
    Use Capture Outputs to Get Output from a Session
        Saving to a Movie File
        Processing Frames of Video
        Capturing Still Images
    Showing the User What's Being Recorded
        Video Preview
        Showing Audio Levels
    Putting It All Together: Capturing Video Frames as UIImage Objects
        Create and Configure a Capture Session
        Create and Configure the Device and Device Input
        Create and Configure the Data Output
        Implement the Sample Buffer Delegate Method
        Starting and Stopping Recording
Time and Media Representations
    Representation of Assets
    Representations of Time
        CMTime Represents a Length of Time
        CMTimeRange Represents a Time Range
    Representations of Media
    Converting a CMSampleBuffer to a UIImage
Document Revision History

About the AV Foundation Framework

AV Foundation is one of several frameworks that you can use to play and create time-based audiovisual media. It provides an Objective-C interface you use to work on a detailed level with time-based audiovisual data. For example, you can use it to examine, create, edit, or reencode media files. You can also get input streams from devices and manipulate video during realtime capture and playback.

(Figure: the iOS media stack. UIKit and Media Player sit above AV Foundation, which includes the audio classes introduced in iOS 3; Core Audio, Core Media, and Core Animation sit below.)

You should typically use the highest-level abstraction available that allows you to perform the tasks you want. For example, in iOS:

● If you simply want to play movies, you can use the Media Player framework (MPMoviePlayerController or MPMoviePlayerViewController), or for web-based media you could use a UIWebView object.
● To record video when you need only minimal control over format, use the UIKit framework (UIImagePickerController).

Note, however, that some of the primitive data structures that you use in AV Foundation (including time-related data structures and opaque objects to carry and describe media data) are declared in the Core Media framework. AV Foundation is available in iOS 4 and later, and OS X 10.7 and later.
This document describes AV Foundation as introduced in iOS 4.0. To learn about changes and additions to the framework in subsequent versions, you should also read the appropriate release notes:

● AV Foundation Release Notes describe changes made for iOS 5.
● AV Foundation Release Notes (iOS 4.3) describe changes made for iOS 4.3 and included in OS X 10.7.

Relevant Chapters: "Time and Media Representations" (page 54)

At a Glance

There are two facets to the AV Foundation framework: API related just to audio, which was available prior to iOS 4, and API introduced in iOS 4 and later. The older audio-related classes provide easy ways to deal with audio. They are described in Multimedia Programming Guide, not in this document.

● To play sound files, you can use AVAudioPlayer.
● To record audio, you can use AVAudioRecorder.

You can also configure the audio behavior of your application using AVAudioSession; this is described in Audio Session Programming Guide.

Representing and Using Media with AV Foundation

The primary class that the AV Foundation framework uses to represent media is AVAsset. The design of the framework is largely guided by this representation. Understanding its structure will help you to understand how the framework works. An AVAsset instance is an aggregated representation of a collection of one or more pieces of media data (audio and video tracks). It provides information about the collection as a whole, such as its title, duration, natural presentation size, and so on. AVAsset is not tied to a particular data format. AVAsset is the superclass of other classes used to create asset instances from media at a URL (see "Using Assets" (page 9)) and to create new compositions (see "Editing" (page 7)).

Each of the individual pieces of media data in the asset is of a uniform type and is called a track. In a typical simple case, one track represents the audio component and another represents the video component; in a complex composition, however, there may be multiple overlapping tracks of audio and video. Assets may also have metadata.

A vital concept in AV Foundation is that initializing an asset or a track does not necessarily mean that it is ready for use. It may require some time to calculate even the duration of an item (an MP3 file, for example, may not contain summary information). Rather than blocking the current thread while a value is being calculated, you ask for values and get an answer back asynchronously through a callback that you define using a block.

Relevant Chapters: "Using Assets" (page 9), "Time and Media Representations" (page 54)

Playback

AV Foundation allows you to manage the playback of assets in sophisticated ways. To support this, it separates the presentation state of an asset from the asset itself. This allows you to, for example, play two different segments of the same asset at the same time, rendered at different resolutions. The presentation state for an asset is managed by a player item object; the presentation state for each track within an asset is managed by a player item track object. Using the player item and player item tracks you can, for example, set the size at which the visual portion of the item is presented by the player, set the audio mix parameters and video composition settings to be applied during playback, or disable components of the asset during playback.

You play player items using a player object, and direct the output of a player to a Core Animation layer. On iOS 4.1 and later, you can use a player queue to schedule playback of a collection of player items in sequence.

Relevant Chapters: "Playback" (page 18)
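A minimal sketch of these relationships (the URL is a placeholder):

AVPlayerItem *item = [AVPlayerItem playerItemWithURL:<#A URL#>];
AVPlayer *player = [AVPlayer playerWithPlayerItem:item];
AVPlayerLayer *layer = [AVPlayerLayer playerLayerWithPlayer:player];
// Add the layer to your layer tree, then:
[player play];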
Reading, Writing, and Reencoding Assets

AV Foundation allows you to create new representations of an asset in several ways. You can simply reencode an existing asset, or, on iOS 4.1 and later, you can perform operations on the contents of an asset and save the result as a new asset. You use an export session to reencode an existing asset into a format defined by one of a small number of commonly used presets. If you need more control over the transformation, on iOS 4.1 and later you can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects you can, for example, choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process. To produce a visual representation of a waveform, for example, you would use an asset reader to read the audio track of an asset.

Relevant Chapters: "Using Assets" (page 9)

Thumbnails

To create thumbnail images of video presentations, you initialize an instance of AVAssetImageGenerator using the asset from which you want to generate thumbnails. AVAssetImageGenerator uses the default enabled video track(s) to generate images.

Relevant Chapters: "Using Assets" (page 9)

Editing

AV Foundation uses compositions to create new assets from existing pieces of media (typically, one or more video and audio tracks). You use a mutable composition to add and remove tracks and adjust their temporal orderings. You can also set the relative volumes and ramping of audio tracks, and set the opacity, and opacity ramps, of video tracks. A composition is an assemblage of pieces of media held in memory. When you export a composition using an export session, it's collapsed to a file. On iOS 4.1 and later, you can also create an asset from media such as sample buffers or still images using an asset writer.

Media Capture and Access to Camera

Recording input from cameras and microphones is managed by a capture session. A capture session coordinates the flow of data from input devices to outputs such as a movie file. You can configure multiple inputs and outputs for a single session, even while the session is running. You send messages to the session to start and stop data flow. In addition, you can use an instance of a preview layer to show the user what a camera is recording.

Relevant Chapters: "Media Capture" (page 32)

Concurrent Programming with AV Foundation

Callouts from AV Foundation (invocations of blocks, key-value observers, or notification handlers) are not guaranteed to be made on any particular thread or queue. Instead, AV Foundation invokes these handlers on threads or queues on which it performs its internal tasks. You are responsible for testing whether the thread or queue on which a handler is invoked is appropriate for the tasks you want to perform. If it's not (for example, if you want to update the user interface and the callout is not on the main thread), you must redirect the execution of your tasks to a safe thread or queue that you recognize, or that you create for the purpose.

If you're writing a multithreaded application, you can use the NSThread method isMainThread or [[NSThread currentThread] isEqual:<#A stored thread reference#>] to test whether the invocation thread is a thread you expect to perform your work on. You can redirect messages to appropriate threads using methods such as performSelectorOnMainThread:withObject:waitUntilDone: and performSelector:onThread:withObject:waitUntilDone:modes:. You can also use dispatch_async (see the dispatch_async(3) man page) to "bounce" to your blocks on an appropriate queue, either the main queue for UI tasks or a queue you have set up for concurrent operations. For more about concurrent operations, see Concurrency Programming Guide; for more about blocks, see Blocks Programming Topics.
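For example, a sketch of bouncing UI work from an AV Foundation callout to the main queue (asset and updateUserInterface are placeholders):

[asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    // This block may run on an arbitrary queue, so redirect
    // any user-interface work to the main queue.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateUserInterface];
    });
}];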
7If you’re writing amultithreaded application, you can use the NSThreadmethod isMainThread or [[NSThread currentThread] isEqual:<#A stored thread reference#>] to test whether the invocation thread is a thread you expect to perform your work on. You can redirect messages to appropriate threads using methods such as performSelectorOnMainThread:withObject:waitUntilDone: and performSelector:onThread:withObject:waitUntilDone:modes:. You could also use dispatch_async(3) OS X Developer Tools Manual Page to “bounce”to your blocks on an appropriate queue, either the main queue for UI tasks or a queue you have up for concurrent operations. For more about concurrent operations, see Concurrency Programming Guide ; for more about blocks, see Blocks Programming Topics. Prerequisites AV Foundation is an advanced Cocoa framework. To use it effectively, you must have: ● A solid understanding of fundamental Cocoa development tools and techniques ● A basic grasp of blocks ● A basic understanding of key-value coding and key-value observing ● For playback, a basic understanding of Core Animation (see Core Animation Programming Guide ) About the AV Foundation Framework Prerequisites 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 8Asset can come from a file or from media in the user’s iPod Library or Photo library. Simply creating an asset object, though, does not necessarily mean that all the information that you might want to retrieve for that item is immediately available. Once you have a movie asset, you can extract still images from it, transcode it to another format, or trim the contents. Creating an Asset Object To create an asset to represent any resource that you can identify using a URL, you use AVURLAsset. The simplest case is creating an asset from a file: NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>; AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil]; Options for Initializing an Asset AVURLAsset’s initialization methods take as their second argument an options dictionary. The only key used in the dictionary is AVURLAssetPreferPreciseDurationAndTimingKey. The corresponding value is a boolean (contained in an NSValue object) that indicates whether the asset should be prepared to indicate a precise duration and provide precise random access by time. Getting the exact duration of an asset may require significant processing overhead. Using an approximate duration is typically a cheaper operation and sufficient for playback. Thus: ● If you only intend to play the asset, either pass nil instead of a dictionary, or pass a dictionary that contains the AVURLAssetPreferPreciseDurationAndTimingKey key and a corresponding value of NO (contained in an NSValue object). ● If you want to add the asset to a composition (AVMutableComposition), you typically need precise randomaccess. Pass a dictionary that containsthe AVURLAssetPreferPreciseDurationAndTimingKey key and a corresponding value of YES (contained in an NSValue object—recall that NSNumber inherits from NSValue): 2011-10-12 | © 2011 Apple Inc. All Rights Reserved. 9 Using AssetsNSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>; NSDictionary *options = @{ AVURLAssetPreferPreciseDurationAndTimingKey : @YES }; AVURLAsset *anAssetToUseInAComposition = [[AVURLAsset alloc] initWithURL:url options:options]; Accessing the User’s Assets To access the assets managed the iPod Library or by the Photos application, you need to get a URL of the asset you want. 
● To access the iPod library, you create an MPMediaQuery instance to find the item you want, then get its URL using MPMediaItemPropertyAssetURL. For more about the media library, see Multimedia Programming Guide.
● To access the assets managed by the Photos application, you use ALAssetsLibrary.

The following example shows how you can get an asset to represent the first video in the Saved Photos album.

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
// Enumerate just the photos and videos group by using ALAssetsGroupSavedPhotos.
[library enumerateGroupsWithTypes:ALAssetsGroupSavedPhotos
                       usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
    // Within the group enumeration block, filter to enumerate just videos.
    [group setAssetsFilter:[ALAssetsFilter allVideos]];
    // For this example, we're only interested in the first item.
    [group enumerateAssetsAtIndexes:[NSIndexSet indexSetWithIndex:0]
                            options:0
                         usingBlock:^(ALAsset *alAsset, NSUInteger index, BOOL *innerStop) {
        // The end of the enumeration is signaled by asset == nil.
        if (alAsset) {
            ALAssetRepresentation *representation = [alAsset defaultRepresentation];
            NSURL *url = [representation url];
            AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
            // Do something interesting with the AV asset.
        }
    }];
} failureBlock:^(NSError *error) {
    // Typically you should handle an error more gracefully than this.
    NSLog(@"No groups");
}];

Preparing an Asset for Use

Initializing an asset (or track) does not necessarily mean that all the information that you might want to retrieve for that item is immediately available. It may require some time to calculate even the duration of an item (an MP3 file, for example, may not contain summary information). Rather than blocking the current thread while a value is being calculated, you should use the AVAsynchronousKeyValueLoading protocol to ask for values and get an answer back later through a completion handler you define using a block. (AVAsset and AVAssetTrack conform to the AVAsynchronousKeyValueLoading protocol.)

You test whether a value is loaded for a property using statusOfValueForKey:error:. When an asset is first loaded, the value of most or all of its properties is AVKeyValueStatusUnknown. To load a value for one or more properties, you invoke loadValuesAsynchronouslyForKeys:completionHandler:. In the completion handler, you take whatever action is appropriate depending on the property's status. You should always be prepared for loading to not complete successfully, either because it failed for some reason (such as a network-based URL being inaccessible) or because the load was canceled.

NSURL *url = <#A URL that identifies an audiovisual asset such as a movie file#>;
AVURLAsset *anAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = @[@"duration"];
[anAsset loadValuesAsynchronouslyForKeys:keys completionHandler:^{
    NSError *error = nil;
    AVKeyValueStatus durationStatus =
        [anAsset statusOfValueForKey:@"duration" error:&error];
    switch (durationStatus) {
        case AVKeyValueStatusLoaded:
            [self updateUserInterfaceForDuration];
            break;
        case AVKeyValueStatusFailed:
            [self reportError:error forAsset:anAsset];
            break;
        case AVKeyValueStatusCancelled:
            // Do whatever is appropriate for cancelation.
            break;
    }
}];

If you want to prepare an asset for playback, you should load its tracks property. For more about playing assets, see "Playback" (page 18).
Getting Still Images From a Video

To get still images such as thumbnails from an asset for playback, you use an AVAssetImageGenerator object. You initialize an image generator with your asset. Initialization may succeed, though, even if the asset possesses no visual tracks at the time of initialization, so if necessary you should test whether the asset has any tracks with the visual characteristic using tracksWithMediaCharacteristic:.

AVAsset *anAsset = <#Get an asset#>;
if ([[anAsset tracksWithMediaCharacteristic:AVMediaCharacteristicVisual] count] > 0) {
    AVAssetImageGenerator *imageGenerator =
        [AVAssetImageGenerator assetImageGeneratorWithAsset:anAsset];
    // Implementation continues...
}

You can configure several aspects of the image generator; for example, you can specify the maximum dimensions for the images it generates and the aperture mode using maximumSize and apertureMode, respectively. You can then generate a single image at a given time, or a series of images. You must ensure that you keep a strong reference to the image generator until it has generated all the images.

Generating a Single Image

You use copyCGImageAtTime:actualTime:error: to generate a single image at a specific time. AV Foundation may not be able to produce an image at exactly the time you request, so you can pass as the second argument a pointer to a CMTime that upon return contains the time at which the image was actually generated.

AVAsset *myAsset = <#An asset#>;
AVAssetImageGenerator *imageGenerator =
    [[AVAssetImageGenerator alloc] initWithAsset:myAsset];
Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
CMTime midpoint = CMTimeMakeWithSeconds(durationSeconds/2.0, 600);
NSError *error;
CMTime actualTime;
CGImageRef halfWayImage = [imageGenerator copyCGImageAtTime:midpoint
                                                 actualTime:&actualTime
                                                      error:&error];
if (halfWayImage != NULL) {
    NSString *actualTimeString =
        (NSString *)CFBridgingRelease(CMTimeCopyDescription(NULL, actualTime));
    NSString *requestedTimeString =
        (NSString *)CFBridgingRelease(CMTimeCopyDescription(NULL, midpoint));
    NSLog(@"Got halfWayImage: Asked for %@, got %@",
          requestedTimeString, actualTimeString);
    // Do something interesting with the image.
    CGImageRelease(halfWayImage);
}

Generating a Sequence of Images

To generate a series of images, you send the image generator a generateCGImagesAsynchronouslyForTimes:completionHandler: message. The first argument is an array of NSValue objects, each containing a CMTime, specifying the asset times for which you want images to be generated. The second argument is a block that serves as a callback invoked for each image that is generated. The block arguments provide a result constant that tells you whether the image was created successfully or if the operation was canceled, and, as appropriate:

● The image.
● The time for which you requested the image and the actual time for which the image was generated.
● An error object that describes the reason generation failed.

In your implementation of the block, you should check the result constant to determine whether the image was created. In addition, you must ensure that you keep a strong reference to the image generator until it has finished creating the images.
AVAsset *myAsset = <#An asset#>;
// Assume: @property (strong) AVAssetImageGenerator *imageGenerator;
self.imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:myAsset];

Float64 durationSeconds = CMTimeGetSeconds([myAsset duration]);
CMTime firstThird = CMTimeMakeWithSeconds(durationSeconds/3.0, 600);
CMTime secondThird = CMTimeMakeWithSeconds(durationSeconds*2.0/3.0, 600);
CMTime end = CMTimeMakeWithSeconds(durationSeconds, 600);
NSArray *times = @[[NSValue valueWithCMTime:kCMTimeZero],
                   [NSValue valueWithCMTime:firstThird],
                   [NSValue valueWithCMTime:secondThird],
                   [NSValue valueWithCMTime:end]];

[self.imageGenerator generateCGImagesAsynchronouslyForTimes:times
        completionHandler:^(CMTime requestedTime, CGImageRef image, CMTime actualTime,
                            AVAssetImageGeneratorResult result, NSError *error) {

    NSString *requestedTimeString = (NSString *)
        CFBridgingRelease(CMTimeCopyDescription(NULL, requestedTime));
    NSString *actualTimeString = (NSString *)
        CFBridgingRelease(CMTimeCopyDescription(NULL, actualTime));
    NSLog(@"Requested: %@; actual %@", requestedTimeString, actualTimeString);

    if (result == AVAssetImageGeneratorSucceeded) {
        // Do something interesting with the image.
    }

    if (result == AVAssetImageGeneratorFailed) {
        NSLog(@"Failed with error: %@", [error localizedDescription]);
    }
    if (result == AVAssetImageGeneratorCancelled) {
        NSLog(@"Canceled");
    }
}];

You can cancel the generation of the image sequence by sending the image generator a cancelAllCGImageGeneration message.

Trimming and Transcoding a Movie

You can transcode a movie from one format to another, and trim a movie, using an AVAssetExportSession object. An export session is a controller object that manages asynchronous export of an asset. You initialize the session using the asset you want to export and the name of an export preset that indicates the export options you want to apply (see allExportPresets). You then configure the export session to specify the output URL and file type, and optionally other settings such as the metadata and whether the output should be optimized for network use.

You can check whether you can export a given asset using a given preset using exportPresetsCompatibleWithAsset:, as illustrated in this example:

AVAsset *anAsset = <#Get an asset#>;
NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:anAsset];
if ([compatiblePresets containsObject:AVAssetExportPresetLowQuality]) {
    AVAssetExportSession *exportSession = [[AVAssetExportSession alloc]
        initWithAsset:anAsset presetName:AVAssetExportPresetLowQuality];
    // Implementation continues.
}

You complete the configuration of the session by providing the output URL. (The URL must be a file URL.) AVAssetExportSession can infer the output file type from the URL's path extension; typically, however, you set it directly using outputFileType. You can also specify additional properties such as the time range, a limit for the output file length, whether the exported file should be optimized for network use, and a video composition.
The following example illustrates how to use the timeRange property to trim the movie:

exportSession.outputURL = <#A file URL#>;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;

CMTime start = CMTimeMakeWithSeconds(1.0, 600);
CMTime duration = CMTimeMakeWithSeconds(3.0, 600);
CMTimeRange range = CMTimeRangeMake(start, duration);
exportSession.timeRange = range;

To create the new file, you invoke exportAsynchronouslyWithCompletionHandler:. The completion handler block is called when the export operation finishes; in your implementation of the handler, you should check the session's status to determine whether the export was successful, failed, or was canceled:

[exportSession exportAsynchronouslyWithCompletionHandler:^{
    switch ([exportSession status]) {
        case AVAssetExportSessionStatusFailed:
            NSLog(@"Export failed: %@", [[exportSession error] localizedDescription]);
            break;
        case AVAssetExportSessionStatusCancelled:
            NSLog(@"Export canceled");
            break;
        default:
            break;
    }
}];

You can cancel the export by sending the session a cancelExport message.

The export will fail if you try to overwrite an existing file, or write a file outside of the application's sandbox. It may also fail if:
● There is an incoming phone call
● Your application is in the background and another application starts playback

In these situations, you should typically inform the user that the export failed, then allow the user to restart the export.

Reading and Writing Assets

You use an AVAssetReader when you want to perform an operation on the contents of an asset. For example, you might read the audio track of an asset to produce a visual representation of the waveform. To produce an asset from media such as sample buffers or still images, you use an AVAssetWriter object.

You can use an asset reader and asset writer object in tandem to convert an asset from one representation to another. Using these objects you have more control over the conversion than you do with AVAssetExportSession; for example, you can choose which of the tracks you want to be represented in the output file, specify your own output format, or modify the asset during the conversion process.

Playback

To control the playback of assets, you use an AVPlayer object. During playback, you can use an AVPlayerItem object to manage the presentation state of an asset as a whole, and an AVPlayerItemTrack to manage the presentation state of an individual track. To display video, you use an AVPlayerLayer object.

Playing Assets

A player is a controller object that you use to manage playback of an asset, for example starting and stopping playback, and seeking to a particular time. You use an instance of AVPlayer to play a single asset. On iOS 4.1 and later, you can use an AVQueuePlayer object to play a number of items in sequence (AVQueuePlayer is a subclass of AVPlayer). A player provides you with information about the state of the playback so, if you need to, you can synchronize your user interface with the player's state. You typically direct the output of a player to a specialized Core Animation layer (an instance of AVPlayerLayer or AVSynchronizedLayer). To learn more about layers, see Core Animation Programming Guide. A minimal sketch of this arrangement follows.
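The following sketch shows one way these objects might be wired together; it is illustrative only, and assumes a URL for a playable resource and a view whose layer will host the player layer:

AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:<#A URL for a media resource#>];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];

// Direct the player's visual output to a player layer in the layer tree.
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = <#The layer's frame#>;
[<#A view's layer#> addSublayer:playerLayer];

// Once the item's status is AVPlayerItemStatusReadyToPlay, you can start playback.
[player play];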
Multiple player layers: You can create arbitrarily many AVPlayerLayer objects from a single AVPlayer instance, but only the most-recently-created such layer will display any video content on-screen.

Although ultimately you want to play an asset, you don't provide assets directly to an AVPlayer object. Instead, you provide an instance of AVPlayerItem. A player item manages the presentation state of an asset with which it is associated. A player item contains player item tracks—instances of AVPlayerItemTrack—that correspond to the tracks in the asset.

(Figure: an AVPlayerItem and its AVPlayerItemTrack objects sit between an AVAsset and its AVAssetTrack objects on one side, and an AVPlayer and AVPlayerLayer on the other.)

This abstraction means that you can play a given asset using different players simultaneously, but rendered in different ways by each player. Using the item tracks, you can, for example, disable a particular track during playback (you might not want to play the sound component).

(Figure: two players playing the same AVAsset at different times, each through its own AVPlayerItem and AVPlayerItemTrack objects; in one player item, the audio tracks are switched off.)

You can initialize a player item with an existing asset, or you can initialize a player item directly from a URL so that you can play a resource at a particular location (AVPlayerItem will then create and configure an asset for the resource). As with AVAsset, though, simply initializing a player item doesn't necessarily mean it's ready for immediate playback. You can observe (using key-value observing) an item's status property to determine if and when it's ready to play.

Handling Different Types of Asset

The way you configure an asset for playback may depend on the sort of asset you want to play. Broadly speaking, there are two main types: file-based assets, to which you have random access (such as from a local file, the camera roll, or the Media Library), and stream-based assets (HTTP Live Streaming format).

To load and play a file-based asset, there are several steps:
● Create an asset using AVURLAsset and load its tracks using loadValuesAsynchronouslyForKeys:completionHandler:.
● When the asset has loaded its tracks, create an instance of AVPlayerItem using the asset.
● Associate the item with an instance of AVPlayer.
● Wait until the item's status indicates that it's ready to play (typically you use key-value observing to receive a notification when the status changes).

This approach is illustrated in “Putting it all Together: Playing a Video File Using AVPlayerLayer” (page 26).

To create and prepare an HTTP live stream for playback, initialize an instance of AVPlayerItem using the URL. (You cannot directly create an AVAsset instance to represent the media in an HTTP Live Stream.)

NSURL *url = [NSURL URLWithString:@"<#Live stream URL#>"];
// You may find a test stream at .
self.playerItem = [AVPlayerItem playerItemWithURL:url];
[playerItem addObserver:self forKeyPath:@"status" options:0 context:&ItemStatusContext];
self.player = [AVPlayer playerWithPlayerItem:playerItem];

When you associate the player item with a player, it starts to become ready to play. When it is ready to play, the player item creates the AVAsset and AVAssetTrack instances, which you can use to inspect the contents of the live stream.
If you simply want to play a live stream, you can take a shortcut and create a player directly using the URL:

self.player = [AVPlayer playerWithURL:<#Live stream URL#>];
[player addObserver:self forKeyPath:@"status" options:0 context:&PlayerStatusContext];

As with assets and items, initializing the player does not mean it's ready for playback. You should observe the player's status property, which changes to AVPlayerStatusReadyToPlay when it is ready to play. You can also observe the currentItem property to access the player item created for the stream.

If you don't know what kind of URL you have, follow these steps:
1. Try to initialize an AVURLAsset using the URL, then load its tracks key. If the tracks load successfully, then you create a player item for the asset.
2. If step 1 fails, create an AVPlayerItem directly from the URL. Observe the player's status property to determine whether it becomes playable.

If either route succeeds, you end up with a player item that you can then associate with a player.

Playing an Item

To start playback, you send a play message to the player.

- (IBAction)play:sender {
    [player play];
}

In addition to simply playing, you can manage various aspects of the playback, such as the rate and the location of the playhead. You can also monitor the play state of the player; this is useful if you want to, for example, synchronize the user interface to the presentation state of the asset—see “Monitoring Playback” (page 23).

Changing the Playback Rate

You change the rate of playback by setting the player's rate property.

aPlayer.rate = 0.5;
aPlayer.rate = 2.0;

A value of 1.0 means “play at the natural rate of the current item”. Setting the rate to 0.0 is the same as pausing playback—you can also use pause.

Seeking—Repositioning the Playhead

To move the playhead to a particular time, you generally use seekToTime:.

CMTime fiveSecondsIn = CMTimeMake(5, 1);
[player seekToTime:fiveSecondsIn];

The seekToTime: method, however, is tuned for performance rather than precision. If you need to move the playhead precisely, instead you use seekToTime:toleranceBefore:toleranceAfter:.

CMTime fiveSecondsIn = CMTimeMake(5, 1);
[player seekToTime:fiveSecondsIn toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];

Using a tolerance of zero may require the framework to decode a large amount of data. You should only use zero if you are, for example, writing a sophisticated media editing application that requires precise control.

After playback, the playhead is set to the end of the item, and further invocations of play have no effect. To position the playhead back at the beginning of the item, you can register to receive an AVPlayerItemDidPlayToEndTimeNotification from the item. In the notification's callback method, you invoke seekToTime: with the argument kCMTimeZero.

// Register with the notification center after creating the player item.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(playerItemDidReachEnd:)
                                             name:AVPlayerItemDidPlayToEndTimeNotification
                                           object:<#The player item#>];

- (void)playerItemDidReachEnd:(NSNotification *)notification {
    [player seekToTime:kCMTimeZero];
}

Playing Multiple Items

On iOS 4.1 and later, you can use an AVQueuePlayer object to play a number of items in sequence. AVQueuePlayer is a subclass of AVPlayer.
You initialize a queue player with an array of player items:

NSArray *items = <#An array of player items#>;
AVQueuePlayer *queuePlayer = [[AVQueuePlayer alloc] initWithItems:items];

You can then play the queue using play, just as you would an AVPlayer object. The queue player plays each item in turn. If you want to skip to the next item, you send the queue player an advanceToNextItem message.

You can modify the queue using insertItem:afterItem:, removeItem:, and removeAllItems. When adding a new item, you should typically check whether it can be inserted into the queue, using canInsertItem:afterItem:. You pass nil as the second argument to test whether the new item can be appended to the queue:

AVPlayerItem *anItem = <#Get a player item#>;
if ([queuePlayer canInsertItem:anItem afterItem:nil]) {
    [queuePlayer insertItem:anItem afterItem:nil];
}

Monitoring Playback

You can monitor a number of aspects of the presentation state of a player and the player item being played. This is particularly useful for state changes that are not under your direct control, for example:
● If the user uses multitasking to switch to a different application, a player's rate property will drop to 0.0.
● If you are playing remote media, a player item's loadedTimeRanges and seekableTimeRanges properties will change as more data becomes available. These properties tell you what portions of the player item's timeline are available.
● A player's currentItem property changes as a player item is created for an HTTP live stream.
● A player item's tracks property may change while playing an HTTP live stream. This may happen if the stream offers different encodings for the content; the tracks change if the player switches to a different encoding.
● A player or player item's status may change if playback fails for some reason.

You can use key-value observing to monitor changes to values of these properties.

Important: You should register for KVO change notifications and unregister from KVO change notifications on the main thread. This avoids the possibility of receiving a partial notification if a change is being made on another thread. AV Foundation invokes observeValueForKeyPath:ofObject:change:context: on the main thread, even if the change operation is made on another thread.

Responding to a Change in Status

When a player or player item's status changes, it emits a key-value observing change notification. If an object is unable to play for some reason (for example, if the media services are reset), the status changes to AVPlayerStatusFailed or AVPlayerItemStatusFailed as appropriate. In this situation, the value of the object's error property is changed to an error object that describes why the object is no longer able to play.

AV Foundation does not specify on what thread the notification is sent. If you want to update the user interface, you must make sure that any relevant code is invoked on the main thread. This example uses dispatch_async to execute code on the main thread.

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context {

    if (context == <#Player status context#>) {
        AVPlayer *thePlayer = (AVPlayer *)object;
        if ([thePlayer status] == AVPlayerStatusFailed) {
            NSError *error = [<#The AVPlayer object#> error];
            // Respond to error: for example, display an alert sheet.
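            // One possible sketch (an addition, not part of the original example):
            // hop to the main thread before touching any user interface state.
            dispatch_async(dispatch_get_main_queue(), ^{
                // Present an alert describing [error localizedDescription] here.
            });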
            return;
        }
        // Deal with other status change if appropriate.
    }
    // Deal with other change notifications if appropriate.
    [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    return;
}

Tracking Readiness for Visual Display

You can observe an AVPlayerLayer object's readyForDisplay property to be notified when the layer has user-visible content. In particular, you might insert the player layer into the layer tree only when there is something for the user to look at, and perform a transition at that point.

Tracking Time

To track changes in the position of the playhead in an AVPlayer object, you can use addPeriodicTimeObserverForInterval:queue:usingBlock: or addBoundaryTimeObserverForTimes:queue:usingBlock:. You might do this to, for example, update your user interface with information about time elapsed or time remaining, or perform some other user interface synchronization.
● With addPeriodicTimeObserverForInterval:queue:usingBlock:, the block you provide is invoked at the interval you specify, if time jumps, and when playback starts or stops.
● With addBoundaryTimeObserverForTimes:queue:usingBlock:, you pass an array of CMTimes contained in NSValue objects. The block you provide is invoked whenever any of those times is traversed.

Both of the methods return an opaque object that serves as an observer. You must keep a strong reference to the returned object as long as you want the time observation block to be invoked by the player. You must also balance each invocation of these methods with a corresponding call to removeTimeObserver:.

With both of these methods, AV Foundation does not guarantee to invoke your block for every interval or boundary passed. AV Foundation does not invoke a block if execution of a previously invoked block has not completed. You must make sure, therefore, that the work you perform in the block does not overly tax the system.

// Assume a property: @property (strong) id playerObserver;

Float64 durationSeconds = CMTimeGetSeconds([<#An asset#> duration]);
CMTime firstThird = CMTimeMakeWithSeconds(durationSeconds/3.0, 1);
CMTime secondThird = CMTimeMakeWithSeconds(durationSeconds*2.0/3.0, 1);
NSArray *times = @[[NSValue valueWithCMTime:firstThird], [NSValue valueWithCMTime:secondThird]];

self.playerObserver = [<#A player#> addBoundaryTimeObserverForTimes:times queue:NULL usingBlock:^{

    NSString *timeDescription = (NSString *)
        CFBridgingRelease(CMTimeCopyDescription(NULL, [self.player currentTime]));
    NSLog(@"Passed a boundary at %@", timeDescription);
}];

Reaching the End of an Item

You can register to receive an AVPlayerItemDidPlayToEndTimeNotification notification when a player item has completed playback:

[[NSNotificationCenter defaultCenter] addObserver:<#The observer, typically self#>
                                         selector:@selector(<#The selector name#>)
                                             name:AVPlayerItemDidPlayToEndTimeNotification
                                           object:<#A player item#>];

Putting it all Together: Playing a Video File Using AVPlayerLayer

This brief code example illustrates how you can use an AVPlayer object to play a video file. It shows how to:
● Configure a view to use an AVPlayerLayer layer
● Create an AVPlayer object
● Create an AVPlayerItem object for a file-based asset, and use key-value observing to observe its status
● Respond to the item becoming ready to play by enabling a button
● Play the item, then restore the playhead to the beginning.
Note: To focus on the most relevant code, this example omits several aspects of a complete application, such as memory management and unregistering as an observer (for key-value observing or for the notification center). To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

For a conceptual introduction to playback, skip back to “Playing Assets” (page 18).

The Player View

To play the visual component of an asset, you need a view containing an AVPlayerLayer layer to which the output of an AVPlayer object can be directed. You can create a simple subclass of UIView to accommodate this:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface PlayerView : UIView
@property (nonatomic) AVPlayer *player;
@end

@implementation PlayerView
+ (Class)layerClass {
    return [AVPlayerLayer class];
}
- (AVPlayer*)player {
    return [(AVPlayerLayer *)[self layer] player];
}
- (void)setPlayer:(AVPlayer *)player {
    [(AVPlayerLayer *)[self layer] setPlayer:player];
}
@end

A Simple View Controller

Assume you have a simple view controller, declared as follows:

@class PlayerView;
@interface PlayerViewController : UIViewController

@property (nonatomic) AVPlayer *player;
@property (nonatomic) AVPlayerItem *playerItem;
@property (nonatomic, weak) IBOutlet PlayerView *playerView;
@property (nonatomic, weak) IBOutlet UIButton *playButton;
- (IBAction)loadAssetFromFile:sender;
- (IBAction)play:sender;
- (void)syncUI;
@end

The syncUI method synchronizes the button's state with the player's state:

- (void)syncUI {
    if ((self.player.currentItem != nil) &&
        ([self.player.currentItem status] == AVPlayerItemStatusReadyToPlay)) {
        self.playButton.enabled = YES;
    }
    else {
        self.playButton.enabled = NO;
    }
}

You can invoke syncUI in the view controller's viewDidLoad method to ensure a consistent user interface when the view is first displayed.

- (void)viewDidLoad {
    [super viewDidLoad];
    [self syncUI];
}

The other properties and methods are described in the remaining sections.

Creating the Asset

You create an asset from a URL using AVURLAsset. Creating the asset, however, does not necessarily mean that it's ready for use. To be used, an asset must have loaded its tracks. To avoid blocking the current thread, you load the asset's tracks asynchronously using loadValuesAsynchronouslyForKeys:completionHandler:. (The following example assumes your project contains a suitable video resource.)

- (IBAction)loadAssetFromFile:sender {

    NSURL *fileURL = [[NSBundle mainBundle]
        URLForResource:<#@"VideoFileName"#> withExtension:<#@"extension"#>];
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
    NSString *tracksKey = @"tracks";

    [asset loadValuesAsynchronouslyForKeys:@[tracksKey] completionHandler:
     ^{
         // The completion block goes here.
     }];
}

In the completion block, you create an instance of AVPlayerItem for the asset, and set it as the player for the player view. As with creating the asset, simply creating the player item does not mean it's ready to use. To determine when it's ready to play, you can observe the item's status. You trigger its preparation to play when you associate it with the player.
// Define this constant for the key-value observation context.
static const NSString *ItemStatusContext;

// Completion handler block.
dispatch_async(dispatch_get_main_queue(), ^{
    NSError *error;
    AVKeyValueStatus status = [asset statusOfValueForKey:tracksKey error:&error];

    if (status == AVKeyValueStatusLoaded) {
        self.playerItem = [AVPlayerItem playerItemWithAsset:asset];
        [self.playerItem addObserver:self forKeyPath:@"status"
                             options:0 context:&ItemStatusContext];
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(playerItemDidReachEnd:)
                                                     name:AVPlayerItemDidPlayToEndTimeNotification
                                                   object:self.playerItem];
        self.player = [AVPlayer playerWithPlayerItem:self.playerItem];
        [self.playerView setPlayer:self.player];
    }
    else {
        // You should deal with the error appropriately.
        NSLog(@"The asset's tracks were not loaded:\n%@", [error localizedDescription]);
    }
});

Responding to the Player Item's Status Change

When the player item's status changes, the view controller receives a key-value observing change notification. AV Foundation does not specify on what thread the notification is sent. If you want to update the user interface, you must make sure that any relevant code is invoked on the main thread. This example uses dispatch_async to queue a message on the main thread to synchronize the user interface.

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context {

    if (context == &ItemStatusContext) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [self syncUI];
        });
        return;
    }
    [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    return;
}

Playing the Item

Playing the item is trivial: you send a play message to the player.

- (IBAction)play:sender {
    [player play];
}

This only plays the item once, though. After playback, the playhead is set to the end of the item, and further invocations of play will have no effect. To position the playhead back at the beginning of the item, you can register to receive an AVPlayerItemDidPlayToEndTimeNotification from the item. In the notification's callback method, invoke seekToTime: with the argument kCMTimeZero.

// Register with the notification center after creating the player item.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(playerItemDidReachEnd:)
                                             name:AVPlayerItemDidPlayToEndTimeNotification
                                           object:[self.player currentItem]];

- (void)playerItemDidReachEnd:(NSNotification *)notification {
    [self.player seekToTime:kCMTimeZero];
}

Media Capture

To manage the capture from a device such as a camera or microphone, you assemble objects to represent inputs and outputs, and use an instance of AVCaptureSession to coordinate the data flow between them.
Minimally you need:
● An instance of AVCaptureDevice to represent the input device, such as a camera or microphone
● An instance of a concrete subclass of AVCaptureInput to configure the ports from the input device
● An instance of a concrete subclass of AVCaptureOutput to manage the output to a movie file or still image
● An instance of AVCaptureSession to coordinate the data flow from the input to the output

To show the user what a camera is recording, you can use an instance of AVCaptureVideoPreviewLayer (a subclass of CALayer). You can configure multiple inputs and outputs, coordinated by a single session.

(Figure: a capture session coordinating two capture device inputs with a movie file output, a still image output, and a video preview layer.)

For many applications, this is as much detail as you need. For some operations, however (if you want to monitor the power levels in an audio channel, for example), you need to consider how the various ports of an input device are represented, and how those ports are connected to the output.

A connection between a capture input and a capture output in a capture session is represented by an AVCaptureConnection object. Capture inputs (instances of AVCaptureInput) have one or more input ports (instances of AVCaptureInputPort). Capture outputs (instances of AVCaptureOutput) can accept data from one or more sources (for example, an AVCaptureMovieFileOutput object accepts both video and audio data). When you add an input or an output to a session, the session “greedily” forms connections between all the compatible capture inputs' ports and capture outputs.

(Figure: capture connections mapping the video and audio input ports of two capture device inputs to a movie file output and a still image output within a capture session.)

You can use a capture connection to enable or disable the flow of data from a given input or to a given output. You can also use a connection to monitor the average and peak power levels in an audio channel.

Use a Capture Session to Coordinate Data Flow

An AVCaptureSession object is the central coordinating object you use to manage data capture. You use an instance to coordinate the flow of data from AV input devices to outputs. You add the capture devices and outputs you want to the session, then start data flow by sending the session a startRunning message, and stop recording by sending a stopRunning message.

AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Add inputs and outputs.
[session startRunning];

Configuring a Session

You use a preset on the session to specify the image quality and resolution you want. A preset is a constant that identifies one of a number of possible configurations; in some cases the actual configuration is device-specific:

Symbol                          Resolution  Comments
AVCaptureSessionPresetHigh      High        Highest recording quality. This varies per device.
AVCaptureSessionPresetMedium    Medium      Suitable for WiFi sharing. The actual values may change.
AVCaptureSessionPresetLow       Low         Suitable for 3G sharing. The actual values may change.
AVCaptureSessionPreset640x480   640x480     VGA.
AVCaptureSessionPreset1280x720  1280x720    720p HD.
AVCaptureSessionPresetPhoto     Photo       Full photo resolution. This is not supported for video output.

For examples of the actual values these presets represent for various devices, see “Saving to a Movie File” (page 43) and “Capturing Still Images” (page 47).

If you want to set a size-specific configuration, you should check whether it is supported before setting it:

if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
}
else {
    // Handle the failure.
}

In many situations, you create a session and the various inputs and outputs all at once. Sometimes, however, you may want to reconfigure a running session, perhaps as different input devices become available, or in response to a user request. This can present a challenge since, if you change settings one at a time, a new setting may be incompatible with an existing setting. To deal with this, you use beginConfiguration and commitConfiguration to batch multiple configuration operations into an atomic update. After calling beginConfiguration, you can, for example, add or remove outputs, alter the sessionPreset, or configure individual capture input or output properties. No changes are actually made until you invoke commitConfiguration, at which time they are applied together.

[session beginConfiguration];
// Remove an existing capture device.
// Add a new capture device.
// Reset the preset.
[session commitConfiguration];

Monitoring Capture Session State

A capture session posts notifications that you can observe to be notified, for example, when it starts or stops running, or when it is interrupted. You can also register to receive an AVCaptureSessionRuntimeErrorNotification if a runtime error occurs. You can also interrogate the session's running property to find out if it is running, and its interrupted property to find out if it is interrupted.

An AVCaptureDevice Object Represents an Input Device

An AVCaptureDevice object abstracts a physical capture device that provides input data (such as audio or video) to an AVCaptureSession object. There is one object for each input device; for example, on an iPhone 3GS there is one video input for the camera and one audio input for the microphone; on an iPhone 4 there are two video inputs—one for the front-facing camera, one for the back-facing camera—and one audio input for the microphone.

You can find out what capture devices are currently available using the AVCaptureDevice class methods devices and devicesWithMediaType:, and if necessary find out what features the devices offer (see “Device Capture Settings” (page 36)). The list of available devices may change, though. Current devices may become unavailable (if they're used by another application), and new devices may become available (if they're relinquished by another application). You should register to receive AVCaptureDeviceWasConnectedNotification and AVCaptureDeviceWasDisconnectedNotification notifications to be alerted when the list of available devices changes.

You add a device to a capture session using a capture input (see “Use Capture Inputs to Add a Capture Device to a Session” (page 41)).

Device Characteristics

You can ask a device about several different characteristics.
You can test whether it provides a particular media type or supports a given capture session preset using hasMediaType: and supportsAVCaptureSessionPreset: respectively. To provide information to the user, you can find out the position of the capture device (whether it is on the front or the back of the unit they're using) and its localized name. This may be useful if you want to present a list of capture devices to allow the user to choose one.

The following code example iterates over all the available devices and logs their name, and for video devices their position on the unit.

NSArray *devices = [AVCaptureDevice devices];

for (AVCaptureDevice *device in devices) {

    NSLog(@"Device name: %@", [device localizedName]);

    if ([device hasMediaType:AVMediaTypeVideo]) {

        if ([device position] == AVCaptureDevicePositionBack) {
            NSLog(@"Device position : back");
        }
        else {
            NSLog(@"Device position : front");
        }
    }
}

In addition, you can find out the device's model ID and its unique ID.

Device Capture Settings

Different devices have different capabilities; for example, some may support different focus or flash modes; some may support focus on a point of interest.

Feature                     iPhone 3G  iPhone 3GS  iPhone 4 (Back)  iPhone 4 (Front)
Focus mode                  NO         YES         YES              NO
Focus point of interest     NO         YES         YES              NO
Exposure mode               YES        YES         YES              YES
Exposure point of interest  NO         YES         YES              YES
White balance mode          YES        YES         YES              YES
Flash mode                  NO         NO          YES              NO
Torch mode                  NO         NO          YES              NO

The following code fragment shows how you can find video input devices that have a torch mode and support a given capture session preset:

NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
NSMutableArray *torchDevices = [[NSMutableArray alloc] init];

for (AVCaptureDevice *device in devices) {
    if ([device hasTorch] &&
        [device supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480]) {
        [torchDevices addObject:device];
    }
}

If you find multiple devices that meet your criteria, you might let the user choose which one they want to use. To display a description of a device to the user, you can use its localizedName property.

You use the various features in similar ways. There are constants to specify a particular mode, and you can ask a device whether it supports a particular mode. In several cases, you can observe a property to be notified when a feature is changing. In all cases, you should lock the device before changing the mode of a particular feature, as described in “Configuring a Device” (page 40).

Note: Focus point of interest and exposure point of interest are mutually exclusive, as are focus mode and exposure mode.

Focus modes

There are three focus modes:
● AVCaptureFocusModeLocked: the focal length is fixed. This is useful when you want to allow the user to compose a scene then lock the focus.
● AVCaptureFocusModeAutoFocus: the camera does a single scan focus then reverts to locked. This is suitable for a situation where you want to select a particular item on which to focus and then maintain focus on that item even if it is not the center of the scene.
● AVCaptureFocusModeContinuousAutoFocus: the camera continuously auto-focuses as needed.
You use the isFocusModeSupported: method to determine whether a device supports a given focus mode, then set the mode using the focusMode property.

In addition, a device may support a focus point of interest. You test for support using focusPointOfInterestSupported. If it's supported, you set the focal point using focusPointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode.

You can use the adjustingFocus property to determine whether a device is currently focusing. You can observe the property using key-value observing to be notified when a device starts and stops focusing.

If you change the focus mode settings, you can return them to the default configuration as follows:

if ([currentDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
    CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
    [currentDevice setFocusPointOfInterest:autofocusPoint];
    [currentDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
}

Exposure modes

There are two exposure modes:
● AVCaptureExposureModeLocked: the exposure mode is fixed.
● AVCaptureExposureModeContinuousAutoExposure: the camera continuously changes the exposure level as needed.

You use the isExposureModeSupported: method to determine whether a device supports a given exposure mode, then set the mode using the exposureMode property.

In addition, a device may support an exposure point of interest. You test for support using exposurePointOfInterestSupported. If it's supported, you set the exposure point using exposurePointOfInterest. You pass a CGPoint where {0,0} represents the top left of the picture area and {1,1} represents the bottom right in landscape mode with the home button on the right—this applies even if the device is in portrait mode.

You can use the adjustingExposure property to determine whether a device is currently changing its exposure setting. You can observe the property using key-value observing to be notified when a device starts and stops changing its exposure setting.

If you change the exposure settings, you can return them to the default configuration as follows:

if ([currentDevice isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
    CGPoint exposurePoint = CGPointMake(0.5f, 0.5f);
    [currentDevice setExposurePointOfInterest:exposurePoint];
    [currentDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
}

Flash modes

There are three flash modes:
● AVCaptureFlashModeOff: the flash will never fire.
● AVCaptureFlashModeOn: the flash will always fire.
● AVCaptureFlashModeAuto: the flash will fire if needed.

You use hasFlash to determine whether a device has a flash. You use the isFlashModeSupported: method to determine whether a device supports a given flash mode, then set the mode using the flashMode property.

Torch mode

Torch mode is where a camera uses the flash continuously at a low power to illuminate a video capture. There are three torch modes:
● AVCaptureTorchModeOff: the torch is always off.
● AVCaptureTorchModeOn: the torch is always on.
● AVCaptureTorchModeAuto: the torch is switched on and off as needed.

You use hasTorch to determine whether a device has a torch.
You use the isTorchModeSupported: method to determine whether a device supports a given torch mode, then set the mode using the torchMode property. For devices with a torch, the torch only turns on if the device is associated with a running capture session.

White balance

There are two white balance modes:
● AVCaptureWhiteBalanceModeLocked: the white balance mode is fixed.
● AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance: the camera continuously changes the white balance as needed.

You use the isWhiteBalanceModeSupported: method to determine whether a device supports a given white balance mode, then set the mode using the whiteBalanceMode property.

You can use the adjustingWhiteBalance property to determine whether a device is currently changing its white balance setting. You can observe the property using key-value observing to be notified when a device starts and stops changing its white balance setting.

Configuring a Device

To set capture properties on a device, you must first acquire a lock on the device using lockForConfiguration:. This avoids making changes that may be incompatible with settings in other applications. The following code fragment illustrates how to approach changing the focus mode on a device by first determining whether the mode is supported, then attempting to lock the device for reconfiguration. The focus mode is changed only if the lock is obtained, and the lock is released immediately afterward.

if ([device isFocusModeSupported:AVCaptureFocusModeLocked]) {
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.focusMode = AVCaptureFocusModeLocked;
        [device unlockForConfiguration];
    }
    else {
        // Respond to the failure as appropriate.
    }
}

You should only hold the device lock if you need settable device properties to remain unchanged. Holding the device lock unnecessarily may degrade capture quality in other applications sharing the device.

Switching Between Devices

Sometimes you may want to allow the user to switch between input devices—for example, on an iPhone 4 they could switch from using the front to the back camera. To avoid pauses or stuttering, you can reconfigure a session while it is running; however, you should use beginConfiguration and commitConfiguration to bracket your configuration changes:

AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];

[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];

[session commitConfiguration];

When the outermost commitConfiguration is invoked, all the changes are made together. This ensures a smooth transition.

Use Capture Inputs to Add a Capture Device to a Session

To add a capture device to a capture session, you use an instance of AVCaptureDeviceInput (a concrete subclass of the abstract AVCaptureInput class). The capture device input manages the device's ports.

NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}

You add inputs to a session using addInput:. If appropriate, you can check whether a capture input is compatible with an existing session using canAddInput:.
AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;
if ([captureSession canAddInput:captureDeviceInput]) {
    [captureSession addInput:captureDeviceInput];
}
else {
    // Handle the failure.
}

See “Configuring a Session” (page 34) for more details on how you might reconfigure a running session.

An AVCaptureInput vends one or more streams of media data. For example, input devices can provide both audio and video data. Each media stream provided by an input is represented by an AVCaptureInputPort object. A capture session uses an AVCaptureConnection object to define the mapping between a set of AVCaptureInputPort objects and a single AVCaptureOutput.

Use Capture Outputs to Get Output from a Session

To get output from a capture session, you add one or more outputs. An output is an instance of a concrete subclass of AVCaptureOutput; you use:
● AVCaptureMovieFileOutput to output to a movie file
● AVCaptureVideoDataOutput if you want to process frames from the video being captured
● AVCaptureAudioDataOutput if you want to process the audio data being captured
● AVCaptureStillImageOutput if you want to capture still images with accompanying metadata

You add outputs to a capture session using addOutput:. You check whether a capture output is compatible with an existing session using canAddOutput:. You can add and remove outputs as you want while the session is running.

AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
else {
    // Handle the failure.
}

Saving to a Movie File

You save movie data to a file using an AVCaptureMovieFileOutput object. (AVCaptureMovieFileOutput is a concrete subclass of AVCaptureFileOutput, which defines much of the basic behavior.) You can configure various aspects of the movie file output, such as the maximum duration of the recording, or the maximum file size. You can also prohibit recording if there is less than a given amount of disk space left.

AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;

The resolution and bit rate for the output depend on the capture session's sessionPreset. The video encoding is typically H.264 and the audio encoding is typically AAC. The actual values vary by device, as illustrated in the following table.
Preset    iPhone 3G                 iPhone 3GS             iPhone 4 (Back)      iPhone 4 (Front)
High      No video, Apple Lossless  640x480, 3.5 mbps      1280x720, 10.5 mbps  640x480, 3.5 mbps
Medium    No video, Apple Lossless  480x360, 700 kbps      480x360, 700 kbps    480x360, 700 kbps
Low       No video, Apple Lossless  192x144, 128 kbps      192x144, 128 kbps    192x144, 128 kbps
640x480   No video, Apple Lossless  640x480, 3.5 mbps      640x480, 3.5 mbps    640x480, 3.5 mbps
1280x720  No video, Apple Lossless  No video, 64 kbps AAC  1280x720, 10.5 mbps  No video, 64 kbps AAC
Photo     Not supported for video output on any of these devices.

Starting a Recording

You start recording a QuickTime movie using startRecordingToOutputFileURL:recordingDelegate:. You need to supply a file-based URL and a delegate. The URL must not identify an existing file, as the movie file output does not overwrite existing resources. You must also have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol, and must implement the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: method.

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];

In the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate might write the resulting movie to the camera roll. It should also check for any errors that might have occurred.

Ensuring the File Was Written Successfully

To determine whether the file was saved successfully, in the implementation of captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: you check not only the error, but also the value of the AVErrorRecordingSuccessfullyFinishedKey in the error's user info dictionary:

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
        didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
        fromConnections:(NSArray *)connections
        error:(NSError *)error {

    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    // Continue as appropriate...

You should check the value of the AVErrorRecordingSuccessfullyFinishedKey in the error's user info dictionary because the file might have been saved successfully, even though you got an error. The error might indicate that one of your recording constraints was reached, for example AVErrorMaximumDurationReached or AVErrorMaximumFileSizeReached. Other reasons the recording might stop are:
● The disk is full—AVErrorDiskFull.
● The recording device was disconnected (for example, the microphone was removed from an iPod touch)—AVErrorDeviceWasDisconnected.
● The session was interrupted (for example, a phone call was received)—AVErrorSessionWasInterrupted.

Adding Metadata to a File

You can set metadata for the movie file at any time, even while recording. This is useful for situations where the information is not available when the recording starts, as may be the case with location information.
Metadata for a file output is represented by an array of AVMetadataItem objects; you use an instance of its mutable subclass, AVMutableMetadataItem, to create metadata of your own.

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSArray *existingMetadataArray = aMovieFileOutput.metadata;
NSMutableArray *newMetadataArray = nil;
if (existingMetadataArray) {
    newMetadataArray = [existingMetadataArray mutableCopy];
}
else {
    newMetadataArray = [[NSMutableArray alloc] init];
}

AVMutableMetadataItem *item = [[AVMutableMetadataItem alloc] init];
item.keySpace = AVMetadataKeySpaceCommon;
item.key = AVMetadataCommonKeyLocation;

CLLocation *location = <#The location to set#>;
item.value = [NSString stringWithFormat:@"%+08.4lf%+09.4lf/",
              location.coordinate.latitude, location.coordinate.longitude];

[newMetadataArray addObject:item];

aMovieFileOutput.metadata = newMetadataArray;

Processing Frames of Video

An AVCaptureVideoDataOutput object uses delegation to vend video frames. You set the delegate using setSampleBufferDelegate:queue:. In addition to the delegate, you specify a serial queue on which the delegate methods are invoked. You must use a serial queue to ensure that frames are delivered to the delegate in the proper order. You should not pass the queue returned by dispatch_get_current_queue, since there is no guarantee as to which thread the current queue is running on. You can use the queue to modify the priority given to delivering and processing the video frames.

The frames are presented in the delegate method, captureOutput:didOutputSampleBuffer:fromConnection:, as instances of the CMSampleBuffer opaque type (see “Representations of Media” (page 58)). By default, the buffers are emitted in the camera's most efficient format. You can use the videoSettings property to specify a custom output format. The video settings property is a dictionary; currently, the only supported key is kCVPixelBufferPixelFormatTypeKey. The recommended pixel format choices for iPhone 4 are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or kCVPixelFormatType_32BGRA; for iPhone 3G the recommended pixel format choices are kCVPixelFormatType_422YpCbCr8 or kCVPixelFormatType_32BGRA. Both Core Graphics and OpenGL work well with the BGRA format:

AVCaptureVideoDataOutput *videoDataOutput = <#Get a video data output#>;
NSDictionary *newSettings =
    @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
videoDataOutput.videoSettings = newSettings;

Performance Considerations for Processing Video

You should set the session output to the lowest practical resolution for your application. Setting the output to a higher resolution than necessary wastes processing cycles and needlessly consumes power.

You must ensure that your implementation of captureOutput:didOutputSampleBuffer:fromConnection: is able to process a sample buffer within the amount of time allotted to a frame. If it takes too long and you hold onto the video frames, AV Foundation will stop delivering frames, not only to your delegate but also to other outputs such as a preview layer.

You can use the capture video data output's minFrameDuration property to ensure you have enough time to process a frame—at the cost of having a lower frame rate than would otherwise be the case, as in the sketch that follows.
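For example, a minimal sketch (assuming an already-configured AVCaptureVideoDataOutput) that caps delivery at roughly 15 frames per second:

AVCaptureVideoDataOutput *videoDataOutput = <#Get a video data output#>;
// Request a minimum duration of 1/15 second between frames (about 15 fps).
videoDataOutput.minFrameDuration = CMTimeMake(1, 15);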
You might also ensure that the alwaysDiscardsLateVideoFrames property is set to YES (the default). This ensures that any late video frames are dropped rather than handed to you for processing. Alternatively, if you are recording, and it doesn't matter if the output frames are a little late because you would prefer to get all of them, you can set the property value to NO. This does not mean that frames will not be dropped (that is, frames may still be dropped), but they may not be dropped as early, or as efficiently.

Capturing Still Images

You use an AVCaptureStillImageOutput output if you want to capture still images with accompanying metadata. The resolution of the image depends on the preset for the session, as illustrated in this table:

Preset    iPhone 3G  iPhone 3GS  iPhone 4 (Back)  iPhone 4 (Front)
High      400x304    640x480     1280x720         640x480
Medium    400x304    480x360     480x360          480x360
Low       400x304    192x144     192x144          192x144
640x480   N/A        640x480     640x480          640x480
1280x720  N/A        N/A         1280x720         N/A
Photo     1600x1200  2048x1536   2592x1936        640x480

Pixel and Encoding Formats

Different devices support different image formats:

iPhone 3G               iPhone 3GS              iPhone 4
yuvs, 2vuy, BGRA, jpeg  420f, 420v, BGRA, jpeg  420f, 420v, BGRA, jpeg

You can find out what pixel and codec types are supported using availableImageDataCVPixelFormatTypes and availableImageDataCodecTypes respectively. You set the outputSettings dictionary to specify the image format you want, for example:

AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
[stillImageOutput setOutputSettings:outputSettings];

If you want to capture a JPEG image, you should typically not specify your own compression format. Instead, you should let the still image output do the compression for you, since its compression is hardware-accelerated. If you need a data representation of the image, you can use jpegStillImageNSDataRepresentation: to get an NSData object without recompressing the data, even if you modify the image's metadata.

Capturing an Image

When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture. You need to look for the connection whose input port is collecting video:

AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) { break; }
}

The second argument to captureStillImageAsynchronouslyFromConnection:completionHandler: is a block that takes two arguments: a CMSampleBuffer containing the image data, and an error. The sample buffer itself may contain metadata, such as an Exif dictionary, as an attachment. You can modify the attachments should you want, but note the optimization for JPEG images discussed in “Pixel and Encoding Formats” (page 47).

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
    completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        CFDictionaryRef exifAttachments =
            CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
        if (exifAttachments) {
            // Do something with the attachments.
        }
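        // A sketch of one possible continuation (not part of the original example):
        // get a JPEG representation of the captured frame without recompressing it.
        NSData *jpegData =
            [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:jpegData];
        // Use the image, for example to update a thumbnail in the user interface.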
Showing the User What’s Being Recorded

You can provide the user with a preview of what’s being recorded by the camera using a preview layer, or by the microphone by monitoring the audio channel.

Video Preview

You can provide the user with a preview of what’s being recorded using an AVCaptureVideoPreviewLayer object. AVCaptureVideoPreviewLayer is a subclass of CALayer (see Core Animation Programming Guide). You don’t need any outputs to show the preview.

Unlike a capture output, a video preview layer maintains a strong reference to the session with which it is associated. This is to ensure that the session is not deallocated while the layer is attempting to display video. This is reflected in the way you initialize a preview layer:

AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];

In general, the preview layer behaves like any other CALayer object in the render tree (see Core Animation Programming Guide). You can scale the image and perform transformations, rotations, and so on just as you would any layer. One difference is that you may need to set the layer’s orientation property to specify how it should rotate images coming from the camera. In addition, on iPhone 4 the preview layer supports mirroring (this is the default when previewing the front-facing camera).

Video Gravity Modes

The preview layer supports three gravity modes that you set using videoGravity:
● AVLayerVideoGravityResizeAspect: This preserves the aspect ratio, leaving black bars where the video does not fill the available screen area.
● AVLayerVideoGravityResizeAspectFill: This preserves the aspect ratio, but fills the available screen area, cropping the video when necessary.
● AVLayerVideoGravityResize: This simply stretches the video to fill the available screen area, even if doing so distorts the image.

Using “Tap to Focus” With a Preview

You need to take care when implementing tap-to-focus in conjunction with a preview layer. You must account for the preview orientation and gravity of the layer, and the possibility that the preview may be mirrored.

Showing Audio Levels

To monitor the average and peak power levels in an audio channel in a capture connection, you use an AVCaptureAudioChannel object. Audio levels are not key-value observable, so you must poll for updated levels as often as you want to update your user interface (for example, 10 times a second).

AVCaptureAudioDataOutput *audioDataOutput = <#Get the audio data output#>;
NSArray *connections = audioDataOutput.connections;
if ([connections count] > 0) {
    // There should be only one connection to an AVCaptureAudioDataOutput.
    AVCaptureConnection *connection = [connections objectAtIndex:0];
    NSArray *audioChannels = connection.audioChannels;
    for (AVCaptureAudioChannel *channel in audioChannels) {
        float avg = channel.averagePowerLevel;
        float peak = channel.peakHoldLevel;
        // Update the level meter user interface.
    }
}
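The guide does not prescribe a polling mechanism; one minimal approach is a repeating timer, where updateAudioLevels: is a hypothetical method containing the code shown above:

// Poll roughly 10 times a second, as suggested above.
[NSTimer scheduledTimerWithTimeInterval:0.1
                                 target:self
                               selector:@selector(updateAudioLevels:)
                               userInfo:nil
                                repeats:YES];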
Putting it all Together: Capturing Video Frames as UIImage Objects

This brief code example illustrates how you can capture video and convert the frames you get to UIImage objects. It shows you how to:
● Create an AVCaptureSession object to coordinate the flow of data from an AV input device to an output
● Find the AVCaptureDevice object for the input type you want
● Create an AVCaptureDeviceInput object for the device
● Create an AVCaptureVideoDataOutput object to produce video frames
● Implement a delegate for the AVCaptureVideoDataOutput object to process video frames
● Implement a function to convert the CMSampleBuffer received by the delegate into a UIImage object

Note: To focus on the most relevant code, this example omits several aspects of a complete application, including memory management. To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

Create and Configure a Capture Session

You use an AVCaptureSession object to coordinate the flow of data from an AV input device to an output. Create a session, and configure it to produce medium-resolution video frames.

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;

Create and Configure the Device and Device Input

Capture devices are represented by AVCaptureDevice objects; the class provides methods to retrieve an object for the input type you want. A device has one or more ports, configured using an AVCaptureInput object. Typically, you use the capture input in its default configuration.

Find a video capture device, then create a device input with the device and add it to the session.

AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
[session addInput:input];

Create and Configure the Data Output

You use an AVCaptureVideoDataOutput object to process uncompressed frames from the video being captured. You typically configure several aspects of an output. For video, for example, you can specify the pixel format using the videoSettings property, and cap the frame rate by setting the minFrameDuration property.

Create and configure an output for video data and add it to the session; cap the frame rate to 15 fps by setting the minFrameDuration property to 1/15 second:

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
output.minFrameDuration = CMTimeMake(1, 15);

The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output’s delegate, you must also provide a queue on which callbacks should be invoked.

dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

You use the queue to modify the priority given to delivering and processing the video frames.
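As a defensive variation on the addOutput: call above (not part of the original tutorial), you can first ask the session whether it will accept the output:

if ([session canAddOutput:output]) {
    [session addOutput:output];
}
else {
    // The output cannot be added; for example, it is incompatible with the current preset.
}

AVCaptureSession provides a parallel canAddInput: check for inputs.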
Implement the Sample Buffer Delegate Method

In the delegate class, implement the method (captureOutput:didOutputSampleBuffer:fromConnection:) that is called when a sample buffer is written. The video data output object delivers frames as CMSampleBuffers, so you need to convert from the CMSampleBuffer to a UIImage object. The function for this operation is shown in “Converting a CMSampleBuffer to a UIImage” (page 59).

- (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
        fromConnection:(AVCaptureConnection *)connection {

    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    // Add your code here that uses the image.
}

Remember that the delegate method is invoked on the queue you specified in setSampleBufferDelegate:queue:; if you want to update the user interface, you must invoke any relevant code on the main thread.

Starting and Stopping Recording

After configuring the capture session, you send it a startRunning message to start the recording.

[session startRunning];

To stop recording, you send the session a stopRunning message.

Time and Media Representations

Time-based audiovisual data such as a movie file or a video stream is represented in the AV Foundation framework by AVAsset. Its structure dictates much of how the framework works. Several low-level data structures that AV Foundation uses to represent time and media, such as sample buffers, come from the Core Media framework.

Representation of Assets

AVAsset is the core class in the AV Foundation framework. It provides a format-independent abstraction of time-based audiovisual data, such as a movie file or a video stream. In many cases, you work with one of its subclasses: you use the composition subclasses when you create new assets (see “Editing” (page 7)), and you use AVURLAsset to create a new asset instance from media at a given URL (including assets from the MPMedia framework or the Asset Library framework; see “Using Assets” (page 9)).

[Figure: the AVAsset class hierarchy. NSObject is the root class; AVURLAsset and AVComposition are subclasses of AVAsset, and AVMutableComposition is a mutable subclass of AVComposition.]

An asset contains a collection of tracks that are intended to be presented or processed together, each of a uniform media type, including (but not limited to) audio, video, text, closed captions, and subtitles. The asset object provides information about the whole resource, such as its duration or title, as well as hints for presentation, such as its natural size. Assets may also have metadata, represented by instances of AVMetadataItem.

A track is represented by an instance of AVAssetTrack. In a typical simple case, one track represents the audio component and another represents the video component; in a complex composition, there may be multiple overlapping tracks of audio and video.

[Figure: an AVAsset contains a collection of AVMetadataItem objects and a collection of AVAssetTrack objects.]

A track has a number of properties, such as its type (video or audio), visual and/or audible characteristics (as appropriate), metadata, and timeline (expressed in terms of its parent asset). A track also has an array of format descriptions. The array contains CMFormatDescriptions (see CMFormatDescriptionRef), each of which describes the format of media samples referenced by the track. A track that contains uniform media (for example, all encoded using the same settings) will provide an array with a count of 1.
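As a minimal sketch of walking this structure, assuming a URL that identifies a local movie file, you might log each track’s media type. (Accessing tracks synchronously like this can block; in production code you would load the tracks key asynchronously, as described in “Using Assets”.)

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:<#A URL that identifies an audiovisual asset#> options:nil];
for (AVAssetTrack *track in asset.tracks) {
    // mediaType is, for example, AVMediaTypeVideo or AVMediaTypeAudio.
    NSLog(@"Track %d: %@", (int)track.trackID, track.mediaType);
}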
A track may itself be divided into segments, represented by instances of AVAssetTrackSegment. A segment is a time mapping from the source to the asset track timeline.

Representations of Time

Time in AV Foundation is represented by primitive structures from the Core Media framework.

CMTime Represents a Length of Time

CMTime is a C structure that represents time as a rational number, with a numerator (an int64_t value) and a denominator (an int32_t timescale). Conceptually, the timescale specifies the fraction of a second each unit in the numerator occupies. Thus if the timescale is 4, each unit represents a quarter of a second; if the timescale is 10, each unit represents a tenth of a second, and so on. You frequently use a timescale of 600, because this is a common multiple of several commonly used frame rates: 24 frames per second (fps) for film, 30 fps for NTSC (used for TV in North America and Japan), and 25 fps for PAL (used for TV in Europe). Using a timescale of 600, you can exactly represent any number of frames in these systems.

In addition to a simple time value, a CMTime can represent non-numeric values: +infinity, -infinity, and indefinite. It can also indicate whether the time has been rounded at some point, and it maintains an epoch number.

Using CMTime

You create a time using CMTimeMake, or one of the related functions such as CMTimeMakeWithSeconds (which allows you to create a time using a float value and specify a preferred timescale). There are several functions for time-based arithmetic and for comparing times, as illustrated in the following example.

CMTime time1 = CMTimeMake(200, 2); // 200 half-seconds
CMTime time2 = CMTimeMake(400, 4); // 400 quarter-seconds
// time1 and time2 both represent 100 seconds, but using different timescales.
if (CMTimeCompare(time1, time2) == 0) {
    NSLog(@"time1 and time2 are the same");
}

Float64 float64Seconds = 200.0 / 3;
CMTime time3 = CMTimeMakeWithSeconds(float64Seconds, 3); // 66.66... third-seconds
time3 = CMTimeMultiply(time3, 3);
// time3 now represents 200 seconds; next subtract time1 (100 seconds).
time3 = CMTimeSubtract(time3, time1);
CMTimeShow(time3);

if (CMTIME_COMPARE_INLINE(time2, ==, time3)) {
    NSLog(@"time2 and time3 are the same");
}

For a list of all the available functions, see CMTime Reference.

Special Values of CMTime

Core Media provides constants for special values: kCMTimeZero, kCMTimeInvalid, kCMTimePositiveInfinity, and kCMTimeNegativeInfinity. There are many ways, though, in which a CMTime can, for example, represent a time that is invalid. If you need to test whether a CMTime is valid, or is a non-numeric value, you should use an appropriate macro, such as CMTIME_IS_INVALID, CMTIME_IS_POSITIVE_INFINITY, or CMTIME_IS_INDEFINITE.

CMTime myTime = <#Get a CMTime#>;
if (CMTIME_IS_INVALID(myTime)) {
    // Perhaps treat this as an error; display a suitable alert to the user.
}

You should not compare the value of an arbitrary CMTime with kCMTimeInvalid.
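A related minimal sketch: before converting an arbitrary CMTime to seconds for display, you can guard with the CMTIME_IS_NUMERIC macro.

CMTime duration = CMTimeMakeWithSeconds(2.5, 600); // 2.5 seconds at timescale 600
if (CMTIME_IS_NUMERIC(duration)) {
    Float64 seconds = CMTimeGetSeconds(duration); // 2.5
    NSLog(@"Duration: %f seconds", seconds);
}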
Representing a CMTime as an Object

If you need to use CMTimes in annotations or Core Foundation containers, you can convert a CMTime to and from a CFDictionary (see CFDictionaryRef) using CMTimeCopyAsDictionary and CMTimeMakeFromDictionary respectively. You can also get a string representation of a CMTime using CMTimeCopyDescription.

Epochs

The epoch number of a CMTime is usually set to 0, but you can use it to distinguish unrelated timelines. For example, the epoch could be incremented each cycle through a presentation loop, to differentiate between time N in loop 0 and time N in loop 1.

CMTimeRange Represents a Time Range

CMTimeRange is a C structure that has a start time and duration, both expressed as CMTimes. A time range does not include the time that is the start time plus the duration.

You create a time range using CMTimeRangeMake or CMTimeRangeFromTimeToTime. There are constraints on the values of the CMTimes’ epochs:
● CMTimeRanges cannot span different epochs.
● The epoch in a CMTime that represents a timestamp may be non-zero, but you can only perform range operations (such as CMTimeRangeGetUnion) on ranges whose start fields have the same epoch.
● The epoch in a CMTime that represents a duration should always be 0, and the value must be non-negative.

Working with Time Ranges

Core Media provides functions you can use to determine whether a time range contains a given time or other time range, or whether two time ranges are equal, and to calculate unions and intersections of time ranges, such as CMTimeRangeContainsTime, CMTimeRangeEqual, CMTimeRangeContainsTimeRange, and CMTimeRangeGetUnion.

Given that a time range does not include the time that is the start time plus the duration, the following expression always evaluates to false:

CMTimeRangeContainsTime(range, CMTimeRangeGetEnd(range))

For a list of all the available functions, see CMTimeRange Reference.

Special Values of CMTimeRange

Core Media provides constants for a zero-length range and an invalid range, kCMTimeRangeZero and kCMTimeRangeInvalid respectively. There are many ways, though, in which a CMTimeRange can be invalid, or zero, or indefinite (if one of the CMTimes is indefinite). If you need to test whether a CMTimeRange is valid, zero, or indefinite, you should use an appropriate macro: CMTIMERANGE_IS_VALID, CMTIMERANGE_IS_INVALID, CMTIMERANGE_IS_EMPTY, or CMTIMERANGE_IS_INDEFINITE.

CMTimeRange myTimeRange = <#Get a CMTimeRange#>;
if (CMTIMERANGE_IS_EMPTY(myTimeRange)) {
    // The time range is zero.
}

You should not compare the value of an arbitrary CMTimeRange with kCMTimeRangeInvalid.

Representing a CMTimeRange as an Object

If you need to use CMTimeRanges in annotations or Core Foundation containers, you can convert a CMTimeRange to and from a CFDictionary (see CFDictionaryRef) using CMTimeRangeCopyAsDictionary and CMTimeRangeMakeFromDictionary respectively. You can also get a string representation of a CMTimeRange using CMTimeRangeCopyDescription.
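To make the exclusive end time concrete, here is a minimal sketch of a 10-second range starting at 5 seconds, using the timescale of 600 recommended earlier:

CMTime start = CMTimeMakeWithSeconds(5.0, 600);
CMTime duration = CMTimeMakeWithSeconds(10.0, 600);
CMTimeRange range = CMTimeRangeMake(start, duration); // Covers 5s up to, but not including, 15s.
Boolean containsSeven = CMTimeRangeContainsTime(range, CMTimeMakeWithSeconds(7.0, 600)); // true
Boolean containsEnd = CMTimeRangeContainsTime(range, CMTimeRangeGetEnd(range)); // false: the end time is excluded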
Representations of Media

Video data and its associated metadata are represented in AV Foundation by opaque objects from the Core Media framework. Core Media represents video data using CMSampleBuffer (see CMSampleBufferRef). CMSampleBuffer is a Core Foundation-style opaque type; an instance contains the sample buffer for a frame of video data as a Core Video pixel buffer (see CVPixelBufferRef). You access the pixel buffer from a sample buffer using CMSampleBufferGetImageBuffer:

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(<#A CMSampleBuffer#>);

From the pixel buffer, you can access the actual video data. For an example, see “Converting a CMSampleBuffer to a UIImage” (page 59).

In addition to the video data, you can retrieve a number of other aspects of the video frame:
● Timing information. You get accurate timestamps for both the original presentation time and the decode time using CMSampleBufferGetPresentationTimeStamp and CMSampleBufferGetDecodeTimeStamp respectively.
● Format information. The format information is encapsulated in a CMFormatDescription object (see CMFormatDescriptionRef). From the format description, you can get, for example, the pixel type and video dimensions using CMVideoFormatDescriptionGetCodecType and CMVideoFormatDescriptionGetDimensions respectively.
● Metadata. Metadata is stored in a dictionary as an attachment. You use CMGetAttachment to retrieve the dictionary:

CMSampleBufferRef sampleBuffer = <#Get a sample buffer#>;
CFDictionaryRef metadataDictionary = CMGetAttachment(sampleBuffer, CFSTR("MetadataDictionary"), NULL);
if (metadataDictionary) {
    // Do something with the metadata.
}

Converting a CMSampleBuffer to a UIImage

The following function shows how you can convert a CMSampleBuffer to a UIImage object. You should consider your requirements carefully before using it. Performing the conversion is a comparatively expensive operation. It is appropriate, for example, to create a still image from a frame of video data taken every second or so. You should not use this as a means to manipulate every frame of video coming from a capture device in real time.

UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer) {

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer.
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height.
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space.
    static CGColorSpaceRef colorSpace = NULL;
    if (colorSpace == NULL) {
        colorSpace = CGColorSpaceCreateDeviceRGB();
        if (colorSpace == NULL) {
            // Handle the error appropriately.
            return nil;
        }
    }

    // Get the base address of the pixel buffer.
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply.
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by the data provider.
    CGImageRef cgImage = CGImageCreate(width, height, 8, 32, bytesPerRow,
        colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
        dataProvider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(dataProvider);

    // Create and return an image object to represent the Quartz image.
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    // Unlock the pixel buffer base address before returning.
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}

Document Revision History

This table describes the changes to AV Foundation Programming Guide.

2011-10-12: Updated for iOS 5 to include references to release notes.
2011-04-28: First release for OS X v10.7.
2010-09-08: TBD
2010-08-16: First version of a document that describes a low-level framework you use to play, inspect, create, edit, capture, and transcode media assets.
Preferences and Settings Programming Guide

Contents

About Preferences and Settings 5
  At a Glance 5
    You Decide What Preferences You Want to Expose 5
    Apps Provide Their Own Preferences Interface 5
    Apps Access Preferences Using the User Defaults Object 6
    iCloud Stores Shared Preference and Configuration Data 6
    Defaults Are Grouped into Domains in OS X 6
    A Settings Bundle Manages Preferences for iOS Apps 6
  See Also 7
About the User Defaults System 8
  What Makes a Good Preference? 8
  Providing a Preference Interface 8
  The Organization of Preferences 9
    The Argument Domain 10
    The Application Domain 10
    The Global Domain 11
    The Languages Domains 11
    The Registration Domain 11
  Viewing Preferences Using the Defaults Tool 12
Accessing Preference Values 13
  Registering Your App’s Default Preferences 13
  Getting and Setting Preference Values 14
  Synchronizing and Detecting Preference Changes 15
  Managing Preferences Using Cocoa Bindings 16
  Managing Preferences Using Core Foundation 16
    Setting a Preference Value Using Core Foundation 16
    Getting a Preference Value Using Core Foundation 17
Storing Preferences in iCloud 19
  Strategies for Using the iCloud Key-Value Store 19
  Configuring Your App to Use the Key-Value Store 20
  Accessing Values in the Key-Value Store 21
  Defining the Scope of Key-Value Store Changes 22
Implementing an iOS Settings Bundle 24
  The Settings App Interface 24
  The Settings Bundle 26
  The Settings Page File Format 27
  Hierarchical Preferences 27
  Localized Resources 28
  Creating and Modifying the Settings Bundle 29
    Adding the Settings Bundle 29
    Preparing the Settings Page for Editing 29
    Configuring a Settings Page: A Tutorial 31
    Creating Additional Settings Page Files 33
  Debugging Preferences for Simulated Apps 34
Document Revision History 35

Figures, Tables, and Listings

About the User Defaults System
  Table 1-1: Options for displaying preferences to the user 8
  Table 1-2: Search order for domains 10
Accessing Preference Values
  Listing 2-1: Registering default preference values 14
  Listing 2-2: Writing a simple default 17
  Listing 2-3: Reading a simple default 17
Storing Preferences in iCloud
  Listing 3-1: Updating local preference values using iCloud 21
Implementing an iOS Settings Bundle
  Figure 4-1: Organizing preferences using child panes 28
  Figure 4-2: Formatted contents of the Root.plist file 30
  Figure 4-3: A root Settings page 31
  Table 4-1: Preference control types 25
  Table 4-2: Contents of the Settings.bundle directory 26
  Table 4-3: Root-level keys of a preferences Settings page file 27

About Preferences and Settings

Preferences are pieces of information that you store persistently and use to configure your app. Apps often expose preferences to users so that they can customize the appearance and behavior of the app. Most preferences are stored locally using the Cocoa preferences system, known as the user defaults system. Apps can also store preferences in a user’s iCloud account using the key-value store.

The user defaults system and key-value store are both designed for storing simple data types (strings, numbers, dates, Boolean values, URLs, data objects, and so forth) in a property list. The use of a property list also means you can organize your preference data using array and dictionary types. It is also possible to store other objects in a property list by encoding them into an NSData object first.
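For example, a minimal sketch of that encoding step for an iOS app (UIColor adopts NSCoding; the key name here is hypothetical):

// Encode a non-property-list object into NSData so it can be stored as a preference.
NSData *colorData = [NSKeyedArchiver archivedDataWithRootObject:[UIColor blueColor]];
[[NSUserDefaults standardUserDefaults] setObject:colorData forKey:@"DefaultTextColor"];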
At a Glance

Apps integrate preferences in several ways, including programmatically at various points throughout your code and as part of the user interface. Preferences are supported in both iOS and Mac apps.

You Decide What Preferences You Want to Expose

Preferences are different for each app, and it is up to you to decide what parts of your app you want to make configurable. Configuration involves checking the value of a stored preference from your code and taking action based on that value. Thus, the preference value itself should always be simple and have a specific meaning that is then implemented by your app.

Relevant section: “What Makes a Good Preference?” (page 8)

Apps Provide Their Own Preferences Interface

Because each app’s preferences are different, the app itself is responsible for deciding how best to present those preferences to the user, if at all. Both iOS and OS X provide some standard places for you to incorporate a preferences interface, but you are still responsible for designing that interface and displaying it at the appropriate time.

Relevant section: “Providing a Preference Interface” (page 8)

Apps Access Preferences Using the User Defaults Object

Apps access locally stored preferences using a user defaults object, which is either an NSUserDefaults object (iOS and OS X) or an NSUserDefaultsController object (OS X only). In addition to retrieving preference values, apps can use this object to register default values for preferences and manage other aspects of the preferences system.

Relevant chapter: “Accessing Preference Values” (page 13)

iCloud Stores Shared Preference and Configuration Data

Apps that support iCloud can put some of their preference data in the user’s iCloud account and make it available to instances of the app running on the user’s other devices. You use this capability to supplement (not replace) your app’s existing preferences data and provide a more coherent experience across the user’s devices. For example, a magazine app might store information about the page number and issue last read by the user so that the app running on a different device can show that same page.

Relevant chapter: “Storing Preferences in iCloud” (page 19)

Defaults Are Grouped into Domains in OS X

OS X preferences are grouped by domains so that system preferences can be differentiated from app preferences. Splitting preferences in this manner lets the user specify some preferences globally and then override one or more of those preferences inside an app.

Relevant section: “The Organization of Preferences” (page 9)

A Settings Bundle Manages Preferences for iOS Apps

In iOS, apps can display preferences from the Settings app, which is a good place to put preferences that the user does not need to configure frequently. To display preferences in the Settings app, an app’s bundle must include a special resource called a Settings bundle that defines the preferences to display, the proper way to display them, and the information needed to record the user’s selections.

Note: Apps are not required to use a Settings bundle to manage all preferences. For preferences that the user is likely to change frequently, the app can display its own custom interface for managing those preferences.
Relevant chapter: “Implementing an iOS Settings Bundle” (page 24)

See Also

For information about property lists, see Property List Programming Guide. For more advanced information about using Core Foundation to manage preferences, see Preferences Programming Topics for Core Foundation.

About the User Defaults System

The user defaults system manages the storage of preferences for each user. Most preferences are stored persistently and therefore do not change between subsequent launch cycles of your app. Apps use preferences to track user-initiated and program-initiated configuration changes.

What Makes a Good Preference?

When defining your app’s preferences, it is better to use simple values and data types whenever possible. The preferences system is built around property-list data types such as strings, numbers, and dates. Although you can use an NSData object to store arbitrary objects in preferences, doing so is not recommended in most cases.

Storing objects persistently means that your app has to decode that object at some point. In the case of preferences, a stored object means decoding the object every time you access the preference. It also means that a newer version of your app has to ensure that it is able to decode objects created and written to disk using an earlier version of your app, which is potentially error prone.

A better approach for preferences is to store simple strings and values and use them to create the objects your app needs. Storing simple values means that your app can always access the value. The only thing that changes from release to release is the interpretation of the simple value and the objects your app creates in response.

Providing a Preference Interface

For user-facing preferences, Table 1-1 lists the options for displaying those preferences to the user. As you can see from this table, most options involve the creation of a custom user interface for managing and presenting preferences. If you are creating an iOS app, you can use a Settings bundle to present preferences, but you should do so only for settings the user changes infrequently.

Table 1-1: Options for displaying preferences to the user

Preference | iOS | OS X
Frequently changed preferences | Custom UI | Custom UI
Infrequently changed preferences | Settings bundle | Custom UI

Note: Examples of preferences that might change frequently include things like the volume levels or control options of a game. Examples of preferences that might change infrequently are the email address and server settings in the Mail app.

For iOS apps, it is ultimately up to you to decide whether it is appropriate to expose preferences from the Settings app or from inside your app. Preferences in Mac apps should be accessible from a Preferences menu item in the app menu. Cocoa apps created using the Xcode templates provide such a menu item for you automatically. It is your responsibility to present an appropriate user interface when the user chooses this menu item. You can provide that user interface by defining an action method in your app delegate that displays a custom preferences window and connecting that action method to the menu item in Interface Builder. There is no standard way to display custom preferences from inside an iOS app.
You can integrate preferences in many ways, including using a separate tab in a tab-bar interface or using a custom button from one of your app’s screens. Preferences should generally be presented using a distinct view controller so that changes in preferences can be recorded when that view controller is dismissed by the user.

The Organization of Preferences

Preferences are grouped into domains, each of which has a name and a specific usage. For example, there’s a domain for app-specific preferences and another for systemwide preferences that apply to all apps. All preferences are stored and accessed on a per-user basis. There is no support for sharing preferences between users.

Each preference has three components:
● The domain in which it is stored
● Its name (specified as an NSString object)
● Its value, which can be any property-list object (NSData, NSString, NSNumber, NSDate, NSArray, or NSDictionary)

The lifetime of a preference depends on which domain you store it in. Some domains store preferences persistently by writing them to the user’s defaults database. Such preferences continue to exist from one app launch to the next. Other domains store preferences in a more volatile way, preserving preference values only for the life of the corresponding user defaults object.

A search for the value of a given preference proceeds through the domains in an NSUserDefaults object’s search list. Only domains in the search list are searched, and they are searched in the order shown in Table 1-2, starting with the NSArgumentDomain domain. A search ends when a preference with the specified name is found. If multiple domains contain the same preference, the value is taken from the domain nearest the beginning of the search list.

Table 1-2: Search order for domains

Domain | State
NSArgumentDomain | volatile
Application (identified by the app’s identifier) | persistent
NSGlobalDomain | persistent
Languages (identified by the language names) | volatile
NSRegistrationDomain | volatile

The Argument Domain

The argument domain comprises values set from command-line arguments (if you started the app from the command line) and is identified by the NSArgumentDomain constant. Values set from the command line are automatically placed into this domain by the system. To add a value to this domain, specify the preference name on the command line (preceded by a hyphen) and follow it with the corresponding value. For example, the following command launches Xcode and sets the value of its IndexOnOpen preference to NO:

localhost> Xcode.app/Contents/MacOS/Xcode -IndexOnOpen NO

Preferences set from the command line temporarily override the established values stored in the user’s defaults database. In the preceding example, setting the IndexOnOpen preference to NO prevents Xcode from indexing projects automatically, even if the preference is set to YES in the user defaults database.

The Application Domain

The application domain contains app-specific preferences that are stored in the user defaults database of the current user. When you use the shared NSUserDefaults object (or an NSUserDefaultsController object in OS X) to write preferences, those preferences are automatically placed in this domain. Because this domain is app-specific, the contents of the domain are tied to your app’s bundle identifier.
The contents of this domain are stored in a file that is managed by the system. Currently, this file is located in the $HOME/Library/Preferences/ directory, where $HOME is either the app’s home directory or the user’s home directory (depending on the platform and whether your app is in a sandbox). The name of the user defaults database file is <bundle identifier>.plist, where <bundle identifier> is your app’s bundle identifier. You should not modify this file directly, but you can inspect it during debugging to make sure preference values are being written by your app.

The Global Domain

The global domain contains preferences that are applicable to all apps and is identified by the NSGlobalDomain constant. This domain is typically used by system frameworks to store system-wide values and should not be used by your app to store app-specific values. If you want to change the value of a preference in the global domain, write that same preference to the application domain with the new value.

Examples of how the system frameworks use this domain:
● Instances of the NSRulerView class store the user’s preferred measurement units in the AppleMeasurementUnits key. Using this storage location causes ruler views in all apps to use the same units.
● The system uses the AppleLanguages key to store the user’s preferred languages as an array of strings. For example, a user could specify English as the preferred language, followed by Spanish, French, German, Italian, and Swedish.

The Languages Domains

For each language in the AppleLanguages preference, the system records language-specific preference values in a domain whose name is based on the language name. Each language-specific domain contains preferences for the corresponding locale. Many classes in the Foundation framework (such as the NSDate, NSDateFormatter, NSTimeZone, NSString, and NSScanner classes) use this locale information to modify their behavior. For example, when you request a string representation of an NSCalendarDate object, the NSCalendarDate object uses the locale information to find the names of months and the days of the week for the user’s preferred language.

The Registration Domain

The registration domain defines the set of default values to use if a given preference is not set explicitly in one of the other domains. At launch time, an app can call the registerDefaults: method of NSUserDefaults to specify a default set of values for important preferences. When an app launches for the first time, most preferences have no values, so retrieving them would yield undefined results. Registering a set of default values ensures that your app always has a known good set of values to operate on. The contents of the registration domain can be set only by using the registerDefaults: method.

Viewing Preferences Using the Defaults Tool

In OS X, the defaults command-line tool provides a way for you to examine the contents of the user defaults database. During app development, you might use this tool to validate the preferences your app is writing to disk. To do that, you would use a command of the following form from the Terminal app:

defaults read <domain name>

To read the contents of the global domain, you would use the following command:

defaults read NSGlobalDomain

For more information about using the defaults tool to read and write preference values, see the defaults man page.
Accessing Preference Values

You use the NSUserDefaults class to gain access to your app’s preferences. Each app is provided with a single instance of this class, accessible from the standardUserDefaults class method. You use the shared user defaults object to:
● Specify any default values for your app’s preferences at launch time.
● Get and set individual preference values stored in the app domain.
● Remove preference values.
● Examine the contents of the volatile preference domains.

Mac apps that use Cocoa bindings can use an NSUserDefaultsController object to set and get preferences automatically. You typically add such an object to the same nib file you use for displaying user-facing preferences. You bind your user interface controls to items in the user defaults controller, which handles the process of getting and setting values in the user defaults database.

Preference values must be one of the standard property list object types: NSData, NSString, NSNumber, NSDate, NSArray, or NSDictionary. The NSUserDefaults class also provides built-in manipulations for storing NSURL objects as preference values. For more information about property lists and their contents, see Property List Programming Guide.

Registering Your App’s Default Preferences

At launch time, an app should register default values for any preferences that it expects to be present and valid. When you request the value of a preference that has never been set, the methods of the NSUserDefaults class return default values that are appropriate for the data type. For numerical scalar values, this typically means returning 0, but for strings and other objects it means returning nil. If these standard default values are not appropriate for your app, you can register your own default values using the registerDefaults: method. This method places your custom default values in the NSRegistrationDomain domain, which causes them to be returned when a preference is not explicitly set.

When calling the registerDefaults: method, you must provide a dictionary of all the default values you need to register. Listing 2-1 shows an example where an iOS app registers its default values early in the launch cycle. You can register default values at any time, of course, but you should always register them before attempting to retrieve any preference values.

Listing 2-1: Registering default preference values

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Register the preference defaults early.
    NSDictionary *appDefaults = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
        forKey:@"CacheDataAggressively"];
    [[NSUserDefaults standardUserDefaults] registerDefaults:appDefaults];

    // Other initialization...
    return YES;
}

When registering default values for scalar types, use an NSNumber object to specify the value for the number. If you want to register a preference whose value is a URL, use the archivedDataWithRootObject: method of NSKeyedArchiver to encode the URL in an NSData object first. Although you can use a similar technique for other types of objects, you should avoid doing so when a simpler option is available.
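For example, a minimal sketch of registering a URL default in this way (the HomePageURL key and the URL itself are hypothetical):

NSURL *defaultURL = [NSURL URLWithString:@"http://www.example.com/"];
NSData *urlData = [NSKeyedArchiver archivedDataWithRootObject:defaultURL];
[[NSUserDefaults standardUserDefaults] registerDefaults:
    [NSDictionary dictionaryWithObject:urlData forKey:@"HomePageURL"]];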
Getting and Setting Preference Values

You get and set preference values using the methods of the NSUserDefaults class. This class has methods for getting and setting preferences with scalar values of type Boolean, integer, float, and double. It also has methods for getting and setting preferences whose value is an object of type NSData, NSDate, NSString, NSNumber, NSArray, NSDictionary, or NSURL.

There are two situations where you might get preference values and one where you might set them:
● Get preference values:
  ● When you need to use the value to configure your app’s behavior.
  ● When you need to display the value in your preferences interface.
● Set preference values when the user changes them in your preferences interface.

The following code shows how you might get a preference value in your code. In this example, the code retrieves the value of the CacheDataAggressively key, which is a custom key that the app might use to determine its caching strategy. Code like this can be used anywhere to handle custom configuration of your app. If you wanted to display this particular preference value to the user, you would use similar code to configure the controls of your preferences interface.

if ([[NSUserDefaults standardUserDefaults] boolForKey:@"CacheDataAggressively"]) {
    // Delete the backup file.
}

To set a preference value programmatically, you call the corresponding setter methods of NSUserDefaults. When setting object values, you must use the setObject:forKey: method. When calling this method, you must make sure that the object is one of the standard property list types. The following example sets some preferences based on the state of the app’s preferences interface.

NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
if ([cacheAggressivelyButton state] == NSOnState) {
    // The user wants to cache files aggressively.
    [defaults setBool:YES forKey:@"CacheDataAggressively"];
    [defaults setObject:[NSDate dateWithTimeIntervalSinceNow:(3600 * 24 * 7)]
        forKey:@"CacheExpirationDate"]; // Set a 1-week expiration.
}
else {
    // The user wants to use lazy caching.
    [defaults setBool:NO forKey:@"CacheDataAggressively"];
    [defaults removeObjectForKey:@"CacheExpirationDate"];
}

You do not have to display a preferences interface to manage all values. Your app can use preferences to cache interesting information. For example, NSWindow objects store their current location in the user defaults system. This data allows them to return to the same location the next time the user starts the app.

Synchronizing and Detecting Preference Changes

Because the NSUserDefaults class caches values, it is sometimes necessary to synchronize the cached values with the current contents of the user defaults database. Your app is not always the only entity modifying the user defaults database. In iOS, the Settings app can modify the values of preferences for apps that have a Settings bundle. In OS X, the system and other apps might modify preference values in response to user actions. For example, if the user changes preferred languages, the system writes the new values to the user defaults database. In OS X v10.5 and later, the shared NSUserDefaults object synchronizes its caches automatically at periodic intervals. However, apps can call the synchronize method manually to force an update of the cached values.

To detect when changes to a preference value occur, apps can also register for the notification NSUserDefaultsDidChangeNotification. The shared NSUserDefaults object sends this notification to your app whenever it detects a change to a preference located in one of the persistent domains. You can use this notification to respond to changes that might impact your user interface. For example, you could use it to detect changes to the user’s preferred language and update your app content appropriately.
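Registering for the notification is a single call; in this minimal sketch, defaultsChanged: is a hypothetical handler method you would implement:

[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(defaultsChanged:)
                                             name:NSUserDefaultsDidChangeNotification
                                           object:nil];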
Managing Preferences Using Cocoa Bindings

Mac apps can use Cocoa bindings to set preference values directly from their user interfaces. Modifying preferences using bindings involves adding an NSUserDefaultsController object to the appropriate nib files and binding the values of your controls to the preference values in the user defaults database. When your app shows the interface, the user defaults controller automatically loads values from the user defaults database and uses them to set the value of controls. Similarly, when the user changes the value in a control, the user defaults controller updates the value in the user defaults database.

For more information on how to use the NSUserDefaultsController class to bind preference values to your user interface, see “User Defaults and Bindings” in Cocoa Bindings Programming Topics.

Managing Preferences Using Core Foundation

The Core Foundation framework provides its own set of interfaces for accessing preferences stored in the user defaults database. Like the NSUserDefaults class, you can use Core Foundation functions to get and set preference values and synchronize the user defaults database. Unlike NSUserDefaults, you can use the Core Foundation functions to write preferences for different apps and on different computers. Note that modifying some preference domains (those not belonging to the current app and user) requires root privileges (or admin privileges prior to OS X v10.6); for information on how to gain suitable privileges, see Authorization Services Programming Guide. Writing outside the app domain is not possible for apps installed in a sandbox.

For information about the Core Foundation functions for getting and setting preferences, see Preferences Utilities Reference.

Setting a Preference Value Using Core Foundation

Preferences are stored as key-value pairs. The key must be a CFString object, but the value can be any Core Foundation property list value (see Property List Programming Topics for Core Foundation), including the container types. For example, you might have a key called defaultWindowWidth that defines the width in pixels of any new windows that your app creates. Its value would most likely be of type CFNumber. You might also decide to combine window width and height into a single preference called defaultWindowSize and make its value a CFArray object containing two CFNumber objects.

The code in Listing 2-2 demonstrates how to create a simple preference for the app MyTextEditor. The example sets the default text color for the app to blue.

Listing 2-2: Writing a simple default

CFStringRef textColorKey = CFSTR("defaultTextColor");
CFStringRef colorBLUE = CFSTR("BLUE");

// Set up the preference.
CFPreferencesSetAppValue(textColorKey, colorBLUE, kCFPreferencesCurrentApplication);

// Write out the preference data.
CFPreferencesAppSynchronize(kCFPreferencesCurrentApplication);

Notice that CFPreferencesSetAppValue by itself is not sufficient to create the new preference.
A call to CFPreferencesAppSynchronize is required to actually save the value. If you are writing multiple preferences, it is more efficient to sync only once after the last value has been set than to sync after each individual value is set. For example, if you implement a preference pane, you might synchronize only when the user presses an OK button. In other cases you might not want to sync at all until the app quits, although note that if the app crashes, all unsaved preference settings will be lost.

Getting a Preference Value Using Core Foundation

The simplest way to locate and retrieve a preference value is to use the CFPreferencesCopyAppValue function. This call searches through the various preference domains in order until it finds the key you have specified. If a preference has been set in a less specific domain (Any Application, for example), its value is retrieved with this call if a more specific version cannot be found. Listing 2-3 shows how to retrieve the text color preference saved in Listing 2-2 (page 17).

Listing 2-3: Reading a simple default

CFStringRef textColorKey = CFSTR("defaultTextColor");
CFStringRef textColor;

// Read the preference.
textColor = (CFStringRef)CFPreferencesCopyAppValue(textColorKey, kCFPreferencesCurrentApplication);

// When finished with the value, you must release it:
// CFRelease(textColor);

All values returned from preferences are immutable, even if you have just set the value using a mutable object.

Storing Preferences in iCloud

An app can use the iCloud key-value store to share small amounts of data with other instances of itself on the user’s other computers and iOS devices. The key-value store is intended for simple data types like those you might use for preferences. For example, a magazine app might store the current issue and page number being read by the user so that other instances of the app can open to the same page when launched. You should not use this store for large amounts of data or for complex data types.

To use the iCloud key-value store, do the following:
1. In Xcode, configure the com.apple.developer.ubiquity-kvstore-identifier entitlement for your app.
2. In your code, create the shared NSUbiquitousKeyValueStore object and register for change notifications.
3. Use the methods of NSUbiquitousKeyValueStore to get and set values.

Key-value data in iCloud is limited to simple property-list types (strings, numbers, dates, and so on).

Strategies for Using the iCloud Key-Value Store

The key-value store is not intended for storing large amounts of data. It is intended for storing configuration data, preferences, and small amounts of app-related data. To help you decide whether the key-value store is appropriate for your needs, consider the following:
● Each app is limited to 1 MB of total space in the key-value store. (There is also a separate per-key limit of 1 MB, and a maximum of 1024 keys are allowed.) Thus, you cannot use the key-value store to share large amounts of data.
● The key-value store supports only property-list types. Property-list types include simple types such as NSNumber, NSString, and NSDate objects. You can also store raw blocks of data in NSData objects and arrange all of the types using NSArray and NSDictionary objects.
● The key-value store is intended for storing data that changes infrequently.
If the apps on a device make frequent changes to the key-value store, the system may defer the synchronization of some changes in order to minimize the number of round trips to the server. The more frequently apps make changes, the more likely it is that later changes will be deferred and not show up on other devices right away.
● The key-value store is not a replacement for preferences or other local techniques for saving the same data. The purpose of the key-value store is to share data between apps, but if iCloud is not enabled or is not available on a given device, you still might want to keep a local copy of the data.

If you are using the key-value store to share preferences, one approach is to store the actual values in the user defaults database and synchronize them using the key-value store. (If you do not want to use the preferences system, you could also save the changes in a custom property-list file or some other local storage.) When you change the value of a key locally, write that change both to the user defaults database and to the iCloud key-value store at the same time. To receive changes from external sources, add an observer for the notification NSUbiquitousKeyValueStoreDidChangeExternallyNotification and use your handler method to detect which keys changed externally and update the corresponding data in the user defaults database. By doing this, your user defaults database always contains the correct configuration values. The iCloud key-value store simply becomes a mechanism for ensuring that the user defaults database has the most recent changes.

Configuring Your App to Use the Key-Value Store

In order to use the key-value store, an app must be explicitly configured with the com.apple.developer.ubiquity-kvstore-identifier entitlement. You use Xcode to enable this entitlement and specify its value for your app:
1. In your Xcode project, select the target for your app.
2. In the Summary tab, enable the Entitlements option.
3. Specify a value for the iCloud Key-Value Store field.

When you enable entitlements, Xcode automatically fills in a default value for the iCloud Key-Value Store field that is based on the bundle identifier of your app. For most apps, the default value is what you want. However, if your app shares its key-value storage with another app, you must specify the bundle identifier for the other app instead. For example, if you have a lite version of your app, you might want it to use the same key-value store as the paid version.

Enabling the entitlement is all you have to do to use the shared NSUbiquitousKeyValueStore object. As long as the entitlement is configured and contains a valid value, the key-value store object writes its data to the appropriate location in the user’s iCloud account. If there is a problem attaching to the specified iCloud container, any attempts to read or write key values will fail. To ensure the key-value store is configured properly and accessible, you should execute code similar to the following early in your app’s launch cycle:

NSUbiquitousKeyValueStore *store = [NSUbiquitousKeyValueStore defaultStore];
[[NSNotificationCenter defaultCenter] addObserver:self
Configuring Your App to Use the Key-Value Store

To use the key-value store, an app must be explicitly configured with the com.apple.developer.ubiquity-kvstore-identifier entitlement. You use Xcode to enable this entitlement and specify its value for your app:
1. In your Xcode project, select the target for your app.
2. In the Summary tab, enable the Entitlements option.
3. Specify a value for the iCloud Key-Value Store field.
When you enable entitlements, Xcode automatically fills in a default value for the iCloud Key-Value Store field that is based on the bundle identifier of your app. For most apps, the default value is what you want. However, if your app shares its key-value storage with another app, you must specify the bundle identifier for the other app instead. For example, if you have a lite version of your app, you might want it to use the same key-value store as the paid version.

Enabling the entitlement is all you have to do to use the shared NSUbiquitousKeyValueStore object. As long as the entitlement is configured and contains a valid value, the key-value store object writes its data to the appropriate location in the user's iCloud account. If there is a problem attaching to the specified iCloud container, any attempts to read or write key values will fail. To ensure the key-value store is configured properly and accessible, you should execute code similar to the following early in your app's launch cycle:

    NSUbiquitousKeyValueStore* store = [NSUbiquitousKeyValueStore defaultStore];
    [[NSNotificationCenter defaultCenter] addObserver:self
        selector:@selector(updateKVStoreItems:)
        name:NSUbiquitousKeyValueStoreDidChangeExternallyNotification
        object:store];
    [store synchronize];

Creating the key-value store object early in your app's launch cycle is recommended because it ensures that your app receives updates from iCloud in a timely manner. The best way to determine whether changes have been made to keys and values is to register for the notification NSUbiquitousKeyValueStoreDidChangeExternallyNotification. At launch time, you should call the synchronize method manually to detect whether any changes were made externally. You do not need to call that method at other times during your app's execution.

For more information about how to configure entitlements for an iOS app, see "Configuring Apps" in Tools Workflow Guide for iOS.

Accessing Values in the Key-Value Store

You get and set key-value store values using the methods of the NSUbiquitousKeyValueStore class. This class has methods for getting and setting preferences with scalar values of type Boolean, long long, and double. It also has methods for getting and setting keys whose values are NSData, NSDate, NSString, NSNumber, NSArray, or NSDictionary objects.

If you are using the key-value store as a way to update locally stored preferences, you could use code similar to that in Listing 3-1 to coordinate updates to the user defaults database. This example assumes that you use the same key names and corresponding values in both iCloud and the user defaults database. It also assumes that you previously registered the updateKVStoreItems: method as the method to call in response to the notification NSUbiquitousKeyValueStoreDidChangeExternallyNotification.

Listing 3-1  Updating local preference values using iCloud

    - (void)updateKVStoreItems:(NSNotification*)notification {
        // Get the list of keys that changed.
        NSDictionary* userInfo = [notification userInfo];
        NSNumber* reasonForChange =
            [userInfo objectForKey:NSUbiquitousKeyValueStoreChangeReasonKey];
        NSInteger reason = -1;

        // If a reason could not be determined, do not update anything.
        if (!reasonForChange)
            return;

        // Update only for changes from the server.
        reason = [reasonForChange integerValue];
        if ((reason == NSUbiquitousKeyValueStoreServerChange) ||
            (reason == NSUbiquitousKeyValueStoreInitialSyncChange)) {
            // If something is changing externally, get the changes
            // and update the corresponding keys locally.
            NSArray* changedKeys =
                [userInfo objectForKey:NSUbiquitousKeyValueStoreChangedKeysKey];
            NSUbiquitousKeyValueStore* store =
                [NSUbiquitousKeyValueStore defaultStore];
            NSUserDefaults* userDefaults = [NSUserDefaults standardUserDefaults];

            // This loop assumes you are using the same key names in both
            // the user defaults database and the iCloud key-value store.
            for (NSString* key in changedKeys) {
                id value = [store objectForKey:key];
                [userDefaults setObject:value forKey:key];
            }
        }
    }
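As a sketch of the scalar accessors mentioned above, the following lines get and set a few values; the key names here are hypothetical placeholders.

    NSUbiquitousKeyValueStore *store = [NSUbiquitousKeyValueStore defaultStore];

    // Set scalar values (the key names are hypothetical).
    [store setBool:YES forKey:@"soundsEnabled"];
    [store setDouble:0.5 forKey:@"playbackRate"];
    [store setLongLong:42 forKey:@"lastIssueNumber"];

    // Read them back; missing keys return NO, 0.0, and 0 respectively.
    BOOL sounds = [store boolForKey:@"soundsEnabled"];
    double rate = [store doubleForKey:@"playbackRate"];
    long long issue = [store longLongForKey:@"lastIssueNumber"];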
Defining the Scope of Key-Value Store Changes

Every call to one of the NSUbiquitousKeyValueStore methods is treated as a single atomic transaction. When transferring the data for that transaction to iCloud, the whole transaction either fails or succeeds. If it succeeds, all of the keys are written to the store; if it fails, no keys are written. There is no partial writing of keys to the store. When a failure occurs, the system also generates an NSUbiquitousKeyValueStoreDidChangeExternallyNotification notification that contains the reason for the failure. If you are using the key-value store, you should use that notification to detect possible problems.

If you have a group of keys whose values must all be updated at the same time in order to be valid, save them together in a single transaction. To write multiple keys and values in a single transaction, create an NSDictionary object with all of the keys and values. Then write the dictionary object to the key-value store using the setDictionary:forKey: method. Writing an entire dictionary of changes ensures that all of the keys are written or none of them are.
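For example, a grouped write might look like this sketch; the pageState, issueID, and pageNumber keys are hypothetical values that must stay consistent with one another.

    // A minimal sketch: save interdependent values in one transaction.
    // The key names "pageState", "issueID", and "pageNumber" are hypothetical.
    NSDictionary *pageState = [NSDictionary dictionaryWithObjectsAndKeys:
                                  @"2012-06", @"issueID",
                                  [NSNumber numberWithInteger:12], @"pageNumber",
                                  nil];
    [[NSUbiquitousKeyValueStore defaultStore] setDictionary:pageState
                                                     forKey:@"pageState"];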
Implementing an iOS Settings Bundle

In iOS, the Foundation framework provides the low-level mechanism for storing the preference data. Apps then have two options for presenting preferences:
● Display preferences inside the app.
● Use a Settings bundle to manage preferences from the Settings app.
Which option you choose depends on how you expect users to interact with preferences. The Settings bundle is generally the preferred mechanism for displaying preferences. However, games and other apps that contain configuration options or other frequently accessed preferences might want to present them inside the app instead. Regardless of how you present them, you use the NSUserDefaults class to access preference values from your code.

This chapter focuses on the creation of a Settings bundle for your app. A Settings bundle contains files that describe the structure and presentation style of your preferences. The Settings app uses this information to create an entry for your app and to display your custom preference pages. For guidelines on how to manage and present settings and configuration options, see iOS Human Interface Guidelines.

The Settings App Interface

The Settings app implements a hierarchical set of pages for navigating app preferences. The main page of the Settings app lists the system and third-party apps whose preferences can be customized. Selecting a third-party app takes the user to the preferences for that app.

Every app with a Settings bundle has at least one page of preferences, referred to as the main page. If your app has only a few preferences, the main page may be the only one you need. If the number of preferences gets too large to fit on the main page, however, you can create child pages that link off the main page or other child pages. There is no specific limit to the number of child pages you can create, but you should strive to keep your preferences as simple and easy to navigate as possible.

The contents of each page consist of one or more controls that you configure. Table 4-1 lists the types of controls supported by the Settings app and describes how you might use each type. The table also lists the raw key name stored in the configuration files of your Settings bundle.

Table 4-1  Preference control types
● Text field: Displays a title (optional) and an editable text field. You can use this type for preferences that require the user to specify a custom string value. The key for this type is PSTextFieldSpecifier.
● Title: Displays a read-only string value. You can use this type to display read-only preference values. (If the preference contains cryptic or nonintuitive values, this type lets you map the possible values to custom strings.) The key for this type is PSTitleValueSpecifier.
● Toggle switch: Displays an ON/OFF toggle button. You can use this type to configure a preference that can have only one of two values. Although you typically use this type to represent preferences containing Boolean values, you can also use it with preferences containing non-Boolean values. The key for this type is PSToggleSwitchSpecifier.
● Slider: Displays a slider control. You can use this type for a preference that represents a range of values. The value for this type is a real number whose minimum and maximum values you specify. The key for this type is PSSliderSpecifier.
● Multivalue: Lets the user select one value from a list of values. You can use this type for a preference that supports a set of mutually exclusive values. The values can be of any type. The key for this type is PSMultiValueSpecifier.
● Group: For organizing groups of preferences on a single page. The group type does not represent a configurable preference. It simply contains a title string that is displayed immediately before one or more configurable preferences. The key for this type is PSGroupSpecifier.
● Child pane: Lets the user navigate to a new page of preferences. You use this type to implement hierarchical preferences. For more information on how you configure and use this preference type, see "Hierarchical Preferences" (page 27). The key for this type is PSChildPaneSpecifier.

For detailed information about the format of each preference type, see Settings Application Schema Reference. To learn how to create and edit Settings page files, see "Creating and Modifying the Settings Bundle" (page 29).

The Settings Bundle

A Settings bundle has the name Settings.bundle and resides in the top-level directory of your app's bundle. This bundle contains one or more Settings page files that describe the individual pages of preferences. It may also include other support files needed to display your preferences, such as images or localized strings. Table 4-2 lists the contents of a typical Settings bundle.

Table 4-2  Contents of the Settings.bundle directory
● Root.plist: The Settings page file containing the preferences for the root page. The name of this file must be Root.plist. The contents of this file are described in more detail in "The Settings Page File Format" (page 27).
● Additional .plist files: If you build a set of hierarchical preferences using child panes, the contents for each child pane are stored in a separate Settings page file. You are responsible for naming these files and associating them with the correct child pane.
● One or more .lproj directories: These directories store localized string resources for your Settings page files. Each directory contains a single strings file, whose title is specified in your Settings page file. The strings files provide the localized strings to display for your preferences.
● Additional images: If you use the slider control, you can store the images for your slider in the top-level directory of the bundle.
In addition to the Settings bundle, the app bundle can contain a custom icon for your app settings. The Settings app displays the icon you provide next to the entry for your app preferences. For information about app icons and how you specify them, see iOS App Programming Guide.

When the Settings app launches, it checks each custom app for the presence of a Settings bundle. For each custom bundle it finds, it loads that bundle and displays the corresponding app's name and icon in the Settings main page. When the user taps the row belonging to your app, Settings loads the Root.plist Settings page file for your Settings bundle and uses that file to build your app's main page of preferences.

In addition to loading your bundle's Root.plist Settings page file, the Settings app also loads any language-specific resources for that file, as needed. Each Settings page file can have an associated .strings file containing localized values for any user-visible strings. As it prepares your preferences for display, the Settings app looks for string resources in the user's preferred language and substitutes them in your preferences page prior to display.

The Settings Page File Format

Each Settings page file is stored in the iPhone Settings property-list file format, which is a structured file format. The simplest way to edit Settings page files is to use the built-in editor facilities of Xcode; see "Preparing the Settings Page for Editing" (page 29). You can also edit property-list files using the Property List Editor app that comes with the Xcode tools.

Note: Xcode converts any XML-based property files in your project to binary format when building your app. This conversion saves space and is done for you automatically.

The root element of each Settings page file contains the keys listed in Table 4-3. Only one key is actually required, but it is recommended that you include both of them.

Table 4-3  Root-level keys of a preferences Settings page file
● PreferenceSpecifiers (Array, required): The value for this key is an array of dictionaries, with each dictionary containing the information for a single control. For a list of control types, see Table 4-1 (page 25). For a description of the keys associated with each control, see Settings Application Schema Reference.
● StringsTable (String): The name of the strings file associated with this file. A copy of this file (with appropriate localized strings) should be located in each of your bundle's language-specific project directories. If you do not include this key, the strings in this file are not localized. For information on how these strings are used, see "Localized Resources" (page 28).
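As an illustration only, a minimal Root.plist in XML form might look like the following sketch. The group title, toggle title, and play_sounds_preference identifier are example values; only the Type, Title, Key, and DefaultValue keys and the two root-level keys above come from the schema.

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>StringsTable</key>
        <string>Root</string>
        <key>PreferenceSpecifiers</key>
        <array>
            <!-- A group header followed by one toggle switch. -->
            <dict>
                <key>Type</key>  <string>PSGroupSpecifier</string>
                <key>Title</key> <string>Sound</string>
            </dict>
            <dict>
                <key>Type</key>         <string>PSToggleSwitchSpecifier</string>
                <key>Title</key>        <string>Play Sounds</string>
                <key>Key</key>          <string>play_sounds_preference</string>
                <key>DefaultValue</key> <true/>
            </dict>
        </array>
    </dict>
    </plist>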
Hierarchical Preferences

If you plan to organize your preferences hierarchically, each page you define must have its own separate .plist file. Each .plist file contains the set of preferences displayed only on that page. Your app's main preferences page is always stored in a file called Root.plist. Additional pages can be given any name you like.

To specify a link between a parent page and a child page, you include a child pane control in the parent page. A child pane control creates a row that, when tapped, displays a new page of settings. The File key of the child pane control identifies the name of the .plist file with the contents of the child page. The Title key identifies the title of the child page; this title is also used as the text of the control used to display the child page. The Settings app automatically provides navigation controls on the child page to allow the user to navigate back to the parent page.

Figure 4-1 shows how this hierarchical set of pages works. The left side of the figure shows the .plist files, and the right side shows the relationships between the corresponding pages.

Figure 4-1  Organizing preferences using child panes (Root.plist defines the root page, which links to child pages defined in Sounds.plist and General.plist)

For more information about child pane controls and their associated keys, see Settings Application Schema Reference.

Localized Resources

Because preferences contain user-visible strings, you should provide localized versions of those strings with your Settings bundle. Each page of preferences can have an associated .strings file for each localization supported by your bundle. When the Settings app encounters a key that supports localization, it checks the appropriately localized .strings file for a matching key. If it finds one, it displays the value associated with that key.

When looking for localized resources such as .strings files, the Settings app follows the same rules that other iOS apps follow. It first tries to find a localized version of the resource that matches the user's preferred language setting. If no such resource exists, an appropriate fallback language is selected. For information about the format of strings files, language-specific project directories, and how language-specific resources are retrieved from bundles, see Internationalization Programming Topics.

Creating and Modifying the Settings Bundle

Xcode provides a template for adding a Settings bundle to your current project. The default Settings bundle contains a Root.plist file and a default language directory for storing any localized resources. You can expand this bundle as needed to include additional property list files and resources needed by your Settings bundle.

Adding the Settings Bundle

To add a Settings bundle to your Xcode project:
1. Choose File > New > New File.
2. Under iOS, choose Resource, and then select the Settings Bundle template.
3. Name the file Settings.bundle.
In addition to adding a new Settings bundle to your project, Xcode automatically adds that bundle to the Copy Bundle Resources build phase of your app target. Thus, all you have to do is modify the property list files of your Settings bundle and add any needed resources.

The new Settings bundle has the following structure:

    Settings.bundle/
        Root.plist
        en.lproj/
            Root.strings
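The Root.strings file maps the user-visible strings in Root.plist to localized text. As a purely hypothetical example, a French localization at fr.lproj/Root.strings might contain:

    /* Hypothetical French localizations for titles used in Root.plist. */
    "Sound" = "Son";
    "Play Sounds" = "Lire les sons";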
Preparing the Settings Page for Editing

Before editing any of the property-list files in your Settings bundle, you should configure the Xcode editor to format the contents of those files as iPhone settings. Xcode does this automatically for the Root.plist file, but you may need to format additional property-list files manually. To format a file as iPhone Settings, do the following:
1. Select the file.
2. Control-click the editor window and choose Property List Type > iPhone Settings plist if it is not already chosen.
Formatting a property list makes it easier to understand and edit the file's contents. Xcode substitutes human-readable strings (as shown in Figure 4-2) that are appropriate for the selected format.

Figure 4-2  Formatted contents of the Root.plist file

Configuring a Settings Page: A Tutorial

This section shows you how to configure a Settings page to display the controls you want. The goal of the tutorial is to create a page like the one in Figure 4-3. If you have not yet created a Settings bundle for your project, you should do so as described in "Adding the Settings Bundle" (page 29) before proceeding with these steps.

Figure 4-3  A root Settings page

1. Disclose the Preference Items key to display the default items that come with the template.
2. Change the title of Item 0 to Sound.
● Disclose Item 0 of Preference Items.
● Change the value of the Title key from Group to Sound.
● Leave the Type key set to Group.
● Click the disclosure triangle of the item to hide its contents.
3. Create the first toggle switch for the renamed Sound group.
● Select Item 2 (the toggle switch item) of Preference Items and choose Edit > Cut.
● Select Item 0 and choose Edit > Paste. (This moves the toggle switch item in front of the text field item.)
● Disclose the toggle switch item to reveal its configuration keys.
● Change the value of the Title key to Play Sounds.
● Change the value of the Identifier key to play_sounds_preference.
● Click the disclosure triangle of the item to hide its contents.
4. Create a second toggle switch for the Sound group.
● Select Item 1 (the Play Sounds toggle switch).
● Choose Edit > Copy.
● Choose Edit > Paste to place a copy of the toggle switch right after the first one.
● Disclose the new toggle switch item to reveal its configuration keys.
● Change the value of its Title key to 3D Sound.
● Change the value of its Identifier key to 3D_sound_preference.
● Click the disclosure triangle of the item to hide its contents.
At this point, you have finished the first group of settings and are ready to create the User Info group.
5. Change Item 3 into a Group control and name it User Info.
● Click Item 3 in the Preference Items. This displays a pop-up menu with a list of item types.
● From the pop-up menu, choose Group to change the type of the control.
● Disclose the contents of Item 3.
● Set the value of the Title key to User Info.
● Click the disclosure triangle of the item to hide its contents.
6. Create the Name field.
● Select Item 4 in the Preference Items.
● Using the pop-up menu, change its type to Text Field.
● Set the value of the Title key to Name.
● Set the value of the Identifier key to user_name.
● Click the disclosure triangle of the item to hide its contents.
7. Create the Experience Level settings.
● Select Item 4.
● Control-click the editor window and select Add Row to add a new item.
● Set the type of the new item to Multi Value.
● Disclose the item's contents and set its title to Experience Level, its identifier to experience_preference, and its default value to 0.
● With the Default Value key selected, Control-click and select Add Row to add a Titles array.
● Select the Titles array and press Return to add a new subitem.
● Add two more subitems to create a total of three items.
● Set the values of the subitems to Beginner, Expert, and Master.
● Hide the key's subitems.
● Add a new item for the Values array.
● Add three subitems to the Values array and set their values to 0, 1, and 2.
● Hide the contents of Item 5.
8. Add the final group to your settings page.
● Create a new item and set its type to Group and its title to Gravity.
● Create another new item and set its type to Slider, its identifier to gravity_preference, its default value to 1, and its maximum value to 2.
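Once the page is in place, your app reads these values through NSUserDefaults. The following sketch uses the identifiers configured in the tutorial above; the fallback values and the use of registerDefaults: are illustrative assumptions (a Settings-bundle preference does not appear in the user defaults database until the user changes it, so registering matching defaults is a common pattern rather than a required step).

    // Register fallback values matching the defaults in the Settings page.
    NSDictionary *appDefaults = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], @"play_sounds_preference",
        [NSNumber numberWithInteger:0], @"experience_preference",
        [NSNumber numberWithFloat:1.0f], @"gravity_preference",
        nil];
    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    [defaults registerDefaults:appDefaults];

    // Read the current values.
    BOOL playSounds = [defaults boolForKey:@"play_sounds_preference"];
    NSInteger level = [defaults integerForKey:@"experience_preference"];
    float gravity   = [defaults floatForKey:@"gravity_preference"];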
Creating Additional Settings Page Files

The Settings Bundle template includes the Root.plist file, which defines your app's top Settings page. To define additional Settings pages, you must add additional property list files to your Settings bundle. To add a property list file to your Settings bundle in Xcode, do the following:
1. Choose File > New > New File.
2. Under iOS, select Resource, and then select the Property List template.
3. Select the new file to display its contents in the editor.
4. Control-click the editor pane and choose Property List Type > iPhone Settings plist to format the contents.
5. Control-click the editor pane again and choose Add Row to add a new key.
6. Add and configure any additional keys you need.
After adding a new Settings page to your Settings bundle, you can edit the page's contents as described in "Configuring a Settings Page: A Tutorial" (page 31). To display the settings for your page, you must reference it from a child pane control as described in "Hierarchical Preferences" (page 27).

Note: In Xcode 4, adding a property-list file to your project does not automatically associate it with your Settings bundle. You must use the Finder to move any additional property-list files into your Settings bundle.

Debugging Preferences for Simulated Apps

When running your app, iOS Simulator stores any preferences values for your app in ~/Library/Application Support/iOS Simulator/User/Applications/<app id>/Library/Preferences, where <app id> is a programmatically generated directory name that iOS uses to identify your app. Each time you build your app, Xcode preserves your app preferences and other relevant library files. If you want to remove the current preferences for testing purposes, you can delete the app from Simulator or choose Reset Contents and Settings from the iOS Simulator menu.
Document Revision History

This table describes the changes to Preferences and Settings Programming Guide.

2012-03-01: Updated the document to reflect new limits for key and value sizes.
2011-10-12: Updated the document to include information about Settings bundles and iOS in general; incorporated iCloud information. Removed the articles on storing NSColor objects and using Cocoa bindings and now link to their locations instead. Changed document name from User Defaults Programming Topics.
2007-10-31: Updated information about periodic autosave behavior.
2007-01-08: Corrected typos and capitalization mistakes.
2006-11-07: Added an overview of the procedure for storing non-property-list objects in user defaults, and linked to the related article.
2006-09-05: Made small additions to the content.
2005-08-11: Changed title from "User Defaults." Expanded the explanation of user defaults in the introduction. Noted the requirement that a default's value must be a property-list value at the beginning of the "Using NSUserDefaults" article. Included an article that describes the use of NSUserDefaultsController. Corrected minor typographical errors.
2004-02-03: Added the article "Storing NSColor in User Defaults."
2003-05-09: Linked to the Core Foundation Preferences programming topic, which was also incorrectly named.
2003-01-13: Added a link in the limitations area to CFPreferences. Corrected a class name in the Defaults Domains concept.
2002-11-12: Revision history was added to the existing topic. It will be used to record changes to the content of the topic.

© 2012 Apple Inc. All rights reserved.
OpenGL Programming Guide for Mac

Contents

About OpenGL for OS X 11
    At a Glance 11
        OpenGL Is a C-based, Platform-Neutral API 12
        Different Rendering Destinations Require Different Setup Commands 12
        OpenGL on Macs Exists in a Heterogenous Environment 12
        OpenGL Helps Applications Harness the Power of Graphics Processors 13
        Concurrency in OpenGL Applications Requires Additional Effort 13
        Performance Tuning Allows Your Application to Provide an Exceptional User Experience 14
    How to Use This Document 14
    Prerequisites 15
    See Also 15
OpenGL on the Mac Platform 17
    OpenGL Concepts 17
        OpenGL Implements a Client-Server Model 18
        OpenGL Commands Can Be Executed Asynchronously 18
        OpenGL Commands Are Executed In Order 19
        OpenGL Copies Client Data at Call-Time 19
        OpenGL Relies on Platform-Specific Libraries For Critical Functionality 19
    OpenGL in OS X 20
    Accessing OpenGL Within Your Application 21
        OpenGL APIs Specific to OS X 22
        Apple-Implemented OpenGL Libraries 23
    Terminology 24
        Renderer 24
        Renderer and Buffer Attributes 24
        Pixel Format Objects 24
        OpenGL Profiles 25
        Rendering Contexts 25
        Drawable Objects 25
        Virtual Screens 26
        Offline Renderer 31
    Running an OpenGL Program in OS X 31
    Making Great OpenGL Applications on the Macintosh 33
Drawing to a Window or View 35
    General Approach 35
    Drawing to a Cocoa View 36
        Drawing to an NSOpenGLView Class: A Tutorial 37
        Drawing OpenGL Content to a Custom View 40
Optimizing OpenGL for High Resolution 44
    Enable High-Resolution Backing for an OpenGL View 44
    Set Up the Viewport to Support High Resolution 45
    Adjust Model and Texture Assets 46
    Check for Calls Defined in Pixel Dimensions 46
    Tune OpenGL Performance for High Resolution 47
    Use a Layer-Backed View to Overlay Text on OpenGL Content 48
    Use an Application Window for Fullscreen Operation 49
    Convert the Coordinate Space When Hit Testing 49
Drawing to the Full Screen 50
    Creating a Full-Screen Application 50
Drawing Offscreen 53
    Rendering to a Framebuffer Object 53
        Using a Framebuffer Object as a Texture 54
        Using a Framebuffer Object as an Image 58
    Rendering to a Pixel Buffer 60
        Setting Up a Pixel Buffer for Offscreen Drawing 61
        Using a Pixel Buffer as a Texture Source 61
        Rendering to a Pixel Buffer on a Remote System 63
Choosing Renderer and Buffer Attributes 64
    OpenGL Profiles (OS X v10.7) 64
    Buffer Size Attribute Selection Tips 65
    Ensuring That Back Buffer Contents Remain the Same 66
    Ensuring a Valid Pixel Format Object 66
    Ensuring a Specific Type of Renderer 67
    Ensuring a Single Renderer for a Display 68
    Allowing Offline Renderers 69
    OpenCL 70
    Deprecated Attributes 70
Working with Rendering Contexts 72
    Update the Rendering Context When the Renderer or Geometry Changes 72
        Tracking Renderer Changes 73
        Updating a Rendering Context for a Custom Cocoa View 73
    Context Parameters Alter the Context's Behavior 76
        Swap Interval Allows an Application to Synchronize Updates to the Screen Refresh 76
        Surface Opacity Specifies How the OpenGL Surface Blends with Surfaces Behind It 77
        Surface Drawing Order Specifies the Position of the OpenGL Surface Relative to the Window 77
        Determining Whether Vertex and Fragment Processing Happens on the GPU 78
        Controlling the Back Buffer Size 78
    Sharing Rendering Context Resources 79
Determining the OpenGL Capabilities Supported by the Renderer 83
    Detecting Functionality 83
    Guidelines for Code That Checks for Functionality 87
    OpenGL Renderer Implementation-Dependent Values 88
OpenGL Application Design Strategies 89
    Visualizing OpenGL 89
    Designing a High-Performance OpenGL Application 91
    Update OpenGL Content Only When Your Data Changes 94
        Synchronize with the Screen Refresh Rate 96
    Avoid Synchronizing and Flushing Operations 96
        Using glFlush Effectively 97
        Avoid Querying OpenGL State 98
        Use Fences for Finer-Grained Synchronization 98
    Allow OpenGL to Manage Your Resources 99
    Use Double Buffering to Avoid Resource Conflicts 100
    Be Mindful of OpenGL State Variables 101
    Replace State Changes with OpenGL Objects 102
    Use Optimal Data Types and Formats 102
    Use OpenGL Macros 103
Best Practices for Working with Vertex Data 104
    Understand How Vertex Data Flows Through OpenGL 105
    Techniques for Handling Vertex Data 107
    Vertex Buffers 107
        Using Vertex Buffers 108
        Buffer Usage Hints 110
        Flush Buffer Range Extension 113
    Vertex Array Range Extension 113
    Vertex Array Object 116
Best Practices for Working with Texture Data 118
    Using Extensions to Improve Texture Performance 119
        Pixel Buffer Objects 121
        Apple Client Storage 124
        Apple Texture Range and Rectangle Texture 125
        Combining Client Storage with Texture Ranges 127
    Optimal Data Formats and Types 128
    Working with Non-Power-of-Two Textures 129
    Creating Textures from Image Data 131
        Creating a Texture from a Cocoa View 131
        Creating a Texture from a Quartz Image Source 133
        Getting Decompressed Raw Pixel Data from a Source Image 135
    Downloading Texture Data 136
    Double Buffering Texture Data 137
Customizing the OpenGL Pipeline with Shaders 139
    Shader Basics 141
    Advanced Shading Extensions 142
        Transform Feedback 142
        GPU Shader 4 143
        Geometry Shaders 143
        Uniform Buffers 143
Techniques for Scene Antialiasing 144
    Guidelines 145
    General Approach 145
    Hinting for a Specific Antialiasing Technique 147
Concurrency and OpenGL 148
    Identifying Whether an OpenGL Application Can Benefit from Concurrency 149
    OpenGL Restricts Each Context to a Single Thread 149
    Strategies for Implementing Concurrency in OpenGL Applications 150
    Multithreaded OpenGL 150
    Perform OpenGL Computations in a Worker Task 151
    Use Multiple OpenGL Contexts 153
    Guidelines for Threading OpenGL Applications 154
Tuning Your OpenGL Application 155
    Gathering and Analyzing Baseline Performance Data 156
    Using OpenGL Driver Monitor to Measure Stalls 161
    Identifying Bottlenecks with Shark 161
Legacy OpenGL Functionality by Version 163
    Version 1.1 163
    Version 1.2 164
    Version 1.3 165
    Version 1.4 165
    Version 1.5 166
    Version 2.0 166
    Version 2.1 167
Updating an Application to Support the OpenGL 3.2 Core Specification 168
    Removed Functionality 168
    Extension Changes on OS X 169
Setting Up Function Pointers to OpenGL Routines 171
    Obtaining a Function Pointer to an Arbitrary OpenGL Entry Point 171
    Initializing Entry Points 172
Document Revision History 175
Glossary 179

Figures, Tables, and Listings

OpenGL on the Mac Platform 17
    Figure 1-1  OpenGL provides the reflections in iChat 17
    Figure 1-2  OpenGL client-server model 18
    Figure 1-3  Graphics platform model 18
    Figure 1-4  Mac OS X OpenGL driver model 20
    Figure 1-5  Layers of OpenGL for OS X 21
    Figure 1-6  The programming interfaces used for OpenGL content 22
    Figure 1-7  Data flow through OpenGL 26
    Figure 1-8  A virtual screen displays what the user sees 27
    Figure 1-9  Two virtual screens 28
    Figure 1-10  A virtual screen can represent more than one physical screen 29
    Figure 1-11  Two virtual screens and two graphics cards 30
    Figure 1-12  The flow of data through OpenGL 31
Drawing to a Window or View 35
    Figure 2-1  OpenGL content in a Cocoa view 35
    Figure 2-2  The output from the Golden Triangle program 39
    Listing 2-1  The interface for MyOpenGLView 37
    Listing 2-2  Include OpenGL/gl.h 38
    Listing 2-3  The drawRect: method for MyOpenGLView 38
    Listing 2-4  Code that draws a triangle using OpenGL commands 38
    Listing 2-5  The interface for a custom OpenGL view 40
    Listing 2-6  The initWithFrame:pixelFormat: method 41
    Listing 2-7  The lockFocus method 42
    Listing 2-8  The drawRect method for a custom view 42
    Listing 2-9  Detaching the context from a drawable object 43
Optimizing OpenGL for High Resolution 44
    Figure 3-1  Enabling high-resolution backing for an OpenGL view 45
    Figure 3-2  A text overlay scales automatically for standard resolution (left) and high resolution (right) 48
    Listing 3-1  Setting up the viewport for drawing 45
Drawing to the Full Screen 50
    Figure 4-1  Drawing OpenGL content to the full screen 50
Drawing Offscreen 53
    Listing 5-1  Setting up a framebuffer for texturing 57
    Listing 5-2  Setting up a renderbuffer for drawing images 59
Choosing Renderer and Buffer Attributes 64
    Table 6-1  Renderer types and pixel format attributes 67
    Listing 6-1  Using the CGL API to create a pixel format object 66
    Listing 6-2  Setting an NSOpenGLContext object to use a specific display 68
    Listing 6-3  Setting a CGL context to use a specific display 69
Working with Rendering Contexts 72
    Figure 7-1  A fixed size back buffer and variable size front buffer 79
    Figure 7-2  Shared contexts attached to the same drawable object 80
    Figure 7-3  Shared contexts and more than one drawable object 80
    Listing 7-1  Handling context updates for a custom view 74
    Listing 7-2  Using CGL to set up synchronization 76
    Listing 7-3  Using CGL to set surface opacity 77
    Listing 7-4  Using CGL to set surface drawing order 77
    Listing 7-5  Using CGL to check whether the GPU is processing vertices and fragments 78
    Listing 7-6  Using CGL to set up back buffer size control 79
    Listing 7-7  Setting up an NSOpenGLContext object for sharing 81
    Listing 7-8  Setting up a CGL context for sharing 82
Determining the OpenGL Capabilities Supported by the Renderer 83
    Table 8-1  Common OpenGL renderer limitations 88
    Table 8-2  OpenGL shader limitations 88
    Listing 8-1  Checking for OpenGL functionality 84
    Listing 8-2  Setting up a valid rendering context to get renderer functionality information 86
OpenGL Application Design Strategies 89
    Figure 9-1  OpenGL graphics pipeline 90
    Figure 9-2  OpenGL client-server architecture 91
    Figure 9-3  Application model for managing resources 92
    Figure 9-4  Single-buffered vertex array data 100
    Figure 9-5  Double-buffered vertex array data 101
    Listing 9-1  Setting up a Core Video display link 94
    Listing 9-2  Setting up synchronization 96
    Listing 9-3  Disabling state variables 102
    Listing 9-4  Using CGL macros 103
Best Practices for Working with Vertex Data 104
    Figure 10-1  Vertex data sets can be quite large 104
    Figure 10-2  Vertex data path 105
    Figure 10-3  Immediate mode requires a copy of the current vertex data 105
    Listing 10-1  Submitting vertex data using glDrawElements 106
    Listing 10-2  Using the vertex buffer object extension with dynamic data 109
    Listing 10-3  Using the vertex buffer object extension with static data 110
    Listing 10-4  Geometry with different usage patterns 111
    Listing 10-5  Using the vertex array range extension with dynamic data 115
    Listing 10-6  Using the vertex array range extension with static data 116
Best Practices for Working with Texture Data 118
    Figure 11-1  Textures add realism to a scene 118
    Figure 11-2  Texture data path 119
    Figure 11-3  Data copies in an OpenGL program 120
    Figure 11-4  The client storage extension eliminates a data copy 124
    Figure 11-5  The texture range extension eliminates a data copy 126
    Figure 11-6  Combining extensions to eliminate data copies 127
    Figure 11-7  Normalized and non-normalized coordinates 129
    Figure 11-8  An image segmented into power-of-two tiles 130
    Figure 11-9  Using an image as a texture for a cube 131
    Figure 11-10  Single-buffered data 137
    Figure 11-11  Double-buffered data 138
    Listing 11-1  Using texture extensions for a rectangular texture 127
    Listing 11-2  Using texture extensions for a power-of-two texture 128
    Listing 11-3  Building an OpenGL texture from an NSView object 132
    Listing 11-4  Using a Quartz image as a texture source 134
    Listing 11-5  Getting pixel data from a source image 135
    Listing 11-6  Code that downloads texture data 136
Customizing the OpenGL Pipeline with Shaders 139
    Figure 12-1  OpenGL fixed-function pipeline 139
    Figure 12-2  OpenGL shader pipeline 140
    Listing 12-1  Loading a Shader 141
Techniques for Scene Antialiasing 144
    Table 13-1  Antialiasing hints 147
Concurrency and OpenGL 148
    Figure 14-1  CPU processing and OpenGL on separate threads 152
    Figure 14-2  Two contexts on separate threads 153
    Listing 14-1  Enabling the multithreaded OpenGL engine 151
Tuning Your OpenGL Application 155
    Figure 15-1  Output produced by the top application 157
    Figure 15-2  The OpenGL Profiler window 158
    Figure 15-3  A statistics window 159
    Figure 15-4  A Trace window 160
    Figure 15-5  The graph view in OpenGL Driver Monitor 161
Legacy OpenGL Functionality by Version 163
    Table A-1  Functionality added in OpenGL 1.1 163
    Table A-2  Functionality added in OpenGL 1.2 164
    Table A-3  Functionality added in OpenGL 1.3 165
    Table A-4  Functionality added in OpenGL 1.4 165
    Table A-5  Functionality added in OpenGL 1.5 166
    Table A-6  Functionality added in OpenGL 2.0 166
    Table A-7  Functionality added in OpenGL 2.1 167
Updating an Application to Support the OpenGL 3.2 Core Specification 168
    Table B-1  Extensions described in this guide 169
Setting Up Function Pointers to OpenGL Routines 171
    Listing C-1  Using NSLookupAndBindSymbol to obtain a symbol for a symbol name 172
    Listing C-2  Using NSGLGetProcAddress to obtain an OpenGL entry point 173

About OpenGL for OS X

OpenGL is an open, cross-platform graphics standard with broad industry support. OpenGL greatly eases the task of writing real-time 2D or 3D graphics applications by providing a mature, well-documented graphics processing pipeline that supports the abstraction of current and future hardware accelerators.

At a Glance

OpenGL is an excellent choice for graphics development on the Macintosh platform because it offers the following advantages:
● Reliable implementation. The OpenGL client-server model abstracts hardware details and guarantees consistent presentation on any compliant hardware and software configuration. Every implementation of OpenGL adheres to the OpenGL specification and must pass a set of conformance tests.
● Performance. Applications can harness the considerable power of the graphics hardware to improve rendering speeds and quality.
● Industry acceptance. The specification for OpenGL is controlled by the Khronos Group, an industry consortium whose members include many of the major companies in the computer graphics industry, including Apple. In addition to OpenGL for OS X, there are OpenGL implementations for Windows, Linux, Irix, Solaris, and many game consoles.

OpenGL Is a C-based, Platform-Neutral API

Because OpenGL is a C-based API, it is extremely portable and widely supported. As a C API, it integrates seamlessly with Objective-C based Cocoa applications. OpenGL provides functions your application uses to generate 2D or 3D images. Your application presents the rendered images to the screen or copies them back to its own memory.

The OpenGL specification does not provide a windowing layer of its own. It relies on functions defined by OS X to integrate OpenGL drawing with the windowing system. Your application creates an OS X OpenGL rendering context and attaches a rendering target to it (known as a drawable object). The rendering context manages OpenGL state changes and objects created by calls to the OpenGL API. The drawable object is the final destination for OpenGL drawing commands and is typically associated with a Cocoa window or view.

Relevant Chapters: "OpenGL on the Mac Platform" (page 17)

Different Rendering Destinations Require Different Setup Commands

Depending on whether your application intends to draw OpenGL content to a window, to draw to the entire screen, or to perform offscreen image processing, it takes different steps to create the rendering context and associate it with a drawable object.

Relevant Chapters: "Drawing to a Window or View" (page 35), "Drawing to the Full Screen" (page 50), and "Drawing Offscreen" (page 53)

OpenGL on Macs Exists in a Heterogenous Environment

Macs support different types of graphics processors, each with different rendering capabilities, supporting versions of OpenGL from 1.x through OpenGL 3.2. When creating a rendering context, your application can accept a broad range of renderers or it can restrict itself to devices with specific capabilities. Once you have a context, you can configure how that context executes OpenGL commands.

OpenGL on the Mac is not only a heterogenous environment, but it is also a dynamic environment. Users can add or remove displays, or take a laptop running on battery power and plug it into a wall. When the graphics environment on the Mac changes, the renderer associated with the context may change. Your application must handle these changes and adjust how it uses OpenGL.

Relevant Chapters: "Choosing Renderer and Buffer Attributes" (page 64), "Working with Rendering Contexts" (page 72), and "Determining the OpenGL Capabilities Supported by the Renderer" (page 83)
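As a concrete sketch of restricting renderer selection, the following Cocoa fragment requests a hardware-accelerated, double-buffered pixel format; the exact attribute list an application needs will vary, and these choices are illustrative only.

    #import <Cocoa/Cocoa.h>

    // A minimal sketch: ask for an accelerated, double-buffered renderer
    // with a 24-bit color buffer. If no renderer matches, pixelFormat is nil.
    NSOpenGLPixelFormatAttribute attrs[] = {
        NSOpenGLPFAAccelerated,
        NSOpenGLPFADoubleBuffer,
        NSOpenGLPFAColorSize, 24,
        0
    };
    NSOpenGLPixelFormat *pixelFormat =
        [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
    NSOpenGLContext *context =
        [[NSOpenGLContext alloc] initWithFormat:pixelFormat shareContext:nil];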
OpenGL Helps Applications Harness the Power of Graphics Processors

Graphics processors are massively parallelized devices optimized for graphics operations. Accessing that computing power adds overhead, because data must move from your application to the GPU over slower internal buses, and accessing the same data simultaneously from both your application and OpenGL is usually restricted. To get great performance in your application, you must carefully design your application to feed data and commands to OpenGL so that the graphics hardware runs in parallel with your application. A poorly tuned application may stall either on the CPU or the GPU waiting for the other to finish processing.

When you are ready to optimize your application's performance, Apple provides both general-purpose and OpenGL-specific profiling tools that make it easy to learn where your application spends its time.

Relevant Chapters: "Optimizing OpenGL for High Resolution" (page 44), "OpenGL on the Mac Platform" (page 17), "OpenGL Application Design Strategies" (page 89), "Best Practices for Working with Vertex Data" (page 104), "Best Practices for Working with Texture Data" (page 118), "Customizing the OpenGL Pipeline with Shaders" (page 139), and "Tuning Your OpenGL Application" (page 155)

Concurrency in OpenGL Applications Requires Additional Effort

Many Macs ship with multiple processors or multiple cores, and future hardware is expected to add more of each. Designing applications to take advantage of multiprocessing is critical. OpenGL places additional restrictions on multithreaded applications. If you intend to add concurrency to an OpenGL application, you must ensure that the application does not access the same context from two different threads at the same time.

Relevant Chapters: "Concurrency and OpenGL" (page 148)
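One common way to enforce that one-thread-at-a-time rule, sketched here under the assumption that multiple threads share a single context, is to bracket each thread's OpenGL calls with the CGL context locks:

    #include <OpenGL/OpenGL.h>

    // A minimal sketch: serialize access to a shared context across threads.
    void drawSomething(CGLContextObj ctx) {
        CGLLockContext(ctx);       // Blocks until no other thread holds the lock.
        CGLSetCurrentContext(ctx);
        // ... issue OpenGL commands here ...
        CGLUnlockContext(ctx);
    }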
Performance Tuning Allows Your Application to Provide an Exceptional User Experience

Once you've improved the performance of your OpenGL application and taken advantage of concurrency, put some of the freed processing power to work for you. Higher resolution textures, detailed models, and more complex lighting and shading algorithms can improve image quality. Full-scene antialiasing on modern graphics hardware can eliminate many of the "jaggies" common on lower resolution images.

Relevant Chapters: "Customizing the OpenGL Pipeline with Shaders" (page 139), "Techniques for Scene Antialiasing" (page 144)

How to Use This Document

If you have never programmed in OpenGL on the Mac, you should read this book in its entirety, starting with "OpenGL on the Mac Platform" (page 17). Critical Mac terminology is defined in that chapter as well as in the "Glossary" (page 179).

If you already have an OpenGL application running on the Mac, but have not yet updated it for OS X v10.7, read "Choosing Renderer and Buffer Attributes" (page 64) to learn how to choose an OpenGL profile for your application. To find out how to update an existing OpenGL app for high resolution, see "Optimizing OpenGL for High Resolution" (page 44).

Once you have OpenGL content in your application, read "OpenGL Application Design Strategies" (page 89) to learn fundamental patterns for implementing high-performance OpenGL applications, and the chapters that follow to learn how to apply those patterns to specific OpenGL problems.

Important: Although this guide describes how to create rendering contexts that support OpenGL 3.2, most code examples and discussion in the rest of the book describe the earlier legacy versions of OpenGL. See "Updating an Application to Support the OpenGL 3.2 Core Specification" (page 168) for more information on migrating your application to OpenGL 3.2.

Prerequisites

This guide assumes that you have some experience with OpenGL programming, but want to learn how to apply that knowledge to create software for the Mac. Although this guide provides advice on optimizing OpenGL code, it does not provide entry-level information on how to use the OpenGL API. If you are unfamiliar with OpenGL, you should read "OpenGL on the Mac Platform" (page 17) to get an overview of OpenGL on the Mac platform, and then read the following OpenGL programming guide and reference documents:
● OpenGL Programming Guide, by Dave Shreiner and the Khronos OpenGL Working Group; otherwise known as "The Red book."
● OpenGL Shading Language, by Randi J. Rost, an excellent guide for those who want to write programs that compute surface properties (also known as shaders).
● OpenGL Reference Pages.
Before reading this document, you should be familiar with Cocoa windows and views as introduced in Window Programming Guide and View Programming Guide.

See Also

Keep these reference documents handy as you develop your OpenGL program for OS X:
● NSOpenGLView Class Reference, NSOpenGLContext Class Reference, NSOpenGLPixelBuffer Class Reference, and NSOpenGLPixelFormat Class Reference provide a complete description of the classes and methods needed to integrate OpenGL content into a Cocoa application.
● CGL Reference describes low-level functions that can be used to create full-screen OpenGL applications.
● OpenGL Extensions Guide provides information about OpenGL extensions supported in OS X.
The OpenGL Foundation website, http://www.opengl.org, provides information on OpenGL commands, the Khronos OpenGL Working Group, logo requirements, OpenGL news, and many other topics. It's a site that you'll want to visit regularly. Among the many resources it provides, the following are important reference documents for OpenGL developers:
● OpenGL Specification provides detailed information on how an OpenGL implementation is expected to handle each OpenGL command.
● OpenGL Reference describes the main OpenGL library.
● OpenGL GLU Reference describes the OpenGL Utility Library, which contains convenience functions implemented on top of the OpenGL API.
● OpenGL GLUT Reference describes the OpenGL Utility Toolkit, a cross-platform windowing API.
● OpenGL API Code and Tutorial Listings provides code examples for fundamental tasks, such as modeling and texture mapping, as well as for advanced techniques, such as high dynamic range rendering (HDRR).

OpenGL on the Mac Platform

You can tell that Apple has an implementation of OpenGL on its platform by looking at the user interface for many of the applications that are installed with OS X. The reflections built into iChat (Figure 1-1) provide one of the more notable examples. The responsiveness of the windows, the instant results of applying an effect in iPhoto, and many other operations in OS X are due to the use of OpenGL. OpenGL is available to all Macintosh applications.
OpenGL for OS X is implemented as a set of frameworks that contain the OpenGL runtime engine and its drawing software. These frameworks use platform-neutral virtual resources to free your programming as much as possible from the underlying graphics hardware. OS X provides a set of application programming interfaces (APIs) that Cocoa applications can use to support OpenGL drawing.

Figure 1-1  OpenGL provides the reflections in iChat

This chapter provides an overview of OpenGL and the interfaces your application uses on the Mac platform to tap into it.

OpenGL Concepts

To understand how OpenGL fits into OS X and your application, you should first understand how OpenGL is designed.

OpenGL Implements a Client-Server Model

OpenGL uses a client-server model, as shown in Figure 1-2. When your application calls an OpenGL function, it talks to an OpenGL client. The client delivers drawing commands to an OpenGL server. The nature of the client, the server, and the communication path between them is specific to each implementation of OpenGL. For example, the server and clients could be on different computers, or they could be different processes on the same computer.

Figure 1-2  OpenGL client-server model (the application talks to the OpenGL client, which delivers commands to the OpenGL server)

A client-server model allows the graphics workload to be divided between the client and the server. For example, all Macintosh computers ship with dedicated graphics hardware that is optimized to perform graphics calculations in parallel. Figure 1-3 shows a common arrangement of CPUs and GPUs. With this hardware configuration, the OpenGL client executes on the CPU and the server executes on the GPU.

Figure 1-3  Graphics platform model (a multicore CPU with system RAM and a many-core GPU with its own RAM, connected in a single system)

OpenGL Commands Can Be Executed Asynchronously

A benefit of the OpenGL client-server model is that the client can return control to the application before the command has finished executing. An OpenGL client may also buffer or delay execution of OpenGL commands. If OpenGL required all commands to complete before returning control to the application, then either the CPU or the GPU would be idle waiting for the other to provide it data, resulting in reduced performance.

Some OpenGL commands implicitly or explicitly require the client to wait until some or all previously submitted commands have completed. OpenGL applications should be designed to reduce the frequency of client-server synchronizations. See "OpenGL Application Design Strategies" (page 89) for more information on how to design your OpenGL application.
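The difference between submitting commands and waiting for them shows up in two core OpenGL calls; this small sketch is illustrative only:

    #include <OpenGL/gl.h>

    // ... issue drawing commands; the client may buffer them ...

    glFlush();   // Asks the client to submit buffered commands for execution,
                 // but returns without waiting for them to finish.

    glFinish();  // Blocks until every previously submitted command completes;
                 // an explicit client-server synchronization, so use sparingly.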
OpenGL Commands Are Executed In Order

OpenGL guarantees that commands are executed in the order they are received by OpenGL.

OpenGL Copies Client Data at Call-Time

When an application calls an OpenGL function, the OpenGL client copies any data provided in the parameters before returning control to the application. For example, if a parameter points at an array of vertex data stored in application memory, OpenGL must copy that data before returning. Therefore, an application is free to change memory it owns regardless of calls it makes to OpenGL.

The data that the client copies is often reformatted before it is transmitted to the server. Copying, modifying, and transmitting parameters to the server adds overhead to calling OpenGL. Applications should be designed to minimize copy overhead.

OpenGL Relies on Platform-Specific Libraries For Critical Functionality

OpenGL provides a rich set of cross-platform drawing commands, but does not define functions to interact with an operating system's graphics subsystem. Instead, OpenGL expects each implementation to define an interface to create rendering contexts and associate them with the graphics subsystem. A rendering context holds all of the data stored in the OpenGL state machine. Allowing multiple contexts allows the state in one machine to be changed by an application without affecting other contexts.

Associating OpenGL with the graphics subsystem usually means allowing OpenGL content to be rendered to a specific window. When content is associated with a window, the implementation creates whatever resources are required to allow OpenGL to render and display images.

OpenGL in OS X

OpenGL in OS X implements the OpenGL client-server model using a common OpenGL framework and plug-in drivers. The framework and driver combine to implement the client portion of OpenGL, as shown in Figure 1-4. Dedicated graphics hardware provides the server. Although this is the common scenario, Apple also provides a software renderer implemented entirely on the CPU.

Figure 1-4  Mac OS X OpenGL driver model (the application, OpenGL framework, and OpenGL driver form the client and run on the CPU; the OpenGL server runs on the graphics hardware, on the GPU)

OS X supports a display space that can include multiple dissimilar displays, each driven by different graphics cards with different capabilities. In addition, multiple OpenGL renderers can drive each graphics card. To accommodate this versatility, OpenGL for OS X is segmented into well-defined layers: a window system layer, a framework layer, and a driver layer, as shown in Figure 1-5. This segmentation allows for plug-in interfaces to both the window system layer and the framework layer. Plug-in interfaces offer flexibility in software and hardware configuration without violating the OpenGL standard.

Figure 1-5  Layers of OpenGL for OS X (the application sits above the window system layer with its NSOpenGL and CGL interfaces; below that are the common OpenGL framework and a driver layer with GLD plug-ins for software, ATI, NVIDIA, and Intel renderers)

The window system layer is an OS X–specific layer that your application uses to create OpenGL rendering contexts and associate them with the OS X windowing system. The NSOpenGL classes and Core OpenGL (CGL) API also provide some additional controls for how OpenGL operates on that context. See "OpenGL APIs Specific to OS X" (page 22) for more information. Finally, this layer also includes the OpenGL libraries: GL, GLU, and GLUT. (See "Apple-Implemented OpenGL Libraries" (page 23) for details.)

The common OpenGL framework layer is the software interface to the graphics hardware. This layer contains Apple's implementation of the OpenGL specification.

The driver layer contains the optional GLD plug-in interface and one or more GLD plug-in drivers, which may have different software and hardware support capabilities. The GLD plug-in interface supports third-party plug-in drivers, allowing third-party hardware vendors to provide drivers optimized to take best advantage of their graphics hardware.
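To make the window system layer concrete, here is a hedged sketch of creating a rendering context directly with CGL; the attribute choices are illustrative, not prescriptive.

    #include <OpenGL/OpenGL.h>

    // A minimal sketch of creating an OpenGL context through CGL.
    CGLPixelFormatAttribute attrs[] = {
        kCGLPFAAccelerated,
        kCGLPFADoubleBuffer,
        kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pixelFormat = NULL;
    GLint numPixelFormats = 0;
    CGLContextObj context = NULL;

    CGLChoosePixelFormat(attrs, &pixelFormat, &numPixelFormats);
    if (pixelFormat != NULL) {
        CGLCreateContext(pixelFormat, NULL, &context);
        CGLDestroyPixelFormat(pixelFormat); // The context retains what it needs.
        CGLSetCurrentContext(context);      // Subsequent GL calls target this context.
    }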
Accessing OpenGL Within Your Application

The programming interfaces that your application calls fall into two categories: those specific to the Macintosh platform and those defined by the OpenGL Working Group. The Apple-specific programming interfaces are what Cocoa applications use to communicate with the OS X windowing system. These APIs don't create OpenGL content; they manage content, direct it to a drawing destination, and control various aspects of the rendering operation. Your application calls the OpenGL APIs to create content. OpenGL routines accept vertex, pixel, and texture data and assemble the data to create an image. The final image resides in a framebuffer, which is presented to the user through the windowing-system-specific API.

Figure 1-6 The programming interfaces used for OpenGL content

OpenGL APIs Specific to OS X

OS X offers two easy-to-use APIs that are specific to the Macintosh platform: the NSOpenGL classes and the CGL API. Throughout this document, these APIs are referred to as the Apple-specific OpenGL APIs.

Cocoa provides many classes specifically for OpenGL:

● The NSOpenGLContext class implements a standard OpenGL rendering context.
● The NSOpenGLPixelFormat class is used by an application to specify the parameters used to create the OpenGL context.
● The NSOpenGLView class is a subclass of NSView that uses NSOpenGLContext and NSOpenGLPixelFormat to display OpenGL content in a view. Applications that subclass NSOpenGLView do not need to directly subclass NSOpenGLPixelFormat or NSOpenGLContext. Applications that need customization or flexibility can subclass NSView and create NSOpenGLPixelFormat and NSOpenGLContext objects manually.
● The NSOpenGLLayer class allows your application to integrate OpenGL drawing with Core Animation.
● The NSOpenGLPixelBuffer class provides hardware-accelerated offscreen drawing.

The Core OpenGL API (CGL) resides in the OpenGL framework and is used to implement the NSOpenGL classes. CGL offers the most direct access to system functionality and provides the highest level of graphics performance and control for drawing to the full screen. CGL Reference provides a complete description of this API.
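As a rough illustration of how directly CGL exposes this functionality, the following sketch creates a context using CGL alone. The attribute choices are illustrative only, not a recommendation:

#include <OpenGL/OpenGL.h>

// A minimal sketch: choose a pixel format, then create a CGL context.
CGLPixelFormatAttribute attribs[] = {
    kCGLPFAAccelerated,
    kCGLPFADoubleBuffer,
    (CGLPixelFormatAttribute)0
};
CGLPixelFormatObj pixelFormat = NULL;
GLint numPixelFormats = 0;
CGLContextObj context = NULL;

CGLChoosePixelFormat(attribs, &pixelFormat, &numPixelFormats);
if (pixelFormat != NULL) {
    CGLCreateContext(pixelFormat, NULL, &context);
    CGLDestroyPixelFormat(pixelFormat); // not needed once the context exists
    CGLSetCurrentContext(context);      // subsequent GL calls target this context
}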
Apple-Implemented OpenGL Libraries

OS X also provides the full suite of graphics libraries that are part of every implementation of OpenGL: GL, GLU, GLUT, and GLX. Two of these, GL and GLU, provide low-level drawing support. The other two, GLUT and GLX, support drawing to the screen.

Your application typically interfaces directly with the core OpenGL library (GL), the OpenGL Utility library (GLU), and the OpenGL Utility Toolkit (GLUT). The GL library provides a low-level modular API that allows you to define graphical objects. It supports the core functions defined by the OpenGL specification. It provides support for two fundamental types of graphics primitives: objects defined by sets of vertices, such as line segments and simple polygons, and objects that are pixel-based images, such as filled rectangles and bitmaps. The GL API does not handle complex custom graphical objects; your application must decompose them into simpler geometries.

The GLU library combines functions from the GL library to support more advanced graphics features. It runs on all conforming implementations of OpenGL. GLU is capable of creating and handling complex polygons, rendering quadric surfaces, processing nonuniform rational B-spline (NURBS) curves, scaling images, and decomposing a surface into a series of polygons (tessellation).

The GLUT library provides a cross-platform API for performing operations associated with the user windowing environment: displaying and redrawing content, handling events, and so on. It is implemented on most UNIX, Linux, and Windows platforms. Code that you write with GLUT can be reused across multiple platforms. However, such code is constrained by a generic set of user interface elements and event-handling options. This document does not show how to use GLUT. The GLUTBasics sample project shows you how to get started with GLUT.

GLX is an OpenGL extension that supports using OpenGL within a window provided by the X Window System. X11 for OS X is available as an optional installation. (It's not shown in Figure 1-6 (page 22).) See OpenGL Programming for the X Window System, published by Addison-Wesley, for more information.

This document does not show how to use these libraries. For detailed information, either go to the OpenGL website (http://www.opengl.org) or see the most recent version of "the Red Book," OpenGL Programming Guide, published by Addison-Wesley.

Terminology

There are a number of terms that you'll want to understand so that you can write code effectively using OpenGL: renderer, renderer attributes, buffer attributes, pixel format objects, rendering contexts, drawable objects, and virtual screens. As an OpenGL programmer, some of these may seem familiar to you. However, understanding the Apple-specific nuances of these terms will help you get the most out of OpenGL on the Macintosh platform.

Renderer

A renderer is the combination of the hardware and software that OpenGL uses to execute OpenGL commands. The characteristics of the final image depend on the capabilities of the graphics hardware associated with the renderer and the device used to display the image. OS X supports graphics accelerator cards with varying capabilities, as well as a software renderer. It is possible for multiple renderers, each with different capabilities or features, to drive a single set of graphics hardware. To learn how to determine the exact features of a renderer, see “Determining the OpenGL Capabilities Supported by the Renderer” (page 83).

Renderer and Buffer Attributes

Your application uses renderer and buffer attributes to communicate renderer and buffer requirements to OpenGL. The Apple implementation of OpenGL dynamically selects the best renderer for the current rendering task and does so transparently to your application. If your application has very specific rendering requirements and wants to control renderer selection, it can do so by supplying the appropriate renderer attributes.

Buffer attributes describe such things as color and depth buffer sizes, and whether the data is stereoscopic or monoscopic. Renderer and buffer attributes are represented by constants defined in the Apple-specific OpenGL APIs. OpenGL uses the attributes you supply to perform the setup work needed prior to drawing content.
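As a hedged sketch of what supplying attributes looks like in Cocoa, the following requests an accelerated, double-buffered pixel format with at least a 24-bit depth buffer; the particular attribute values are illustrative only:

#import <Cocoa/Cocoa.h>

// Illustrative renderer and buffer attributes; choose values that match
// your application's actual requirements.
NSOpenGLPixelFormatAttribute attrs[] = {
    NSOpenGLPFAAccelerated,   // renderer attribute: hardware renderers only
    NSOpenGLPFADoubleBuffer,  // buffer attribute: front and back buffers
    NSOpenGLPFADepthSize, 24, // buffer attribute: at least a 24-bit depth buffer
    0
};
NSOpenGLPixelFormat *pixelFormat =
    [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
if (pixelFormat == nil) {
    // No renderer satisfies these attributes; relax them and try again.
}

On OS X v10.7 and later, you could additionally include NSOpenGLPFAOpenGLProfile with the value NSOpenGLProfileVersion3_2Core in this array to request an OpenGL 3.2 Core renderer, as described under “OpenGL Profiles” below.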
“Drawing to a Window or View” (page 35) provides a simple example that shows how to use renderer and buffer attributes. “Choosing Renderer and Buffer Attributes” (page 64) explains how to choose renderer and buffer attributes to achieve specific rendering goals.

Pixel Format Objects

A pixel format describes the format for pixel data storage in memory. The description includes the number and order of components as well as their names (typically red, green, blue, and alpha). It also includes other information, such as whether a pixel contains stencil and depth values. A pixel format object is an opaque data structure that holds a pixel format along with a list of renderers and display devices that satisfy the requirements specified by an application.

Each of the Apple-specific OpenGL APIs defines a pixel format data type and accessor routines that you can use to obtain the information referenced by this object. See “Virtual Screens” (page 26) for more information on renderers and display devices.

OpenGL Profiles

OpenGL profiles are new in OS X v10.7. An OpenGL profile is a renderer attribute used to request a specific version of the OpenGL specification. When your application provides an OpenGL profile as part of its renderer attributes, it only receives renderers that provide the complete feature set promised by that profile. The renderer can implement a different version of the OpenGL specification, so long as the version it supplies to your application provides the same functionality that your application requested.

Rendering Contexts

A rendering context, or simply context, contains OpenGL state information and objects for your application. State variables include such things as drawing color, the viewing and projection transformations, lighting characteristics, and material properties. State variables are set per context. When your application creates OpenGL objects (for example, textures), these are also associated with the rendering context.

Although your application can maintain more than one context, only one context can be the current context in a thread. The current context is the rendering context that receives OpenGL commands issued by your application.

Drawable Objects

A drawable object refers to an object allocated by the windowing system that can serve as an OpenGL framebuffer. A drawable object is the destination for OpenGL drawing operations. The behavior of drawable objects is not part of the OpenGL specification, but is defined by the OS X windowing system. A drawable object can be any of the following: a Cocoa view, offscreen memory, a full-screen graphics device, or a pixel buffer.

Note: A pixel buffer (pbuffer) is an OpenGL buffer designed for hardware-accelerated offscreen drawing and as a source for texturing. An application can render an image into a pixel buffer and then use the pixel buffer as a texture for other OpenGL commands. Although pixel buffers are supported on Apple's implementation of OpenGL, Apple recommends you use framebuffer objects instead. See “Drawing Offscreen” (page 53) for more information on offscreen rendering.

Before OpenGL can draw to a drawable object, the object must be attached to a rendering context. The characteristics of the drawable object narrow the selection of hardware and software specified by the rendering context.
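A minimal sketch of that attachment, assuming the pixelFormat object from the earlier sketch and an existing NSView named view (both names are placeholders):

// Create a context from the pixel format (no shared context here),
// attach a view as its drawable object, and make it current.
NSOpenGLContext *context =
    [[NSOpenGLContext alloc] initWithFormat:pixelFormat shareContext:nil];
[context setView:view];        // the view becomes the drawable object
[context makeCurrentContext];  // subsequent OpenGL commands target this context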
Apple's OpenGL automatically allocates buffers, creates surfaces, and specifies which renderer is the current renderer.

The logical flow of data from an application through OpenGL to a drawable object is shown in Figure 1-7. The application issues OpenGL commands that are sent to the current rendering context. The current context, which contains state information, constrains how the commands are interpreted by the appropriate renderer. The renderer converts the OpenGL primitives to an image in the framebuffer. (See also “Running an OpenGL Program in OS X” (page 31).)

Figure 1-7 Data flow through OpenGL

Virtual Screens

The characteristics and quality of the OpenGL content that the user sees depend on both the renderer and the physical display used to view the content. The combination of renderer and physical display is called a virtual screen. This important concept has implications for any OpenGL application running on OS X.

A simple system, with one graphics card and one physical display, typically has two virtual screens. One virtual screen consists of a hardware-based renderer and the physical display, and the other virtual screen consists of a software-based renderer and the physical display. OS X provides a software-based renderer as a fallback. It's possible for your application to decline the use of this fallback. You'll see how in “Choosing Renderer and Buffer Attributes” (page 64).

The green rectangle around the OpenGL image in Figure 1-8 surrounds a virtual screen for a system with one graphics card and one display. Note that a virtual screen is not the physical display, which is why the green rectangle is drawn around the application window that shows the OpenGL content. In this case, the virtual screen is the renderer provided by the graphics card combined with the characteristics of the display.

Figure 1-8 A virtual screen displays what the user sees

Because a virtual screen is not simply the physical display, a system with one display can use more than one virtual screen at a time, as shown in Figure 1-9. The green rectangles are drawn to point out each virtual screen. Imagine that the virtual screen on the right side uses a software-only renderer and that the one on the left uses a hardware-dependent renderer. Although this is a contrived example, it illustrates the point.

Figure 1-9 Two virtual screens

It's also possible to have a virtual screen that represents more than one physical display. The green rectangle in Figure 1-10 is drawn around a virtual screen that spans two physical displays. In this case, the same graphics hardware drives a pair of identical displays. A mirrored display also has a single virtual screen associated with multiple physical displays.

Figure 1-10 A virtual screen can represent more than one physical screen

The concept of a virtual screen is particularly important when the user drags an image from one physical screen to another.
When this happens, the virtual screen may change, and with it, a number of attributes of the imaging process, such as the current renderer, may change. With the dual-headed graphics card shown in Figure 1-10 (page 29), dragging between displays preserves the same virtual screen. However, Figure 1-11 shows the case for which two displays represent two unique virtual screens. Not only are the two graphics cards different, but it's possible that the renderer, buffer attributes, and pixel characteristics are different. A change in any of these three items can result in a change in the virtual screen.

When the user drags an image from one display to another, and the virtual screen is the same for both displays, the image quality should appear similar. However, for the case shown in Figure 1-11, the image quality can be quite different.

Figure 1-11 Two virtual screens and two graphics cards

OpenGL for OS X transparently manages rendering across multiple monitors. A user can drag a window from one monitor to another, even though their display capabilities may be different or they may be driven by dissimilar graphics cards with dissimilar resolutions and color depths.

OpenGL dynamically switches renderers when the virtual screen that contains the majority of the pixels in an OpenGL window changes. When a window is split between multiple virtual screens, the framebuffer is rasterized entirely by the renderer driving the screen that contains the largest segment of the window. The regions of the window on the other virtual screens are drawn by copying the rasterized image. When the entire OpenGL drawable object is displayed on one virtual screen, there is no performance impact from multiple monitor support.

Applications need to track virtual screen changes and, if appropriate, update the current application state to reflect changes in renderer capabilities. See “Working with Rendering Contexts” (page 72).
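A hedged sketch of such tracking, assuming an NSOpenGLView subclass; the instance variable _lastVirtualScreen is hypothetical, and a full implementation would re-query capabilities as described in “Working with Rendering Contexts”:

// Detect a virtual screen (and therefore possibly a renderer) change.
- (void)update
{
    [super update]; // lets NSOpenGLView adjust the context's drawable

    GLint virtualScreen = [[self openGLContext] currentVirtualScreen];
    if (virtualScreen != _lastVirtualScreen) {
        _lastVirtualScreen = virtualScreen;
        // Re-query renderer capabilities (extensions, limits) here,
        // because the new virtual screen may use a different renderer.
    }
}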
Offline Renderer

An offline renderer is one that is not currently associated with a display. For example, a graphics processor might be powered down to conserve power, or there might not be a display hooked up to the graphics card. Offline renderers are not normally visible to your application, but your application can enable them by adding the appropriate renderer attribute. Taking advantage of offline renderers is useful because it gives the user a seamless experience when they plug in or remove displays.

For more information about configuring a context to see offline renderers, see “Choosing Renderer and Buffer Attributes” (page 64). To enable your application to switch to a renderer when a display is attached, see “Update the Rendering Context When the Renderer or Geometry Changes” (page 72).

Running an OpenGL Program in OS X

Figure 1-12 shows the flow of data in an OpenGL program, regardless of the platform that the program runs on.

Figure 1-12 The flow of data through OpenGL

Per-vertex operations include such things as applying transformation matrices to add perspective or to clip, and applying lighting effects. Per-pixel operations include such things as color conversion and applying blur and distortion effects. Pixels destined for textures are sent to texture assembly, where OpenGL stores textures until it needs to apply them onto an object.

OpenGL rasterizes the processed vertex and pixel data, meaning that the data are converted into fragments. A fragment encapsulates all the values for a pixel, including color, depth, and sometimes texture values. These values are used during antialiasing and any other calculations needed to fill shapes and to connect vertices.

Per-fragment operations include applying environment effects, depth and stencil testing, and performing other operations such as blending and dithering. Some operations, such as hidden-surface removal, end the processing of a fragment. OpenGL draws fully processed fragments into the appropriate location in the framebuffer.

The dashed arrows in Figure 1-12 indicate reading pixel data back from the framebuffer. They represent operations performed by OpenGL functions such as glReadPixels, glCopyPixels, and glCopyTexImage2D. A readback sketch appears after the task list below.

So far you've seen how OpenGL operates on any platform. But how do Cocoa applications provide data to OpenGL for processing? A Mac application must perform these tasks:

● Set up a list of buffer and renderer attributes that define the sort of drawing you want to perform. (See “Renderer and Buffer Attributes” (page 24).)
● Request the system to create a pixel format object that contains a pixel format that meets the constraints of the buffer and renderer attributes and a list of all suitable combinations of displays and renderers. (See “Pixel Format Objects” (page 24) and “Virtual Screens” (page 26).)
● Create a rendering context to hold state information that controls such things as drawing color, view and projection matrices, characteristics of light, and conventions used to pack pixels. When you set up this context, you must provide a pixel format object because the rendering context needs to know the set of virtual screens that can be used for drawing. (See “Rendering Contexts” (page 25).)
● Bind a drawable object to the rendering context. The drawable object is what captures the OpenGL drawing sent to that rendering context. (See “Drawable Objects” (page 25).)
● Make the rendering context the current context. OpenGL automatically targets the current context. Although your application might have several rendering contexts set up, only the current one is the active one for drawing purposes.
● Issue OpenGL drawing commands.
● Flush the contents of the rendering context. This causes previously submitted commands to be rendered to the drawable object and displays them to the user.

The tasks described in the first five bullet items are platform-specific. “Drawing to a Window or View” (page 35) provides simple examples of how to perform them. As you read other parts of this document, you'll see there are a number of other tasks that, although not mandatory for drawing, are really quite necessary for any application that wants to use OpenGL to perform complex 3D drawing efficiently on a wide variety of Macintosh systems.
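The readback sketch promised above; the width and height values are placeholders for your drawable's actual pixel dimensions, and a current context is assumed:

#include <OpenGL/gl.h>
#include <stdlib.h>

// Read the framebuffer contents back into application memory.
GLsizei width = 512, height = 512;   // placeholders; use the real drawable size
GLubyte *pixels = (GLubyte *)malloc((size_t)width * height * 4);

glPixelStorei(GL_PACK_ALIGNMENT, 1);             // tightly packed rows
glReadPixels(0, 0, width, height,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels); // forces a client-server sync

// ... use the pixel data, then:
free(pixels);

Note that glReadPixels is exactly the kind of synchronizing call the earlier discussion warns about; use it sparingly.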
Making Great OpenGL Applications on the Macintosh

OpenGL lets you create applications with outstanding graphics performance as well as a great user experience, but neither of these things comes for free. Your application performs best when it works with OpenGL rather than against it. With that in mind, here are guidelines you should follow to create high-performance, future-looking OpenGL applications:

● Ensure your application runs successfully with offline renderers and multiple graphics cards. Apple ships many sophisticated hardware configurations. Your application should handle renderer changes seamlessly. You should test your application on a Mac with multiple graphics processors and include tests for attaching and removing displays. For more information on how to implement hot plugging correctly, see “Working with Rendering Contexts” (page 72).
● Avoid finishing and flushing operations. Pay particular attention to OpenGL functions that force previously submitted commands to complete. Synchronizing the graphics hardware to the CPU may result in dramatically lower performance. Performance is covered in detail in “OpenGL Application Design Strategies” (page 89).
● Use multithreading to improve the performance of your OpenGL application. Many Macs support multiple simultaneous threads of execution. Your application should take advantage of concurrency. Well-behaved applications can take advantage of concurrency in just a few lines of code. See “Concurrency and OpenGL” (page 148).
● Use buffer objects to manage your data. Vertex buffer objects (VBOs) allow OpenGL to manage your application's vertex data. Using vertex buffer objects gives OpenGL more opportunities to cache vertex data in a format that is friendly to the graphics hardware, improving application performance. For more information, see “Best Practices for Working with Vertex Data” (page 104); a sketch follows this list. Similarly, pixel buffer objects (PBOs) should be used to manage your image data. See “Best Practices for Working with Texture Data” (page 118).
● Use framebuffer objects (FBOs) when you need to render to offscreen memory. Framebuffer objects allow your application to create offscreen rendering targets without many of the limitations of platform-dependent interfaces. See “Rendering to a Framebuffer Object” (page 53).
● Generate objects before binding them. Earlier versions of OpenGL allowed your application to make up its own object names before binding them. You should avoid this; always use the OpenGL API to generate object names.
● Migrate your OpenGL applications to OpenGL 3.2. The OpenGL 3.2 Core profile provides a clean break from earlier versions of OpenGL in favor of a simpler shader-based pipeline. For better compatibility with future hardware and OS X releases, migrate your applications away from legacy versions of OpenGL. Many of the recommendations listed above are required when your application uses OpenGL 3.2.
● Harness the power of Apple's development tools. Apple provides many tools that help create OpenGL applications and analyze and tune their performance. Learning how to use these tools helps you create fast, reliable applications. “Tuning Your OpenGL Application” (page 155) describes many of these tools.
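The vertex buffer object sketch referenced in the list above; the triangle data is hypothetical, error handling is omitted, and a current context supporting OpenGL 1.5 or later is assumed:

#include <OpenGL/gl.h>

// Hypothetical triangle data, uploaded once into a vertex buffer object
// so OpenGL can keep it in memory that is friendly to the GPU.
static const GLfloat triangle[] = { 0.0f, 0.6f,  -0.2f, -0.3f,  0.2f, -0.3f };

GLuint vbo;
glGenBuffers(1, &vbo);              // generate the name before binding it
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);

// At draw time, source vertex data from the buffer instead of app memory.
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, (const GLvoid *)0); // offset into the VBO
glDrawArrays(GL_TRIANGLES, 0, 3);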
Drawing to a Window or View

The OpenGL programming interface provides hundreds of drawing commands that drive graphics hardware. It doesn't provide any commands that interface with the windowing system of an operating system. Without a windowing system, the 3D graphics of an OpenGL program are trapped inside the GPU. Figure 2-1 shows a cube drawn to a Cocoa view.

Figure 2-1 OpenGL content in a Cocoa view

This chapter shows how to display OpenGL drawing onscreen using the APIs provided by OS X. (This chapter does not show how to use GLUT.) The first section describes the overall approach to drawing onscreen and provides an overview of the functions and methods used by each API.

General Approach

To draw your content to a view or a layer, your application uses the NSOpenGL classes from within the Cocoa application framework. While the CGL API is used by your application only to create full-screen content, every NSOpenGLContext object contains a CGL context object. This object can be retrieved from the NSOpenGLContext when your application needs to reference it directly (see the sketch after the task list below). To show the similarities between the two, this chapter discusses both the NSOpenGL classes and the CGL API.

To draw OpenGL content to a window or view using the NSOpenGL classes, you need to perform these tasks:

1. Set up the renderer and buffer attributes that support the OpenGL drawing you want to perform. Each of the OpenGL APIs in OS X has its own set of constants that represent renderer and buffer attributes. For example, the all-renderers attribute is represented by the NSOpenGLPFAAllRenderers constant in Cocoa and the kCGLPFAAllRenderers constant in the CGL API.

2. Request, from the operating system, a pixel format object that encapsulates pixel storage information and the renderer and buffer attributes required by your application. The returned pixel format object contains all possible combinations of renderers and displays available on the system that your program runs on and that meet the requirements specified by the attributes. The combinations are referred to as virtual screens. (See “Virtual Screens” (page 26).) There may be situations for which you want to ensure that your program uses a specific renderer. “Choosing Renderer and Buffer Attributes” (page 64) discusses how to set up an attributes array that guarantees the system passes back a pixel format object that uses only that renderer. If an error occurs, your application may receive a NULL pixel format object. Your application must handle this condition.

3. Create a rendering context and bind the pixel format object to it. The rendering context keeps track of state information that controls such things as drawing color, view and projection matrices, characteristics of light, and conventions used to pack pixels. Your application needs a pixel format object to create a rendering context.

4. Release the pixel format object. Once the pixel format object is bound to a rendering context, its resources are no longer needed.

5. Bind a drawable object to the rendering context. For a windowed context, this is typically a Cocoa view.

6. Make the rendering context the current context. The system sends OpenGL drawing to whichever rendering context is designated as the current one. It's possible for you to set up more than one rendering context, so you need to make sure that the one you want to draw to is the current one.

7. Perform your drawing.
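The retrieval mentioned under “General Approach” is a single call; a minimal sketch, assuming an existing NSOpenGLContext named context (a placeholder name):

#import <Cocoa/Cocoa.h>
#include <OpenGL/OpenGL.h>

// Obtain the underlying CGL context from a Cocoa OpenGL context so it
// can be passed to CGL functions directly.
CGLContextObj cglContext = [context CGLContextObj];
CGLSetCurrentContext(cglContext);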
The specific functions or methods that you use to perform each of the steps are discussed in the sections that follow.

Drawing to a Cocoa View

There are two ways to draw OpenGL content to a Cocoa view. If your application has modest drawing requirements, then you can use the NSOpenGLView class. See “Drawing to an NSOpenGLView Class: A Tutorial” (page 37).

If your application is more complex and needs to support drawing to multiple rendering contexts, you may want to consider subclassing the NSView class. For example, if your application supports drawing to multiple views at the same time, you need to set up a custom NSView class. See “Drawing OpenGL Content to a Custom View” (page 40).

Drawing to an NSOpenGLView Class: A Tutorial

The NSOpenGLView class is a lightweight subclass of the NSView class that provides convenience methods for setting up OpenGL drawing. An NSOpenGLView object maintains an NSOpenGLPixelFormat object and an NSOpenGLContext object into which OpenGL calls can be rendered. It provides methods for accessing and managing the pixel format object and the rendering context, and handles notification of visible region changes.

An NSOpenGLView object does not support subviews. You can, however, divide the view into multiple rendering areas using the OpenGL function glViewport.

This section provides step-by-step instructions for creating a simple Cocoa application that draws OpenGL content to a view. The tutorial assumes that you know how to use Xcode and Interface Builder. If you have never created an application using the Xcode development environment, see Getting Started with Tools.

1. Create a Cocoa application project named Golden Triangle.

2. Add the OpenGL framework to your project.

3. Add a new file to your project using the Objective-C class template. Name the file MyOpenGLView.m and create a header file for it.

4. Open the MyOpenGLView.h file and modify the file so that it looks like the code shown in Listing 2-1 to declare the interface.

Listing 2-1 The interface for MyOpenGLView

#import <Cocoa/Cocoa.h>

@interface MyOpenGLView : NSOpenGLView
{
}
- (void) drawRect: (NSRect) bounds;
@end

5. Save and close the MyOpenGLView.h file.

6. Open the MyOpenGLView.m file and include the gl.h file, as shown in Listing 2-2.

Listing 2-2 Include OpenGL/gl.h

#import "MyOpenGLView.h"
#include <OpenGL/gl.h>

@implementation MyOpenGLView
@end

7. Implement the drawRect: method as shown in Listing 2-3, adding the code after the @implementation statement. The method sets the clear color to black and clears the color buffer in preparation for drawing. Then, drawRect: calls your drawing routine, which you'll add next. The OpenGL command glFlush draws the content provided by your routine to the view.

Listing 2-3 The drawRect: method for MyOpenGLView

- (void) drawRect: (NSRect) bounds
{
    glClearColor(0, 0, 0, 0);
    glClear(GL_COLOR_BUFFER_BIT);
    drawAnObject();
    glFlush();
}

8. Add the code to perform your drawing. In your own application, you'd perform whatever drawing is appropriate. But for the purpose of learning how to draw OpenGL content to a view, add the code shown in Listing 2-4. This code draws a 2D, gold-colored triangle, whose dimensions are not quite the dimensions of a true golden triangle, but good enough to show how to perform OpenGL drawing.
Make sure that you insert this routine before the drawRect: method in the MyOpenGLView.m file.

Listing 2-4 Code that draws a triangle using OpenGL commands

static void drawAnObject ()
{
    glColor3f(1.0f, 0.85f, 0.35f);
    glBegin(GL_TRIANGLES);
    {
        glVertex3f(  0.0,  0.6, 0.0);
        glVertex3f( -0.2, -0.3, 0.0);
        glVertex3f(  0.2, -0.3, 0.0);
    }
    glEnd();
}

9. Open the MainMenu.xib in Interface Builder.

10. Change the window's title to Golden Triangle.

11. Drag an NSOpenGLView object from the Library to the window. Resize the view to fit the window.

12. Change the class of this object to MyOpenGLView.

13. Open the Attributes pane of the inspector for the view, and take a look at the renderer and buffer attributes that are available to set. These settings save you from setting attributes programmatically. Only those attributes listed in the Interface Builder inspector are set when the view is instantiated. If you need additional attributes, you need to set them programmatically.

14. Build and run your application. You should see content similar to the triangle shown in Figure 2-2.

Figure 2-2 The output from the Golden Triangle program

This example is extremely simple. In a more complex application, you'd want to do the following:

● Replace the immediate-mode drawing commands with commands that persist your vertex data inside OpenGL. See “OpenGL Application Design Strategies” (page 89).
● In the interface for the view, declare a variable that indicates whether the view is ready to accept drawing. A view is ready for drawing only if it is bound to a rendering context and that context is set to be the current one.
● Cocoa does not call initialization routines for objects created in Interface Builder. If you need to perform any initialization tasks, do so in the awakeFromNib method for the view. Note that because you set attributes in the inspector, there is no need to set them up programmatically unless you need additional ones. There is also no need to create a pixel format object programmatically; it is created and loaded when Cocoa loads the nib file.
● Your drawRect: method should test whether the view is ready to draw into. You need to provide code that handles the case when the view is not ready to draw into.
● OpenGL is at its best when doing real-time and interactive graphics. Your application needs to provide a timer or support user interaction. For more information about creating animation in your OpenGL application, see “Synchronize with the Screen Refresh Rate” (page 96).

Drawing OpenGL Content to a Custom View

This section provides an overview of the key tasks you need to perform to customize the NSView class for OpenGL drawing. Before you create a custom view for OpenGL drawing, you should read “Creating a Custom View” in View Programming Guide.

When you subclass the NSView class to create a custom view for OpenGL drawing, you override any Quartz drawing or other content that is in that view. To set up a custom view for OpenGL drawing, subclass NSView and create two private variables: one an NSOpenGLContext object and the other an NSOpenGLPixelFormat object, as shown in Listing 2-5.
Listing 2-5 The interface for a custom OpenGL view

@class NSOpenGLContext, NSOpenGLPixelFormat;

@interface CustomOpenGLView : NSView
{
    @private
    NSOpenGLContext*     _openGLContext;
    NSOpenGLPixelFormat* _pixelFormat;
}
+ (NSOpenGLPixelFormat*)defaultPixelFormat;
- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format;
- (void)setOpenGLContext:(NSOpenGLContext*)context;
- (NSOpenGLContext*)openGLContext;
- (void)clearGLContext;
- (void)prepareOpenGL;
- (void)update;
- (void)setPixelFormat:(NSOpenGLPixelFormat*)pixelFormat;
- (NSOpenGLPixelFormat*)pixelFormat;
@end

In addition to the usual methods for the private variables (openGLContext, setOpenGLContext:, pixelFormat, and setPixelFormat:), you need to implement the following methods:

● + (NSOpenGLPixelFormat*) defaultPixelFormat — use this method to allocate and initialize the NSOpenGLPixelFormat object.
● - (void) clearGLContext — use this method to clear and release the NSOpenGLContext object.
● - (void) prepareOpenGL — use this method to initialize the OpenGL state after creating the NSOpenGLContext object.

You need to override the update and initWithFrame: methods of the NSView class.

● update calls the update method of the NSOpenGLContext class.
● initWithFrame:pixelFormat: retains the pixel format and sets up the notification NSViewGlobalFrameDidChangeNotification. See Listing 2-6.

Listing 2-6 The initWithFrame:pixelFormat: method

- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format
{
    self = [super initWithFrame:frameRect];
    if (self != nil) {
        _pixelFormat = [format retain];
        [[NSNotificationCenter defaultCenter] addObserver:self
                selector:@selector(_surfaceNeedsUpdate:)
                name:NSViewGlobalFrameDidChangeNotification
                object:self];
    }
    return self;
}

- (void) _surfaceNeedsUpdate:(NSNotification*)notification
{
    [self update];
}

If the custom view is not guaranteed to be in a window, you must also override the lockFocus method of the NSView class. See Listing 2-7. This method makes sure that the view is locked prior to drawing and that the context is the current one.

Listing 2-7 The lockFocus method

- (void)lockFocus
{
    NSOpenGLContext* context = [self openGLContext];

    [super lockFocus];
    if ([context view] != self) {
        [context setView:self];
    }
    [context makeCurrentContext];
}

The reshape method is not supported by the NSView class. You need to update bounds in the drawRect: method, which should take the form shown in Listing 2-8.

Listing 2-8 The drawRect: method for a custom view

- (void)drawRect:(NSRect)dirtyRect
{
    [_openGLContext makeCurrentContext];
    //Perform drawing here
    [_openGLContext flushBuffer];
}

There may be other methods that you want to add. For example, you might consider detaching the context from the drawable object when the custom view is moved from the window, as shown in Listing 2-9.

Listing 2-9 Detaching the context from a drawable object

- (void)viewDidMoveToWindow
{
    [super viewDidMoveToWindow];
    if ([self window] == nil)
        [_openGLContext clearDrawable];
}

Optimizing OpenGL for High Resolution

OpenGL is a pixel-based API, so the NSOpenGLView class does not provide high-resolution surfaces by default.
Because adding more pixels to renderbuffers has performance implications, you must explicitly opt in to support high-resolution screens. It's easy to enable high-resolution backing for an OpenGL view. When you do, you'll want to perform a few additional tasks to ensure the best possible high-resolution experience for your users.

Enable High-Resolution Backing for an OpenGL View

You can opt in to high resolution by calling the method setWantsBestResolutionOpenGLSurface: when you initialize the view, and supplying YES as an argument:

[self setWantsBestResolutionOpenGLSurface:YES];

If you don't opt in, the system magnifies the rendered results.

The wantsBestResolutionOpenGLSurface property is relevant only for views to which an NSOpenGLContext object is bound. Its value does not affect the behavior of other views. For compatibility, wantsBestResolutionOpenGLSurface defaults to NO, providing a 1-pixel-per-point framebuffer regardless of the backing scale factor for the display the view occupies. Setting this property to YES for a given view causes AppKit to allocate a higher-resolution framebuffer when appropriate for the backing scale factor and target display.

To function correctly with wantsBestResolutionOpenGLSurface set to YES, a view must perform correct conversions between view units (points) and pixel units as needed. For example, the common practice of passing the width and height of [self bounds] to glViewport() will yield incorrect results at high resolution, because the parameters passed to the glViewport() function must be in pixels. As a result, you'll get only partial instead of complete coverage of the render surface. Instead, use the backing store bounds:

[self convertRectToBacking:[self bounds]];

You can also opt in to high resolution by enabling the Supports Hi-Res Backing setting for the OpenGL view in Xcode, as shown in Figure 3-1.

Figure 3-1 Enabling high-resolution backing for an OpenGL view

Set Up the Viewport to Support High Resolution

The viewport dimensions are in pixels relative to the OpenGL surface. Pass the width and height to glViewport and use 0,0 for the x and y offsets. Listing 3-1 shows how to get the view dimensions in pixels and take the backing store size into account.

Listing 3-1 Setting up the viewport for drawing

- (void)drawRect:(NSRect)rect // NSOpenGLView subclass
{
    // Get view dimensions in pixels
    NSRect backingBounds = [self convertRectToBacking:[self bounds]];

    GLsizei backingPixelWidth  = (GLsizei)(backingBounds.size.width),
            backingPixelHeight = (GLsizei)(backingBounds.size.height);

    // Set viewport
    glViewport(0, 0, backingPixelWidth, backingPixelHeight);

    // draw…
}

You don't need to perform rendering in pixels, but you do need to be aware of the coordinate system you want to render in. For example, if you want to render in points, this code will work:

glOrtho(NSWidth(bounds), NSHeight(bounds),...)

Adjust Model and Texture Assets

If you opt in to high-resolution drawing, you also need to adjust the model and texture assets of your app. For example, when running on a high-resolution display, you might want to choose larger models and more detailed textures to take advantage of the increased number of pixels. Conversely, on a standard-resolution display, you can continue to use smaller models and textures.
If you create and cache textures when you initialize your app, you might want to consider a strategy that accommodates changing the texture based on the resolution of the display.

Check for Calls Defined in Pixel Dimensions

These functions use pixel dimensions:

● glViewport (GLint x, GLint y, GLsizei width, GLsizei height)
● glScissor (GLint x, GLint y, GLsizei width, GLsizei height)
● glReadPixels (GLint x, GLint y, GLsizei width, GLsizei height, ...)
● glLineWidth (GLfloat width)
● glRenderbufferStorage (..., GLsizei width, GLsizei height)
● glTexImage2D (..., GLsizei width, GLsizei height, ...)

Tune OpenGL Performance for High Resolution

Performance is an important factor when determining whether to support high-resolution content. The quadrupling of pixels that occurs when you opt in to high resolution requires more work by the fragment processor. If your app performs many per-fragment calculations, the increase in pixels might reduce its frame rate. If your app runs significantly slower at high resolution, consider the following options:

● Optimize fragment shader performance. (See “Tuning Your OpenGL Application” (page 155).)
● Choose a simpler algorithm to implement in your fragment shader. This reduces the quality of each individual pixel to allow for rendering the overall image at a higher resolution.
● Use a fractional scale factor between 1.0 and 2.0. A scale factor of 1.5 provides better quality than a scale factor of 1.0, but it needs to fill fewer pixels than an image scaled to 2.0.
● Multisample antialiasing can be costly with marginal benefit at high resolution. If you are using it, you might want to reconsider.

The best solution depends on the needs of your OpenGL app; you should test more than one of these options and choose the approach that provides the best balance between performance and image quality.

Use a Layer-Backed View to Overlay Text on OpenGL Content

When you draw standard controls and Cocoa text to a layer-backed view, the system handles scaling the contents of that layer for you. You need to perform only a few steps to set up and use the layer. Compare the controls and text in standard and high resolutions, as shown in Figure 3-2. The text looks the same on both without any additional work on your part.

Figure 3-2 A text overlay scales automatically for standard resolution (left) and high resolution (right)

To set up a layer-backed view for OpenGL content:

1. Set the wantsLayer property of your NSOpenGLView subclass to YES. Enabling the wantsLayer property of an NSOpenGLView object activates layer-backed rendering of the OpenGL view. Drawing a layer-backed OpenGL view proceeds mostly normally through the view's drawRect: method. The layer-backed rendering mode uses its own NSOpenGLContext object, which is distinct from the NSOpenGLContext that the view uses for drawing in non-layer-backed mode. AppKit automatically creates this context and assigns it to the view by invoking the setOpenGLContext: method. The view's openGLContext accessor will return the layer-backed OpenGL context (rather than the non-layer-backed context) while the view is operating in layer-backed mode.
Optimizing OpenGL for High Resolution Use a Layer-Backed View to Overlay Text on OpenGL Content 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 482. Create the layer content either as a XIB file or programmatically. The controls shown in Figure 3-2 were created in a XIB file by subclassing NSBox and using static text with a variety of standard controls. Using this approach allows the NSBox subclass to ignore mouse events while still allowing the user to interact with the OpenGL content. 3. Add the layer to the OpenGL view by calling the addSublayer: method. Use an Application Window for Fullscreen Operation For the best user experience, if you want your app to run full screen, create a window that covers the entire screen. This approach offers two advantages: ● The system provides optimized context performance. ● Users will be able to see critical system dialogs above your content. You should avoid changing the display mode of the system. Convert the Coordinate Space When Hit Testing Always convert window event coordinates when performing hit testing in OpenGL. The locationInWindow method of the NSEvent class returns the receiver’s location in the base coordinate system of the window. You then need to call the convertPoint:fromView: method to get the local coordinates for the OpenGL view. NSPoint aPoint = [theEvent locationInWindow]; NSPoint localPoint = [myOpenGLView convertPoint:aPoint fromView:nil]; Optimizing OpenGL for High Resolution Use an Application Window for Fullscreen Operation 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 49In OS X, you have the option to draw to the entire screen. This is a common scenario for games and other immersive applications, and OS X applies additional optimizations to improve the performance of full-screen contexts. Figure 4-1 Drawing OpenGL content to the full screen OS X v10.6 and later automatically optimize the performance ofscreen-sized windows, allowing your application to take complete advantage of the window server environment on OS X. For example, critical operating system dialogs may be displayed over your content when necessary. For information about high-resolution and full-screen drawing, see “Use an Application Window for Fullscreen Operation” (page 49). Creating a Full-Screen Application Creating a full-screen context is very simple. Your application should follow these steps: 1. Create a screen-sized window on the display you want to take over: NSRect mainDisplayRect = [[NSScreen mainScreen] frame]; 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 50 Drawing to the Full ScreenNSWindow *fullScreenWindow = [[NSWindow alloc] initWithContentRect: mainDisplayRect styleMask:NSBorderlessWindowMask backing:NSBackingStoreBuffered defer:YES]; 2. Set the window level to be above the menu bar.: [fullScreenWindow setLevel:NSMainMenuWindowLevel+1]; 3. Perform any other window configuration you desire: [fullScreenWindow setOpaque:YES]; [fullScreenWindow setHidesOnDeactivate:YES]; 4. Create a view with a double-buffered OpenGL context and attach it to the window: NSOpenGLPixelFormatAttribute attrs[] = { NSOpenGLPFADoubleBuffer, 0 }; NSOpenGLPixelFormat* pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs]; NSRect viewRect = NSMakeRect(0.0, 0.0, mainDisplayRect.size.width, mainDisplayRect.size.height); MyOpenGLView *fullScreenView = [[MyOpenGLView alloc] initWithFrame:viewRect pixelFormat: pixelFormat]; [fullScreenWindow setContentView: fullScreenView]; 5. 
5. Show the window:

[fullScreenWindow makeKeyAndOrderFront:self];

That's all you need to do. Your content is in a window that is above most other content, but because it is in a window, OS X can still show critical UI elements above your content when necessary (such as error dialogs).

When there is no content above your full-screen window, OS X automatically attempts to optimize this context's performance. For example, when your application calls flushBuffer on the NSOpenGLContext object, the system may swap the buffers rather than copying the contents of the back buffer to the front buffer. These performance optimizations are not applied when your application adds the NSOpenGLPFABackingStore attribute to the context. Because the system may choose to swap the buffers rather than copy them, your application must completely redraw the scene after every call to flushBuffer. For more information on NSOpenGLPFABackingStore, see “Ensuring That Back Buffer Contents Remain the Same” (page 66).

Avoid changing the display resolution from that chosen by the user. If your application needs to render data at a lower resolution for performance reasons, you can explicitly create a back buffer at the desired resolution and allow OpenGL to scale those results to the display. See “Controlling the Back Buffer Size” (page 78).

Drawing Offscreen

OpenGL applications may want to use OpenGL to render images without actually displaying them to the user. For example, an image processing application might render the image, then copy that image back to the application and save it to disk. Another useful strategy is to create intermediate images that are used later to render additional content. For example, your application might want to render an image and use it as a texture in a future rendering pass. For best performance, offscreen targets should be managed by OpenGL. Having OpenGL manage offscreen targets allows you to avoid copying pixel data back to your application, except when this is absolutely necessary.

OS X offers two useful options for creating offscreen rendering targets:

● Framebuffer objects. The OpenGL framebuffer extension allows your application to create fully supported offscreen OpenGL framebuffers. Framebuffer objects are fully supported as a cross-platform extension, so they are the preferred way to create offscreen rendering targets. See “Rendering to a Framebuffer Object” (page 53).
● Pixel buffer drawable objects. Pixel buffer drawable objects are an Apple-specific technology for creating an offscreen target. Each of the Apple-specific OpenGL APIs provides routines to create an offscreen hardware-accelerated pixel buffer. Pixel buffers are recommended for use only when framebuffer objects are not available. See “Rendering to a Pixel Buffer” (page 60).

Rendering to a Framebuffer Object

The OpenGL framebuffer extension (GL_EXT_framebuffer_object) allows applications to create offscreen rendering targets from within OpenGL. OpenGL manages the memory for these framebuffers.

Note: Extensions are available on a per-renderer basis. Before you use framebuffer objects, you must check each renderer to make sure that it supports the extension. See “Detecting Functionality” (page 83) for more information.
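As a hedged sketch of such a check for the current renderer only (a full implementation would test every renderer the context can use, as described in “Detecting Functionality”):

#include <OpenGL/gl.h>
#include <OpenGL/glu.h>

// Returns GL_TRUE if the current renderer supports framebuffer objects.
// gluCheckExtension performs a correct whole-token search of the list.
static GLboolean supportsFramebufferObjects(void)
{
    const GLubyte *extensions = glGetString(GL_EXTENSIONS);
    return gluCheckExtension((const GLubyte *)"GL_EXT_framebuffer_object",
                             extensions);
}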
A framebuffer object (FBO) is similar to a drawable object, except a drawable object is a window-system-specific object, whereas a framebuffer object is a window-agnostic object that's defined in the OpenGL standard. After drawing to a framebuffer object, it is straightforward to read the pixel data to the application, or to use it as source data for other OpenGL commands.

Framebuffer objects offer a number of benefits:

● They are window-system independent, which makes porting code easier.
● They are easy to set up and save memory. There is no need to set up attributes and obtain a pixel format object.
● They are associated with a single OpenGL context, whereas each pixel buffer must be bound to a context.
● You can switch between them faster, since there is no context switch as with pixel buffers. Because all commands are rendered by a single context, no additional serialization is required.
● They can share depth buffers; pixel buffers cannot.
● You can use them for 2D pixel images and texture images.

Completeness is a key concept to understanding framebuffer objects. Completeness is a state that indicates whether a framebuffer object meets all the requirements for drawing. You test for this state after performing all the necessary setup work. If a framebuffer object is not complete, it cannot be used as the destination for rendering operations and as a source for read operations.

Completeness is dependent on many factors that are not possible to condense into one or two statements, but these factors are thoroughly defined in the OpenGL specification for the framebuffer object extension. The specification describes the requirements for internal formats of images attached to the framebuffer, how to determine if a format is color-, depth-, and stencil-renderable, as well as other requirements.

Prior to using framebuffer objects, read the OpenGL specification, which not only defines the framebuffer object API, but provides detailed definitions of all the terms necessary to understand their use and shows several code examples.

The remainder of this section provides an overview of how to use a framebuffer as either a texture or an image. The functions used to set up textures and images are slightly different. The API for images uses the renderbuffer terminology defined in the OpenGL specification. A renderbuffer image is simply a 2D pixel image. The API for textures uses texture terminology, as you might expect. For example, one of the calls for setting up a framebuffer object for a texture is glFramebufferTexture2DEXT, whereas the call for setting up a framebuffer object for an image is glFramebufferRenderbufferEXT. You'll see how to set up a simple framebuffer object for each type of drawing, starting first with textures.

Using a Framebuffer Object as a Texture

These are the basic steps needed to set up a framebuffer object for drawing a texture offscreen:

1. Make sure the framebuffer extension (GL_EXT_framebuffer_object) is supported on the system that your code runs on. See “Determining the OpenGL Capabilities Supported by the Renderer” (page 83).

2. Check the renderer limits.
For example, you might want to call the OpenGL function glGetIntegerv to check the maximum texture size (GL_MAX_TEXTURE_SIZE) or find out the maximum number of color buffers you can attach to the framebuffer object (GL_MAX_COLOR_ATTACHMENTS_EXT).

3. Generate a framebuffer object name by calling the following function:

void glGenFramebuffersEXT (GLsizei n, GLuint *ids);

n is the number of framebuffer object names that you want to create. On return, *ids points to the generated names.

4. Bind the framebuffer object name to a framebuffer target by calling the following function:

void glBindFramebufferEXT (GLenum target, GLuint framebuffer);

target should be the constant GL_FRAMEBUFFER_EXT. framebuffer is set to an unused framebuffer object name. On return, the framebuffer object is initialized to the state values described in the OpenGL specification for the framebuffer object extension. Each attachment point of the framebuffer is initialized to the attachment point state values described in the specification. The number of attachment points is equal to GL_MAX_COLOR_ATTACHMENTS_EXT plus 2 (for depth and stencil attachment points). Whenever a framebuffer object is bound, drawing commands are directed to it instead of being directed to the drawable associated with the rendering context.

5. Generate a texture name.

void glGenTextures (GLsizei n, GLuint *textures);

n is the number of texture object names that you want to create. On return, *textures points to the generated names.

6. Bind the texture name to a texture target.

void glBindTexture (GLenum target, GLuint texture);

target is the type of texture to bind. texture is the texture name you just created.

7. Set up the texture environment and parameters.

8. Define the texture by calling the appropriate OpenGL function to specify the target, level of detail, internal format, dimensions, border, pixel data format, and texture data storage.

9. Attach the texture to the framebuffer by calling the following function:

void glFramebufferTexture2DEXT (GLenum target, GLenum attachment, GLenum textarget, GLuint texture, GLint level);

target must be GL_FRAMEBUFFER_EXT. attachment must be one of the attachment points of the framebuffer: GL_STENCIL_ATTACHMENT_EXT, GL_DEPTH_ATTACHMENT_EXT, or GL_COLOR_ATTACHMENTn_EXT, where n is a number from 0 to GL_MAX_COLOR_ATTACHMENTS_EXT - 1. textarget is the texture target. texture is an existing texture object. level is the mipmap level of the texture image to attach to the framebuffer.

10. Check to make sure that the framebuffer is complete by calling the following function:

GLenum glCheckFramebufferStatusEXT (GLenum target);

target must be the constant GL_FRAMEBUFFER_EXT. This function returns a status constant. You must test to make sure that the constant is GL_FRAMEBUFFER_COMPLETE_EXT. If it isn't, see the OpenGL specification for the framebuffer object extension for a description of the other constants in the status enumeration.

11. Render content to the texture. You must make sure to bind a different texture to the framebuffer object or disable texturing before you render content. If you render to a framebuffer object texture attachment with that same texture currently bound and enabled, the result is undefined.
12. To draw the contents of the texture to a window, make the window the target of all rendering commands by calling the function glBindFramebufferEXT, passing GL_FRAMEBUFFER_EXT as the target and 0 as the framebuffer name. The window is always specified as 0.

13. Use the texture attachment as a normal texture by binding it, enabling texturing, and drawing.

14. Delete the texture.

15. Delete the framebuffer object by calling the following function:

void glDeleteFramebuffersEXT (GLsizei n, const GLuint *framebuffers);

n is the number of framebuffer objects to delete. *framebuffers points to an array that contains the framebuffer object names.

Listing 5-1 shows code that performs these tasks. This example creates and draws to a single framebuffer object.

Listing 5-1 Setting up a framebuffer for texturing

GLuint framebuffer, texture;
GLenum status;
glGenFramebuffersEXT(1, &framebuffer);
// Set up the FBO with one texture attachment
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEXWIDE, TEXHIGH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, texture, 0);
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // Handle error here
}
// Your code to draw content to the FBO
// ...
// Make the window the target
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
// Your code to use the contents of the FBO
// ...
// Tear down the FBO and texture attachment
glDeleteTextures(1, &texture);
glDeleteFramebuffersEXT(1, &framebuffer);

Using a Framebuffer Object as an Image

There is a lot of similarity between setting up a framebuffer object for drawing images and setting one up to draw textures. These are the basic steps needed to set up a framebuffer object for drawing a 2D pixel image (a renderbuffer image) offscreen:

1. Make sure the framebuffer extension (GL_EXT_framebuffer_object) is supported on the renderer that your code runs on.

2. Check the renderer limits. For example, you might want to call the OpenGL function glGetIntegerv to find out the maximum number of color buffers (GL_MAX_COLOR_ATTACHMENTS_EXT).

3. Generate a framebuffer object name by calling the function glGenFramebuffersEXT.

4. Bind the framebuffer object name to a framebuffer target by calling the function glBindFramebufferEXT.

5. Generate a renderbuffer object name by calling the following function:

void glGenRenderbuffersEXT (GLsizei n, GLuint *renderbuffers);

n is the number of renderbuffer object names to create. *renderbuffers points to storage for the generated names.

6. Bind the renderbuffer object name to a renderbuffer target by calling the following function:

void glBindRenderbufferEXT (GLenum target, GLuint renderbuffer);

target must be the constant GL_RENDERBUFFER_EXT. renderbuffer is the renderbuffer object name generated previously.
7. Create data storage and establish the pixel format and dimensions of the renderbuffer image by calling the following function:

void glRenderbufferStorageEXT (GLenum target, GLenum internalformat, GLsizei width, GLsizei height);

target must be the constant GL_RENDERBUFFER_EXT. internalformat is the pixel format of the image; the value must be GL_RGB, GL_RGBA, GL_DEPTH_COMPONENT, GL_STENCIL_INDEX, or one of the other formats listed in the OpenGL specification. width is the width of the image, in pixels. height is the height of the image, in pixels.

8. Attach the renderbuffer to a framebuffer target by calling the following function:

void glFramebufferRenderbufferEXT (GLenum target, GLenum attachment, GLenum renderbuffertarget, GLuint renderbuffer);

target must be the constant GL_FRAMEBUFFER_EXT. attachment should be one of the attachment points of the framebuffer: GL_STENCIL_ATTACHMENT_EXT, GL_DEPTH_ATTACHMENT_EXT, or GL_COLOR_ATTACHMENTn_EXT, where n is a number from 0 to GL_MAX_COLOR_ATTACHMENTS_EXT - 1. renderbuffertarget must be the constant GL_RENDERBUFFER_EXT. renderbuffer should be set to the name of the renderbuffer object that you want to attach to the framebuffer.

9. Check to make sure that the framebuffer is complete by calling the following function:

GLenum glCheckFramebufferStatusEXT (GLenum target);

target must be the constant GL_FRAMEBUFFER_EXT. This function returns a status constant. You must test to make sure that the constant is GL_FRAMEBUFFER_COMPLETE_EXT. If it isn't, see the OpenGL specification for the framebuffer object extension for a description of the other constants in the status enumeration.

10. Render content to the renderbuffer.

11. To access the contents of the renderbuffer object, bind the framebuffer object and then use OpenGL functions such as glReadPixels or glCopyTexImage2D.

12. Delete the framebuffer object with its renderbuffer attachment.

Listing 5-2 shows code that sets up and draws to a single renderbuffer object. Your application can set up more than one renderbuffer object if it requires them.

Listing 5-2 Setting up a renderbuffer for drawing images

GLuint framebuffer, renderbuffer;
GLenum status;
// Set the width and height appropriately for your image
GLuint imageWidth = 1024, imageHeight = 1024;
// Set up a FBO with one renderbuffer attachment
glGenFramebuffersEXT(1, &framebuffer);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);
glGenRenderbuffersEXT(1, &renderbuffer);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, renderbuffer);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, imageWidth, imageHeight);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, renderbuffer);
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    // Handle errors
}
// Your code to draw content to the renderbuffer
// ...
// Your code to use the contents
// ...
// Make the window the target
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
// Delete the framebuffer object with its renderbuffer attachment (step 12)
glDeleteRenderbuffersEXT(1, &renderbuffer);
glDeleteFramebuffersEXT(1, &framebuffer);

Rendering to a Pixel Buffer

The OpenGL extension string GL_APPLE_pixel_buffer provides hardware-accelerated offscreen rendering to a pixel buffer. A pixel buffer is typically used as a texture source. It can also be used for remote rendering.
Important: Pixel buffers are deprecated starting with OS X v10.7 and are not supported by the OpenGL 3.2 Core profile; use framebuffer objects instead.

You must create a rendering context for each pixel buffer. For example, if you want to use a pixel buffer as a texture source, you create one rendering context attached to the pixel buffer and a second context attached to a window or view.

The first step in using a pixel buffer is to create it. The Apple-specific OpenGL APIs each provide a routine for this purpose:

● The NSOpenGLPixelBuffer method initWithTextureTarget:textureInternalFormat:textureMaxMipMapLevel:pixelsWide:pixelsHigh:
● The CGL function CGLCreatePBuffer

Each of these routines requires that you provide a texture target, an internal format, a maximum mipmap level, and the width and height of the texture.

The texture target must be one of these OpenGL texture constants: GL_TEXTURE_2D for a 2D texture, GL_TEXTURE_RECTANGLE_ARB for a rectangular (not power-of-two) texture, or GL_TEXTURE_CUBE_MAP for a cube map texture.

The internal format specifies how to interpret the data for texturing operations. You can supply any of these options: GL_RGB (each pixel is a three-component group), GL_RGBA (each pixel is a four-component group), or GL_DEPTH_COMPONENT (each pixel is a single depth component).

The maximum mipmap level should be 0 for a pixel buffer that does not have a mipmap. The value that you supply should not exceed the actual maximum number of mipmap levels that can be represented with the given width and height.

Note that none of the routines that create a pixel buffer allocate the storage needed. The storage is allocated by the system at the time that you attach the pixel buffer to a rendering context.

Setting Up a Pixel Buffer for Offscreen Drawing

After you create a pixel buffer, the general procedure for using a pixel buffer for drawing is similar to the way you set up windows and views for drawing:

1. Specify renderer and buffer attributes.
2. Obtain a pixel format object.
3. Create a rendering context and make it current.
4. Attach a pixel buffer to the context using the appropriate Apple OpenGL attachment function:
● The setPixelBuffer:cubeMapFace:mipMapLevel:currentVirtualScreen: method of the NSOpenGLContext class instructs the receiver to render into a pixel buffer.
● The CGL function CGLSetPBuffer attaches a CGL rendering context to a pixel buffer.
5. Draw, as you normally would, using OpenGL.

(A short sketch combining these steps appears at the end of this chapter.)

Using a Pixel Buffer as a Texture Source

Pixel buffers let you perform direct texturing without incurring the cost of extra copies. After drawing to a pixel buffer, you can create a texture by following these steps:

1. Generate a texture name by calling the OpenGL function glGenTextures.

2. Bind the named texture to a target by calling the OpenGL function glBindTexture.

3. Set the texture environment and parameters by calling the OpenGL functions glTexEnvf and glTexParameteri.

4. Set up the pixel buffer as the source for the texture by calling one of the following Apple OpenGL functions:
● The setTextureImageToPixelBuffer:colorBuffer: method of the NSOpenGLContext class attaches the image data in the pixel buffer to the texture object currently bound by the receiver.
● The CGL function CGLTexImagePBuffer binds the contents of a CGL pixel buffer as the data source for a texture object.
The context that you attach to the pixel buffer is the target rendering context: the context that uses the pixel buffer as the source of the texture data. Each of these routines requires a source parameter, which is an OpenGL constant that specifies the source buffer to texture from. The source parameter must be a valid OpenGL buffer, such as GL_FRONT, GL_BACK, or GL_AUX0, and should be compatible with the buffer attributes used to create the OpenGL context associated with the pixel buffer. This means that the pixel buffer must possess the buffer in question for texturing to succeed. For example, if the pixel buffer was created as single buffered, then texturing from the GL_BACK buffer will fail.

If you modify the content of any pixel buffer that contains mipmap levels, you must call the appropriate Apple OpenGL function again (setTextureImageToPixelBuffer:colorBuffer: or CGLTexImagePBuffer) before drawing with the pixel buffer, to ensure that the content is synchronized with OpenGL. To synchronize the content of pixel buffers without mipmaps, simply rebind to the texture object using glBindTexture.

5. Draw primitives using the appropriate texture coordinates. (See the OpenGL Programming Guide, the "Red book," for details.)

6. Call glFlush to cause all drawing commands to be executed.

7. When you no longer need the texture object, call the OpenGL function glDeleteTextures.

8. Set the current context to NULL using one of the Apple OpenGL routines:
● The clearCurrentContext class method of the NSOpenGLContext class
● The CGL function CGLSetCurrentContext, passing NULL

9. Destroy the pixel buffer by calling CGLDestroyPBuffer.

10. Destroy the context by calling CGLDestroyContext.

11. Destroy the pixel format by calling CGLDestroyPixelFormat.

You might find these guidelines useful when using pixel buffers for texturing:

● You cannot make OpenGL texturing calls that modify pixel buffer content (such as glTexSubImage2D or glCopyTexImage2D) with the pixel buffer as the destination. You can use texturing commands to read data from a pixel buffer, such as glCopyTexImage2D, with the pixel buffer texture as the source. You can also use OpenGL functions such as glReadPixels to read the contents of a pixel buffer directly from the pixel buffer context.
● Texturing can fail to produce the intended results without reporting an error. You must make sure that you enable the proper texture target, set a compatible filter mode, and adhere to other requirements described in the OpenGL specification.
● You are not required to set up context sharing when you texture from a pixel buffer. You can have different pixel format objects and rendering contexts for both the pixel buffer and the target drawable object, without sharing resources, and still texture using a pixel buffer in the target context.

Rendering to a Pixel Buffer on a Remote System

Follow these steps to render to a pixel buffer on a remote system. The remote system does not need to have a display attached to it.

1. When you set the renderer and buffer attributes, include the remote pixel buffer attribute kCGLPFARemotePBuffer.
2. Log in to the remote machine using the ssh command to ensure security.
3. Run the application on the target system.
4. Retrieve the content.
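To tie together the steps in “Setting Up a Pixel Buffer for Offscreen Drawing,” here is a minimal sketch of creating a pixel buffer and attaching it to a rendering context with CGL. It is illustrative only: the attribute list, the 512 x 512 dimensions, and the omission of error checking are assumptions for this example, not requirements of the API.

#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>

CGLPixelFormatAttribute attribs[] = { kCGLPFAAccelerated, (CGLPixelFormatAttribute)0 };
CGLPixelFormatObj pixelFormat = NULL;
GLint numPixelFormats = 0;
CGLContextObj context = NULL;
CGLPBufferObj pbuffer = NULL;
GLint virtualScreen = 0;

// Steps 1-3: attributes, pixel format object, and rendering context
CGLChoosePixelFormat(attribs, &pixelFormat, &numPixelFormats);
CGLCreateContext(pixelFormat, NULL, &context);
// Create a 512 x 512 2D-texture pixel buffer with no mipmaps; the storage
// is not allocated until the buffer is attached to a context
CGLCreatePBuffer(512, 512, GL_TEXTURE_2D, GL_RGBA, 0, &pbuffer);
// Step 4: attach the pixel buffer to the context
CGLGetVirtualScreen(context, &virtualScreen);
CGLSetPBuffer(context, pbuffer, 0, 0, virtualScreen);
// Step 5: make the context current and draw as usual
CGLSetCurrentContext(context);

When the drawing is done, tear down in the order given in the texturing steps above: delete the texture, then call CGLDestroyPBuffer, CGLDestroyContext, and CGLDestroyPixelFormat.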
Choosing Renderer and Buffer Attributes

Renderer and buffer attributes determine the renderers that the system chooses for your application. Each of the Apple-specific OpenGL APIs provides constants that specify a variety of renderer and buffer attributes. You supply a list of attribute constants to one of the Apple OpenGL functions for choosing a pixel format object. The pixel format object maintains a list of renderers that meet the requirements defined by those attributes.

In a real-world application, selecting attributes is an art, because you don't know the exact combination of hardware and software that your application will run on. An attribute list that is too restrictive may miss out on future capabilities or may fail to return renderers on some systems. For example, if you specify a buffer of a specific depth, your application won't be able to take advantage of a larger buffer when more memory is available in the future. In this case, you might specify a required minimum and direct OpenGL to use the maximum available.

Although you might specify attributes that make your OpenGL content look and run its best, you also need to consider whether your application should run on a less-capable system with less speed or detail. If tradeoffs are acceptable, you need to set the attributes accordingly.

OpenGL Profiles (OS X v10.7)

When your application is running on OS X v10.7, it should always include the kCGLPFAOpenGLProfile attribute, followed by a constant for the profile whose functionality your application requires. A profile affects different parts of OpenGL in OS X:

● A profile requires that a specific version of the OpenGL API be provided by the renderer. The renderer may implement a different version of the OpenGL specification only if that version implements the same functions and constants required by the profile; typically, this means a renderer that supports a later version of the OpenGL specification that did not remove or alter behavior specified in the version of the OpenGL specification your application requested.
● The profile alters the list of OpenGL extensions returned by the renderer. For example, extensions whose functionality is provided by the version of the OpenGL specification you requested are not also returned in the list of extensions.
● On OS X, the profile affects what other renderer and buffer attributes may be included in the attributes list.

Follow these guidelines to choose an OpenGL profile:

● If you are developing a new OS X v10.7 application, implement your OpenGL functionality using the OpenGL 3.2 Core profile; include the kCGLOGLPVersion_3_2_Core constant. The OpenGL 3.2 Core profile is defined by Khronos and explicitly removes deprecated features described in earlier versions of the OpenGL specification; further, the Core profile prohibits these functions from being added back into OpenGL using extensions. OpenGL 3.2 Core represents a complete break from the fixed-function pipeline of OpenGL 1.x in favor of a clean, lean, shader-based pipeline. When you use the OpenGL 3.2 Core profile on OS X, legacy extensions are removed wherever their functionality is already provided by OpenGL 3.2. Further, pixel and buffer format attributes that are marked as deprecated may not be used in conjunction with the OpenGL 3.2 Core profile.
● If you are updating an existing OS X application, include the kCGLOGLPVersion_Legacy constant.
The legacy profile provides the same functionality found in earlier versions of OS X, with no changes. It continues to support older extensions as well as deprecated pixel and buffer format attributes. No new functionality will be added to the legacy profile in future versions of OS X.

● If you want to use OpenGL 3.2 in your application, but also want to support earlier versions of OS X or Macs that lack hardware support for OpenGL 3.2, you must implement multiple OpenGL rendering paths in your application. On OS X v10.7, your application should first test to see whether OpenGL 3.2 is supported. If OpenGL 3.2 is supported, create a context and provide it to your OpenGL 3.2 rendering path; otherwise, search for a pixel format using the legacy profile instead.

For more information on migrating an application to OpenGL 3.2, see “Updating an Application to Support the OpenGL 3.2 Core Specification” (page 168).
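As a concrete illustration of choosing a profile, the following sketch (CGL; error handling omitted, and the fallback strategy is just one reasonable approach) first requests an OpenGL 3.2 Core profile pixel format and falls back to the legacy profile if no renderer supports it:

#include <OpenGL/OpenGL.h>

CGLPixelFormatAttribute attribs[] = {
    kCGLPFAOpenGLProfile, (CGLPixelFormatAttribute)kCGLOGLPVersion_3_2_Core,
    kCGLPFADoubleBuffer,
    (CGLPixelFormatAttribute)0
};
CGLPixelFormatObj pixelFormat = NULL;
GLint numPixelFormats = 0;

CGLChoosePixelFormat(attribs, &pixelFormat, &numPixelFormats);
if (pixelFormat == NULL) {
    // No renderer supports OpenGL 3.2 Core; fall back to the legacy profile
    // and route drawing through the pre-3.2 rendering path.
    attribs[1] = (CGLPixelFormatAttribute)kCGLOGLPVersion_Legacy;
    CGLChoosePixelFormat(attribs, &pixelFormat, &numPixelFormats);
}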
Buffer Size Attribute Selection Tips

Follow these guidelines to choose buffer attributes that specify buffer size:

● To choose color, depth, and accumulation buffers that are greater than or equal to a size you specify, use the minimum policy attribute (NSOpenGLPFAMinimumPolicy or kCGLPFAMinimumPolicy).
● To choose color, depth, and accumulation buffers that are closest to a size you specify, use the closest policy attribute (NSOpenGLPFAClosestPolicy or kCGLPFAClosestPolicy).
● To choose the largest color, depth, and accumulation buffers available, use the maximum policy attribute (NSOpenGLPFAMaximumPolicy or kCGLPFAMaximumPolicy). As long as you pass a value that is greater than 0, this attribute specifies the use of color, depth, and accumulation buffers that are the largest size possible.

Ensuring That Back Buffer Contents Remain the Same

When your application uses a double-buffered context, it displays the rendered image by calling a function to flush the image to the screen: the NSOpenGLContext class's flushBuffer method or the CGL function CGLFlushDrawable. When the image is displayed, the contents of the back buffer are not preserved. The next time your application wants to update the back buffer, it must completely redraw the scene.

Your application can add a backing store attribute (NSOpenGLPFABackingStore or kCGLPFABackingStore) to preserve the contents of the buffer after the back buffer is flushed. Adding this attribute disables some optimizations that the system can perform, which may impact the performance of your application.

Ensuring a Valid Pixel Format Object

The pixel format routines (the initWithAttributes: method of the NSOpenGLPixelFormat class and the CGLChoosePixelFormat function) return a pixel format object to your application that you use to create a rendering context. The buffer and renderer attributes that you supply to the pixel format routine determine the characteristics of the OpenGL drawing sent to the rendering context. If the system can't find at least one pixel format that satisfies the constraints specified by the attribute array, it returns NULL for the pixel format object. In this case, your application should have an alternative that ensures it can obtain a valid object.

One alternative is to set up your attribute array with the least restrictive attribute first and the most restrictive attribute last. Then it is fairly easy to adjust the attribute list and make another request for a pixel format object.

The code in Listing 6-1 illustrates this technique using the CGL API. Notice that the initial attribute list is set up with the supersample attribute last in the list. If the function CGLChoosePixelFormat returns NULL, the code clears the supersample attribute (terminating the list early) and tries again.

Listing 6-1 Using the CGL API to create a pixel format object

int last_attribute = 6;
CGLPixelFormatAttribute attribs[] =
{
    kCGLPFAAccelerated,
    kCGLPFAColorSize, 24,
    kCGLPFADepthSize, 16,
    kCGLPFADoubleBuffer,
    kCGLPFASupersample,
    0
};
CGLPixelFormatObj pixelFormatObj;
GLint numPixelFormats;
CGLChoosePixelFormat(attribs, &pixelFormatObj, &numPixelFormats);
if (pixelFormatObj == NULL) {
    // Drop the supersample requirement by ending the list early
    attribs[last_attribute] = (CGLPixelFormatAttribute)0;
    CGLChoosePixelFormat(attribs, &pixelFormatObj, &numPixelFormats);
}
if (pixelFormatObj == NULL) {
    // Your code to notify the user and take action.
}

Ensuring a Specific Type of Renderer

There are times when you want to ensure that you obtain a pixel format that supports a specific renderer type, such as a hardware-accelerated renderer. Table 6-1 lists attributes that support specific types of renderers. The table reflects the following tips for setting up pixel formats:

● To select only hardware-accelerated renderers, use both the accelerated and no-recovery attributes.
● To use only the floating-point software renderer, use the appropriate generic floating-point constant.
● To render to system memory, use the offscreen pixel attribute. Note that this rendering option does not use hardware acceleration.
● To render offscreen with hardware acceleration, specify a pixel buffer attribute. (See “Rendering to a Pixel Buffer” (page 60).)

Table 6-1 Renderer types and pixel format attributes

Renderer type: Hardware-accelerated onscreen
    Cocoa: NSOpenGLPFAAccelerated, NSOpenGLPFANoRecovery
    CGL: kCGLPFAAccelerated, kCGLPFANoRecovery

Renderer type: Software (floating-point)
    Cocoa: NSOpenGLPFARendererID with kCGLRendererGenericFloatID
    CGL: kCGLPFARendererID with kCGLRendererGenericFloatID

Renderer type: System memory (not accelerated)
    Cocoa: NSOpenGLPFAOffScreen
    CGL: kCGLPFAOffScreen

Renderer type: Hardware-accelerated offscreen
    Cocoa: NSOpenGLPFAPixelBuffer
    CGL: kCGLPFAPBuffer

Ensuring a Single Renderer for a Display

In some cases you may want to use a specific hardware renderer and nothing else. Since the OpenGL framework normally provides a software renderer as a fallback in addition to whatever hardware renderer it chooses, you need to prevent OpenGL from choosing the software renderer as an option. To do this, specify the no-recovery attribute for a windowed drawable object.

Limiting a context to use a specific display, and thus a single renderer, has its risks. If your application runs on a system that uses more than one display, dragging a windowed drawable object from one display to the other is likely to yield a less than satisfactory result: either rendering fails, or OpenGL uses the specified renderer and then copies the result to the second display. The same unsatisfactory result happens when attaching a full-screen context to another display. If you choose to use the hardware renderer associated with a specific display, you need to add code that detects and handles display changes.
The code examples that follow show how to use each of the Apple-specific OpenGL APIs to set up a context that uses a single renderer. Listing 6-2 shows how to set up an NSOpenGLPixelFormat object that supports a single renderer. The attribute NSOpenGLPFANoRecovery specifies that OpenGL must not provide the fallback option of the software renderer.

Listing 6-2 Setting an NSOpenGLContext object to use a specific display

#import <Cocoa/Cocoa.h>

+ (NSOpenGLPixelFormat*)defaultPixelFormat
{
    NSOpenGLPixelFormatAttribute attributes[] = {
        NSOpenGLPFAScreenMask, 0,
        NSOpenGLPFANoRecovery,
        NSOpenGLPFADoubleBuffer,
        (NSOpenGLPixelFormatAttribute)nil
    };
    CGDirectDisplayID display = CGMainDisplayID();
    // Adds the display mask attribute for the selected display
    attributes[1] = (NSOpenGLPixelFormatAttribute)
                        CGDisplayIDToOpenGLDisplayMask(display);
    return [[(NSOpenGLPixelFormat *)[NSOpenGLPixelFormat alloc]
                initWithAttributes:attributes] autorelease];
}

Listing 6-3 shows how to use CGL to set up a context that uses a single renderer. The attribute kCGLPFANoRecovery ensures that OpenGL does not provide the fallback option of the software renderer.

Listing 6-3 Setting a CGL context to use a specific display

#include <OpenGL/OpenGL.h>

CGLPixelFormatAttribute attribs[] = {
    kCGLPFADisplayMask, 0,
    kCGLPFANoRecovery,
    kCGLPFADoubleBuffer,
    (CGLPixelFormatAttribute)0
};
CGLPixelFormatObj pixelFormat = NULL;
GLint numPixelFormats = 0;
CGLContextObj cglContext = NULL;
CGDirectDisplayID display = CGMainDisplayID();
// Adds the display mask attribute for the selected display
attribs[1] = CGDisplayIDToOpenGLDisplayMask(display);
CGLChoosePixelFormat(attribs, &pixelFormat, &numPixelFormats);

Allowing Offline Renderers

Adding the attribute NSOpenGLPFAAllowOfflineRenderers allows OpenGL to include offline renderers in the list of virtual screens returned in the pixel format object. Apple recommends you include this attribute, because it allows your application to work better in environments where renderers come and go, such as when a new display is plugged into a Mac.

If your application includes NSOpenGLPFAAllowOfflineRenderers in the list of attributes, your application must also watch for display changes and update its rendering context. See “Update the Rendering Context When the Renderer or Geometry Changes” (page 72).

OpenCL

If your application uses OpenCL to perform other computations, you may want to find an OpenGL renderer that also supports OpenCL. To do this, add the attribute NSOpenGLPFAAcceleratedCompute to the pixel format attribute list. Adding this attribute restricts the list of renderers to those that also support OpenCL. More information on OpenCL can be found in the OpenCL Programming Guide for Mac.

Deprecated Attributes

There are several renderer and buffer attributes that are no longer recommended, either because they are too narrowly focused or because they are no longer useful. Your application should move away from using any of these attributes:

● The robust attribute (NSOpenGLPFARobust or kCGLPFARobust) specifies only those renderers that do not have any failure modes associated with a lack of video card resources.
● The multiple-screen attribute (NSOpenGLPFAMultiScreen or kCGLPFAMultiScreen) specifies only those renderers that can drive more than one screen at a time.
● The multiprocessing-safe attribute (kCGLPFAMPSafe) specifies only those renderers that are thread safe. This attribute is deprecated in OS X because all renderers can accept commands for threads running on a second processor. However, this does not mean that all renderers are thread safe or reentrant. See “Concurrency and OpenGL” (page 148).
● The compliant attribute (NSOpenGLPFACompliant or kCGLPFACompliant) specifies only OpenGL-compliant renderers. All OS X renderers are OpenGL-compliant, so this attribute is no longer useful.
● The fullscreen attribute (kCGLPFAFullScreen) requested special full-screen contexts, and the window attribute (kCGLPFAWindow) required the context to support windowed rendering. OS X no longer requires a special full-screen context to be created, as it automatically provides the same performance benefits with a properly formatted window.
● The offscreen buffer attribute (kCGLPFAOffScreen) selects renderers capable of rendering to offscreen memory. Instead, use a framebuffer object as the rendering target and read the final results back to application memory.
● The pixel buffer attributes (kCGLPFAPBuffer and kCGLPFARemotePBuffer) are no longer recommended; use framebuffer objects instead.
● The auxiliary buffers attribute (kCGLPFAAuxBuffers) specifies the number of auxiliary buffers your application requires. Auxiliary buffers are not supported by the OpenGL 3.2 Core profile. Because auxiliary buffers are not supported, the kCGLPFAAuxDepthStencil attribute that modifies them is also deprecated.
● The accumulation buffer size attribute (kCGLPFAAccumSize) specifies the desired size for the accumulation buffer. Accumulation buffers are not supported by the OpenGL 3.2 Core profile.

Important: Your application may not use any of the deprecated attributes in conjunction with a profile other than the legacy profile; if you do, pixel format creation fails.

Working with Rendering Contexts

A rendering context is a container for state information. When you designate a rendering context as the current rendering context, subsequent OpenGL commands modify that context's state, objects attached to that context, or the drawable object associated with that context. The actual drawing surfaces are never owned by the rendering context; they are created, as needed, when the rendering context is actually attached to a drawable object. You can attach multiple rendering contexts to the same drawing surfaces; each context maintains its own drawing state.

“Drawing to a Window or View” (page 35), “Drawing to the Full Screen” (page 50), and “Drawing Offscreen” (page 53) show how to create a rendering context and attach it to a drawable object. This chapter describes advanced ways to interact with rendering contexts.

Update the Rendering Context When the Renderer or Geometry Changes

A renderer change can occur when the user drags a window from one display to another or when a display is attached or removed. Geometry changes occur when the display mode changes or when a window is resized or moved. If your application uses an NSOpenGLView object to maintain the context, it is updated automatically. An application that creates a custom view to hold the rendering context must track the appropriate system events and update the context when the geometry or display changes.
Updating a rendering context notifies it of geometry changes; it doesn't flush content. Calling an update function updates the attached drawable objects and ensures that the renderer is properly updated for any virtual screen changes. If you don't update the rendering context, you may see rendering artifacts.

The routine that you call for updating determines how events related to renderer and geometry changes are handled. For applications that use or subclass NSOpenGLView, Cocoa calls the update method automatically. Applications that create an NSOpenGLContext object manually must call the update method of NSOpenGLContext directly. For a full-screen Cocoa application, calling the setFullScreen method of NSOpenGLContext ensures that depth, size, or display changes take effect.

Your application must update the rendering context after the system event but before drawing to the context. If the drawable object is resized, you may want to issue a glViewport command to ensure that the content scales properly.

Note: Some system-level events (such as display mode changes) that require a context update could reallocate the buffers of the context; thus you need to redraw the entire scene after all context updates.

It's important that you don't update rendering contexts more than necessary. Your application should respond to system-level events and notifications rather than updating every frame. For example, you'll want to respond to window move and resize operations and to display configuration changes such as a color depth change.

Tracking Renderer Changes

It's fairly straightforward to track geometry changes, but how are renderer changes tracked? This is where the concept of a virtual screen becomes important (see “Virtual Screens” (page 26)). A change in the virtual screen indicates a renderer change, a change in renderer capability, or both. When your application detects a window resize event, window move event, or display change, it should check for a virtual screen change and respond to the change to ensure that the current application state reflects any changes in renderer capabilities.

Each of the Apple-specific OpenGL APIs has a function that returns the current virtual screen number:

● The currentVirtualScreen method of the NSOpenGLContext class
● The CGLGetVirtualScreen function

The virtual screen number represents an index in the list of virtual screens that were set up specifically for the pixel format object used for the rendering context. The number is unique to the list but is meaningless otherwise. When the renderer changes, the limits and extensions available to OpenGL may also change. Your application should retest the capabilities of the renderer and use these to choose its rendering algorithms appropriately. See “Determining the OpenGL Capabilities Supported by the Renderer” (page 83).
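As an illustration of this pattern, the following sketch (CGL; the function name and the cached variable are assumptions for this example, and error handling is omitted) updates the context in response to a system event and retests renderer capabilities only when the virtual screen actually changes:

#include <OpenGL/OpenGL.h>

static GLint lastVirtualScreen = -1;   // cached between events

// Call in response to window move/resize events or display changes.
void MyHandleContextUpdate(CGLContextObj ctx)
{
    GLint virtualScreen = 0;
    CGLUpdateContext(ctx);                     // notify the context of the change
    CGLGetVirtualScreen(ctx, &virtualScreen);
    if (virtualScreen != lastVirtualScreen) {  // the renderer may have changed
        lastVirtualScreen = virtualScreen;
        // Retest renderer capabilities and limits here, and choose
        // rendering algorithms appropriately.
    }
}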
Updating a Rendering Context for a Custom Cocoa View

If you subclass NSView instead of using the NSOpenGLView class, your application must update the rendering context. That's due to a slight difference between the events normally handled by the NSView class and those handled by the NSOpenGLView class. Cocoa does not call a reshape method for the NSView class when the size changes, because that class does not export a reshape method to override. Instead, you need to perform reshape operations directly in your drawRect: method, looking for changes in view bounds prior to drawing content. Using this approach provides results that are equivalent to using the reshape method of the NSOpenGLView class.

Listing 7-1 is a partial implementation of a custom view that shows how to handle context updates. The update method is called after move, resize, and display change events and when the surface needs updating. The class adds an observer for the notification NSViewGlobalFrameDidChangeNotification, which is posted whenever an NSView object that has attached surfaces (that is, NSOpenGLContext objects) resizes, moves, or changes coordinate offsets.

It's slightly more complicated to handle changes in the display configuration. For that, you need to register for the notification NSApplicationDidChangeScreenParametersNotification through the NSApplication class. This notification is posted whenever the configuration of any of the displays attached to the computer is changed (either programmatically or when the user changes the settings in the interface).

Listing 7-1 Handling context updates for a custom view

#import <Cocoa/Cocoa.h>
#import <OpenGL/OpenGL.h>

@class NSOpenGLContext, NSOpenGLPixelFormat;

@interface CustomOpenGLView : NSView
{
    @private
    NSOpenGLContext* _openGLContext;
    NSOpenGLPixelFormat* _pixelFormat;
}
- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format;
- (void)update;
@end

@implementation CustomOpenGLView

- (id)initWithFrame:(NSRect)frameRect pixelFormat:(NSOpenGLPixelFormat*)format
{
    self = [super initWithFrame:frameRect];
    if (self != nil) {
        _pixelFormat = [format retain];
        [[NSNotificationCenter defaultCenter] addObserver:self
                selector:@selector(_surfaceNeedsUpdate:)
                name:NSViewGlobalFrameDidChangeNotification
                object:self];
    }
    return self;
}

- (void)dealloc
{
    [[NSNotificationCenter defaultCenter] removeObserver:self
            name:NSViewGlobalFrameDidChangeNotification
            object:self];
    [self clearGLContext];  // implemented elsewhere in this partial class
    [super dealloc];
}

- (void)update
{
    if ([_openGLContext view] == self) {
        [_openGLContext update];
    }
}

- (void)_surfaceNeedsUpdate:(NSNotification*)notification
{
    [self update];
}

@end

Context Parameters Alter the Context's Behavior

A rendering context has a variety of parameters that you can set to suit the needs of your OpenGL drawing. Some of the most useful, and often overlooked, context parameters are discussed in this section: swap interval, surface opacity, surface drawing order, and back-buffer size control.

Each of the Apple-specific OpenGL APIs provides a routine for setting and getting rendering context parameters:

● The setValues:forParameter: method of the NSOpenGLContext class takes as arguments a list of values and a list of parameters.
● The CGLSetParameter function takes as parameters a rendering context, a constant that specifies an option, and a value for that option.

Some parameters need to be enabled for their values to take effect. The reference documentation for a parameter indicates whether a parameter needs to be enabled. See NSOpenGLContext Class Reference and CGL Reference.
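For example, here is a minimal sketch of setting a parameter with the Cocoa routine; the context variable is assumed to be an existing NSOpenGLContext, and the CGL equivalent appears in Listing 7-2 below.

GLint swapInterval = 1;  // synchronize buffer swaps with the display refresh
// context must be a valid NSOpenGLContext
[context setValues:&swapInterval forParameter:NSOpenGLCPSwapInterval];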
Swap Interval Allows an Application to Synchronize Updates to the Screen Refresh

If the swap interval is set to 0 (the default), buffers are swapped as soon as possible, without regard to the vertical refresh rate of the monitor. If the swap interval is set to any other value, the buffers are swapped only during the vertical retrace of the monitor. For more information, see “Synchronize with the Screen Refresh Rate” (page 96).

You can use the following constants to specify that you are setting the swap interval value:

● For Cocoa, use NSOpenGLCPSwapInterval.
● If you are using the CGL API, use kCGLCPSwapInterval, as shown in Listing 7-2.

Listing 7-2 Using CGL to set up synchronization

GLint sync = 1;
// ctx must be a valid context
CGLSetParameter (ctx, kCGLCPSwapInterval, &sync);

Surface Opacity Specifies How the OpenGL Surface Blends with Surfaces Behind It

OpenGL surfaces are typically rendered as opaque; thus the background color for pixels with alpha values of 0.0 is the surface background color. If you set the value of the surface opacity parameter to 0, then the contents of the surface are blended with the contents of surfaces behind the OpenGL surface. This operation is equivalent to OpenGL blending with a source contribution proportional to the source alpha and a background contribution proportional to 1 minus the source alpha. A value of 1 means the surface is opaque (the default); 0 means completely transparent.

You can use the following constants to specify that you are setting the surface opacity value:

● For Cocoa, use NSOpenGLCPSurfaceOpacity.
● If you are using the CGL API, use kCGLCPSurfaceOpacity, as shown in Listing 7-3.

Listing 7-3 Using CGL to set surface opacity

GLint opaque = 0;
// ctx must be a valid context
CGLSetParameter (ctx, kCGLCPSurfaceOpacity, &opaque);

Surface Drawing Order Specifies the Position of the OpenGL Surface Relative to the Window

A value of 1 means that the position is above the window; a value of -1 specifies a position that is below the window. When you have overlapping views, setting the order to -1 causes OpenGL to draw underneath, and 1 causes OpenGL to draw on top. This parameter is useful for drawing user interface controls on top of an OpenGL view.

You can use the following constants to specify that you are setting the surface drawing order value:

● For Cocoa, use NSOpenGLCPSurfaceOrder.
● If you are using the CGL API, use kCGLCPSurfaceOrder, as shown in Listing 7-4.

Listing 7-4 Using CGL to set surface drawing order

GLint order = -1; // below window
// ctx must be a valid context
CGLSetParameter (ctx, kCGLCPSurfaceOrder, &order);

Determining Whether Vertex and Fragment Processing Happens on the GPU

CGL provides two parameters for checking whether the system is using the GPU for processing: kCGLCPGPUVertexProcessing and kCGLCPGPUFragmentProcessing. To check vertex processing, pass the vertex constant to the CGLGetParameter function. To check fragment processing, pass the fragment constant to CGLGetParameter. Listing 7-5 demonstrates how to use these parameters.

Important: Although you can perform these queries at any time, keep in mind that such queries force an internal state validation, which can impact performance.
For best performance, do not use these queries inside your drawing loop. Instead, perform the queries once at initialization or context setup time to determine whether OpenGL is using the CPU or the GPU for processing, and then act appropriately in your drawing loop.

Listing 7-5 Using CGL to check whether the GPU is processing vertices and fragments

BOOL gpuProcessing;
GLint fragmentGPUProcessing, vertexGPUProcessing;
CGLGetParameter (CGLGetCurrentContext(), kCGLCPGPUFragmentProcessing,
                 &fragmentGPUProcessing);
CGLGetParameter (CGLGetCurrentContext(), kCGLCPGPUVertexProcessing,
                 &vertexGPUProcessing);
gpuProcessing = (fragmentGPUProcessing && vertexGPUProcessing) ? YES : NO;

Controlling the Back Buffer Size

Normally, the back buffer is the same size as the window or view that it's drawn into, and it changes size when the window or view changes size. For a window whose size is 720 x 480 pixels, the OpenGL back buffer is sized to match. If the window grows to 1024 x 768 pixels, for example, then the back buffer is resized as well. If you do not want this behavior, use the back buffer size control parameter.

Using this parameter fixes the size of the back buffer and lets the system scale the image automatically when it moves the data to a variable size buffer (see Figure 7-1). The size of the back buffer remains fixed at the size that you set up, regardless of whether the image is resized to display larger onscreen.

If you are using the CGL API, use kCGLCPSurfaceBackingSize to specify the surface backing size, as shown in Listing 7-6.

Listing 7-6 Using CGL to set up back buffer size control

GLint dim[2] = {720, 480};
// ctx must be a valid context
CGLSetParameter(ctx, kCGLCPSurfaceBackingSize, dim);
CGLEnable (ctx, kCGLCESurfaceBackingSize);

Figure 7-1 A fixed size back buffer and variable size front buffer

Sharing Rendering Context Resources

A rendering context does not own the drawing objects attached to it, which leaves open the option for sharing. Rendering contexts can share resources and can be attached to the same drawable object (see Figure 7-2 (page 80)) or to different drawable objects (see Figure 7-3 (page 80)). You set up context sharing, either with more than one drawable object or with another context, at the time you create a rendering context.

Contexts can share object resources and their associated object state by indicating a shared context at context creation time. Shared contexts share all texture objects, display lists, vertex programs, fragment programs, and buffer objects created before and after sharing is initiated. The state of the objects is also shared, but not other context state, such as current color, texture coordinate settings, matrix and lighting settings, rasterization state, and texture environment settings. You need to duplicate context state changes as required, but you need to set up individual objects only once.

Figure 7-2 Shared contexts attached to the same drawable object

When you create an OpenGL context, you can designate another context whose object resources you want to share. All sharing is peer to peer.
Shared resources are reference-counted and thus are maintained until explicitly released or until the last context-sharing resource is released.

Not every context can be shared with every other context. Both contexts must share the same OpenGL profile. You must also ensure that both contexts share the same set of renderers. You meet these requirements by ensuring each context uses the same virtual screen list, using either of the following techniques:

● Use the same pixel format object to create all the rendering contexts that you want to share.
● Create pixel format objects using attributes that narrow down the choice to a single display. This practice ensures that the virtual screen is identical for each pixel format object.

Figure 7-3 Shared contexts and more than one drawable object

Setting up shared rendering contexts is very straightforward. Each Apple-specific OpenGL API provides functions with an option to specify a context to share in its context creation routine:

● Use the share argument for the initWithFormat:shareContext: method of the NSOpenGLContext class. See Listing 7-7 (page 81).
● Use the share parameter for the function CGLCreateContext. See Listing 7-8 (page 82).

Listing 7-7 ensures the same virtual screen list by using the same pixel format object for each of the shared contexts.

Listing 7-7 Setting up an NSOpenGLContext object for sharing

#import <Cocoa/Cocoa.h>

+ (NSOpenGLPixelFormat*)defaultPixelFormat
{
    NSOpenGLPixelFormatAttribute attributes[] = {
        NSOpenGLPFADoubleBuffer,
        (NSOpenGLPixelFormatAttribute)nil
    };
    return [(NSOpenGLPixelFormat *)[NSOpenGLPixelFormat alloc]
                initWithAttributes:attributes];
}

- (NSOpenGLContext*)openGLContextWithShareContext:(NSOpenGLContext*)context
{
    if (_openGLContext == NULL) {
        _openGLContext = [[NSOpenGLContext alloc]
                initWithFormat:[[self class] defaultPixelFormat]
                shareContext:context];
        [_openGLContext makeCurrentContext];
        [self prepareOpenGL];
    }
    return _openGLContext;
}

- (void)prepareOpenGL
{
    // Your code here to initialize the OpenGL state
}

Listing 7-8 ensures the same virtual screen list by using the same pixel format object for each of the shared contexts.

Listing 7-8 Setting up a CGL context for sharing

#include <OpenGL/OpenGL.h>

CGLPixelFormatAttribute attribs[] = {kCGLPFADoubleBuffer, (CGLPixelFormatAttribute)0};
CGLPixelFormatObj pixelFormat = NULL;
GLint numPixelFormats = 0;
CGLContextObj cglContext1 = NULL;
CGLContextObj cglContext2 = NULL;
CGLChoosePixelFormat(attribs, &pixelFormat, &numPixelFormats);
CGLCreateContext(pixelFormat, NULL, &cglContext1);
// The second context shares the first context's object resources
CGLCreateContext(pixelFormat, cglContext1, &cglContext2);

Determining the OpenGL Capabilities Supported by the Renderer

One of the benefits of using OpenGL is that it is extensible. An extension is typically introduced by one or more vendors and then later is accepted by the OpenGL Working Group. Some extensions are promoted from a vendor-specific extension to one shared by more than one vendor, sometimes even being incorporated into the core OpenGL API. Extensions allow OpenGL to embrace innovation, but they require you to verify that the OpenGL functionality you want to use is available.
Because extensions can be introduced at the vendor level, more than one extension can provide the same basic functionality. There might also be an ARB-approved extension that has functionality similar to that of a vendor-specific extension. Your application should prefer core functionality or ARB-approved extensions over those specific to a particular vendor, when both are offered by the same renderer. This makes it easier to transparently support new renderers from other vendors.

As particular functionality becomes widely adopted, it can be moved into the core OpenGL API by the ARB. As a result, functionality that you want to use could be included as an extension, as part of the core API, or both. For example, the ability to combine texture environments is supported through the GL_ARB_texture_env_combine and the GL_EXT_texture_env_combine extensions. It's also part of the core OpenGL version 1.3 API. Although each has similar functionality, they use a different syntax. You may need to check in several places (the core OpenGL API and the extension strings) to determine whether a specific renderer supports functionality that you want to use.

Detecting Functionality

OpenGL has two types of commands: those that are part of the core API and those that are part of an extension to OpenGL. Your application first needs to check for the version of the core OpenGL API and then check for the available extensions. Keep in mind that OpenGL functionality is available on a per-renderer basis. For example, a software renderer might not support fog effects even though fog effects are available in an OpenGL extension implemented by a hardware vendor on the same system. For this reason, it's important that you check for functionality on a per-renderer basis.

Regardless of what functionality you are checking for, the approach is the same. You need to call the OpenGL function glGetString twice. The first time, pass the GL_VERSION constant; the function returns a string that specifies the version of OpenGL. The second time, pass the GL_EXTENSIONS constant; the function returns a pointer to an extension name string. The extension name string is a space-delimited list of the OpenGL extensions that are supported by the current renderer. This string can be rather long, so do not allocate a fixed-length string for the return value of the glGetString function. Use a pointer and evaluate the string in place.

Pass the extension name string to the function gluCheckExtension, along with the name of the extension you want to check for. The gluCheckExtension function returns a Boolean value that indicates whether or not the extension is available for the current renderer.

If an extension becomes part of the core OpenGL API, OpenGL continues to export the name strings of the promoted extensions. It also continues to support the previous versions of any extension that has been exported in earlier versions of OS X. Because extensions are not typically removed, the methodology you use today to check for a feature works in future versions of OS X.

Checking for functionality, although fairly straightforward, involves writing a large chunk of code. The best way to check for OpenGL functionality is to implement a capability-checking function that you call when your program starts up, and then any time the renderer changes. Listing 8-1 shows a code excerpt that checks for a few extensions.
A detailed explanation for each numbered line of code appears following the listing.

Listing 8-1 Checking for OpenGL functionality

GLint maxRectTextureSize;
GLint myMaxTextureUnits;
GLint myMaxTextureSize;
const GLubyte * strVersion;
const GLubyte * strExt;
float myGLVersion;
GLboolean isVAO, isTexLOD, isColorTable, isFence, isShade, isTextureRectangle;
strVersion = glGetString (GL_VERSION);                                   // 1
sscanf((char *)strVersion, "%f", &myGLVersion);
strExt = glGetString (GL_EXTENSIONS);                                    // 2
glGetIntegerv(GL_MAX_TEXTURE_UNITS, &myMaxTextureUnits);                 // 3
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &myMaxTextureSize);                   // 4
isVAO = gluCheckExtension ((const GLubyte*)"GL_APPLE_vertex_array_object",
                           strExt);                                      // 5
isFence = gluCheckExtension ((const GLubyte*)"GL_APPLE_fence", strExt);  // 6
isShade = gluCheckExtension ((const GLubyte*)"GL_ARB_shading_language_100",
                             strExt);                                    // 7
isColorTable = gluCheckExtension ((const GLubyte*)"GL_SGI_color_table", strExt) ||
               gluCheckExtension ((const GLubyte*)"GL_ARB_imaging", strExt); // 8
isTexLOD = gluCheckExtension ((const GLubyte*)"GL_SGIS_texture_lod", strExt) ||
           (myGLVersion >= 1.2);                                         // 9
isTextureRectangle = gluCheckExtension ((const GLubyte*)"GL_EXT_texture_rectangle",
                                        strExt);
if (isTextureRectangle)
    glGetIntegerv (GL_MAX_RECTANGLE_TEXTURE_SIZE_EXT, &maxRectTextureSize);
else
    maxRectTextureSize = 0;                                              // 10

Here is what the code does:

1. Gets a string that specifies the version of OpenGL.
2. Gets the extension name string.
3. Calls the OpenGL function glGetIntegerv to get the value of the attribute passed to it, which in this case is the maximum number of texture units.
4. Gets the maximum texture size.
5. Checks whether vertex array objects are supported.
6. Checks for the Apple fence extension.
7. Checks for support for version 1.0 of the OpenGL shading language.
8. Checks for RGBA-format color lookup table support. In this case, the code needs to check for the vendor-specific string and for the ARB string. If either is present, the functionality is supported.
9. Checks for an extension related to the texture level of detail parameter (LOD). In this case, the code needs to check for the vendor-specific string and for the OpenGL version. If the vendor string is present or the OpenGL version is greater than or equal to 1.2, the functionality is supported.
10. Gets the OpenGL limit for rectangle textures. For some extensions, such as the rectangle texture extension, it may not be enough to check whether the functionality is supported; you may also need to check the limits. You can use glGetIntegerv and related functions (glGetBooleanv, glGetDoublev, glGetFloatv) to obtain a variety of parameter values.

You can extend this example to make a comprehensive functionality-checking routine for your application. For more details, see the GLCheck.c file in the Cocoa OpenGL sample application.

The code in Listing 8-2 shows one way to query the current renderer. It uses the CGL API, which can be called from Cocoa applications. In reality, you need to iterate over all displays and all renderers for each display to get a true picture of the OpenGL functionality available on a particular system.
You also need to update your functionality snapshot each time the list of displays or the display configuration changes.

Listing 8-2 Setting up a valid rendering context to get renderer functionality information

#include <OpenGL/OpenGL.h>
#include <ApplicationServices/ApplicationServices.h>

CGDirectDisplayID display = CGMainDisplayID ();                              // 1
CGOpenGLDisplayMask myDisplayMask = CGDisplayIDToOpenGLDisplayMask (display); // 2

{ // Check capabilities of display represented by display mask
    CGLPixelFormatAttribute attribs[] = {kCGLPFADisplayMask,
                                         myDisplayMask, 0};                  // 3
    CGLPixelFormatObj pixelFormat = NULL;
    GLint numPixelFormats = 0;
    CGLContextObj myCGLContext = 0;
    CGLContextObj curr_ctx = CGLGetCurrentContext ();                        // 4
    CGLChoosePixelFormat (attribs, &pixelFormat, &numPixelFormats);          // 5
    if (pixelFormat) {
        CGLCreateContext (pixelFormat, NULL, &myCGLContext);                 // 6
        CGLDestroyPixelFormat (pixelFormat);                                 // 7
        CGLSetCurrentContext (myCGLContext);                                 // 8
        if (myCGLContext) {
            // Check for capabilities and functionality here
        }
    }
    CGLDestroyContext (myCGLContext);                                        // 9
    CGLSetCurrentContext (curr_ctx);                                         // 10
}

Here's what the code does:

1. Gets the display ID of the main display.
2. Maps a display ID to an OpenGL mask.
3. Fills a pixel format attributes array with the display mask attribute and the mask value.
4. Saves the current context so that it can be restored later.
5. Gets the pixel format object for the display. The numPixelFormats parameter specifies how many pixel formats are listed in the pixel format object.
6. Creates a context based on the first pixel format in the list supplied by the pixel format object. Only one renderer will be associated with this context. In your application, you would need to iterate through all pixel formats for this display.
7. Destroys the pixel format object when it is no longer needed.
8. Sets the current context to the newly created, single-renderer context. Now you are ready to check for the functionality supported by the current renderer. See Listing 8-1 (page 84) for an example of functionality-checking code.
9. Destroys the context because it is no longer needed.
10. Restores the previously saved context as the current context, thus ensuring no intrusion upon the user.

Guidelines for Code That Checks for Functionality

The guidelines in this section ensure that your functionality-checking code is thorough yet efficient:

● Don't rely on what's in a header file. A function declaration in a header file does not ensure that a feature is supported by the current renderer. Neither does linking against a stub library that exports a function.
● Make sure that a renderer is attached to a valid rendering context before you check the functionality of that renderer.
● Check the API version or the extension name string for the current renderer before you issue OpenGL commands.
● Check only once per renderer. After you've determined that the current renderer supports an OpenGL command, you don't need to check for that functionality again for that renderer.
● Make sure that you are aware of whether a feature is being used as part of the core OpenGL API or as an extension. When a feature is implemented both as part of the core OpenGL API and as an extension, it uses different constants and function names.
OpenGL Renderer Implementation-Dependent Values

The OpenGL specification defines implementation-dependent values that set the limits of what an OpenGL implementation is capable of. For example, the maximum size of a texture and the number of texture units are both common implementation-dependent values that an application is expected to check. Each of these values has a minimum that all conforming OpenGL implementations are expected to support. If your application's usage exceeds these minimums, it must check the limit first and fail gracefully if the implementation cannot provide the limit desired. Your application may need to load smaller textures, disable a rendering feature, or choose a different implementation.

Although the specification provides a comprehensive list of these limitations, a few stand out in most OpenGL applications. Table 8-1 lists values that applications should test if they require more than the minimum values in the specification.

Table 8-1 Common OpenGL renderer limitations
Maximum size of the texture: GL_MAX_TEXTURE_SIZE
Number of depth buffer planes: GL_DEPTH_BITS
Number of stencil buffer planes: GL_STENCIL_BITS

The limit on the size and complexity of your shaders is a key area you need to test. All graphics hardware supports limited memory to pass attributes into the vertex and fragment shaders. Your application must either keep its usage below the minimums defined in the specification, or it must check the shader limitations documented in Table 8-2 and choose shaders that are within those limits.

Table 8-2 OpenGL shader limitations
Maximum number of vertex attributes: GL_MAX_VERTEX_ATTRIBS
Maximum number of uniform vertex vectors: GL_MAX_VERTEX_UNIFORM_COMPONENTS
Maximum number of uniform fragment vectors: GL_MAX_FRAGMENT_UNIFORM_COMPONENTS
Maximum number of varying vectors: GL_MAX_VARYING_FLOATS
Maximum number of texture units usable in a vertex shader: GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS
Maximum number of texture units usable in a fragment shader: GL_MAX_TEXTURE_IMAGE_UNITS
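As a minimal sketch of checking these limits (the constants come from Tables 8-1 and 8-2; the threshold values and fallback policy are only examples):

GLint maxTextureSize = 0, maxVertexAttribs = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
glGetIntegerv(GL_MAX_VERTEX_ATTRIBS, &maxVertexAttribs);

// Fail gracefully if the renderer cannot satisfy the request.
if (maxTextureSize < 4096) {
    // Load a lower-resolution texture set instead.
}
if (maxVertexAttribs < 12) {
    // Choose a vertex layout (or shader) that uses fewer attributes.
}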
OpenGL Application Design Strategies

OpenGL performs many complex operations—transformations, lighting, clipping, texturing, environmental effects, and so on—on large data sets. The size of your data and the complexity of the calculations performed on it can impact performance, making your stellar 3D graphics shine less brightly than you'd like. Whether your application is a game using OpenGL to provide immersive real-time images to the user or an image-processing application more concerned with image quality, use the information in this chapter to help you design your application.

Visualizing OpenGL

The most common way to visualize OpenGL is as a graphics pipeline, as shown in Figure 9-1 (page 90). Your application sends vertex and image data, configuration and state changes, and rendering commands to OpenGL. Vertices are processed, assembled into primitives, and rasterized into fragments. Each fragment is calculated and merged into the framebuffer. The pipeline model is useful for identifying exactly what work your application must perform to generate the results you want. OpenGL allows you to customize each stage of the graphics pipeline, either through customized shader programs or by configuring a fixed-function pipeline through OpenGL function calls.

In most implementations, each pipeline stage can act in parallel with the others. This is a key point. If any one pipeline stage performs too much work, the other stages sit idle waiting for it to complete. Your design should balance the work performed in each pipeline stage to the capabilities of the renderer. When you tune your application's performance, the first step is usually to determine which stage the application is bottlenecked in, and why.

Figure 9-1 OpenGL graphics pipeline (the application submits primitives and image data; the geometry stage performs transform and lighting, primitive assembly, and clipping; the fragment stage performs texturing and fog; framebuffer operations include alpha, stencil, and depth tests and framebuffer blending)

Another way to visualize OpenGL is as a client-server architecture, as shown in Figure 9-2 (page 91). OpenGL state changes, texture and vertex data, and rendering commands must all travel from the application to the OpenGL client. The client transforms these items so that the graphics hardware can understand them, and then forwards them to the GPU. Not only do these transformations add overhead, but the bandwidth between the CPU and the graphics hardware is often lower than other parts of the system. To achieve great performance, an application must reduce the frequency of calls it makes to OpenGL, minimize the transformation overhead, and carefully manage the flow of data between the application and the graphics hardware. For example, OpenGL provides mechanisms that allow some kinds of data to be cached in dedicated graphics memory. Caching reusable data in graphics memory reduces the overhead of transmitting data to the graphics hardware.

Figure 9-2 OpenGL client-server architecture (the application, OpenGL framework, and OpenGL driver make up the OpenGL client and run on the CPU; the OpenGL server and graphics hardware run on the GPU)

Designing a High-Performance OpenGL Application

To summarize, a well-designed OpenGL application needs to:
● Exploit parallelism in the OpenGL pipeline.
● Manage data flow between the application and the graphics hardware.

Figure 9-3 shows a suggested process flow for an application that uses OpenGL to perform animation to the display.

Figure 9-3 Application model for managing resources (create static resources at launch; then, in the render loop, update dynamic resources, execute rendering commands, optionally read back results, and present to the display; free up resources at shutdown)

When the application launches, it creates and initializes any static resources it intends to use in the renderer, encapsulating those resources into OpenGL objects where possible. The goal is to create any object that can remain unchanged for the runtime of the application. This trades increased initialization time for better rendering performance. Ideally, complex commands or batches of state changes should be replaced with OpenGL objects that can be switched in with a single function call. For example, configuring the fixed-function pipeline can take dozens of function calls. Replace it with a graphics shader that is compiled at initialization time, and you can switch to a different program with a single function call. In particular, OpenGL objects that are expensive to create or modify should be created as static objects.
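As a minimal sketch of that last point, the following compiles and links a GLSL program once at initialization; the source strings and error checking are assumed to come from your own code. Afterward, a single glUseProgram call replaces what would otherwise be dozens of fixed-function state changes.

// Compile and link once at initialization.
// vertexSrc and fragmentSrc are assumed to hold your GLSL source.
GLuint BuildProgram(const GLchar *vertexSrc, const GLchar *fragmentSrc)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSrc, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSrc, NULL);
    glCompileShader(fs);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);
    glDeleteShader(vs); // the linked program keeps what it needs
    glDeleteShader(fs);
    return program;
}

// Later, in the render loop:
//     glUseProgram(myProgram);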
The rendering loop processes all of the items you intend to render to the OpenGL context, then swaps the buffers to display the results to the user. In an animated scene, some data needs to be updated for every frame. In the inner rendering loop shown in Figure 9-3, the application alternates between updating rendering resources (possibly creating or modifying OpenGL objects in the process) and submitting rendering commands that use those resources. The goal of this inner loop is to balance the workload so that the CPU and GPU are working in parallel, without blocking each other by using the same resources simultaneously.

A further goal for the inner loop is to avoid copying data back from the graphics processor to the CPU. Operations that require the CPU to read results back from the graphics hardware are sometimes necessary, but in general reading back results should be used sparingly. If those results are also used to render the current frame, as shown in the middle rendering loop, this can be very slow. Copying data from the GPU to the CPU often requires that some or all previously submitted drawing commands have completed.

After the application submits all drawing commands needed in the frame, it presents the results to the screen. Alternatively, a non-interactive application might read the final image back to the CPU, but this is also slower than presenting results to the screen. This step should be performed only for results that must be read back to the application. For example, you might copy the image in the back buffer to save it to disk.

Finally, when your application is ready to shut down, it deletes static and dynamic resources to make more hardware resources available to other applications. If your application is moved to the background, releasing resources to other applications is also good practice.

To summarize the important characteristics of this design:
● Create static resources whenever practical.
● The inner rendering loop alternates between modifying dynamic resources and submitting rendering commands. Enough work should be included in this loop so that when the application needs to read or write to any OpenGL object, the graphics processor has finished processing any commands that used it.
● Avoid reading intermediate rendering results into the application.

A schematic sketch of this loop structure appears after the list of techniques below. The rest of this chapter provides useful OpenGL programming techniques to implement the features of this rendering loop. Later chapters demonstrate how to apply these general techniques to specific areas of OpenGL programming.
● "Update OpenGL Content Only When Your Data Changes" (page 94)
● "Avoid Synchronizing and Flushing Operations" (page 96)
● "Allow OpenGL to Manage Your Resources" (page 99)
● "Use Optimal Data Types and Formats" (page 102)
● "Use Double Buffering to Avoid Resource Conflicts" (page 100)
● "Be Mindful of OpenGL State Variables" (page 101)
● "Use OpenGL Macros" (page 103)
● "Replace State Changes with OpenGL Objects" (page 102)
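The following is a rough, schematic sketch of this application model, assuming a double-buffered CGL context; the helper functions (CreateStaticResources, UpdateDynamicResources, and so on) are placeholders for your own code, not OpenGL calls.

// Schematic only: create static resources once, then alternate between
// updating dynamic resources and drawing, presenting each frame.
void RunRenderLoop(CGLContextObj ctx)
{
    CGLSetCurrentContext(ctx);
    CreateStaticResources();            // placeholder: build GL objects

    while (!ApplicationShouldQuit()) {  // placeholder: your loop condition
        UpdateDynamicResources();       // modify buffers, textures, ...
        SubmitDrawingCommands();        // placeholder: glDraw* calls
        CGLFlushDrawable(ctx);          // swap buffers and present
    }

    DeleteResources();                  // placeholder: free GL objects
}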
Update OpenGL Content Only When Your Data Changes

OpenGL applications should avoid recomputing a scene when the data has not changed. This is especially important on portable devices, where power conservation is critical to maximizing battery life. You can ensure that your application draws only when necessary by following a few simple guidelines:
● If your application is rendering animation, use a Core Video display link to drive the animation loop. Listing 9-1 (page 94) provides code that allows your application to be notified when a new frame needs to be displayed. This code also synchronizes image updates to the refresh rate of the display. See "Synchronize with the Screen Refresh Rate" (page 96) for more information.
● If your application does not animate its OpenGL content, you should allow the system to regulate drawing. For example, in Cocoa call the setNeedsDisplay: method when your data changes.
● If your application does not use a Core Video display link, you should still advance an animation only when necessary. To determine when to draw the next frame of an animation, calculate the difference between the current time and the start of the last frame. Use the difference to determine how much to advance the animation. You can use the Core Foundation function CFAbsoluteTimeGetCurrent to obtain the current time; a short sketch follows Listing 9-1.

Listing 9-1 Setting up a Core Video display link

@interface MyView : NSOpenGLView
{
    CVDisplayLinkRef displayLink; // display link for managing rendering thread
}
@end

- (void)prepareOpenGL
{
    // Synchronize buffer swaps with vertical refresh rate
    GLint swapInt = 1;
    [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];

    // Create a display link capable of being used with all active displays
    CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);

    // Set the renderer output callback function
    CVDisplayLinkSetOutputCallback(displayLink, &MyDisplayLinkCallback, self);

    // Set the display link for the current renderer
    CGLContextObj cglContext = [[self openGLContext] CGLContextObj];
    CGLPixelFormatObj cglPixelFormat = [[self pixelFormat] CGLPixelFormatObj];
    CVDisplayLinkSetCurrentCGDisplayFromOpenGLContext(displayLink, cglContext, cglPixelFormat);

    // Activate the display link
    CVDisplayLinkStart(displayLink);
}

// This is the renderer output callback function
static CVReturn MyDisplayLinkCallback(CVDisplayLinkRef displayLink,
    const CVTimeStamp* now, const CVTimeStamp* outputTime,
    CVOptionFlags flagsIn, CVOptionFlags* flagsOut, void* displayLinkContext)
{
    CVReturn result = [(MyView*)displayLinkContext getFrameForTime:outputTime];
    return result;
}

- (CVReturn)getFrameForTime:(const CVTimeStamp*)outputTime
{
    // Add your drawing code here
    return kCVReturnSuccess;
}

- (void)dealloc
{
    // Release the display link
    CVDisplayLinkRelease(displayLink);
    [super dealloc];
}
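A minimal sketch of the time-delta approach from the last guideline above (AdvanceAnimation is a hypothetical helper that advances your scene by a number of seconds):

#include <CoreFoundation/CoreFoundation.h>

static CFAbsoluteTime lastFrameTime = 0;

// Call once per frame; advances the animation by elapsed wall-clock time.
void AdvanceAnimationIfNeeded(void)
{
    CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();
    if (lastFrameTime == 0)
        lastFrameTime = now;              // first frame: nothing to advance

    double elapsed = now - lastFrameTime; // seconds since last frame
    if (elapsed > 0) {
        AdvanceAnimation(elapsed);        // hypothetical helper
        lastFrameTime = now;
    }
}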
Synchronize with the Screen Refresh Rate

Tearing is a visual anomaly caused when part of the current frame overwrites previous frame data in the framebuffer before the current frame is fully rendered on the screen. To avoid tearing, applications use a double-buffered context and synchronize buffer swaps with the screen refresh rate (sometimes called VBL, vertical blank, or vsync).

Note: During development, it's best to disable synchronization so that you can more accurately benchmark your application. Enable synchronization when you are ready to deploy your application.

The refresh rate of the display limits how often the screen can be refreshed. The screen can be refreshed at rates that evenly divide the display's refresh rate. For example, a CRT display that has a refresh rate of 60 Hz can support screen refresh rates of 60 Hz, 30 Hz, 20 Hz, and 15 Hz. LCD displays do not have a vertical retrace in the CRT sense and are typically considered to have a fixed refresh rate of 60 Hz.

After you tell the context to swap the buffers, OpenGL must defer any rendering commands that follow that swap until after the buffers have successfully been exchanged. Applications that attempt to draw to the screen during this waiting period waste time that could be spent performing other drawing operations, saving battery life, or minimizing fan operation.

Listing 9-2 shows how an NSOpenGLView object can synchronize with the screen refresh rate; you can use a similar approach if your application uses CGL contexts. It assumes that you set up the context for double buffering. The swap interval can be set only to 0 or 1. If the swap interval is set to 1, the buffers are swapped only during the vertical retrace.

Listing 9-2 Setting up synchronization

GLint swapInterval = 1;
[[self openGLContext] setValues:&swapInterval forParameter:NSOpenGLCPSwapInterval];

Avoid Synchronizing and Flushing Operations

OpenGL is not required to execute most commands immediately. Often, commands are queued to a command buffer and read and executed by the hardware at a later time. Usually, OpenGL waits until the application has queued up a significant number of commands before sending the buffer to the hardware—allowing the graphics hardware to execute commands in batches is often more efficient. However, some OpenGL functions must flush the buffer immediately. Other functions not only flush the buffer, but also block until previously submitted commands have completed before returning control to the application. Your application should restrict the use of flushing and synchronizing commands to those cases where that behavior is necessary. Excessive use of flushing or synchronizing commands adds additional stalls waiting for the hardware to finish rendering. On a single-buffered context, flushing may also cause visual anomalies, such as flickering or tearing.

These situations require OpenGL to submit the command buffer to the hardware for execution:
● The function glFlush waits until commands are submitted but does not wait for the commands to finish executing.
● The function glFinish waits for all previously submitted commands to complete executing.
● Functions that retrieve OpenGL state (for example, glGetError) also wait for submitted commands to complete.
● Buffer swapping routines (the flushBuffer method of the NSOpenGLContext class or the CGLFlushDrawable function) implicitly call glFlush. Note that when using the NSOpenGLContext class or the CGL API, the term flush actually refers to a buffer-swapping operation. For single-buffered contexts, glFlush and glFinish are equivalent to a swap operation, since all rendering is taking place directly in the front buffer.
● The command buffer is full.
Using glFlush Effectively

Most of the time you don't need to call glFlush to move image data to the screen. There are only a few cases that require you to call the glFlush function:
● If your application submits rendering commands that use a particular OpenGL object and intends to modify that object in the near future. If you attempt to modify an OpenGL object that has pending drawing commands, your application may be forced to wait until those commands have completed. In this situation, calling glFlush ensures that the hardware begins processing commands immediately. After flushing the command buffer, your application should perform work that does not need that resource. It can perform other work (even modifying other OpenGL objects).
● Your application needs to change the drawable object associated with the rendering context. Before you can switch to another drawable object, you must call glFlush to ensure that all commands written in the command queue for the previous drawable object have been submitted.
● When two contexts share an OpenGL object. After submitting any OpenGL commands, call glFlush before switching to the other context.
● To keep drawing synchronized across multiple threads and prevent command buffer corruption, each thread should submit its rendering commands and then call glFlush.

Avoid Querying OpenGL State

Calls to glGet*(), including glGetError(), may require OpenGL to execute previous commands before retrieving any state variables. This synchronization forces the graphics hardware to run in lockstep with the CPU, reducing opportunities for parallelism. Your application should keep shadow copies of any OpenGL state that it needs to query, and maintain these shadow copies as it changes the state.

When errors occur, OpenGL sets an error flag that you can retrieve with the function glGetError. During development, it's crucial that your code contains error-checking routines, not only for the standard OpenGL calls, but for the Apple-specific functions provided by the CGL API. If you are developing a performance-critical application, retrieve error information only in the debugging phase. Calling glGetError excessively in a release build degrades performance.
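A minimal sketch of the shadow-copy idea (blending is just an example; in practice you would shadow each piece of state you query or set frequently):

// Shadow the blend-enable state application-side so the render loop
// never has to ask OpenGL for it, and redundant changes are skipped.
static GLboolean blendEnabled = GL_FALSE;

void SetBlendEnabled(GLboolean enable)
{
    if (enable == blendEnabled)
        return;                 // already in the requested state
    if (enable)
        glEnable(GL_BLEND);
    else
        glDisable(GL_BLEND);
    blendEnabled = enable;      // keep the shadow copy in sync
}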
Use Fences for Finer-Grained Synchronization

Avoid using glFinish in your application, because it waits until all previously submitted commands are completed before returning control to your application. Instead, use the fence extension (APPLE_fence). This extension was created to provide the level of granularity that glFinish does not. A fence is a token used to mark the current point in the command stream. When used correctly, it allows you to ensure that a specific series of commands has been completed. A fence helps coordinate activity between the CPU and the GPU when they are using the same resources.

Follow these steps to set up and use a fence:
1. At initialization time, create the fence object by calling the function glGenFencesAPPLE.
GLuint myFence;
glGenFencesAPPLE(1, &myFence);
2. Call the OpenGL functions that must complete prior to the fence.
3. Set up the fence by calling the function glSetFenceAPPLE. This function inserts a token into the command stream and sets the fence state to false.
void glSetFenceAPPLE(GLuint fence);
fence specifies the token to insert. For example:
glSetFenceAPPLE(myFence);
4. Call glFlush to force the commands to be sent to the hardware. This step is optional, but recommended to ensure that the hardware begins processing OpenGL commands.
5. Perform other work in your application.
6. Wait for all OpenGL commands issued prior to the fence to complete by calling the function glFinishFenceAPPLE.
glFinishFenceAPPLE(myFence);
As an alternative to calling glFinishFenceAPPLE, you can call glTestFenceAPPLE to determine whether the fence has been reached. The advantage of testing the fence is that your application does not block waiting for the fence to complete. This is useful if your application can continue processing other work while waiting for the fence to trigger.
glTestFenceAPPLE(myFence);
7. When your application no longer needs the fence, delete it by calling the function glDeleteFencesAPPLE.
glDeleteFencesAPPLE(1, &myFence);

There is an art to determining where to insert a fence in the command stream. If you insert a fence for too few drawing commands, you risk having your application stall while it waits for drawing to complete. You'll want to set a fence so your application operates as asynchronously as possible without stalling.

The fence extension also lets you synchronize buffer updates for objects such as vertex arrays and textures. For that, you call the function glFinishObjectAPPLE, supplying an object name along with the token. For detailed information on this extension, see the OpenGL specification for the Apple fence extension.

Allow OpenGL to Manage Your Resources

OpenGL allows many data types to be stored persistently inside OpenGL. Creating OpenGL objects to store vertex, texture, or other forms of data allows OpenGL to reduce the overhead of transforming the data and sending it to the graphics processor. If data is used more frequently than it is modified, OpenGL can substantially improve the performance of your application.

OpenGL allows your application to hint how it intends to use the data. These hints allow OpenGL to make an informed choice of how to process your data. For example, static data might be placed in high-speed graphics memory directly connected to the graphics processor. Data that changes frequently might be kept in main memory and accessed by the graphics hardware through DMA.
Use Double Buffering to Avoid Resource Conflicts

Resource conflicts occur when your application and OpenGL want to access a resource at the same time. When one participant attempts to modify an OpenGL object being used by the other, one of two problems results:
● The participant that wants to modify the object blocks until it is no longer in use. Then the other participant is not allowed to read from or write to the object until the modifications are complete. This is safe, but it can create hidden bottlenecks in your application.
● Some extensions allow OpenGL to access application memory that can be simultaneously accessed by the application. In this situation, synchronizing the two participants is left to the application to manage. Your application calls glFlush to force OpenGL to execute commands and uses a fence or glFinish to ensure that no commands that access that memory are pending.

Whether your application relies on OpenGL to synchronize access to a resource or synchronizes access manually, resource contention forces one of the participants to wait, rather than allowing them both to execute in parallel. Figure 9-4 demonstrates this problem. There is only a single buffer for vertex data, which both the application and OpenGL want to use; therefore, the application must wait until the GPU finishes processing commands before it modifies the data.

Figure 9-4 Single-buffered vertex array data (the CPU and GPU take turns on a single vertex array; each frame, the CPU calls glFinishObject(..., 1) and waits for the GPU before modifying the data, then calls glFlush to submit the next frame)

To solve this problem, your application could fill this idle time with other processing, even other OpenGL processing that does not need the objects in question. If you need to process more OpenGL commands, the solution is to create two of the same resource type and let each participant access its own resource. Figure 9-5 illustrates the double-buffered approach. While the GPU operates on one set of vertex array data, the CPU is modifying the other. After the initial startup, neither processing unit is idle. This example uses a fence to ensure that access to each buffer is synchronized.

Figure 9-5 Double-buffered vertex array data (the CPU modifies vertex array 1 while the GPU draws from vertex array 2, and vice versa; glFinishObject and glFlush calls alternate between the two buffers across frames)

Double buffering is sufficient for most applications, but it requires that both participants finish processing their commands before a swap can occur. For a traditional producer-consumer problem, more than two buffers may prevent a participant from blocking. With triple buffering, the producer and consumer each have a buffer, with a third idle buffer. If the producer finishes before the consumer finishes processing commands, it takes the idle buffer and continues to process commands. In this situation, the producer idles only if the consumer falls badly behind.
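A rough sketch of the double-buffered scheme in Figure 9-5, assuming the APPLE_fence extension is available; FillVertexArray and DrawWithVertexArray are hypothetical helpers, and error handling is omitted.

#define VERTEX_COUNT 1024                       // illustrative size

static GLfloat vertexData[2][VERTEX_COUNT * 3]; // two CPU-side buffers
static GLuint  fences[2];

void RenderDoubleBuffered(void)
{
    glGenFencesAPPLE(2, fences);
    glSetFenceAPPLE(fences[0]);   // mark both buffers writable initially
    glSetFenceAPPLE(fences[1]);

    int current = 0;
    for (;;) {                                     // schematic render loop
        glFinishFenceAPPLE(fences[current]);       // GPU done with this one?
        FillVertexArray(vertexData[current]);      // CPU updates it
        DrawWithVertexArray(vertexData[current]);  // submit draw commands
        glSetFenceAPPLE(fences[current]);          // mark this point
        glFlush();                                 // get the GPU started
        current = 1 - current;                     // switch buffers
    }
}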
Be Mindful of OpenGL State Variables

The hardware has one current state, which is compiled and cached. Switching state is expensive, so it's best to design your application to minimize state switches.

Don't set a state that's already set. Once a feature is enabled, it does not need to be enabled again. Calling an enable function more than once does nothing except waste time, because OpenGL does not check the state of a feature when you call glEnable or glDisable. For instance, if you call glEnable(GL_LIGHTING) more than once, OpenGL does not check to see if the lighting state is already enabled. It simply updates the state value even if that value is identical to the current value.

You can avoid setting a state more often than necessary by using dedicated setup or shutdown routines rather than putting such calls in a drawing loop. Setup and shutdown routines are also useful for turning on and off features that achieve a specific visual effect—for example, when drawing a wire-frame outline around a textured polygon.

If you are drawing 2D images, disable all irrelevant state variables, similar to what's shown in Listing 9-3.

Listing 9-3 Disabling state variables

glDisable(GL_DITHER);
glDisable(GL_ALPHA_TEST);
glDisable(GL_BLEND);
glDisable(GL_STENCIL_TEST);
glDisable(GL_FOG);
glDisable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
glPixelZoom(1.0, 1.0);
// Disable other state variables as appropriate.

Replace State Changes with OpenGL Objects

The "Be Mindful of OpenGL State Variables" (page 101) section suggests that reducing the number of state changes can improve performance. Some OpenGL extensions also allow you to create objects that collect multiple OpenGL state changes into an object that can be bound with a single function call. Where such techniques are available, they are recommended. For example, configuring the fixed-function pipeline requires many function calls to change the state of the various operators. Not only does this incur overhead for each function called, but the code is more complex and difficult to manage. Instead, use a shader. A shader, once compiled, can have the same effect but requires only a single call to glUseProgram. Other examples of objects that take the place of multiple state changes include the "Vertex Array Range Extension" (page 113) and "Uniform Buffers" (page 143).

Use Optimal Data Types and Formats

If you don't use data types and formats that are native to the graphics hardware, OpenGL must convert those data types into a format that the graphics hardware understands.

For vertex data, use GLfloat, GLshort, or GLubyte data types, because most graphics hardware handles these types natively. For texture data, you'll get the best performance if you use the following format and data type combination:

GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV

These format and data type combinations also provide acceptable performance:

GL_BGRA, GL_UNSIGNED_SHORT_1_5_5_5_REV
GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE

The combination of GL_RGBA and GL_UNSIGNED_BYTE needs to be swizzled by many cards when the data is loaded, so it's not recommended.
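A minimal sketch of a texture upload using the preferred combination above; the width, height, and pixels variables are assumed to come from your own image-loading code.

// Upload BGRA pixels in the recommended format/type combination,
// avoiding the swizzle that GL_RGBA/GL_UNSIGNED_BYTE often requires.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D,
             0,                            // mipmap level
             GL_RGBA8,                     // internal format
             width, height,                // from your image loader
             0,                            // border
             GL_BGRA,                      // pixel format
             GL_UNSIGNED_INT_8_8_8_8_REV,  // pixel data type
             pixels);                      // BGRA image data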
Use OpenGL Macros

OpenGL performs a global context and renderer lookup for each command it executes to ensure that all OpenGL commands are issued to the correct rendering context and renderer. There is significant overhead associated with these lookups; applications that have extremely high call frequencies may find that the overhead measurably affects performance. OS X allows your application to use macros to provide a local context variable and cache the current renderer in that variable. You get more benefit from using macros when your code makes millions of function calls per second. Before implementing this technique, consider carefully whether you can redesign your application to perform fewer function calls. Frequently changing OpenGL state, pushing or popping matrices, or even submitting one vertex at a time are all examples of techniques that should be replaced with more efficient operations.

You can use the CGL macro header (CGL/CGLMacro.h) if your application uses CGL from a Cocoa application. You must define the local variable cgl_ctx to be equal to the current context. Listing 9-4 shows what's needed to set up macro use for the CGL API. First, you need to include the correct macro header. Then, you must set the current context.

Listing 9-4 Using CGL macros

#include <CGL/CGLMacro.h> // include the header
CGL_MACRO_DECLARE_VARIABLES // set the current context
glBegin (GL_QUADS); // This code now uses the macro
// draw here
glEnd ();

Best Practices for Working with Vertex Data

Complex shapes and detailed 3D models require large amounts of vertex data to describe them in OpenGL. Moving vertex data from your application to the graphics hardware incurs a performance cost that can be quite large depending on the size of the data set.

Figure 10-1 Vertex data sets can be quite large

Applications that use large vertex data sets can adopt one or more of the strategies described in "OpenGL Application Design Strategies" (page 89) to optimize how vertex data is delivered to OpenGL. This chapter expands on those best practices with specific techniques for working with vertex data.

Understand How Vertex Data Flows Through OpenGL

Understanding how vertex data flows through OpenGL is important to choosing strategies for handling the data. Vertex data enters the vertex stage, where it is processed by either the built-in fixed-function vertex stage or a custom vertex shader.

Figure 10-2 Vertex data path (vertex data passes through vertex shading and per-vertex operations, then rasterization, fragment shading and per-fragment operations, and the framebuffer; pixel data takes a parallel path through per-pixel operations and texture assembly)

Figure 10-3 takes a closer look at the vertex data path when using immediate mode. Without any optimizations, your vertex data may be copied at various points in the data path. If your application uses immediate mode to submit each vertex separately, calls to OpenGL first modify the current vertex, which is copied into the command buffer whenever your application makes a glVertex* call. This is expensive not only in terms of copy operations, but also in the function overhead needed to specify each vertex.

Figure 10-3 Immediate mode requires a copy of the current vertex data (the application's original data is copied to the current vertex, then into the command buffer, before reaching the GPU and VRAM)

The OpenGL commands glDrawRangeElements, glDrawElements, and glDrawArrays render multiple geometric primitives from array data, using very few subroutine calls. Listing 10-1 shows a typical implementation. Your application creates a vertex structure that holds all the elements for each vertex. For each element, you enable a client array and provide a pointer and offset to OpenGL so that it knows how to find those elements.

Listing 10-1 Submitting vertex data using glDrawElements

typedef struct _vertexStruct {
    GLfloat position[2];
    GLubyte color[4];
} vertexStruct;

void DrawGeometry()
{
    const vertexStruct vertices[] = {...};
    const GLubyte indices[] = {...};

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(vertexStruct), &vertices[0].position);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(vertexStruct), &vertices[0].color);
    glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte),
        GL_UNSIGNED_BYTE, indices);
}

Each time you call glDrawElements, OpenGL must copy all of the vertex data into the command buffer, which is later copied to the hardware. The copy overhead is still expensive.
Techniques for Handling Vertex Data

Avoiding unnecessary copies of your vertex data is critical to application performance. This section summarizes common techniques for managing your vertex data using either built-in functionality or OpenGL extensions. Before using these techniques, you must ensure that the necessary functions are available to your application. See "Detecting Functionality" (page 83).
● Avoid the use of glBegin and glEnd to specify your vertex data. The function and copying overhead makes this path useful only for very small data sets. Also, applications written with glBegin and glEnd are not portable to OpenGL ES on iOS.
● Minimize data type conversions by supplying OpenGL data types for vertex data. Use GLfloat, GLshort, or GLubyte data types, because most graphics processors handle these types natively. If you use some other type, OpenGL may need to perform a costly data conversion.
● The preferred way to manage your vertex data is with vertex buffer objects. Vertex buffer objects are buffers owned by OpenGL that hold your vertex information. These buffers allow OpenGL to place your vertex data into memory that is accessible to the graphics hardware. See "Vertex Buffers" (page 107) for more information.
● If vertex buffer objects are not available, your application can search for the GL_APPLE_vertex_array_range and APPLE_fence extensions. Vertex array ranges allow you to prevent OpenGL from copying your vertex data into the command buffer. Instead, your application must avoid modifying or deleting the vertex data until OpenGL finishes executing drawing commands. This solution requires more effort from the application and is not compatible with other platforms, including iOS. See "Vertex Array Range Extension" (page 113) for more information.
● Complex vertex operations require many array pointers to be enabled and set before you call glDrawElements. The GL_APPLE_vertex_array_object extension allows your application to consolidate a group of array pointers into a single object. Your application switches multiple pointers by binding a single vertex array object, reducing the overhead of changing state. See "Vertex Array Object" (page 116).
● Use double buffering to reduce resource contention between your application and OpenGL. See "Use Double Buffering to Avoid Resource Conflicts" (page 100).
● If you need to compute new vertex information between frames, consider using vertex shaders and buffer objects to perform and store the calculations.

Vertex Buffers

Vertex buffers are available as a core feature starting in OpenGL 1.5, and on earlier versions of OpenGL through the vertex buffer object extension (GL_ARB_vertex_buffer_object). Vertex buffers are used to improve the throughput of static or dynamic vertex data in your application.

A buffer object is a chunk of memory owned by OpenGL. Your application reads from or writes to the buffer using OpenGL calls such as glBufferData, glBufferSubData, and glGetBufferSubData. Your application can also gain a pointer to this memory, an operation referred to as mapping a buffer. OpenGL prevents your application and itself from simultaneously using the data stored in the buffer.
When your application maps a buffer or attempts to modify it, OpenGL may block until previous drawing commands have completed.

Using Vertex Buffers

You can set up and use vertex buffers by following these steps:
1. Call the function glGenBuffers to create a new name for a buffer object.
void glGenBuffers(GLsizei n, GLuint *buffers);
n is the number of buffers you wish to create identifiers for. buffers specifies a pointer to memory to store the buffer names.
2. Call the function glBindBuffer to bind an unused name to a buffer object. After this call, the newly created buffer object is initialized with a memory buffer of size zero and a default state. (For the default setting, see the OpenGL specification for ARB_vertex_buffer_object.)
void glBindBuffer(GLenum target, GLuint buffer);
target must be set to GL_ARRAY_BUFFER. buffer specifies the unique name for the buffer object.
3. Fill the buffer object by calling the function glBufferData. Essentially, this call uploads your data to the GPU.
void glBufferData(GLenum target, GLsizeiptr size, const GLvoid *data, GLenum usage);
target must be set to GL_ARRAY_BUFFER. size specifies the size of the data store. data points to the source data. If this is not NULL, the source data is copied to the data store of the buffer object. If NULL, the contents of the data store are undefined. usage is a constant that provides a hint as to how your application plans to use the data stored in the buffer object. These examples use GL_STREAM_DRAW, which indicates that the application plans to both modify and draw using the buffer, and GL_STATIC_DRAW, which indicates that the application will define the data once but use it to draw many times. For more details on buffer hints, see "Buffer Usage Hints" (page 110).
4. Enable the vertex array by calling glEnableClientState and supplying the GL_VERTEX_ARRAY constant.
5. Point to the contents of the vertex buffer object by calling a function such as glVertexPointer. Instead of providing a pointer, you provide an offset into the vertex buffer object.
6. To update the data in the buffer object, your application calls glMapBuffer. Mapping the buffer prevents the GPU from operating on the data, and gives your application a pointer to memory it can use to update the buffer.
void *glMapBuffer(GLenum target, GLenum access);
target must be set to GL_ARRAY_BUFFER. access indicates the operations you plan to perform on the data. You can supply GL_READ_ONLY, GL_WRITE_ONLY, or GL_READ_WRITE.
7. Write vertex data to the pointer received from the call to glMapBuffer.
8. When your application has finished modifying the buffer contents, call the function glUnmapBuffer. You must supply GL_ARRAY_BUFFER as the parameter to this function. Once the buffer is unmapped, the pointer is no longer valid, and the buffer's contents are uploaded again to the GPU.
Listing 10-2 shows code that uses the vertex buffer object extension for dynamic data. This example overwrites all of the vertex data during every draw operation.

Listing 10-2 Using the vertex buffer object extension with dynamic data

// To set up the vertex buffer object extension
#define BUFFER_OFFSET(i) ((char*)NULL + (i))
glBindBuffer(GL_ARRAY_BUFFER, myBufferName);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, stride, BUFFER_OFFSET(0));

// When you want to draw using the vertex data
draw_loop {
    glBufferData(GL_ARRAY_BUFFER, bufferSize, NULL, GL_STREAM_DRAW);
    my_vertex_pointer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    GenerateMyDynamicVertexData(my_vertex_pointer);
    glUnmapBuffer(GL_ARRAY_BUFFER);
    PerformDrawing();
}

Listing 10-3 shows code that uses the vertex buffer object extension with static data.

Listing 10-3 Using the vertex buffer object extension with static data

// To set up the vertex buffer object extension
#define BUFFER_OFFSET(i) ((char*)NULL + (i))
glBindBuffer(GL_ARRAY_BUFFER, myBufferName);
glBufferData(GL_ARRAY_BUFFER, bufferSize, NULL, GL_STATIC_DRAW);
GLvoid* my_vertex_pointer = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
GenerateMyStaticVertexData(my_vertex_pointer);
glUnmapBuffer(GL_ARRAY_BUFFER);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, stride, BUFFER_OFFSET(0));

// When you want to draw using the vertex data
draw_loop {
    PerformDrawing();
}

Buffer Usage Hints

A key advantage of buffer objects is that the application can provide information on how it uses the data stored in each buffer. For example, Listing 10-2 and Listing 10-3 differentiated between data that is rewritten every frame (GL_STREAM_DRAW) and data that is specified once and never changed (GL_STATIC_DRAW). The usage parameter allows an OpenGL renderer to alter its strategy for allocating the vertex buffer to improve performance. For example, static buffers may be allocated directly in GPU memory, while dynamic buffers may be stored in main memory and retrieved by the GPU via DMA.

If OpenGL ES compatibility is useful to you, you should limit your usage hints to one of three usage cases:
● GL_STATIC_DRAW should be used for vertex data that is specified once and never changed. Your application should create these vertex buffers during initialization and use them repeatedly until your application shuts down.
● GL_DYNAMIC_DRAW should be used when the buffer is expected to change after it is created. Your application should still allocate these buffers during initialization and periodically update them by mapping the buffer.
● GL_STREAM_DRAW is used when your application needs to create transient geometry that is rendered and then discarded. This is most useful when your application must dynamically change vertex data every frame in a way that cannot be performed in a vertex shader. To use a stream vertex buffer, your application initially fills the buffer using glBufferData, then alternates between drawing using the buffer and modifying the buffer.

Other usage constants are detailed in the vertex buffer specification. If different elements in your vertex format have different usage characteristics, you may want to split the elements into one structure for each usage pattern and allocate a vertex buffer for each. Listing 10-4 shows how to implement this.
In this example, position data is expected to be the same in each frame, while color data may be animated in every frame.

Listing 10-4 Geometry with different usage patterns

typedef struct _vertexStatic {
    GLfloat position[2];
} vertexStatic;

typedef struct _vertexDynamic {
    GLubyte color[4];
} vertexDynamic;

// Separate buffers for static and dynamic data.
GLuint staticBuffer;
GLuint dynamicBuffer;
GLuint indexBuffer;

const vertexStatic staticVertexData[] = {...};
vertexDynamic dynamicVertexData[] = {...};
const GLubyte indices[] = {...};

void CreateBuffers()
{
    glGenBuffers(1, &staticBuffer);
    glGenBuffers(1, &dynamicBuffer);
    glGenBuffers(1, &indexBuffer);

    // Static position data
    glBindBuffer(GL_ARRAY_BUFFER, staticBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(staticVertexData),
        staticVertexData, GL_STATIC_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices),
        indices, GL_STATIC_DRAW);

    // Dynamic color data
    // While not shown here, the expectation is that the data in this
    // buffer changes between frames.
    glBindBuffer(GL_ARRAY_BUFFER, dynamicBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(dynamicVertexData),
        dynamicVertexData, GL_DYNAMIC_DRAW);
}

void DrawUsingVertexBuffers()
{
    glBindBuffer(GL_ARRAY_BUFFER, staticBuffer);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, sizeof(vertexStatic),
        (void*)offsetof(vertexStatic, position));

    glBindBuffer(GL_ARRAY_BUFFER, dynamicBuffer);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(vertexDynamic),
        (void*)offsetof(vertexDynamic, color));

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
    glDrawElements(GL_TRIANGLE_STRIP, sizeof(indices)/sizeof(GLubyte),
        GL_UNSIGNED_BYTE, (void*)0);
}

Flush Buffer Range Extension

When your application unmaps a vertex buffer, the OpenGL implementation may copy the full contents of the buffer to the graphics hardware. If your application changes only a subset of a large buffer, this is inefficient. The APPLE_flush_buffer_range extension allows your application to tell OpenGL exactly which portions of the buffer were modified, allowing it to send only the changed data to the graphics hardware. To use the flush buffer range extension, follow these steps (a combined sketch appears after the list):
1. Turn on the flush buffer range extension by calling glBufferParameteriAPPLE.
glBufferParameteriAPPLE(GL_ARRAY_BUFFER, GL_BUFFER_FLUSHING_UNMAP_APPLE, GL_FALSE);
This disables the normal flushing behavior of OpenGL.
2. Before you unmap a buffer, call glFlushMappedBufferRangeAPPLE for each range of the buffer that was modified by the application.
void glFlushMappedBufferRangeAPPLE(GLenum target, GLintptr offset, GLsizeiptr size);
target is the type of buffer being modified; for vertex data it's GL_ARRAY_BUFFER. offset is the offset into the buffer for the modified data. size is the length of the modified data in bytes.
3. Call glUnmapBuffer. OpenGL unmaps the buffer, but it is required to update only the portions of the buffer your application explicitly marked as changed.

For more information, see the APPLE_flush_buffer_range specification.
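Putting those steps together, a minimal sketch, assuming the buffer is already created, filled, and bound to GL_ARRAY_BUFFER; the offset, length, and UpdateVertexRegion helper are illustrative.

// Tell OpenGL not to flush the whole buffer at unmap time.
glBufferParameteriAPPLE(GL_ARRAY_BUFFER,
    GL_BUFFER_FLUSHING_UNMAP_APPLE, GL_FALSE);

GLubyte *base = (GLubyte *)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);

// Modify just one region of the buffer...
GLintptr   offset = 4096;                   // illustrative values
GLsizeiptr length = 1024;
UpdateVertexRegion(base + offset, length);  // hypothetical helper

// ...then mark exactly that region as dirty before unmapping.
glFlushMappedBufferRangeAPPLE(GL_ARRAY_BUFFER, offset, length);
glUnmapBuffer(GL_ARRAY_BUFFER);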
Vertex Array Range Extension

The vertex array range extension (APPLE_vertex_array_range) lets you define a region of memory for your vertex data. The OpenGL driver can optimize memory usage by creating a single memory mapping for your vertex data. You can also provide a hint as to how the data should be stored: cached or shared. The cached option specifies that vertex data be cached in video memory. The shared option indicates that the data should be mapped into a region of memory that allows the GPU to access the vertex data directly using DMA transfer; this option is best for dynamic data. If you use shared memory, you'll need to double buffer your data.

You can set up and use the vertex array range extension by following these steps:
1. Enable the extension by calling glEnableClientState and supplying the GL_VERTEX_ARRAY_RANGE_APPLE constant.
2. Allocate storage for the vertex data. You are responsible for maintaining storage for the data.
3. Define an array of vertex data by calling a function such as glVertexPointer. You need to supply a pointer to your data.
4. Optionally set up a hint about handling the storage of the array data by calling the function glVertexArrayParameteriAPPLE.
GLvoid glVertexArrayParameteriAPPLE(GLenum pname, GLint param);
pname must be GL_VERTEX_ARRAY_STORAGE_HINT_APPLE. param is a hint that specifies how your application expects to use the data. OpenGL uses this hint to optimize performance. You can supply either GL_STORAGE_SHARED_APPLE or GL_STORAGE_CACHED_APPLE. The default value is GL_STORAGE_SHARED_APPLE, which indicates that the vertex data is dynamic and that OpenGL should use optimization and flushing techniques suitable for this kind of data. If you expect the supplied data to be static, use GL_STORAGE_CACHED_APPLE so that OpenGL can optimize appropriately.
5. Call the OpenGL function glVertexArrayRangeAPPLE to establish the data set.
void glVertexArrayRangeAPPLE(GLsizei length, GLvoid *pointer);
length specifies the length of the vertex array range, typically in unsigned bytes. pointer points to the base of the vertex array range.
6. Draw with the vertex data using standard OpenGL vertex array commands.
7. If you need to modify the vertex data, set a fence object after you've submitted all the drawing commands. See "Use Fences for Finer-Grained Synchronization" (page 98).
8. Perform other work so that the GPU has time to process the drawing commands that use the vertex array.
9. Call glFinishFenceAPPLE to gain access to the vertex array.
10. Modify the data in the vertex array.
11. Call glFlushVertexArrayRangeAPPLE.
void glFlushVertexArrayRangeAPPLE(GLsizei length, GLvoid *pointer);
length specifies the length of the vertex array range, in bytes. pointer points to the base of the vertex array range.

For dynamic data, each time you change the data, you need to maintain synchronicity by calling glFlushVertexArrayRangeAPPLE. You supply as parameters an array size and a pointer to an array, which can be a subset of the data, as long as it includes all of the data that changed. Contrary to what its name suggests, glFlushVertexArrayRangeAPPLE doesn't actually flush data the way the OpenGL function glFlush does. It simply makes OpenGL aware that the data has changed.
Listing 10-5 shows code that sets up and uses the vertex array range extension with dynamic data. It overwrites all of the vertex data during each iteration through the drawing loop. The call to the glFinishFenceAPPLE command guarantees that the CPU and the GPU don't access the data at the same time. Although this example calls the glFinishFenceAPPLE function almost immediately after setting the fence, in reality you need to separate these calls to allow parallel operation of the GPU and CPU. To see how that's done, read "Use Double Buffering to Avoid Resource Conflicts" (page 100).

Listing 10-5 Using the vertex array range extension with dynamic data

// To set up the vertex array range extension
glVertexArrayParameteriAPPLE(GL_VERTEX_ARRAY_STORAGE_HINT_APPLE,
    GL_STORAGE_SHARED_APPLE);
glVertexArrayRangeAPPLE(buffer_size, my_vertex_pointer);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_APPLE);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, my_vertex_pointer);
glSetFenceAPPLE(my_fence);

// When you want to draw using the vertex data
draw_loop {
    glFinishFenceAPPLE(my_fence);
    GenerateMyDynamicVertexData(my_vertex_pointer);
    glFlushVertexArrayRangeAPPLE(buffer_size, my_vertex_pointer);
    PerformDrawing();
    glSetFenceAPPLE(my_fence);
}

Listing 10-6 shows code that uses the vertex array range extension with static data. Unlike the setup for dynamic data, the setup for static data includes using the hint for cached data. Because the data is static, it's unnecessary to set a fence.

Listing 10-6 Using the vertex array range extension with static data

// To set up the vertex array range extension
GenerateMyStaticVertexData(my_vertex_pointer);
glVertexArrayParameteriAPPLE(GL_VERTEX_ARRAY_STORAGE_HINT_APPLE,
    GL_STORAGE_CACHED_APPLE);
glVertexArrayRangeAPPLE(array_size, my_vertex_pointer);
glEnableClientState(GL_VERTEX_ARRAY_RANGE_APPLE);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, stride, my_vertex_pointer);

// When you want to draw using the vertex data
draw_loop {
    PerformDrawing();
}

For detailed information on this extension, see the OpenGL specification for the vertex array range extension.
Vertex Array Object

Look at the DrawUsingVertexBuffers function in Listing 10-4 (page 111). It configures buffer pointers for position, color, and indexing before calling glDrawElements. A more complex vertex structure may require additional buffer pointers to be enabled and changed before you can finally draw your geometry. If your application swaps frequently between multiple configurations of elements, changing these parameters adds significant overhead to your application. The APPLE_vertex_array_object extension allows you to combine a collection of buffer pointers into a single OpenGL object, allowing you to change all the buffer pointers by binding a different vertex array object.

To use this extension, follow these steps during your application's initialization routines:
1. Generate a vertex array object for a configuration of pointers you wish to use together.
void glGenVertexArraysAPPLE(GLsizei n, const GLuint *arrays);
n is the number of arrays you wish to create identifiers for. arrays specifies a pointer to memory to store the array names.
glGenVertexArraysAPPLE(1, &myArrayObject);
2. Bind the vertex array object you want to configure.
void glBindVertexArrayAPPLE(GLuint array);
array is the identifier for an array that you received from glGenVertexArraysAPPLE.
glBindVertexArrayAPPLE(myArrayObject);
3. Call the pointer routines (glColorPointer and so forth) that you would normally call inside your rendering loop. When a vertex array object is bound, these calls change the currently bound vertex array object instead of the default OpenGL state.
glBindBuffer(GL_ARRAY_BUFFER, staticBuffer);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(vertexStatic), (void*)offsetof(vertexStatic, position));
...
4. Repeat the previous steps for each configuration of vertex pointers.
5. Inside your rendering loop, replace the calls that configure the array pointers with a call to bind the vertex array object.
glBindVertexArrayAPPLE(myArrayObject);
glDrawArrays(...);
6. If you need to get back to the default OpenGL behavior, call glBindVertexArrayAPPLE and pass in 0.
glBindVertexArrayAPPLE(0);

Best Practices for Working with Texture Data

Textures add realism to OpenGL objects. They help objects defined by vertex data take on the material properties of real-world objects, such as wood, brick, metal, and fur. Texture data can originate from many sources, including images. Many of the same techniques your application uses on vertex data can also be used to improve texture performance.

Figure 11-1 Textures add realism to a scene

Textures start as pixel data that flows through an OpenGL program, as shown in Figure 11-2.

Figure 11-2 Texture data path (pixel data passes through per-pixel operations and texture assembly before joining vertex data at rasterization, fragment shading and per-fragment operations, and the framebuffer)

The precise route that texture data takes from your application to its final destination can impact the performance of your application. The purpose of this chapter is to provide techniques you can use to ensure optimal processing of texture data in your application. This chapter
● shows how to use OpenGL extensions to optimize performance
● lists optimal data formats and types
● provides information on working with textures whose dimensions are not a power of two
● describes creating textures from image data
● shows how to download textures
● discusses using double buffers for texture data

Using Extensions to Improve Texture Performance

Without any optimizations, texture data flows through an OpenGL program as shown in Figure 11-3. Data from your application first goes to the OpenGL framework, which may make a copy of the data before handing it to the driver. If your data is not in a native format for the hardware (see "Optimal Data Formats and Types" (page 128)), the driver may also make a copy of the data to convert it to a hardware-specific format for uploading to video memory. Video memory, in turn, can keep a copy of the data. Theoretically, there could be four copies of your texture data throughout the system.

Figure 11-3 Data copies in an OpenGL program (texture data may be duplicated in the application, the OpenGL framework, the OpenGL driver, and VRAM on its way to the GPU)

Data flows at different rates through the system, as shown by the size of the arrows in Figure 11-3. The fastest data transfer happens between VRAM and the GPU.
Best Practices for Working with Texture Data

Textures add realism to OpenGL objects. They help objects defined by vertex data take on the material properties of real-world objects, such as wood, brick, metal, and fur. Texture data can originate from many sources, including images. Many of the same techniques your application uses on vertex data can also be used to improve texture performance.

Figure 11-1 Textures add realism to a scene

Textures start as pixel data that flows through an OpenGL program, as shown in Figure 11-2.

Figure 11-2 Texture data path (pixel data passes through per-pixel operations and texture assembly, vertex data through per-vertex operations; both feed rasterization, fragment shading, and per-fragment operations on the way to the framebuffer)

The precise route that texture data takes from your application to its final destination can impact the performance of your application. The purpose of this chapter is to provide techniques you can use to ensure optimal processing of texture data in your application. This chapter

● shows how to use OpenGL extensions to optimize performance
● lists optimal data formats and types
● provides information on working with textures whose dimensions are not a power of two
● describes creating textures from image data
● shows how to download textures
● discusses using double buffers for texture data

Using Extensions to Improve Texture Performance

Without any optimizations, texture data flows through an OpenGL program as shown in Figure 11-3. Data from your application first goes to the OpenGL framework, which may make a copy of the data before handing it to the driver. If your data is not in a native format for the hardware (see “Optimal Data Formats and Types” (page 128)), the driver may also make a copy of the data to convert it to a hardware-specific format for uploading to video memory. Video memory, in turn, can keep a copy of the data. Theoretically, there could be four copies of your texture data throughout the system.

Figure 11-3 Data copies in an OpenGL program (the application, the OpenGL framework, the OpenGL driver, and VRAM can each hold a copy on the way to the GPU)

Data flows at different rates through the system, as shown by the size of the arrows in Figure 11-3. The fastest data transfer happens between VRAM and the GPU. The slowest transfer occurs between the OpenGL driver and VRAM. Data moves between the application and the OpenGL framework, and between the framework and the driver, at the same "medium" rate. Eliminating any of the data transfers, but the slowest one in particular, will improve application performance.

There are several extensions you can use to eliminate one or more data copies and control how texture data travels from your application to the GPU:

● GL_ARB_pixel_buffer_object allows your application to use OpenGL buffer objects to manage texture and image data. As with vertex buffer objects, they allow your application to hint how a buffer is used and to decide when data is copied to OpenGL.
● GL_APPLE_client_storage allows you to prevent OpenGL from copying your texture data into the client. Instead, OpenGL keeps the memory pointer you provided when creating the texture. Your application must keep the texture data at that location until the referencing OpenGL texture is deleted.
● GL_APPLE_texture_range, along with a storage hint, either GL_STORAGE_CACHED_APPLE or GL_STORAGE_SHARED_APPLE, allows you to specify a single block of texture memory and manage it as you see fit.
● GL_ARB_texture_rectangle provides support for non-power-of-two textures.

Here are some recommendations:

● If your application requires optimal texture upload performance, use GL_APPLE_client_storage and GL_APPLE_texture_range together to manage your textures.
● If your application requires optimal texture download performance, use pixel buffer objects.
● If your application requires cross-platform techniques, use pixel buffer objects for both texture uploads and texture downloads.
● Use GL_ARB_texture_rectangle when your source images are not aligned to a power-of-two size.

The sections that follow describe the extensions and show how to use them.

Pixel Buffer Objects

Pixel buffer objects are a core feature of OpenGL 2.1 and also available through the GL_ARB_pixel_buffer_object extension. The procedure for setting up a pixel buffer object is almost identical to that of vertex buffer objects.

Using Pixel Buffer Objects to Efficiently Load Textures

1. Call the function glGenBuffers to create a new name for a buffer object.

void glGenBuffers(sizei n, uint *buffers);

n is the number of buffers you wish to create identifiers for. buffers specifies a pointer to memory to store the buffer names.

2. Call the function glBindBuffer to bind an unused name to a buffer object. After this call, the newly created buffer object is initialized with a memory buffer of size zero and a default state. (For the default setting, see the OpenGL specification for ARB_vertex_buffer_object.)

void glBindBuffer(GLenum target, GLuint buffer);

target should be set to GL_PIXEL_UNPACK_BUFFER to use the buffer as the source of pixel data. buffer specifies the unique name for the buffer object.

3. Create and initialize the data store of the buffer object by calling the function glBufferData. Essentially, this call uploads your data to the GPU.

void glBufferData(GLenum target, sizeiptr size, const GLvoid *data, GLenum usage);

target must be set to GL_PIXEL_UNPACK_BUFFER. size specifies the size of the data store. *data points to the source data. If this is not NULL, the source data is copied to the data store of the buffer object. If NULL, the contents of the data store are undefined. usage is a constant that provides a hint as to how your application plans to use the data store. For more details on buffer hints, see “Buffer Usage Hints” (page 110).

4. Whenever you call glDrawPixels, glTexSubImage, or similar functions that read pixel data from the application, those functions use the data in the bound pixel buffer object instead.

5. To update the data in the buffer object, your application calls glMapBuffer. Mapping the buffer prevents the GPU from operating on the data, and gives your application a pointer to memory it can use to update the buffer.

void *glMapBuffer(GLenum target, GLenum access);

target must be set to GL_PIXEL_UNPACK_BUFFER. access indicates the operations you plan to perform on the data. You can supply GL_READ_ONLY, GL_WRITE_ONLY, or GL_READ_WRITE.

6. Modify the texture data using the pointer provided by map buffer.

7. When you have finished modifying the texture, call the function glUnmapBuffer. You should supply GL_PIXEL_UNPACK_BUFFER. Once the buffer is unmapped, your application can no longer access the buffer's data through the pointer, and the buffer's contents are uploaded again to the GPU. A short sketch that combines these steps appears below.
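The following is a minimal sketch of the procedure above, assuming a BGRA image whose dimensions and source pointer (imageWidth, imageHeight, srcPixels) are placeholders for your own data:

#include <string.h>
#include <OpenGL/gl.h>       // assumes a valid, current OpenGL context
#include <OpenGL/glext.h>

// Upload a texture through a pixel buffer object.
static GLuint UploadTextureThroughPBO(const GLubyte *srcPixels,
                                      GLsizei imageWidth, GLsizei imageHeight)
{
    GLsizeiptr size = (GLsizeiptr)imageWidth * imageHeight * 4;

    GLuint pbo = 0;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW); // allocate only

    // Fill the buffer's data store through the mapped pointer.
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst != NULL)
        memcpy(dst, srcPixels, (size_t)size);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    // With an unpack buffer bound, the last argument to glTexImage2D is an
    // offset into the buffer object rather than a client memory pointer.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageWidth, imageHeight, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, (const GLvoid *)0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0); // stop sourcing pixels from the PBO
    return tex;
}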
Using Pixel Buffer Objects for Asynchronous Pixel Transfers

glReadPixels normally blocks until previous commands have completed, which includes the slow process of copying the pixel data to the application. However, if you call glReadPixels while a pixel buffer object is bound, the function returns immediately. It does not block until you actually map the pixel buffer object to read its content.

1. Call the function glGenBuffers to create a new name for a buffer object.

void glGenBuffers(sizei n, uint *buffers);

n is the number of buffers you wish to create identifiers for. buffers specifies a pointer to memory to store the buffer names.

2. Call the function glBindBuffer to bind an unused name to a buffer object. After this call, the newly created buffer object is initialized with a memory buffer of size zero and a default state. (For the default setting, see the OpenGL specification for ARB_vertex_buffer_object.)

void glBindBuffer(GLenum target, GLuint buffer);

target should be set to GL_PIXEL_PACK_BUFFER to use the buffer as the destination for pixel data. buffer specifies the unique name for the buffer object.

3. Create and initialize the data store of the buffer object by calling the function glBufferData.

void glBufferData(GLenum target, sizeiptr size, const GLvoid *data, GLenum usage);

target must be set to GL_PIXEL_PACK_BUFFER. size specifies the size of the data store. *data points to the source data. If this is not NULL, the source data is copied to the data store of the buffer object. If NULL, the contents of the data store are undefined. usage is a constant that provides a hint as to how your application plans to use the data store. For more details on buffer hints, see “Buffer Usage Hints” (page 110).

4. Call glReadPixels or a similar function. The function inserts a command to read the pixel data into the bound pixel buffer object and then returns.

5. To take advantage of asynchronous pixel reads, your application should perform other work.

6. To retrieve the data in the pixel buffer object, your application calls glMapBuffer. This blocks OpenGL until the previously queued glReadPixels command completes, maps the data, and provides a pointer to your application.

void *glMapBuffer(GLenum target, GLenum access);

target must be set to GL_PIXEL_PACK_BUFFER. access indicates the operations you plan to perform on the data. You can supply GL_READ_ONLY, GL_WRITE_ONLY, or GL_READ_WRITE.

7. Read the pixel data using the pointer provided by map buffer.

8. When you no longer need the pixel data, call the function glUnmapBuffer. You should supply GL_PIXEL_PACK_BUFFER. Once the buffer is unmapped, the data is no longer accessible to your application. A sketch of this readback pattern follows.
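As a minimal sketch of the asynchronous pattern, with DoOtherWork standing in for whatever CPU work your application can overlap with the pending transfer:

#include <OpenGL/gl.h>       // assumes a valid, current OpenGL context
#include <OpenGL/glext.h>

extern void DoOtherWork(void);   // placeholder for useful CPU work

// Read back a windowWidth x windowHeight block without stalling immediately;
// assumes a 4-byte-per-pixel format.
static void AsyncReadback(GLsizei windowWidth, GLsizei windowHeight)
{
    GLuint pbo = 0;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER,
                 (GLsizeiptr)windowWidth * windowHeight * 4, NULL, GL_STREAM_READ);

    // Returns immediately; the transfer is queued into the pack buffer.
    glReadPixels(0, 0, windowWidth, windowHeight,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, (GLvoid *)0);

    DoOtherWork(); // overlap CPU work with the transfer

    // Mapping is the synchronization point: it waits for the read to finish.
    const GLubyte *pixels = (const GLubyte *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (pixels != NULL) {
        // ... consume the pixel data ...
    }
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
}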
Using Pixel Buffer Objects to Keep Data on the GPU

There is no difference between a vertex buffer object and a pixel buffer object except for the target to which they are bound. An application can take the results in one buffer and use them as another buffer type. For example, you could use the pixel results from a fragment shader and reinterpret them as vertex data in a future pass, without ever leaving the GPU:

1. Set up your first pass and submit your drawing commands.
2. Bind a pixel buffer object and call glReadPixels to fetch the intermediate results into a buffer.
3. Bind the same buffer as a vertex buffer.
4. Set up the second pass of your algorithm and submit your drawing commands.

Keeping your intermediate data inside the GPU when performing multiple passes can result in great performance increases.

Apple Client Storage

The Apple client storage extension (APPLE_client_storage) lets you provide OpenGL with a pointer to memory that your application allocates and maintains. OpenGL retains a pointer to your data but does not copy the data. Because OpenGL references your data, your application must retain its copy of the data until all referencing textures are deleted. By using this extension you can eliminate the OpenGL framework copy, as shown in Figure 11-4. Note that a texture width must be a multiple of 32 bytes for OpenGL to bypass the copy operation from the application to the OpenGL framework.

Figure 11-4 The client storage extension eliminates a data copy (the OpenGL framework no longer keeps its own copy between the application and the driver)

The Apple client storage extension defines a pixel storage parameter, GL_UNPACK_CLIENT_STORAGE_APPLE, that you pass to the OpenGL function glPixelStorei to specify that your application retains storage for textures. The following code sets up client storage:

glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);

For detailed information, see the OpenGL specification for the Apple client storage extension.
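Because OpenGL keeps referencing your memory, the allocation must outlive the texture. The following is a sketch of that ownership rule; the storage variable and function names are illustrative, not part of the extension:

#include <stdlib.h>
#include <OpenGL/gl.h>       // assumes a valid, current OpenGL context
#include <OpenGL/glext.h>

static GLubyte *gTexelStorage;  // must stay valid while the texture exists

static GLuint CreateClientStorageTexture(GLsizei w, GLsizei h)
{
    gTexelStorage = (GLubyte *)malloc((size_t)w * h * 4);
    // ... fill gTexelStorage with image data ...

    glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // OpenGL records the pointer instead of copying the pixels.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, gTexelStorage);
    return tex;
}

static void DestroyClientStorageTexture(GLuint tex)
{
    glDeleteTextures(1, &tex);  // only now is it safe to release the memory
    free(gTexelStorage);
    gTexelStorage = NULL;
}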
Apple Texture Range and Rectangle Texture

The Apple texture range extension (APPLE_texture_range) lets you define a region of memory used for texture data. Typically you specify an address range that encompasses the storage for a set of textures. This allows the OpenGL driver to optimize memory usage by creating a single memory mapping for all of the textures. You can also provide a hint as to how the data should be stored: cached or shared. The cached hint specifies to cache texture data in video memory. This hint is recommended when you have textures that you plan to use multiple times or that use linear filtering. The shared hint indicates that data should be mapped into a region of memory that enables the GPU to access the texture data directly (via DMA) without the need to copy it. This hint is best when you are using large images only once, perform nearest-neighbor filtering, or need to scale down the size of an image.

The texture range extension defines the following routine for making a single memory mapping for all of the textures used by your application:

void glTextureRangeAPPLE(GLenum target, GLsizei length, GLvoid *pointer);

target is a valid texture target, such as GL_TEXTURE_2D. length specifies the number of bytes in the address space referred to by the pointer parameter. *pointer points to the address space that your application provides for texture storage.

You provide the hint parameter and a parameter value to the OpenGL function glTexParameteri. The possible values for the storage hint parameter (GL_TEXTURE_STORAGE_HINT_APPLE) are GL_STORAGE_CACHED_APPLE or GL_STORAGE_SHARED_APPLE.

Some hardware requires texture dimensions to be a power of two before the hardware can upload the data using DMA. The rectangle texture extension (ARB_texture_rectangle) was introduced to allow texture targets for textures of any dimensions, that is, rectangle textures (GL_TEXTURE_RECTANGLE_ARB). You need to use the rectangle texture extension together with the Apple texture range extension to ensure OpenGL uses DMA to access your texture data. These extensions allow you to bypass the OpenGL driver, as shown in Figure 11-5.

Note that OpenGL does not use DMA for a power-of-two texture target (GL_TEXTURE_2D). So, unlike the rectangular texture, the power-of-two texture will incur one additional copy and performance won't be quite as fast. The performance typically isn't an issue because games, which are the applications most likely to use power-of-two textures, load textures at the start of a game or level and don't upload textures in real time as often as applications that use rectangular textures, which usually play video or display images. The next section has code examples that use the texture range and rectangle textures together with the Apple client storage extension.

Figure 11-5 The texture range extension eliminates a data copy (the driver maps your texture memory for direct GPU access)

For detailed information on these extensions, see the OpenGL specification for the Apple texture range extension and the OpenGL specification for the ARB texture rectangle extension.
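As a sketch of how the range and the storage hint fit together, with the pool pointer, its size, and the texture name supplied by your application:

#include <OpenGL/gl.h>       // assumes a valid, current OpenGL context
#include <OpenGL/glext.h>

// Map one block of application memory that backs a set of rectangle textures.
static void SetUpTexturePool(GLvoid *texturePool, GLsizei poolBytes, GLuint tex)
{
    // One mapping covers every texture whose data lives inside the pool.
    glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_ARB, poolBytes, texturePool);

    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
    // The shared hint asks the driver to let the GPU read the pool via DMA.
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB,
                    GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);
}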
Combining Client Storage with Texture Ranges

You can use the Apple client storage extension along with the Apple texture range extension to streamline the texture data path in your application. When used together, OpenGL moves texture data directly into video memory, as shown in Figure 11-6. The GPU directly accesses your data (via DMA). The setup is slightly different for rectangular and power-of-two textures. The code examples in this section upload textures to the GPU. You can also use these extensions to download textures; see “Downloading Texture Data” (page 136).

Figure 11-6 Combining extensions to eliminate data copies (texture data moves straight from the application to VRAM)

Listing 11-1 shows how to use the extensions for a rectangular texture. After enabling the texture rectangle extension you need to bind the rectangular texture to a target. Next, set up the storage hint. Call glPixelStorei to set up the Apple client storage extension. Finally, call the function glTexImage2D with a rectangular texture target and a pointer to your texture data.

Note: The texture rectangle extension limits what can be done with rectangular textures. To understand the limitations in detail, read the OpenGL extension for texture rectangles. See “Working with Non–Power-of-Two Textures” (page 129) for an overview of the limitations and an alternative to using this extension.

Listing 11-1 Using texture extensions for a rectangular texture

glEnable(GL_TEXTURE_RECTANGLE_ARB);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, id);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_CACHED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, sizex, sizey, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, myImagePtr);

Setting up a power-of-two texture to use these extensions is similar to what's needed to set up a rectangular texture, as you can see by looking at Listing 11-2. The difference is that the GL_TEXTURE_2D texture target replaces the GL_TEXTURE_RECTANGLE_ARB texture target.

Listing 11-2 Using texture extensions for a power-of-two texture

glBindTexture(GL_TEXTURE_2D, myTextureName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_CACHED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sizex, sizey, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, myImagePtr);

Optimal Data Formats and Types

The best format and data type combinations to use for texture data are:

● GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV
● GL_BGRA, GL_UNSIGNED_SHORT_1_5_5_5_REV
● GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE

The combination GL_RGBA and GL_UNSIGNED_BYTE needs to be swizzled by many cards when the data is loaded, so it's not recommended.

Working with Non–Power-of-Two Textures

OpenGL is often used to process video and images, which typically have dimensions that are not a power of two. Until OpenGL 2.0, the texture rectangle extension (ARB_texture_rectangle) provided the only option for a rectangular texture target. This extension, however, imposes the following restrictions on rectangular textures:

● You can't use mipmap filtering with them.
● You can use only these wrap modes: GL_CLAMP, GL_CLAMP_TO_EDGE, and GL_CLAMP_TO_BORDER.
● The texture cannot have a border.
● The texture uses non-normalized texture coordinates. (See Figure 11-7.)

OpenGL 2.0 adds another option for a rectangular texture target through the ARB_texture_non_power_of_two extension, which supports these textures without the limitations of the ARB_texture_rectangle extension. Before using it, you must check to make sure the functionality is available, as sketched below. You'll also want to consult the OpenGL specification for the non-power-of-two extension.
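One way to perform that check is to search the renderer's extensions string; a minimal sketch:

#include <string.h>
#include <OpenGL/gl.h>       // assumes a valid, current OpenGL context

// Returns nonzero if the current renderer advertises the named extension.
// Extension names never contain spaces, so a whole-token comparison avoids
// false matches on shared prefixes.
static int HasExtension(const char *name)
{
    const char *all = (const char *)glGetString(GL_EXTENSIONS);
    const char *ext = all;
    size_t len = strlen(name);
    while (ext != NULL && (ext = strstr(ext, name)) != NULL) {
        if ((ext == all || ext[-1] == ' ') && (ext[len] == ' ' || ext[len] == '\0'))
            return 1;
        ext += len;
    }
    return 0;
}

For example, HasExtension("GL_ARB_texture_non_power_of_two") distinguishes renderers that accept non-power-of-two GL_TEXTURE_2D textures from those that need the rectangle extension or one of the fallbacks described next.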
Figure 11-7 Normalized and non-normalized coordinates (normalized coordinates run from 0 to 1; non-normalized coordinates run from 0 to the texture's width and height)

If your code runs on a system that does not support either the ARB_texture_rectangle or ARB_texture_non_power_of_two extensions, you have these options for working with rectangular images:

● Use the OpenGL function gluScaleImage to scale the image so that it fits in a rectangle whose dimensions are a power of two. The image undoes the scaling effect when you draw the image from the properly sized rectangle back into a polygon that has the correct aspect ratio for the image.

Note: This option can result in the loss of some data. But if your application runs on hardware that doesn't support the ARB_texture_rectangle extension, you may need to use this option.

● Segment the image into power-of-two rectangles, as shown in Figure 11-8, by using one image buffer and different texture pointers. Notice how the sides and corners of the image shown in Figure 11-8 are segmented into increasingly smaller rectangles to ensure that every rectangle has dimensions that are a power of two. Special care may be needed at the borders between each segment to avoid filtering artifacts if the texture is scaled or rotated.

Figure 11-8 An image segmented into power-of-two tiles

Creating Textures from Image Data

OpenGL on the Macintosh provides several options for creating high-quality textures from image data. OS X supports floating-point pixel values, multiple image file formats, and a variety of color spaces. You can import a floating-point image into a floating-point texture. Figure 11-9 shows an image used to texture a cube.

Figure 11-9 Using an image as a texture for a cube

For Cocoa, you need to provide a bitmap representation. You can create an NSBitmapImageRep object from the contents of an NSView object. You can use the Image I/O framework (see CGImageSource Reference). This framework has support for many different file formats, floating-point data, and a variety of color spaces. Furthermore, it is easy to use. You can import image data as a texture simply by supplying a CFURL object that specifies the location of the texture. There is no need for you to convert the image to an intermediate integer RGB format.

Creating a Texture from a Cocoa View

You can use the NSView class or a subclass of it for texturing in OpenGL. The process is to first store the image data from an NSView object in an NSBitmapImageRep object so that the image data is in a format that can be readily used as texture data by OpenGL. Then, after setting up the texture target, you supply the bitmap data to the OpenGL function glTexImage2D. Note that you must have a valid, current OpenGL context set up.

Note: You can't create an OpenGL texture from image data that's provided by a view created from the following classes: NSProgressIndicator, NSMovieView, and NSOpenGLView. This is because these views do not use the window backing store, which is what the method initWithFocusedViewRect: reads from.

Listing 11-3 shows a routine that uses this process to create a texture from the contents of an NSView object.
A detailed explanation for each numbered line of code appears following the listing.

Listing 11-3 Building an OpenGL texture from an NSView object

-(void)myTextureFromView:(NSView*)theView textureName:(GLuint*)texName
{
    NSBitmapImageRep *bitmap = [theView
        bitmapImageRepForCachingDisplayInRect:[theView visibleRect]]; // 1
    int samplesPerPixel = 0;
    [theView cacheDisplayInRect:[theView visibleRect] toBitmapImageRep:bitmap]; // 2
    samplesPerPixel = [bitmap samplesPerPixel]; // 3
    glPixelStorei(GL_UNPACK_ROW_LENGTH, [bitmap bytesPerRow]/samplesPerPixel); // 4
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // 5
    if (*texName == 0) // 6
        glGenTextures(1, texName);
    glBindTexture(GL_TEXTURE_RECTANGLE_ARB, *texName); // 7
    glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // 8
    if (![bitmap isPlanar] && (samplesPerPixel == 3 || samplesPerPixel == 4)) { // 9
        glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0,
                     samplesPerPixel == 4 ? GL_RGBA8 : GL_RGB8,
                     [bitmap pixelsWide], [bitmap pixelsHigh], 0,
                     samplesPerPixel == 4 ? GL_RGBA : GL_RGB,
                     GL_UNSIGNED_BYTE, [bitmap bitmapData]);
    } else {
        // Your code to report unsupported bitmap data
    }
}

Here's what the code does:

1. Allocates an NSBitmapImageRep object.
2. Initializes the NSBitmapImageRep object with bitmap data from the current view.
3. Gets the number of samples per pixel.
4. Sets the appropriate unpacking row length for the bitmap.
5. Sets the byte-aligned unpacking that's needed for bitmaps that are 3 bytes per pixel.
6. If a texture object is not passed in, generates a new texture object.
7. Binds the texture name to the texture target.
8. Sets filtering so that it does not use a mipmap, which would be redundant for the texture rectangle extension.
9. Checks to see if the bitmap is nonplanar and is either a 24-bit RGB bitmap or a 32-bit RGBA bitmap. If so, retrieves the pixel data using the bitmapData method, passing it along with other appropriate parameters to the OpenGL function for specifying a 2D texture image.

Creating a Texture from a Quartz Image Source

Quartz images (CGImageRef data type) are defined in the Core Graphics framework (ApplicationServices/CoreGraphics.framework/CGImage.h) while the image source data type for reading image data and creating Quartz images from an image source is declared in the Image I/O framework (ApplicationServices/ImageIO.framework/CGImageSource.h). Quartz provides routines that read a wide variety of image data.

To use a Quartz image as a texture source, follow these steps:

1. Create a Quartz image source by supplying a CFURL object to the function CGImageSourceCreateWithURL.
2. Create a Quartz image by extracting an image from the image source, using the function CGImageSourceCreateImageAtIndex.
3. Extract the image dimensions using the functions CGImageGetWidth and CGImageGetHeight. You'll need these to calculate the storage required for the texture.
4. Allocate storage for the texture.
5. Create a color space for the image data.
6. Create a Quartz bitmap graphics context for drawing. Make sure to set up the context for premultiplied alpha.
7. Draw the image to the bitmap context.
8. Release the bitmap context.
9. Set the pixel storage mode by calling the function glPixelStorei.
10. Create and bind the texture.
11. Set up the appropriate texture parameters.
12. Call glTexImage2D, supplying the image data.
13. Free the image data.

Listing 11-4 shows a code fragment that performs these steps. Note that you must have a valid, current OpenGL context.

Listing 11-4 Using a Quartz image as a texture source

CGImageSourceRef myImageSourceRef = CGImageSourceCreateWithURL(url, NULL);
CGImageRef myImageRef = CGImageSourceCreateImageAtIndex(myImageSourceRef, 0, NULL);
GLuint myTextureName;
size_t width = CGImageGetWidth(myImageRef);
size_t height = CGImageGetHeight(myImageRef);
CGRect rect = {{0, 0}, {width, height}};
void *myData = calloc(width * 4, height);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef myBitmapContext = CGBitmapContextCreate(myData, width, height, 8,
    width*4, space, kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);
CGContextSetBlendMode(myBitmapContext, kCGBlendModeCopy);
CGContextDrawImage(myBitmapContext, rect, myImageRef);
CGContextRelease(myBitmapContext);
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glGenTextures(1, &myTextureName);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, myTextureName);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA8, width, height, 0,
    GL_BGRA_EXT, GL_UNSIGNED_INT_8_8_8_8_REV, myData);
free(myData);

For more information on using Quartz, see Quartz 2D Programming Guide, CGImage Reference, and CGImageSource Reference.

Getting Decompressed Raw Pixel Data from a Source Image

You can use the Image I/O framework together with a Quartz data provider to obtain decompressed raw pixel data from a source image, as shown in Listing 11-5. You can then use the pixel data for your OpenGL texture. The data has the same format as the source image, so you need to make sure that you use a source image that has the layout you need. Alpha is not premultiplied for the pixel data obtained in Listing 11-5, but alpha is premultiplied for the pixel data you get when using the code described in “Creating a Texture from a Cocoa View” (page 131) and “Creating a Texture from a Quartz Image Source” (page 133).

Listing 11-5 Getting pixel data from a source image

CGImageSourceRef myImageSourceRef = CGImageSourceCreateWithURL(url, NULL);
CGImageRef myImageRef = CGImageSourceCreateImageAtIndex(myImageSourceRef, 0, NULL);
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(myImageRef));
void *pixelData = (void *)CFDataGetBytePtr(data);
Downloading Texture Data

A texture download operation uses the same data path as an upload operation except that the data path is reversed. Downloading transfers texture data, using direct memory access (DMA), from VRAM into a texture that can then be accessed directly by your application. You can use the Apple client storage, texture range, and texture rectangle extensions for downloading, just as you would for uploading.

To download texture data using the Apple client storage, texture range, and texture rectangle extensions:

● Bind a texture name to a texture target.
● Set up the extensions.
● Call the function glCopyTexSubImage2D to copy a texture subimage from the specified window coordinates. This call initiates an asynchronous DMA transfer to system memory the next time you call a flush routine. The CPU doesn't wait for this call to complete.
● Call the function glGetTexImage to transfer the texture into system memory. Note that the parameters must match the ones that you used to set up the texture when you called the function glTexImage2D. This call is the synchronization point; it waits until the transfer is finished.

Listing 11-6 shows a code fragment that downloads a rectangular texture that uses shared memory. Your application processes data between the glCopyTexSubImage2D and glGetTexImage calls. How much processing? Enough so that your application does not need to wait for the GPU.

Listing 11-6 Code that downloads texture data

glBindTexture(GL_TEXTURE_RECTANGLE_ARB, myTextureName);
glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);
glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGBA, sizex, sizey, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, myImagePtr);
glCopyTexSubImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, 0, 0, 0, 0, image_width, image_height);
glFlush();
// Do other work processing here, using a double or triple buffer
glGetTexImage(GL_TEXTURE_RECTANGLE_ARB, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);

Double Buffering Texture Data

When you use any technique that allows the GPU to access your texture data directly, such as the texture range extension, it's possible for the GPU and CPU to access the data at the same time. To avoid such a collision, you must synchronize the GPU and the CPU. The simplest approach is shown in Figure 11-10. Your application works on the data, flushes it to the GPU, and waits until the GPU is finished before working on the data again.

One technique for ensuring that the GPU is finished executing commands before your application sends more data is to insert a token into the command stream and use that to determine when the CPU can touch the data again, as described in “Use Fences for Finer-Grained Synchronization” (page 98). Figure 11-10 uses the fence extension command glFinishObject to synchronize buffer updates for a stream of single-buffered texture data. Notice that when the CPU is processing texture data, the GPU is idle. Similarly, when the GPU is processing texture data, the CPU is idle. It's much more efficient for the GPU and CPU to work asynchronously than to work synchronously. Double buffering data is a technique that allows you to process data asynchronously, as shown in Figure 11-11 (page 138).

Figure 11-10 Single-buffered data (the CPU and GPU alternate on one buffer; each frame the CPU waits on glFinishObject(..., 1) before touching the buffer that was flushed to the GPU)

To double buffer data, you must supply two sets of data to work on. Note in Figure 11-11 that while the GPU is rendering one frame of data, the CPU processes the next. After the initial startup, neither processing unit is idle. Using the glFinishObject function provided by the fence extension ensures that buffer updating is synchronized.
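A sketch of the double-buffered pattern using the APPLE fence commands shown earlier in this document; UpdateTexture and DrawWithTexture are placeholders for your own update and draw code, and the loop runs for however many frames your application renders:

#include <OpenGL/gl.h>       // assumes a valid, current OpenGL context
#include <OpenGL/glext.h>

extern void UpdateTexture(int bufferIndex);    // CPU-side update (placeholder)
extern void DrawWithTexture(int bufferIndex);  // submits GL commands (placeholder)

// Two buffers and two fences: the CPU writes one buffer while the GPU
// still reads the other. Assumes fences created with glGenFencesAPPLE.
static void DrawLoop(GLuint fences[2], int frameCount)
{
    // Set each fence once so the first wait on it returns immediately.
    glSetFenceAPPLE(fences[0]);
    glSetFenceAPPLE(fences[1]);

    int current = 0;
    for (int frame = 0; frame < frameCount; frame++) {
        // Wait only for the commands that last used this buffer.
        glFinishFenceAPPLE(fences[current]);
        UpdateTexture(current);
        DrawWithTexture(current);
        glSetFenceAPPLE(fences[current]);  // mark this buffer's commands
        glFlush();
        current = 1 - current;             // swap buffers
    }
}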
Figure 11-11 Double-buffered data (the CPU updates one buffer while the GPU renders from the other; glFinishObject(..., n) waits on the fence for buffer n before the CPU reuses it, and each frame ends with glFlush)

Customizing the OpenGL Pipeline with Shaders

OpenGL 1.x used fixed functions to deliver a useful graphics pipeline to application developers. To configure the various stages of the pipeline shown in Figure 12-1, applications called OpenGL functions to tweak the calculations that were performed for each vertex and fragment. Complex algorithms required multiple rendering passes and dozens of function calls to configure the calculations that the programmer desired. Extensions offered new configuration options, but did not change the complex nature of OpenGL programming.

Figure 12-1 OpenGL fixed-function pipeline (primitives and image data from the application pass through transform and lighting, primitive assembly, and clipping on the vertex side, then texturing, fog, alpha, stencil, and depth tests, and framebuffer blending on the fragment side)

Starting with OpenGL 2.0, some stages of the OpenGL pipeline can be replaced with shaders. A shader is a program written in a special shading language. This program is compiled by OpenGL and uploaded directly into the graphics hardware. Figure 12-2 shows where your applications can hook into the pipeline with shaders.

Figure 12-2 OpenGL shader pipeline (vertex shaders, geometry shaders, and fragment shaders replace the corresponding fixed-function stages)

Shaders offer a considerable number of advantages to your application:

● Shaders give you precise control over the operations that are performed to render your images.
● Shaders allow for algorithms to be written in a terse, expressive format. Rather than writing complex blocks of configuration calls to implement a mathematical operation, you write code that expresses the algorithm directly.
● Older graphics processors implemented the fixed-function pipeline in hardware or microcode, but now graphics processors are general-purpose computing devices. The fixed-function pipeline is itself implemented as a shader.
● Shaders allow for longer and more complex algorithms to be implemented using a single rendering pass. Because you have extensive control over the pipeline, it is also easier to implement multipass algorithms without requiring the data to be read back from the GPU.
● Your application can switch between different shaders with a single function call. In contrast, configuring the fixed-function pipeline incurs significant function-call overhead.

If your application uses the fixed-function pipeline, a critical task is to replace those tasks with shaders. If you are new to shaders, OpenGL Shading Language, by Randi J. Rost, is an excellent guide for those looking to learn more about writing shaders and integrating them into your application. The rest of this chapter provides some boilerplate code, briefly describes the extensions that implement shaders, and discusses tools that Apple provides to assist you in writing shaders.
Shader Basics

OpenGL 2.0 offers vertex and fragment shaders, to take over the processing of those two stages of the graphics pipeline. These same capabilities are also offered by the ARB_shader_objects, ARB_vertex_shader, and ARB_fragment_shader extensions. Vertex shading is available on all hardware running OS X v10.5 or later. Fragment shading is available on all hardware running OS X v10.6 and the majority of hardware running OS X v10.5.

Creating a shader program is an expensive operation compared to other OpenGL state changes. Listing 12-1 presents a typical strategy to load, compile, and verify a shader program.

Listing 12-1 Loading a shader

/** Initialization-time for shader **/
GLuint shader, prog;
GLchar *shaderText = "... shader text ...";
GLint logLen = 0;
GLchar log[1024];
// uniformCt, uniformLoc, uniformName, attribCt, attribLoc, and attribName
// are application-defined.

// Create IDs for the program and shader
prog = glCreateProgram();
shader = glCreateShader(GL_VERTEX_SHADER);

// Define shader text
glShaderSource(shader, 1, (const GLchar **)&shaderText, NULL);

// Compile shader
glCompileShader(shader);

// Associate shader with program
glAttachShader(prog, shader);

// Link program
glLinkProgram(prog);

// Validate program
glValidateProgram(prog);

// Check the status of the compile/link
glGetProgramiv(prog, GL_INFO_LOG_LENGTH, &logLen);
if (logLen > 0) {
    // Show any errors as appropriate
    glGetProgramInfoLog(prog, sizeof(log), &logLen, log);
    fprintf(stderr, "Prog Info Log: %s\n", log);
}

// Retrieve all uniform locations that are determined during link phase
for (i = 0; i < uniformCt; i++) {
    uniformLoc[i] = glGetUniformLocation(prog, uniformName[i]);
}

// Retrieve all attrib locations that are determined during link phase
for (i = 0; i < attribCt; i++) {
    attribLoc[i] = glGetAttribLocation(prog, attribName[i]);
}

/** Render stage for shaders **/
glUseProgram(prog);

This code loads the text source for a vertex shader, compiles it, and adds it to the program. A more complex example might also attach fragment and geometry shaders. The program is linked and validated for correctness. Finally, the program retrieves information about the inputs to the shader and stores them in its own arrays. When the application is ready to use the shader, it calls glUseProgram to make it the current shader.

For best performance, your application should create shaders when your application is initialized, and not inside the rendering loop. Inside your rendering loop, you can quickly switch in the appropriate shaders by calling glUseProgram. For best performance, use the vertex array object extension to also switch in the vertex pointers. See “Vertex Array Object” (page 116) for more information.

Advanced Shading Extensions

In addition to the standard shaders, some Macs offer additional shading extensions to reveal advanced hardware capabilities. Not all of these extensions are available on all hardware, so you need to assess whether the features of each extension are worth implementing in your application.

Transform Feedback

The EXT_transform_feedback extension is available on all hardware running OS X v10.5 or later. With the feedback extension, you can capture the results of the vertex shader into a buffer object, which can be used as an input to future commands. This is similar to the pixel buffer object technique described in “Using Pixel Buffer Objects to Keep Data on the GPU” (page 124), but more directly captures the results you desire.
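A minimal sketch of the capture pattern, assuming a program object prog whose vertex shader declares a varying named myVarying, and a buffer object feedbackBuffer allocated elsewhere with glBufferData; the entry points and tokens come from the EXT_transform_feedback specification:

// Before linking: name the vertex outputs to capture.
const char *varyings[] = { "myVarying" };
glTransformFeedbackVaryingsEXT(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS_EXT);
glLinkProgram(prog);

// Bind a buffer object to receive the captured data.
glBindBufferBaseEXT(GL_TRANSFORM_FEEDBACK_BUFFER_EXT, 0, feedbackBuffer);

// Optionally skip rasterization when only the captured data matters.
glEnable(GL_RASTERIZER_DISCARD_EXT);
glBeginTransformFeedbackEXT(GL_POINTS);
glDrawArrays(GL_POINTS, 0, vertexCount);
glEndTransformFeedbackEXT();
glDisable(GL_RASTERIZER_DISCARD_EXT);
// feedbackBuffer now holds the shader outputs and can be rebound, for
// example, as a vertex buffer for a later pass.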
GPU Shader 4

The EXT_gpu_shader4 extension extends the OpenGL shading language to offer new operations, including:

● Full integer support.
● Built-in shader variable to reference the current vertex.
● Built-in shader variable to reference the current primitive. This makes it easier to use the same static vertex data to render multiple primitives, using a shader and uniform variables to customize each instance of that primitive.
● Unfiltered texture fetches using integer coordinates.
● Querying the size of a texture within a shader.
● Offset texture lookups.
● Explicit gradient and LOD texture lookups.
● Depth cubemaps.

Geometry Shaders

The EXT_geometry_shader4 extension allows you to create geometry shaders. A geometry shader accepts transformed vertices and can add or remove vertices before passing them down to the rasterizer. This allows the application to add or remove geometry based on the calculated values in the vertex. For example, given a triangle and its neighboring vertices, your application could emit additional vertices to create a more accurate appearance of a curved surface.

Uniform Buffers

The EXT_bindable_uniform extension allows your application to allocate buffer objects and use them as the source for uniform data in your shaders. Instead of relying on a single block of uniform memory supplied by OpenGL, your application allocates buffer objects using the same API that it uses to implement vertex buffer objects (“Vertex Buffers” (page 107)). Instead of making a function call for each uniform variable you want to change, you can swap all of the uniform data by binding to a different uniform buffer.
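A sketch of that swap pattern under EXT_bindable_uniform, assuming a linked program prog whose shader declares a bindable uniform (for example, "bindable uniform vec4 lights[8];"); lightSetA and lightSetB are placeholder arrays, and a real implementation should lay out the buffer contents using glGetUniformOffsetEXT rather than assuming a tight layout:

// Query how much storage the bindable uniform needs, then allocate it.
GLint loc = glGetUniformLocation(prog, "lights");
GLint size = glGetUniformBufferSizeEXT(prog, loc);

GLuint buffers[2];
glGenBuffers(2, buffers);
glBindBuffer(GL_UNIFORM_BUFFER_EXT, buffers[0]);
glBufferData(GL_UNIFORM_BUFFER_EXT, size, lightSetA, GL_STATIC_DRAW);
glBindBuffer(GL_UNIFORM_BUFFER_EXT, buffers[1]);
glBufferData(GL_UNIFORM_BUFFER_EXT, size, lightSetB, GL_STATIC_DRAW);

// Attach one buffer; switching the entire uniform set is a single call.
glUniformBufferEXT(prog, loc, buffers[0]);
// ... draw with the first set ...
glUniformBufferEXT(prog, loc, buffers[1]);
// ... draw with the second set ...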
Techniques for Scene Antialiasing

Aliasing is the bane of the digital domain. In the early days of the personal computer, jagged edges and blocky graphics were accepted by the user simply because not much could be done to correct them. Now, with faster hardware and higher-resolution displays, there are several antialiasing techniques that can smooth edges to achieve a more realistic scene.

OpenGL supports antialiasing that operates at the level of lines and polygons as well as at the level of the full scene. This chapter discusses techniques for full scene antialiasing (FSAA). If your application needs point or line antialiasing instead of full scene antialiasing, use the built-in OpenGL point and line antialiasing functions. These are described in Section 3.4.2 of the OpenGL specification.

The three antialiasing techniques in use today are multisampling, supersampling, and alpha channel blending:

● Multisampling defines a technique for sampling pixel content at multiple locations for each pixel. This is a good technique to use if you want to smooth polygon edges.
● Supersampling renders at a much higher resolution than what's needed for the display. Prior to drawing the content to the display, OpenGL scales and filters the content to the appropriate resolution. This is a good technique to use when you want to smooth texture interiors in addition to polygon edges.
● Alpha channel blending uses the alpha value of a fragment to control how to blend the fragment with the pixel values that are already in the framebuffer. It's a good technique to use when you want to ensure that foreground and background images are composited smoothly.

The ARB_multisample extension defines a specification for full scene antialiasing. It describes multisampling and alpha channel sampling. The specification does not specifically mention supersampling, but its wording doesn't preclude supersampling. The antialiasing methods that are available depend on the hardware, and the actual implementation depends on the vendor. Some graphics cards support antialiasing using a mixture of multisampling and supersampling. The methodology used to select the samples can vary as well. Your best approach is to query the renderer to find out exactly what is supported. OpenGL lets you provide a hint to the renderer as to which antialiasing technique you prefer. Hints are available as renderer attributes that you supply when you create a pixel format object.

A smaller subset of renderers support the EXT_framebuffer_blit and EXT_framebuffer_multisample extensions. These extensions allow your application to create multisampled offscreen framebuffer objects, render detailed scenes to them, with precise control over when the multisampled renderbuffer is resolved to a single displayable color per pixel.

Guidelines

Keep the following in mind when you set up full scene antialiasing:

● Although a system may have enough VRAM to accommodate a multisample buffer, a large buffer can affect the ability of OpenGL to maintain a properly working texture set. Keep in mind that the buffers associated with the rendering context (depth and stencil) increase in size by a factor equal to the number of samples per pixel.
● The OpenGL driver allocates the memory needed for the multisample buffer; your application should not allocate this memory.
● Any antialiasing algorithm that operates on the full scene requires additional computing resources. There is a tradeoff between performance and quality. For that reason, you may want to provide a user interface that allows the user to enable and disable FSAA, or to choose the level of quality for antialiasing.
● The commands glEnable(GL_MULTISAMPLE) and glDisable(GL_MULTISAMPLE) are ignored on some hardware because some graphics cards have the feature enabled all the time. That doesn't mean that you should not call these commands, because you'll certainly need them on hardware that doesn't ignore them.
● A hint as to the variant of sampling you want is a suggestion, not a command. Not all hardware supports all types of antialiasing. Other hardware mixes multisampling with supersampling techniques. The driver dictates the type of antialiasing that's actually used in your application.
● The best way to find out which sample modes are supported is to call the CGL function CGLDescribeRenderer with the renderer property kCGLRPSampleModes or kCGLRPSampleAlpha. You can also determine how many samples the renderer supports by calling CGLDescribeRenderer with the renderer property kCGLRPMaxSamples.
General Approach

The general approach to setting up full scene antialiasing is as follows:

1. Check to see what's supported. Not all renderers support the ARB multisample extension, so you need to check for this functionality (see “Detecting Functionality” (page 83)). To find out what type of antialiasing a specific renderer supports, call the function CGLDescribeRenderer. Supply the renderer property kCGLRPSampleModes to find out whether the renderer supports multisampling and supersampling. Supply kCGLRPSampleAlpha to see whether the renderer supports alpha sampling.

You can choose to exclude unsupported hardware from the pixel format search by specifying only the hardware that supports multisample antialiasing. Keep in mind that if you exclude unsupported hardware, the unsupported displays will not render anything. If you include unsupported hardware, OpenGL uses normal aliased rendering to the unsupported displays and multisampled rendering to supported displays.

2. Include these buffer attributes in the attributes array:

● The appropriate sample buffer attribute constant (NSOpenGLPFASampleBuffers or kCGLPFASampleBuffers) along with the number of multisample buffers. At this time the specification allows only one multisample buffer.
● The appropriate samples constant (NSOpenGLPFASamples or kCGLPFASamples) along with the number of samples per pixel. You can supply 2, 4, 6, or more depending on what the renderer supports and the amount of VRAM available. The value that you supply affects the quality, memory use, and speed of the multisampling operation. For fastest performance, and to use the least amount of video memory, specify 2 samples. When you need more quality, specify 4 or more.
● The no recovery attribute (NSOpenGLPFANoRecovery or kCGLPFANoRecovery). Although enabling this attribute is not mandatory, it's recommended to prevent OpenGL from using software fallback as a renderer. Multisampled antialiasing performance is slow in the software renderer.

3. Optionally provide a hint for the type of antialiasing you want: multisampling, supersampling, or alpha sampling. See “Hinting for a Specific Antialiasing Technique” (page 147).

4. Enable multisampling with the following command:

glEnable(GL_MULTISAMPLE);

Regardless of the enabled state, OpenGL always uses the multisample buffer if you supply the appropriate buffer attributes when you set up the pixel format object. If you haven't supplied the appropriate attributes, enabling multisampling has no effect. When multisampling is disabled, all coverage values are set to 1, which gives the appearance of rendering without multisampling. Some graphics hardware leaves multisampling enabled all the time. However, don't rely on hardware to have multisampling enabled; use glEnable to programmatically turn on this feature.

5. Optionally provide hints for the rendering algorithm. You perform this optional step only if you want OpenGL to compute coverage values by a method other than uniformly weighting samples and averaging them. Some hardware supports a multisample filter hint through an OpenGL extension, GL_NV_multisample_filter_hint. This hint allows an OpenGL implementation to use an alternative method of resolving the color of multisampled pixels. You can specify that OpenGL uses faster or nicer rendering by calling the OpenGL function glHint, passing the constant GL_MULTISAMPLE_FILTER_HINT_NV as the target parameter and GL_FASTEST or GL_NICEST as the mode parameter. Hints allow the hardware to optimize the output if it can. There is no performance penalty or returned error for issuing a hint that's not supported. For more information, see the OpenGL extension registry for NV_multisample_filter_hint.
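As a sketch of steps 2 through 4 using the CGL constants, with the optional hint from step 3 included on one attribute line (the attribute values here, such as four samples, are illustrative choices):

#include <OpenGL/OpenGL.h>   // CGL

// Request a multisampled pixel format: one multisample buffer,
// four samples per pixel, no software fallback.
static CGLPixelFormatObj ChooseMultisampleFormat(void)
{
    CGLPixelFormatAttribute attribs[] = {
        kCGLPFAAccelerated,
        kCGLPFADoubleBuffer,
        kCGLPFASampleBuffers, (CGLPixelFormatAttribute)1,
        kCGLPFASamples,       (CGLPixelFormatAttribute)4,
        kCGLPFANoRecovery,
        kCGLPFAMultisample,   // optional hint: prefer multisampling
        (CGLPixelFormatAttribute)0
    };
    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    if (CGLChoosePixelFormat(attribs, &pix, &npix) != kCGLNoError)
        return NULL;          // fall back to a non-multisampled format
    return pix;
}

// After the context built from this format is current: glEnable(GL_MULTISAMPLE);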
Hinting for a Specific Antialiasing Technique

When you set up your renderer and buffer attributes for full scene antialiasing, you can specify a hint to prefer one antialiasing technique over the others. If the underlying renderer does not have sufficient resources to support what you request, OpenGL ignores the hint. If you do not supply the appropriate buffer attributes when you create a pixel format object, then the hint does nothing. Table 13-1 lists the hinting constants available for the NSOpenGLPixelFormat class and CGL.

Table 13-1 Antialiasing hints

API       | Multisampling           | Supersampling            | Alpha blending
NSOpenGL  | NSOpenGLPFAMultisample  | NSOpenGLPFASupersample   | NSOpenGLPFASampleAlpha
CGL       | kCGLPFAMultisample      | kCGLPFASupersample       | kCGLPFASampleAlpha

Concurrency and OpenGL

Concurrency is the notion of multiple things happening at the same time. In the context of computers, concurrency usually refers to executing tasks on more than one processor at the same time. By performing work in parallel, tasks complete sooner, and applications become more responsive to the user. The good news is that well-designed OpenGL applications already exhibit a specific form of concurrency: concurrency between application processing on the CPU and OpenGL processing on the GPU. Many of the techniques introduced in “OpenGL Application Design Strategies” (page 89) are aimed specifically at creating OpenGL applications that exhibit great CPU-GPU parallelism. However, modern computers not only contain a powerful GPU, but also contain multiple CPUs. Sometimes those CPUs have multiple cores, each capable of performing calculations independently of the others. It is critical that applications be designed to take advantage of concurrency where possible. Designing a concurrent application means decomposing the work your application performs into subtasks and identifying which tasks can safely operate in parallel and which tasks must be executed sequentially, that is, which tasks are dependent on either resources used by other tasks or results returned from those tasks.

Each process in OS X is made up of one or more threads. A thread is a stream of execution that runs code for the process. Multicore systems offer true concurrency by allowing multiple threads to execute simultaneously. Apple offers both traditional threads and a feature called Grand Central Dispatch (GCD). Grand Central Dispatch allows you to decompose your application into smaller tasks without requiring the application to manage threads. GCD allocates threads based on the number of cores available on the system and automatically schedules tasks to those threads. At a higher level, Cocoa offers NSOperation and NSOperationQueue to provide an Objective-C abstraction for creating and scheduling units of work. On OS X v10.6, operation queues use GCD to dispatch work; on OS X v10.5, operation queues create threads to execute your application's tasks.

This chapter does not attempt to describe these technologies in detail. Before you consider how to add concurrency to your OpenGL application, you should first read Concurrency Programming Guide. If you plan on managing threads manually, you should also read Threading Programming Guide. Regardless of which technique you use, there are additional restrictions when calling OpenGL on multithreaded systems. This chapter helps you understand when multithreading improves your OpenGL application's performance, the restrictions OpenGL places on multithreaded applications, and common design strategies you might use to implement concurrency in an OpenGL application.
Some of these design techniques can get you an improvement in just a few lines of code.

Identifying Whether an OpenGL Application Can Benefit from Concurrency

Creating a multithreaded application requires significant effort in the design, implementation, and testing of your application. Threads also add complexity and overhead to an application. For example, your application may need to copy data so that it can be handed to a worker thread, or multiple threads may need to synchronize access to the same resources. Before you attempt to implement concurrency in an OpenGL application, you should optimize your OpenGL code in a single-threaded environment using the techniques described in “OpenGL Application Design Strategies” (page 89). Focus on achieving great CPU-GPU parallelism first and then assess whether concurrent programming can provide an additional performance benefit.

A good candidate has either or both of the following characteristics:

● The application performs many tasks on the CPU that are independent of OpenGL rendering. Games, for example, simulate the game world, calculate artificial intelligence from computer-controlled opponents, and play sound. You can exploit parallelism in this scenario because many of these tasks are not dependent on your OpenGL drawing code.
● Profiling your application has shown that your OpenGL rendering code spends a lot of time in the CPU. In this scenario, the GPU is idle because your application is incapable of feeding it commands fast enough. If your CPU-bound code has already been optimized, you may be able to improve its performance further by splitting the work into tasks that execute concurrently.

If your application is blocked waiting for the GPU, and has no work it can perform in parallel with its OpenGL drawing commands, then it is not a good candidate for concurrency. If the CPU and GPU are both idle, then your OpenGL needs are probably simple enough that no further tuning is useful. For more information on how to determine where your application spends its time, see “Tuning Your OpenGL Application” (page 155).

OpenGL Restricts Each Context to a Single Thread

Each thread in an OS X process has a single current OpenGL rendering context. Every time your application calls an OpenGL function, OpenGL implicitly looks up the context associated with the current thread and modifies the state or objects associated with that context. OpenGL is not reentrant. If you modify the same context from multiple threads simultaneously, the results are unpredictable. Your application might crash or it might render improperly. If for some reason you decide to set more than one thread to target the same context, then you must synchronize threads by placing a mutex around all OpenGL calls to the context, such as gl* and CGL*. OpenGL commands that block, such as fence commands, do not synchronize threads.
GCD and NSOperationQueue objects can both execute your tasks on a thread of their choosing. They may create a thread specifically for that task, or they may reuse an existing thread. But in either case, you cannot guarantee which thread executes the task. For an OpenGL application, that means:

● Each task must set the context before executing any OpenGL commands.
● Your application must ensure that two tasks that access the same context are not allowed to execute concurrently.

Strategies for Implementing Concurrency in OpenGL Applications

A concurrent OpenGL application wants to focus on CPU parallelism so that OpenGL can provide more work to the GPU. Here are a few recommended strategies for implementing concurrency in an OpenGL application:

● Decompose your application into OpenGL and non-OpenGL tasks that can execute concurrently. Your OpenGL rendering code executes as a single task, so it still executes in a single thread. This strategy works best when your application has other tasks that require significant CPU processing.
● If performance profiling reveals that your application spends a lot of CPU time inside OpenGL, you can move some of that processing to another thread by enabling the multithreading in the OpenGL engine. The advantage of this method is its simplicity; enabling the multithreaded OpenGL engine takes just a few lines of code. See “Multithreaded OpenGL” (page 150).
● If your application spends a lot of CPU time preparing data to send to OpenGL, you can divide the work between tasks that prepare rendering data and tasks that submit rendering commands to OpenGL. See “Perform OpenGL Computations in a Worker Task” (page 151).
● If your application has multiple scenes it can render simultaneously or work it can perform in multiple contexts, it can create multiple tasks, with an OpenGL context per task. If the contexts can share the same resources, you can use context sharing when the contexts are created to share surfaces or OpenGL objects: display lists, textures, vertex and fragment programs, vertex array objects, and so on. See “Use Multiple OpenGL Contexts” (page 153).

Multithreaded OpenGL

Whenever your application calls OpenGL, the renderer processes the parameters to put them in a format that the hardware understands. The time required to process these commands varies depending on whether the inputs are already in a hardware-friendly format, but there is always some overhead in preparing commands for the hardware.

If your application spends a lot of time performing calculations inside OpenGL, and you've already taken steps to pick ideal data formats, your application might gain an additional benefit by enabling multithreading inside the OpenGL engine. The multithreaded OpenGL engine automatically creates a worker thread and transfers some of its calculations to that thread. On a multicore system, this allows internal OpenGL calculations performed on the CPU to act in parallel with your application, improving performance. Synchronizing functions continue to block the calling thread. Listing 14-1 shows the code required to enable the multithreaded OpenGL engine.

Listing 14-1 Enabling the multithreaded OpenGL engine

CGLError err = 0;
CGLContextObj ctx = CGLGetCurrentContext();

// Enable the multithreading
err = CGLEnable(ctx, kCGLCEMPEngine);

if (err != kCGLNoError) {
    // Multithreaded execution may not be available
    // Insert your code to take appropriate action
}

Note: Enabling or disabling multithreaded execution causes OpenGL to flush previous commands as well as incurring the overhead of setting up the additional thread. You should enable or disable multithreaded execution in an initialization function rather than in the rendering loop.
Perform OpenGL Computations in a Worker Task

Some applications perform lots of calculations on their data before passing that data down to the OpenGL renderer. For example, the application might create new geometry or animate existing geometry. Where possible, such calculations should be performed inside OpenGL. For example, vertex shaders and the transform feedback extension might allow you to perform these calculations entirely within OpenGL. This takes advantage of the greater parallelism available inside the GPU and reduces the overhead of copying results between your application and OpenGL.

The approach described in Figure 9-3 (page 92) alternates between updating OpenGL objects and executing rendering commands that use those objects. OpenGL renders on the GPU in parallel with your application’s updates running on the CPU. If the calculations performed on the CPU take more processing time than those on the GPU, then the GPU spends more time idle. In this situation, you may be able to take advantage of parallelism on systems with multiple CPUs: split your OpenGL rendering code into separate calculation and processing tasks, and run them in parallel. Figure 14-1 shows a clear division of labor: one task produces data that is consumed by the second and submitted to OpenGL.

Figure 14-1  CPU processing and OpenGL on separate threads (thread 1 produces shared texture and vertex data; thread 2 owns the OpenGL context and state and draws through the OpenGL surface to the framebuffer)

For best performance, your application should avoid copying data between the tasks. For example, rather than calculating the data in one task and copying it into a vertex buffer object in the other, map the vertex buffer object in the setup code and hand the pointer directly to the worker task.

If your application can further decompose the modifications task into subtasks, you may see better benefits. For example, assume two or more vertex buffers, each of which needs to be updated before submitting drawing commands. Each can be recalculated independently of the others. In this scenario, the modification of each buffer becomes an operation, using an NSOperationQueue object to manage the work:

1. Set the current context.
2. Map the first buffer.
3. Create an NSOperation object whose task is to fill that buffer.
4. Queue that operation on the operation queue.
5. Perform steps 2 through 4 for the other buffers.
6. Call waitUntilAllOperationsAreFinished on the operation queue.
7. Unmap the buffers.
8. Execute rendering commands.

On a multicore system, multiple threads of execution may allow the buffers to be filled simultaneously. Steps 7 and 8 could even be performed by a separate operation queued onto the same operation queue, provided that operation set the proper dependencies.
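One way to arrange those steps in code is sketched below. This is our illustration rather than a listing from the guide: it assumes the vertex buffer objects already exist, that each holds vertexCount floats, and that MyFillVertexData stands in for your real computation.

    #import <Foundation/Foundation.h>
    #import <OpenGL/OpenGL.h>
    #import <OpenGL/gl.h>

    // Hypothetical worker: writes vertex data through the mapped pointer.
    static void MyFillVertexData(GLfloat *vertices, size_t count)
    {
        for (size_t i = 0; i < count; i++)
            vertices[i] = 0.0f;           // placeholder for real geometry work
    }

    static void MyUpdateBuffers(CGLContextObj ctx, const GLuint *buffers,
                                int bufferCount, size_t vertexCount)
    {
        CGLSetCurrentContext(ctx);                              // step 1
        NSOperationQueue *queue = [[NSOperationQueue alloc] init];

        for (int i = 0; i < bufferCount; i++) {                 // steps 2-5
            glBindBuffer(GL_ARRAY_BUFFER, buffers[i]);
            GLfloat *ptr = (GLfloat *)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
            // The operation touches only the mapped pointer, never the context.
            [queue addOperation:[NSBlockOperation blockOperationWithBlock:^{
                MyFillVertexData(ptr, vertexCount);
            }]];
        }

        [queue waitUntilAllOperationsAreFinished];              // step 6

        for (int i = 0; i < bufferCount; i++) {                 // step 7
            glBindBuffer(GL_ARRAY_BUFFER, buffers[i]);
            glUnmapBuffer(GL_ARRAY_BUFFER);
        }
        // Step 8: issue the rendering commands that consume the buffers here.
    }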
Use Multiple OpenGL Contexts

If your application has multiple scenes that can be rendered in parallel, you can use a context for each scene you need to render. Create one context for each scene and assign each context to an operation or task. Because each task has its own context, all can submit rendering commands in parallel.

The Apple-specific OpenGL APIs also provide the option for sharing data between contexts, as shown in Figure 14-2. Shared resources are automatically set up as mutual exclusion (mutex) objects. Notice that thread 2 draws to a pixel buffer that is linked to the shared state as a texture. Thread 1 can then draw using that texture.

Figure 14-2  Two contexts on separate threads (each thread owns its own context and state; both contexts reference the same OpenGL shared state, with thread 2 rendering to a pbuffer surface and thread 1 rendering through the OpenGL surface to the framebuffer)

This is the most complex model for designing an application. Changes to objects in one context must be flushed so that other contexts see the changes. Similarly, when your application finishes operating on an object, it must flush those commands before exiting, to ensure that all rendering commands have been submitted to the hardware.

Guidelines for Threading OpenGL Applications

Follow these guidelines to ensure successful threading in an application that uses OpenGL:
● Use only one thread per context. OpenGL commands for a specific context are not thread safe. You should never have more than one thread accessing a single context simultaneously.
● Contexts that are on different threads can share object resources. For example, it is acceptable for one context in one thread to modify a texture and a second context in a second thread to modify the same texture. The shared object handling provided by the Apple APIs automatically protects against thread errors, provided that your application follows the “one thread per context” guideline.
● When you use an NSOpenGLView object with OpenGL calls that are issued from a thread other than the main one, you must set up mutex locking. Mutex locking is necessary because unless you override the default behavior, the main thread may need to communicate with the view for such things as resizing. Applications that use Objective-C with multithreading can lock contexts using the functions CGLLockContext and CGLUnlockContext. If you want to perform rendering in a thread other than the main one, you can lock the context that you want to access and safely execute OpenGL commands. The locking calls must be placed around all of your OpenGL calls in all threads. CGLLockContext blocks the thread it is on until all other threads have unlocked the same context using the function CGLUnlockContext. You can use CGLLockContext recursively. Context-specific CGL calls by themselves do not require locking, but you can guarantee serial processing for a group of calls by surrounding them with CGLLockContext and CGLUnlockContext. Keep in mind that calls from the OpenGL API (the API provided by the Khronos OpenGL Working Group) require locking.
● Keep track of the current context. When switching threads, it is easy to switch contexts inadvertently, which causes unforeseen effects on the execution of graphics commands. You must set a current context when switching to a newly created thread.
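For example, a rendering routine on a secondary thread might bracket its work as shown in this minimal sketch (ours, not the guide's; MyThreadedDraw is a hypothetical name, and ctx is assumed to be the CGLContextObj for the context you are drawing to):

    #include <OpenGL/OpenGL.h>
    #include <OpenGL/gl.h>

    void MyThreadedDraw(CGLContextObj ctx)
    {
        // Blocks until any other thread has unlocked this context.
        CGLLockContext(ctx);
        CGLSetCurrentContext(ctx);
        // ... issue OpenGL commands here ...
        CGLFlushDrawable(ctx);
        CGLUnlockContext(ctx);
    }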
Tuning Your OpenGL Application

After you design and implement your application, it is important that you spend some time analyzing its performance. The key to performance tuning your OpenGL application is to successively refine the design and implementation of your application. You do this by alternating between measuring your application, identifying where the bottleneck is, and removing the bottleneck.

If you are unfamiliar with general performance issues on the Macintosh platform, you will want to read Getting Started with Performance and Performance Overview. Performance Overview contains general performance tips that are useful to all applications. It also describes most of the performance tools provided with OS X. Next, take a close look at Instruments, which consolidates many measurement tools into a single comprehensive performance-tuning application.

There are two tools other than OpenGL Profiler that are specific to OpenGL development: OpenGL Driver Monitor and OpenGL Shader Builder. OpenGL Driver Monitor collects real-time data from the hardware. OpenGL Shader Builder provides immediate feedback on vertex and fragment programs that you write. For more information on these tools, see:
● OpenGL Tools for Serious Graphics Development
● Optimizing with Shark: Big Payoff, Small Effort
● Instruments User Guide
● Shark User Guide
● Real world profiling with the OpenGL Profiler
● OpenGL Driver Monitor User Guide
● OpenGL Shader Builder User Guide

The following books contain many techniques for getting the most performance from the GPU:
● GPU Gems: Programming Techniques, Tips and Tricks for Real-Time Graphics, Randima Fernando. In particular, “Graphics Pipeline Performance” is a critical article for understanding how to find the bottlenecks in your OpenGL application.
● GPU Gems 2: Programming Techniques for High-Performance Graphics and General-Purpose Computation, Matt Pharr and Randima Fernando.

This chapter focuses on two main topics:
● “Gathering and Analyzing Baseline Performance Data” (page 156) shows how to use top and OpenGL Profiler to obtain and interpret baseline performance data.
● “Identifying Bottlenecks with Shark” (page 161) discusses the patterns of usage that the Shark performance tool can make apparent and that indicate places in your code that you may want to improve.

Gathering and Analyzing Baseline Performance Data

Analyzing performance is a systematic process that starts with gathering baseline data. OS X provides several applications that you can use to assess baseline performance for an OpenGL application:
● top is a command-line utility that you run in the Terminal window. You can use top to assess how much CPU time your application consumes.
● OpenGL Profiler is an application that determines how much time an application spends in OpenGL. It also provides function traces that you can use to look for redundant calls.
● OpenGL Driver Monitor lets you gather real-time data on the operation of the GPU and lets you look at information (OpenGL extensions supported, buffer modes, sample modes, and so forth) for the available renderers. For more information, see OpenGL Tools for Serious Graphics Development.

This section shows how to use top along with OpenGL Profiler to analyze where to spend your optimization efforts—in your OpenGL code, your other application code, or in both. You'll see how to gather baseline data and how to determine the relationship of OpenGL performance to overall application performance.

1. Launch your OpenGL application.
2. Open a Terminal window and place it side by side with your application window.
3. In the Terminal window, type top and press Return. You'll see output similar to that shown in Figure 15-1. The top program indicates the amount of CPU time that an application uses. The CPU time serves as a good baseline value for gauging how much tuning your code needs. Figure 15-1 shows the percentage of CPU time for the OpenGL application GLCarbon1C (highlighted). Note that this application utilizes 31.5% of CPU resources.

Figure 15-1  Output produced by the top application

4. Open the OpenGL Profiler application, located in /Developer/Applications/Graphics Tools/. In the window that appears, select the options to collect a trace and include backtraces, as shown in Figure 15-2.

Figure 15-2  The OpenGL Profiler window

5. Select the option “Attach to application”, then select your application from the Application list. You may see small pauses or stutters in the application, particularly when OpenGL Profiler is collecting a function trace. This is normal and does not significantly affect the performance statistics. The glitches are due to the large amount of data that OpenGL Profiler is writing out.
6. Click Suspend to stop data collection.
7. Open the Statistics and Trace windows by choosing them from the Views menu. Figure 15-3 provides an example of what the Statistics window looks like; Figure 15-4 shows a Trace window. The estimated percentage of time spent in OpenGL is shown at the bottom of Figure 15-3. Note that for this example, it is 28.91%. The higher this number, the more time the application is spending in OpenGL and the more opportunity there may be to improve application performance by optimizing OpenGL code. You can use the amount of time spent in OpenGL along with the CPU time to calculate a ratio of application time versus OpenGL time. This ratio indicates where to spend most of your optimization efforts.

Figure 15-3  A Statistics window

8. In the Trace window, look for duplicate function calls and redundant or unnecessary state changes. Look for back-to-back function calls with the same or similar data. These are areas that can typically be optimized. Functions that are called more than necessary include glTexParameter, glPixelStore, glEnable, and glDisable. For most applications, these functions can be called once from a setup or state-modification routine and called again only when necessary. It's generally good practice to keep state changes out of rendering loops (which can be seen in the function trace as the same sequence of state changes and drawing over and over again) as much as possible and to use separate routines to adjust state as necessary. Look at the time value to the left of each function call to determine the cost of the call.

Figure 15-4  A Trace window (use the time values to determine the cost of a call)

9. Determine what the performance gain would be if it were possible to reduce the time to execute all OpenGL calls to zero.
For example, take the performance data from the GLCarbon1C application used in this section to determine the performance attributable to the OpenGL calls:

    Total application time (from top) = 31.5%
    Total time in OpenGL (from OpenGL Profiler) = 28.91%

At first glance, you might think that optimizing the OpenGL code could improve application performance by almost 29%, thus reducing the total application time by 29%. This isn't the case. Calculate the theoretical performance increase by multiplying the total CPU time by the percentage of time spent in OpenGL. The theoretical performance improvement for this example is:

    31.5% x 0.2891 = 9.11%

If OpenGL took no time at all to execute, the application would see a 9.11% increase in performance. So, if the application runs at 60 frames per second (FPS), it would perform as follows:

    New FPS = previous FPS x (1 + performance increase) = 60 fps x 1.0911 = 65.47 fps

The application gains almost 5.5 frames per second by reducing OpenGL from 28.91% to 0%. This shows that the relationship of OpenGL performance to application performance is not linear. Simply reducing the amount of time spent in OpenGL may or may not offer any noticeable benefit in application performance.
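If you want to fold this estimate into a quick calculation of your own, the arithmetic looks like the following sketch (a hypothetical helper, not part of any Apple API):

    #include <stdio.h>

    // Theoretical frame rate if all OpenGL CPU time went to zero.
    // appCPU is the application's total CPU fraction (from top);
    // glShare is the fraction of application time spent in OpenGL.
    static double TheoreticalNewFPS(double fps, double appCPU, double glShare)
    {
        double gain = appCPU * glShare;   // 0.315 * 0.2891 = 0.0911
        return fps * (1.0 + gain);        // 60 * 1.0911 = 65.47
    }

    int main(void)
    {
        printf("%.2f fps\n", TheoreticalNewFPS(60.0, 0.315, 0.2891));
        return 0;
    }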
Using OpenGL Driver Monitor to Measure Stalls

You can use OpenGL Driver Monitor to measure how long the CPU waits for the GPU, as shown in Figure 15-5. OpenGL Driver Monitor is useful for analyzing other parameters as well. You can choose which parameters to monitor simply by clicking a parameter name in the drawer shown in the figure.

Figure 15-5  The graph view in OpenGL Driver Monitor

Identifying Bottlenecks with Shark

Shark is an extremely useful tool for identifying places in your code that are slow and could benefit from optimization. Once you learn the basics, you can use it on your OpenGL applications to identify bottlenecks. There are three issues to watch out for in Shark when using it to analyze OpenGL performance:
● Costly data conversions. If you notice the glgProcessPixels call (in the libGLImage.dylib library) showing up in the analysis, it's an indication that the driver is not handling a texture upload optimally. The call is used when your application makes a glTexImage or glTexSubImage call using data that is in a nonnative format for the driver, which means the data must be converted before the driver can upload it. You can improve performance by changing your data so that it is in a native format for the driver. See “Use Optimal Data Types and Formats” (page 102). Note: If your data needs only to be swizzled, glgProcessPixels performs the swizzling reasonably fast, although not as fast as if the data didn't need swizzling. But nonnative data formats are converted one byte at a time, which incurs a performance cost that is best to avoid.
● Time in the mach_kernel library. If you see time spent waiting for a timestamp or waiting for the driver, it indicates that your application is waiting for the GPU to finish processing. You see this during a texture upload, for example.
● Misleading symbols. You may see a symbol, such as glgGetString, that appears to be taking time but shouldn't be taking time in your application. That sometimes happens because the underlying optimizations performed by the system don't have any symbols attached to them on the driver side. Without a symbol to display, Shark shows the last symbol. You need to look for the call that your application made prior to that symbol and focus your attention there. You don't need to concern yourself with the calls that were made “underneath” your call.

Legacy OpenGL Functionality by Version

OpenGL functionality changes with each version of the OpenGL API. This appendix describes the functionality that was added with each version. See the official OpenGL specification for detailed information.

The functionality for each version is guaranteed to be available through the OpenGL API even if a particular renderer does not support all of the extensions in a version. For example, a renderer that claims to support OpenGL 1.3 might not export the GL_ARB_texture_env_combine or GL_EXT_texture_env_combine extensions. It's important that you query both the renderer version and the extension string to make sure that the renderer supports any functionality that you want to use; a sketch of that check follows below.

Note: It's possible for vendor and ARB extensions to provide similar functionality. As particular functionality becomes widely adopted, it can be moved into the core OpenGL API. As a result, functionality that you want to use could be included as an extension, as part of the core API, or both. You should read the extensions and the core OpenGL specifications carefully to see the differences. Furthermore, as an extension is promoted, the API associated with that functionality can change. For more information, see “Determining the OpenGL Capabilities Supported by the Renderer” (page 83).

In the following tables, the extensions describe the feature that the core functionality is based on. The core functionality might not be the same as the extension. For example, compare the core texture crossbar functionality with the extension that it's based on.
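For legacy (pre-3.2) contexts, the query mentioned above can be done with glGetString. A minimal sketch, assuming a current context; the extension name is only an example, and a production version should match whole space-delimited tokens rather than substrings:

    #include <stdbool.h>
    #include <string.h>
    #include <OpenGL/gl.h>

    // Returns true if the current renderer exports the named extension.
    static bool MyHasExtension(const char *name)
    {
        const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
        return extensions != NULL && strstr(extensions, name) != NULL;
    }

    // Usage: check the version string as well as the extension string.
    // const char *version = (const char *)glGetString(GL_VERSION);
    // bool ok = MyHasExtension("GL_ARB_texture_env_combine");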
Version 1.1

Table A-1  Functionality added in OpenGL 1.1

    Functionality                        Extension
    Copy texture and subtexture          GL_EXT_copy_texture and GL_EXT_subtexture
    Logical operation                    GL_EXT_blend_logic_op
    Polygon offset                       GL_EXT_polygon_offset
    Texture image formats                GL_EXT_texture
    Texture objects                      GL_EXT_texture_object
    Texture proxies                      GL_EXT_texture
    Texture replace environment          GL_EXT_texture
    Vertex array                         GL_EXT_vertex_array

There were a number of other minor changes outlined in Appendix C section 9 of the OpenGL specification. See http://www.opengl.org.

Version 1.2

Table A-2  Functionality added in OpenGL 1.2

    Functionality                        Extension
    BGRA pixel formats                   GL_EXT_bgra
    Imaging subset (optional)            GL_SGI_color_table, GL_EXT_color_subtable, GL_EXT_convolution, GL_HP_convolution_border_modes, GL_SGI_color_matrix, GL_EXT_histogram, GL_EXT_blend_minmax, and GL_EXT_blend_subtract
    Normal rescaling                     GL_EXT_rescale_normal
    Packed pixel formats                 GL_EXT_packed_pixels
    Separate specular color              GL_EXT_separate_specular_color
    Texture coordinate edge clamping     GL_SGIS_texture_edge_clamp
    Texture level of detail control      GL_SGIS_texture_lod
    Three-dimensional texturing          GL_EXT_texture3D
    Vertex array draw element range      GL_EXT_draw_range_elements

Note: The imaging subset might not be present on all implementations; you must verify by checking for the ARB_imaging extension. OpenGL 1.2.1 introduced ARB extensions with no specific core API changes.

Version 1.3

Table A-3  Functionality added in OpenGL 1.3

    Functionality                        Extension
    Compressed textures                  GL_ARB_texture_compression
    Cube map textures                    GL_ARB_texture_cube_map
    Multisample                          GL_ARB_multisample
    Multitexture                         GL_ARB_multitexture
    Texture add environment mode         GL_ARB_texture_env_add
    Texture border clamp                 GL_ARB_texture_border_clamp
    Texture combine environment mode     GL_ARB_texture_env_combine
    Texture dot3 environment mode        GL_ARB_texture_env_dot3
    Transpose matrix                     GL_ARB_transpose_matrix

Version 1.4

Table A-4  Functionality added in OpenGL 1.4

    Functionality                        Extension
    Automatic mipmap generation          GL_SGIS_generate_mipmap
    Blend function separate              GL_ARB_blend_func_separate
    Blend squaring                       GL_NV_blend_square
    Depth textures                       GL_ARB_depth_texture
    Fog coordinate                       GL_EXT_fog_coord
    Multiple draw arrays                 GL_EXT_multi_draw_arrays
    Point parameters                     GL_ARB_point_parameters
    Secondary color                      GL_EXT_secondary_color
    Separate blend functions             GL_EXT_blend_func_separate, GL_EXT_blend_color
    Shadows                              GL_ARB_shadow
    Stencil wrap                         GL_EXT_stencil_wrap
    Texture crossbar environment mode    GL_ARB_texture_env_crossbar
    Texture level of detail bias         GL_EXT_texture_lod_bias
    Texture mirrored repeat              GL_ARB_texture_mirrored_repeat
    Window raster position               GL_ARB_window_pos

Version 1.5

Table A-5  Functionality added in OpenGL 1.5

    Functionality                        Extension
    Buffer objects                       GL_ARB_vertex_buffer_object
    Occlusion queries                    GL_ARB_occlusion_query
    Shadow functions                     GL_EXT_shadow_funcs

Version 2.0

Table A-6  Functionality added in OpenGL 2.0

    Functionality                        Extension
    Multiple render targets              GL_ARB_draw_buffers
    Non-power-of-two textures            GL_ARB_texture_non_power_of_two
    Point sprites                        GL_ARB_point_sprite
    Separate blend equation              GL_EXT_blend_equation_separate
    Separate stencil                     GL_ATI_separate_stencil, GL_EXT_stencil_two_side
    Shading language                     GL_ARB_shading_language_100
    Shader objects                       GL_ARB_shader_objects
    Shader programs                      GL_ARB_fragment_shader, GL_ARB_vertex_shader

Version 2.1

Table A-7  Functionality added in OpenGL 2.1

    Functionality                        Extension
    Pixel buffer objects                 GL_ARB_pixel_buffer_object
    sRGB textures                        GL_EXT_texture_sRGB
Updating an Application to Support the OpenGL 3.2 Core Specification

The OpenGL 3.0 specification deprecated many areas of functionality defined in earlier versions of the OpenGL specification. The OpenGL 3.2 Core profile explicitly removes these deprecated features and adjusts other parts of the specification to provide a streamlined, clean programming interface to OpenGL. Use this chapter to assist you in migrating your application away from this deprecated functionality.

Removed Functionality

The features that were removed from OpenGL are described in Appendix E of the OpenGL 3.2 Core specification, and you should use that as the definitive guide for the changes you need to make in your application. Here is a summary of the most significant areas that changed:
● If your application uses the fixed-function pipeline, it must be rewritten to use shaders instead.
● If your application uses shaders, you must rewrite your shaders to use OpenGL Shading Language 1.5; many built-in shader variables provided in earlier versions of the OpenGL Shading Language were explicitly removed from the OpenGL Shading Language 1.5 specification. Similarly, your application may no longer provide vertex data using the fixed-function routines; all vertex attributes are now specified as generic vertex attributes.
● Your application must explicitly generate object names using the OpenGL API.
● Vertex data must be provided to OpenGL using buffer objects.
● The built-in matrix stack functionality from earlier versions of OpenGL has been removed; you must recreate this functionality using shader inputs.
● Support for auxiliary and accumulation buffers has been removed; use framebuffer objects instead.
● Your application no longer fetches the list of extensions as a single string. Instead, you first fetch the number of extensions and then separately fetch each extension string, as shown in the sketch after this list.
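A minimal sketch of the new enumeration style (our illustration, assuming a current OpenGL 3.2 Core context):

    #include <stdbool.h>
    #include <string.h>
    #include <OpenGL/gl3.h>

    // Under the 3.2 Core profile, extensions are fetched one at a time.
    static bool MyHasExtensionCore(const char *name)
    {
        GLint count = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &count);
        for (GLint i = 0; i < count; i++) {
            const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
            if (ext != NULL && strcmp(ext, name) == 0)
                return true;
        }
        return false;
    }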
Extension Changes on OS X

OpenGL 3.2 provides functionality that earlier versions of OpenGL provided through extensions. Other extensions that were previously supported on OS X are no longer supported when your application uses the OpenGL 3.2 Core profile. Table B-1 lists extensions described elsewhere in this guide; use this table to determine whether the extension is supported, and if not, what equivalent functionality is supported.

Table B-1  Extensions described in this guide

    Extension                     Status
    APPLE_fence                   Obsolete. Use the ARB_sync functionality provided by OpenGL 3.2 (Core).
    ARB_vertex_buffer_object      Functionality provided by OpenGL 3.2 (Core).
    APPLE_vertex_array_object     Obsolete. Use the ARB_vertex_array_object functionality provided by OpenGL 3.2 (Core).
    APPLE_vertex_array_range      Obsolete. Use the ARB_map_buffer_range functionality provided by OpenGL 3.2 (Core).
    APPLE_flush_buffer_range      Obsolete. Use the ARB_map_buffer_range functionality provided by OpenGL 3.2 (Core).
    APPLE_client_storage          Supported.
    APPLE_texture_range           Supported.
    ARB_texture_rectangle         Functionality provided by OpenGL 3.2 (Core).
    ARB_shader_objects            Functionality provided by OpenGL 3.2 (Core).
    ARB_vertex_shader             Functionality provided by OpenGL 3.2 (Core).
    ARB_fragment_shader           Functionality provided by OpenGL 3.2 (Core).
    EXT_transform_feedback        Functionality provided by OpenGL 3.2 (Core).
    EXT_gpu_shader4               Obsolete. Functionality included in GLSL 1.5.
    EXT_geometry_shader4          Functionality provided by OpenGL 3.2 (Core).
    EXT_bindable_uniform          Obsolete. Use the ARB_uniform_buffer_object functionality provided by OpenGL 3.2 (Core).
    ARB_pixel_buffer_object       Functionality provided by OpenGL 3.2 (Core).
    EXT_framebuffer_object        Obsolete. Use the ARB_framebuffer_object functionality provided by OpenGL 3.2 (Core).
    APPLE_pixel_buffer            Obsolete. Use framebuffer objects instead.
    NV_multisample_filter_hint    Obsolete. Use multisampled renderbuffers to precisely control multisampling.

Setting Up Function Pointers to OpenGL Routines

Function pointers to OpenGL routines allow you to deploy your application across multiple versions of OS X regardless of whether the entry point is supported at link time or runtime. This practice also provides support for code that needs to run cross-platform—in both OS X and Windows.

Note: If you are deploying your application only in OS X v10.4 or later, you do not need to read this chapter. Instead, consider the alternative, which is to set the gcc attribute that allows weak linking of symbols. Keep in mind, however, that weak linking may impact your application's performance. For more information, see “Frameworks and Weak Linking”.

This appendix discusses the tasks needed to set up and use function pointers as entry points to OpenGL routines:
● “Obtaining a Function Pointer to an Arbitrary OpenGL Entry Point” (page 171) shows how to write a generic routine that you can reuse for any OpenGL application on the Macintosh platform.
● “Initializing Entry Points” (page 172) describes how to declare function pointer type definitions and initialize them with the appropriate OpenGL command entry points for your application.

Obtaining a Function Pointer to an Arbitrary OpenGL Entry Point

Getting a pointer to an OpenGL entry point function is fairly straightforward from Cocoa. You can use the Dynamic Loader function NSLookupAndBindSymbol to get the address of an OpenGL entry point. Keep in mind that getting a valid function pointer means that the entry point is exported by the OpenGL framework; it does not guarantee that a particular routine is supported and valid to call from within your application. You still need to check for OpenGL functionality on a per-renderer basis as described in “Detecting Functionality” (page 83).

Listing C-1 shows how to use NSLookupAndBindSymbol from within the function MyNSGLGetProcAddress. When provided a symbol name, this application-defined function returns the appropriate function pointer from the global symbol table. A detailed explanation for each numbered line of code appears following the listing.

Listing C-1  Using NSLookupAndBindSymbol to obtain a symbol for a symbol name

    #import <mach-o/dyld.h>
    #import <stdlib.h>
    #import <string.h>

    void * MyNSGLGetProcAddress (const char *name)
    {
        NSSymbol symbol;
        char *symbolName;
        symbolName = malloc (strlen (name) + 2);              // 1
        strcpy (symbolName + 1, name);                        // 2
        symbolName[0] = '_';                                  // 3
        symbol = NULL;
        if (NSIsSymbolNameDefined (symbolName))               // 4
            symbol = NSLookupAndBindSymbol (symbolName);
        free (symbolName);                                    // 5
        return symbol ? NSAddressOfSymbol (symbol) : NULL;    // 6
    }

Here's what the code does:
1. Allocates storage for the symbol name plus an underscore character ('_'). The underscore character is part of the UNIX C symbol-mangling convention, so make sure that you provide storage for it.
2. Copies the symbol name into the string variable, starting at the second character, to leave room for prefixing the underscore character.
3. Copies the underscore character into the first character of the symbol name string.
4. Checks to make sure that the symbol name is defined, and if it is, looks up the symbol.
5. Frees the symbol name string because it is no longer needed.
6. Returns the appropriate pointer if successful, or NULL if not successful. Before using this pointer, you should make sure that it is valid.
Initializing Entry Points

Listing C-2 shows how to use the MyNSGLGetProcAddress function from Listing C-1 to obtain a few OpenGL entry points. A detailed explanation for each numbered line of code appears following the listing.

Listing C-2  Using MyNSGLGetProcAddress to obtain an OpenGL entry point

    #import "MyNSGLGetProcAddress.h"                          // 1

    static void InitEntryPoints (void);
    static void DeallocEntryPoints (void);

    // Function pointer type definitions
    typedef void (*glBlendColorProcPtr)(GLclampf red, GLclampf green,
                                        GLclampf blue, GLclampf alpha);
    typedef void (*glBlendEquationProcPtr)(GLenum mode);
    typedef void (*glDrawRangeElementsProcPtr)(GLenum mode, GLuint start,
                                        GLuint end, GLsizei count,
                                        GLenum type, const GLvoid *indices);

    glBlendColorProcPtr pfglBlendColor = NULL;                // 2
    glBlendEquationProcPtr pfglBlendEquation = NULL;
    glDrawRangeElementsProcPtr pfglDrawRangeElements = NULL;

    static void InitEntryPoints (void)                        // 3
    {
        pfglBlendColor = (glBlendColorProcPtr)
            MyNSGLGetProcAddress ("glBlendColor");
        pfglBlendEquation = (glBlendEquationProcPtr)
            MyNSGLGetProcAddress ("glBlendEquation");
        pfglDrawRangeElements = (glDrawRangeElementsProcPtr)
            MyNSGLGetProcAddress ("glDrawRangeElements");
    }

    // -------------------------

    static void DeallocEntryPoints (void)                     // 4
    {
        pfglBlendColor = NULL;
        pfglBlendEquation = NULL;
        pfglDrawRangeElements = NULL;
    }

Here's what the code does:
1. Imports the header file that contains the MyNSGLGetProcAddress function from Listing C-1.
2. Declares function pointers for the functions of interest. Note that each function pointer uses the prefix pf to distinguish it from the function it points to. Although using this prefix is not a requirement, it's best to avoid using the exact function names.
3. Initializes the entry points. This function repeatedly calls the MyNSGLGetProcAddress function to obtain function pointers for each of the functions of interest—glBlendColor, glBlendEquation, and glDrawRangeElements.
4. Sets each of the function pointers to NULL when they are no longer needed.
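Because InitEntryPoints and DeallocEntryPoints are declared static, a caller in the same file might use them as in this short sketch (ours, not from the guide); always check a pointer before calling through it:

    static void MyUseEntryPoints(void)
    {
        InitEntryPoints();
        if (pfglBlendColor != NULL)
            pfglBlendColor(1.0f, 0.5f, 0.0f, 1.0f);
        // ... other rendering work ...
        DeallocEntryPoints();
    }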
Document Revision History

This table describes the changes to OpenGL Programming Guide for Mac.

2012-07-23: Updated with information on supporting high-resolution displays.

2011-06-06: Added new context options.

2010-11-15: Fixed a few small errors in the texture chapter. Updated the recommendations on when to use each texture uploading and downloading technique. Updated the code for creating a texture from a view’s contents to use newer, better supported techniques.

2010-06-14: Corrected texture creation code snippets.

2010-03-24: Minor updates and clarifications. Substantial revisions to describe behaviors for OpenGL on OS X v10.5 and OS X v10.6. Removed information on obsolete and deprecated behaviors.

2010-02-24: Corrected errors in code listings. Pixel format attribute lists should be terminated with 0, not NULL. One call to glTexImage2D had an incorrect number of parameters.

2009-08-28: Updated the Cocoa OpenGL tutorial and made numerous other minor changes.

2008-06-09: Fixed compilation errors in Listing 8-1 (page 84). Added “Getting Decompressed Raw Pixel Data from a Source Image” (page 135). Updated links to OpenGL extensions. Made several minor edits.

2007-12-04: Corrected minor typographical and technical errors. Added “Ensuring That Back Buffer Contents Remain the Same” (page 66). Revised “Deprecated Attributes” (page 70).

2007-08-07: Fixed several technical issues.

2007-05-29: Fixed a broken link.

2007-05-17: Fixed a few technical inaccuracies in the code listings. Changed attribs to attributes in Listing 6-2 (page 68). Fixed the drawRect method implementation in “Drawing to a Window or View” (page 35).

2006-12-20: Fixed minor errors. Added information concerning the Apple client storage extension. Fixed a typographical error.

2006-11-07: Added information about performance issues and processor queries. See “Determining Whether Vertex and Fragment Processing Happens on the GPU” (page 78).

2006-10-03: Added a section on checking for GPU processing. Added “Determining Whether Vertex and Fragment Processing Happens on the GPU” (page 78). Fixed a number of minor typos in the code and in the text.

2006-09-05: Fixed minor technical problems.

2006-07-24: Made minor technical and typographical changes throughout. Added information to “Surface Drawing Order Specifies the Position of the OpenGL Surface Relative to the Window” (page 77). Changed glCopyTexSubImage to glCopyTexSubImage2D in “Downloading Texture Data” (page 136). Made minor improvements to Listing 11-6 (page 136). Removed information about 1-D textures.

2006-06-28: Made several minor technical corrections. Redirected links to the OpenGL specification for the framebuffer object extension so that they point to the SGI Open Source website, which hosts the most up-to-date version of this specification. Removed the logic operation blending entry from Table A-6 (page 166) because this functionality is not available in OpenGL 2.0.

2006-05-23: First version. This document replaces Macintosh OpenGL Programming Guide and AGL Programming Guide. This document incorporates information from the following Technical Notes: TN2007 “The CGDirectDisplay API”, TN2014 “Insights on OpenGL”, TN2080 “Understanding and Detecting OpenGL Functionality”, and TN2093 “OpenGL Performance Optimization: The Basics”. It also incorporates information from the following Technical Q&As: OGL01 “aglChoosePixelFormat, The Inside Scoop”, OGL02 “Correct Setup of an AGLDrawable”, QA1158 “glFlush() vs. glFinish()”, QA1167 “Using Interface Builder's NSOpenGLView or Custom View objects for an OpenGL application”, QA1188 “GetProcAdress and OpenGL Entry Points”, QA1209 “Updating OpenGL Contexts”, QA1248 “Context Sharing Tips”, QA1268 “Sharpening Full Scene Anti-Aliasing Details”, QA1269 “OS X OpenGL Interfaces”, and QA1325 “Creating an OpenGL texture from an NSView”.
For definitions of additional OpenGL terms, see OpenGL Programming Guide, by the Khronos OpenGL Working Group aliased Said of graphics whose edges appear jagged; can be remedied by performing antialiasing operations. antialiasing In graphics, a technique used to smooth and soften the jagged (or aliased) edges that are sometimes apparent when graphical objects such as text, line art, and images are drawn. ARB The Khronos OpenGL Working Group, which is the group that oversees the OpenGL specification and extensions to it. attach To establish a connection between two existing objects. Compare bind. bind To create a new object and then establish a connection between that object and a rendering context. Compare attach. bitmap A rectangular array of bits. bitplane A rectangular array of pixels. buffer A block of memory dedicated to storing a specific kind of data, such as depth values, green color values, stencil index values, and color index values. CGL (Core OpenGL) framework The Apple framework for using OpenGL graphics in OS X applications that need low-level access to OpenGL. clipping An operation that identifies the area of drawing. Anything not in the clipping region is not drawn. clip coordinates The coordinate system used for view-volume clipping. Clip coordinates are applied after applying the projection matrix and prior to perspective division. color lookup table A table of values used to map color indexes into actual color values. completeness A state that indicates whether a framebuffer object meets all the requirements for drawing. context A set of OpenGL state variables that affect how drawing is performed for a drawable object attached to that context. Also called a rendering context. culling Eliminating parts of a scene that can't be seen by the observer. current context The rendering context to which OpenGL routes commands issued by your application. current matrix A matrix used by OpenGL to transform coordinates in one system to those of another system, such as the modelview matrix, the perspective matrix, and the texture matrix. GL shading language allows user-defined matrices. 2012-07-23 | © 2004, 2012 Apple Inc. All Rights Reserved. 179 Glossarydepth In OpenGL, refers to the z coordinate and specifies how far a pixel lies from the observer. depth buffer A block of memory used to store a depth value for each pixel. The depth buffer is used to determine whether or not a pixel can be seen by the observer. Those that are hidden are typically removed. display list A list of OpenGL commands that have an associated name and that are uploaded to the GPU, preprocessed, and then executed at a later time. Display lists are often used for computing-intensive commands. double buffering The practice of using a front and back color buffer to achieve smooth animation. The back buffer is not displayed, but swapped with the front buffer. drawable object In OS X, an object allocated outside of OpenGL that can serve as an OpenGL framebuffer. A drawable object can be any of the following: a window, a view, a pixel buffer, offscreen memory, or a full-screen graphics device. See also framebuffer object extension A feature of OpenGL that's not part of the OpenGL core API and therefore not guaranteed to be supported by every implementation of OpenGL. The naming conventions used for extensions indicate how widely accepted the extension is. The name of an extension supported only by a specific company includes an abbreviation of the company name. 
If more than one company adopts the extension, the extension name is changed to include EXT instead of a company abbreviation. If the Khronos OpenGL Working Group approves an extension, the extension name changes to include ARB instead of EXT or a company abbreviation.

eye coordinates  The coordinate system with the observer at the origin. Eye coordinates are produced by the modelview matrix and passed to the projection matrix.

fence  A token used by the GL_APPLE_fence extension to determine whether a given command has completed or not.

filtering  A process that modifies an image by combining pixels or texels.

fog  An effect achieved by fading colors to a background color based on the distance from the observer. Fog provides depth cues to the observer.

fragment  The color and depth values for a single pixel; can also include texture coordinate values. A fragment is the result of rasterizing primitives.

framebuffer  The collection of buffers associated with a window or a rendering context.

framebuffer attachable image  The rendering destination for a framebuffer object.

framebuffer object  An OpenGL extension that allows rendering to a destination other than the usual OpenGL buffers or destinations provided by the windowing system. A framebuffer object (FBO) contains state information for the OpenGL framebuffer and its set of images. A framebuffer object is similar to a drawable object, except that a drawable object is a window-system-specific object, whereas a framebuffer object is a window-agnostic object. The context that's bound to a framebuffer object can be bound to a window-system-provided drawable object for the purpose of displaying the content associated with the framebuffer object.

frustum  The region of space that is seen by the observer and that is warped by perspective division.

FSAA (full scene antialiasing)  A technique that takes multiple samples at a pixel and combines them with coverage values to arrive at a final fragment.

gamma correction  A function that changes color intensity values to correct for the nonlinear response of the eye or of a display.

GLU  Graphics library utilities.

GL  Graphics library.

GLUT  Graphics Library Utilities Toolkit, which is independent of the window system. In OS X, GLUT is implemented on top of Cocoa.

GLX  An OpenGL extension that supports using OpenGL within a window provided by the X Window System.

image  A rectangular array of pixels.

immediate mode  The practice of OpenGL executing commands at the time an application issues them. To prevent commands from being issued immediately, an application can use a display list.

interleaved data  Arrays of dissimilar data that are grouped together, such as vertex data and texture coordinates. Interleaving can speed data retrieval.

mipmaps  A set of texture maps, provided at various resolutions, whose purpose is to minimize artifacts that can occur when a texture is applied to a geometric primitive whose onscreen resolution doesn't match the source texture map. Mipmapping derives from the Latin phrase multum in parvo, which means “many things in a small place.”

modelview matrix  A 4 x 4 matrix used by OpenGL to transform points, lines, polygons, and positions from object coordinates to eye coordinates.

mutex  A mutual exclusion object in a multithreaded application.

NURBS (nonuniform rational basis spline)  A methodology used to specify parametric curves and surfaces.
packing  Converting pixel color components from a buffer into the format needed by an application.

pbuffer  See pixel buffer.

pixel  A picture element; the smallest element that the graphics hardware can display on the screen. A pixel is made up of all the bits at the location x, y, in all the bitplanes in the framebuffer.

pixel buffer  A type of drawable object that allows the use of offscreen buffers as sources for OpenGL texturing. Pixel buffers allow hardware-accelerated rendering to a texture.

pixel depth  The number of bits per pixel in a pixel image.

pixel format  A format used to store pixel data in memory. The format describes the pixel components (that is, red, blue, green, alpha), the number and order of components, and other relevant information, such as whether a pixel contains stencil and depth values.

primitives  The simplest elements in OpenGL—points, lines, polygons, bitmaps, and images.

projection matrix  A matrix that OpenGL uses to transform points, lines, polygons, and positions from eye coordinates to clip coordinates.

rasterization  The process of converting vertex and pixel data to fragments, each of which corresponds to a pixel in the framebuffer.

renderbuffer  A rendering destination for a 2D pixel image, used for generalized offscreen rendering, as defined in the OpenGL specification for the GL_EXT_framebuffer_object extension.

renderer  A combination of hardware and software that OpenGL uses to create an image from a view and a model. The hardware portion of a renderer is associated with a particular display device and supports specific capabilities, such as the ability to support a certain color depth or buffering mode. A renderer that uses only software is called a software renderer and is typically used as a fallback.

rendering context  A container for state information.

rendering pipeline  The order of operations used by OpenGL to transform pixel and vertex data to an image in the framebuffer.

render-to-texture  An operation that draws content directly to a texture target.

RGBA  Red, green, blue, and alpha color components.

shader  A program that computes surface properties.

shading language  A high-level language, accessible in C, used to produce advanced imaging effects.

stencil buffer  Memory used specifically for stencil testing. A stencil test is typically used to identify masking regions, to identify solid geometry that needs to be capped, and to overlap translucent polygons.

surface  The internal representation of a single buffer that OpenGL actually draws to and reads from. For windowed drawable objects, this surface is what the OS X window server uses to composite OpenGL content on the desktop.

tearing  A visual anomaly caused when part of the current frame overwrites previous frame data in the framebuffer before the current frame is fully rendered on the screen.

tessellation  An operation that reduces a surface to a mesh of polygons, or a curve to a sequence of lines.

texel  A texture element used to specify the color to apply to a fragment.

texture  Image data used to modify the color of rasterized fragments; can be one-, two-, or three-dimensional or be a cube map.

texture mapping  The process of applying a texture to a primitive.

texture matrix  A 4 x 4 matrix that OpenGL uses to transform texture coordinates to the coordinates that are used for interpolation and texture lookup.

texture object  An opaque data structure used to store all data related to a texture.
A texture object can include such things as an image, a mipmap, and texture parameters (width, height, internal format, resolution, wrapping modes, and so forth).

vertex  A three-dimensional point. A set of vertices specify the geometry of a shape. Vertices can have a number of additional attributes, such as color and texture coordinates. See vertex array.

vertex array  A data structure that stores a block of data that specifies such things as vertex coordinates, texture coordinates, surface normals, RGBA colors, color indices, and edge flags.

virtual screen  A combination of hardware, renderer, and pixel format that OpenGL selects as suitable for an imaging task. When the current virtual screen changes, the current renderer typically changes.

© 2004, 2012 Apple Inc. All rights reserved.
View Programming Guide for iOS

Contents

About Windows and Views
    At a Glance
        Views Manage Your Application’s Visual Content
        Windows Coordinate the Display of Your Views
        Animations Provide the User with Visible Feedback for Interface Changes
        The Role of Interface Builder
    See Also

View and Window Architecture
    View Architecture Fundamentals
        View Hierarchies and Subview Management
        The View Drawing Cycle
        Content Modes
        Stretchable Views
        Built-In Animation Support
    View Geometry and Coordinate Systems
        The Relationship of the Frame, Bounds, and Center Properties
        Coordinate System Transformations
        Points Versus Pixels
    The Runtime Interaction Model for Views
    Tips for Using Views Effectively
        Views Do Not Always Have a Corresponding View Controller
        Minimize Custom Drawing
        Take Advantage of Content Modes
        Declare Views as Opaque Whenever Possible
        Adjust Your View’s Drawing Behavior When Scrolling
        Do Not Customize Controls by Embedding Subviews

Windows
    Tasks That Involve Windows
    Creating and Configuring a Window
        Creating Windows in Interface Builder
        Creating a Window Programmatically
        Adding Content to Your Window
        Changing the Window Level
    Monitoring Window Changes
    Displaying Content on an External Display
        Handling Screen Connection and Disconnection Notifications
        Configuring a Window for an External Display
        Configuring the Screen Mode of an External Display

Views
    Creating and Configuring View Objects
        Creating View Objects Using Interface Builder
        Creating View Objects Programmatically
        Setting the Properties of a View
        Tagging Views for Future Identification
    Creating and Managing a View Hierarchy
        Adding and Removing Subviews
        Hiding Views
        Locating Views in a View Hierarchy
        Translating, Scaling, and Rotating Views
        Converting Coordinates in the View Hierarchy
    Adjusting the Size and Position of Views at Runtime
        Being Prepared for Layout Changes
        Handling Layout Changes Automatically Using Autoresizing Rules
        Tweaking the Layout of Your Views Manually
    Modifying Views at Runtime
    Interacting with Core Animation Layers
        Changing the Layer Class Associated with a View
        Embedding Layer Objects in a View
    Defining a Custom View
        Checklist for Implementing a Custom View
        Initializing Your Custom View
        Implementing Your Drawing Code
        Responding to Events
        Cleaning Up After Your View

Animations
    What Can Be Animated?
    Animating Property Changes in a View
        Starting Animations Using the Block-Based Methods
        Starting Animations Using the Begin/Commit Methods
        Nesting Animation Blocks
        Implementing Animations That Reverse Themselves
    Creating Animated Transitions Between Views
        Changing the Subviews of a View
        Replacing a View with a Different View
    Linking Multiple Animations Together
        Animating View and Layer Changes Together

Document Revision History
Figures, Tables, and Listings

View and Window Architecture
    Figure 1-1  Architecture of the views in a sample application
    Figure 1-2  Content mode comparisons
    Figure 1-3  Stretching the background of a button
    Figure 1-4  Coordinate system orientation in UIKit
    Figure 1-5  Relationship between a view's frame and bounds
    Figure 1-6  Rotating a view and its content
    Figure 1-7  UIKit interactions with your view objects
    Table 1-1  Screen dimensions for iOS-based devices

Windows
    Listing 2-1  Registering for screen connect and disconnect notifications
    Listing 2-2  Handling connect and disconnect notifications
    Listing 2-3  Configuring a window for an external display

Views
    Figure 3-1  Layered views in the Clock application
    Figure 3-2  Rotating a view 45 degrees
    Figure 3-3  Converting values in a rotated view
    Figure 3-4  View autoresizing mask constants
    Table 3-1  Usage of some key view properties
    Table 3-2  Autoresizing mask constants
    Listing 3-1  Adding a view to a window
    Listing 3-2  Adding views to an existing view hierarchy
    Listing 3-3  Adding a custom layer to a view
    Listing 3-4  Initializing a view subclass
    Listing 3-5  A drawing method
    Listing 3-6  Implementing the dealloc method

Animations
    Table 4-1  Animatable UIView properties
    Table 4-2  Methods for configuring animation blocks
    Listing 4-1  Performing a simple block-based animation
    Listing 4-2  Creating an animation block with custom options
    Listing 4-3  Performing a simple begin/commit animation
    Listing 4-4  Configuring animation parameters using the begin/commit methods
    Listing 4-5  Nesting animations that have different configurations
    Listing 4-6  Swapping an empty text view for an existing one
    Listing 4-7  Changing subviews using the begin/commit methods
    Listing 4-8  Toggling between two views in a view controller
    Listing 4-9  Mixing view and layer animations

About Windows and Views

In iOS, you use windows and views to present your application’s content on the screen. Windows do not have any visible content themselves but provide a basic container for your application’s views. Views define a portion of a window that you want to fill with some content. For example, you might have views that display images, text, shapes, or some combination thereof. You can also use views to organize and manage other views.

At a Glance

Every application has at least one window and one view for presenting its content. UIKit and other system frameworks provide predefined views that you can use to present your content. These views range from simple buttons and text labels to more complex views such as table views, picker views, and scroll views. In places where the predefined views do not provide what you need, you can also define custom views and manage the drawing and event handling yourself.

Views Manage Your Application’s Visual Content

A view is an instance of the UIView class (or one of its subclasses) and manages a rectangular area in your application window. Views are responsible for drawing content, handling multitouch events, and managing the layout of any subviews. Drawing involves using graphics technologies such as Core Graphics, OpenGL ES, or UIKit to draw shapes, images, and text inside a view’s rectangular area. A view responds to touch events in its rectangular area either by using gesture recognizers or by handling touch events directly.
In the view hierarchy, parent views are responsible for positioning and sizing their child views and can do so dynamically. This ability to modify child views dynamically lets your views adjust to changing conditions, such as interface rotations and animations.

You can think of views as building blocks that you use to construct your user interface. Rather than use one view to present all of your content, you often use several views to build a view hierarchy. Each view in the hierarchy presents a particular portion of your user interface and is generally optimized for a specific type of content. For example, UIKit has views specifically for presenting images, text, and other types of content.

Relevant Chapters: "View and Window Architecture" (page 10), "Views" (page 38)

Windows Coordinate the Display of Your Views

A window is an instance of the UIWindow class and handles the overall presentation of your application's user interface. Windows work with views (and their owning view controllers) to manage interactions with, and changes to, the visible view hierarchy. For the most part, your application's window never changes. After the window is created, it stays the same and only the views displayed by it change. Every application has at least one window that displays the application's user interface on a device's main screen. If an external display is connected to the device, applications can create a second window to present content on that screen as well.

Relevant Chapters: "Windows" (page 28)

Animations Provide the User with Visible Feedback for Interface Changes

Animations provide users with visible feedback about changes to your view hierarchy. The system defines standard animations for presenting modal views and transitioning between different groups of views. However, many attributes of a view can also be animated directly. For example, through animation you can change the transparency of a view, its position on the screen, its size, its background color, or other attributes. And if you work directly with the view's underlying Core Animation layer object, you can perform many other animations as well.

Relevant Chapters: "Animations" (page 64)

The Role of Interface Builder

Interface Builder is an application that you use to graphically construct and configure your application's windows and views. Using Interface Builder, you assemble your views and place them in a nib file, which is a resource file that stores a freeze-dried version of your views and other objects. When you load a nib file at runtime, the objects inside it are reconstituted into actual objects that your code can then manipulate programmatically.

Interface Builder greatly simplifies the work you have to do in creating your application's user interface. Because support for Interface Builder and nib files is incorporated throughout iOS, little effort is required to incorporate nib files into your application's design.

For more information about how to use Interface Builder, see Interface Builder User Guide. For information about how view controllers manage the nib files containing their views, see "Custom View Controllers" in View Controller Programming Guide for iOS.

See Also

Because views are very sophisticated and flexible objects, it would be impossible to cover all of their behaviors in one document.
However, other documents are available to help you learn about other aspects of managing views and your user interface as a whole.

● View controllers are an important part of managing your application's views. A view controller presides over all of the views in a single view hierarchy and facilitates the presentation of those views on the screen. For more information about view controllers and the role they play, see View Controller Programming Guide for iOS.
● Views are the key recipients of gesture and touch events in your application. For more information about using gesture recognizers and handling touch events directly, see Event Handling Guide for iOS.
● Custom views must use the available drawing technologies to render their content. For information about using these technologies to draw within your views, see Drawing and Printing Guide for iOS.
● In places where the standard view animations are not sufficient, you can use Core Animation. For information about implementing animations using Core Animation, see Core Animation Programming Guide.

View and Window Architecture

Views and windows present your application's user interface and handle the interactions with that interface. UIKit and other system frameworks provide a number of views that you can use as-is with little or no modification. You can also define custom views for places where you need to present content differently than the standard views allow.

Whether you use the system views or create your own custom views, you need to understand the infrastructure provided by the UIView and UIWindow classes. These classes provide sophisticated facilities for managing the layout and presentation of views. Understanding how those facilities work is important for making sure your views behave appropriately when changes occur in your application.

View Architecture Fundamentals

Most of the things you might want to do visually are done with view objects—instances of the UIView class. A view object defines a rectangular region on the screen and handles the drawing and touch events in that region. A view can also act as a parent for other views and coordinate the placement and sizing of those views. The UIView class does most of the work in managing these relationships between views, but you can also customize the default behavior as needed.

Views work in conjunction with Core Animation layers to handle the rendering and animating of a view's content. Every view in UIKit is backed by a layer object (usually an instance of the CALayer class), which manages the backing store for the view and handles view-related animations. Most operations you perform should be through the UIView interface. However, in situations where you need more control over the rendering or animation behavior of your view, you can perform operations through its layer instead.

To understand the relationship between views and layers, it helps to look at an example. Figure 1-1 shows the view architecture from the ViewTransitions sample application along with the relationship to the underlying Core Animation layers. The views in the application include a window (which is also a view), a generic UIView object that acts as a container view, an image view, a toolbar for displaying controls, and a bar button item (which is not a view itself but which manages a view internally). (The actual ViewTransitions sample application includes an additional image view that is used to implement transitions.
For simplicity, and because that view is usually hidden, it is not included in Figure 1-1.) Every view has a corresponding layer object that can be accessed from that view's layer property. (Because a bar button item is not a view, you cannot access its layer directly.) Behind those layer objects are Core Animation rendering objects and ultimately the hardware buffers used to manage the actual bits on the screen.

Figure 1-1 Architecture of the views in a sample application (the UIWindow, a container UIView, a UIImageView, a UIToolbar, and the internal view of a UIBarButtonItem, each backed by a corresponding Core Animation layer)

The use of Core Animation layer objects has important implications for performance. The actual drawing code of a view object is called as little as possible, and when the code is called, the results are cached by Core Animation and reused as much as possible later. Reusing already-rendered content eliminates the expensive drawing cycle usually needed to update views. Reuse of this content is especially important during animations, where the existing content can be manipulated. Such reuse is much less expensive than creating new content.

View Hierarchies and Subview Management

In addition to providing its own content, a view can act as a container for other views. When one view contains another, a parent-child relationship is created between the two views. The child view in the relationship is known as the subview and the parent view is known as the superview. The creation of this type of relationship has implications for both the visual appearance of your application and the application's behavior.

Visually, the content of a subview obscures all or part of the content of its parent view. If the subview is totally opaque, then the area occupied by the subview completely obscures the corresponding area of the parent. If the subview is partially transparent, the content from the two views is blended together prior to being displayed on the screen. Each superview stores its subviews in an ordered array and the order in that array also affects the visibility of each subview. If two sibling subviews overlap each other, the one that was added last (or was moved to the end of the subview array) appears on top of the other.

The superview-subview relationship also impacts several view behaviors. Changing the size of a parent view has a ripple effect that can cause the size and position of any subviews to change too. When you change the size of a parent view, you can control the resizing behavior of each subview by configuring the view appropriately. Other changes that affect subviews include hiding a superview, changing a superview's alpha (transparency), or applying a mathematical transform to a superview's coordinate system.

The arrangement of views in a view hierarchy also determines how your application responds to events. When a touch occurs inside a specific view, the system sends an event object with the touch information directly to that view for handling. However, if the view does not handle a particular touch event, it can pass the event object along to its superview. If the superview does not handle the event, it passes the event object to its superview, and so on up the responder chain. Specific views can also pass the event object to an intervening responder object, such as a view controller. If no object handles the event, it eventually reaches the application object, which generally discards it.

For more information about how to create view hierarchies, see "Creating and Managing a View Hierarchy" (page 42).
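For example, sibling ordering is controlled either by the order in which you add subviews or by reordering them afterward. The following is a minimal sketch, assuming an existing containerView; the view names are hypothetical:

UIView* backView = [[UIView alloc] initWithFrame:CGRectMake(20.0, 20.0, 200.0, 200.0)];
UIView* frontView = [[UIView alloc] initWithFrame:CGRectMake(60.0, 60.0, 200.0, 200.0)];
[containerView addSubview:backView];
[containerView addSubview:frontView]; // Added last, so it appears on top where the two overlap.

// Move a subview to the end of the subview array without removing it.
[containerView bringSubviewToFront:backView];

[backView release];
[frontView release];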
The View Drawing Cycle

The UIView class uses an on-demand drawing model for presenting content. When a view first appears on the screen, the system asks it to draw its content. The system captures a snapshot of this content and uses that snapshot as the view's visual representation. If you never change the view's content, the view's drawing code may never be called again. The snapshot image is reused for most operations involving the view. If you do change the content, you notify the system that the view has changed. The view then repeats the process of drawing the view and capturing a snapshot of the new results.

When the contents of your view change, you do not redraw those changes directly. Instead, you invalidate the view using either the setNeedsDisplay or setNeedsDisplayInRect: method. These methods tell the system that the contents of the view changed and need to be redrawn at the next opportunity. The system waits until the end of the current run loop before initiating any drawing operations. This delay gives you a chance to invalidate multiple views, add or remove views from your hierarchy, hide views, resize views, and reposition views all at once. All of the changes you make are then reflected at the same time.

Note: Changing a view's geometry does not automatically cause the system to redraw the view's content. The view's contentMode property determines how changes to the view's geometry are interpreted. Most content modes stretch or reposition the existing snapshot within the view's boundaries and do not create a new one. For more information about how content modes affect the drawing cycle of your view, see "Content Modes" (page 13).

When the time comes to render your view's content, the actual drawing process varies depending on the view and its configuration. System views typically implement private drawing methods to render their content. Those same system views often expose interfaces that you can use to configure the view's actual appearance. For custom UIView subclasses, you typically override the drawRect: method of your view and use that method to draw your view's content. There are also other ways to provide a view's content, such as setting the contents of the underlying layer directly, but overriding the drawRect: method is the most common technique. For more information about how to draw content for custom views, see "Implementing Your Drawing Code" (page 60).
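For example, a custom view whose appearance depends on a piece of data should invalidate itself when that data changes rather than draw immediately. A minimal sketch, assuming a hypothetical badgeCount property on a custom view:

- (void)setBadgeCount:(NSUInteger)badgeCount
{
    if (_badgeCount != badgeCount)
    {
        _badgeCount = badgeCount;
        // Mark the view as needing display; UIKit coalesces this with any
        // other pending changes and calls drawRect: at the end of the
        // current run loop.
        [self setNeedsDisplay];
    }
}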
Content Modes

Each view has a content mode that controls how the view recycles its content in response to changes in the view's geometry and whether it recycles its content at all. When a view is first displayed, it renders its content as usual and the results are captured in an underlying bitmap. After that, changes to the view's geometry do not always cause the bitmap to be recreated. Instead, the value in the contentMode property determines whether the bitmap should be scaled to fit the new bounds or simply pinned to one corner or edge of the view.

The content mode of a view is applied whenever you do the following:

● Change the width or height of the view's frame or bounds rectangles.
● Assign a transform that includes a scaling factor to the view's transform property.

By default, the contentMode property for most views is set to UIViewContentModeScaleToFill, which causes the view's contents to be scaled to fit the new frame size. Figure 1-2 shows the results that occur for some content modes that are available. As you can see from the figure, not all content modes result in the view's bounds being filled entirely, and those that do might distort the view's content.

Figure 1-2 Content mode comparisons (UIViewContentModeScaleToFill distorts the content; UIViewContentModeScaleAspectFit, UIViewContentModeScaleAspectFill, and UIViewContentModeLeft do not)

Content modes are good for recycling the contents of your view, but you can also set the content mode to the UIViewContentModeRedraw value when you specifically want your custom views to redraw themselves during scaling and resizing operations. Setting your view's content mode to this value forces the system to call your view's drawRect: method in response to geometry changes. In general, you should avoid using this value whenever possible, and you should certainly not use it with the standard system views. For more information about the available content modes, see UIView Class Reference.

Stretchable Views

You can designate a portion of a view as stretchable so that when the size of the view changes only the content in the stretchable portion is affected. You typically use stretchable areas for buttons or other views where part of the view defines a repeatable pattern. The stretchable area you specify can allow for stretching along one or both axes of the view. Of course, when stretching a view along two axes, the edges of the view must also define a repeatable pattern to avoid any distortion. Figure 1-3 shows how this distortion manifests itself in a view. The color from each of the view's original pixels is replicated to fill the corresponding area in the larger view.

Figure 1-3 Stretching the background of a button (the stretchable area is specified in normalized coordinates that run from (0,0) to (1,1))

You specify the stretchable area of a view using the contentStretch property. This property accepts a rectangle whose values are normalized to the range 0.0 to 1.0. When stretching the view, the system multiplies these normalized values by the view's current bounds and scale factor to determine which pixel or pixels need to be stretched. The use of normalized values alleviates the need for you to update the contentStretch property every time the bounds of your view change.

The view's content mode also plays a role in determining how the view's stretchable area is used. Stretchable areas are only used when the content mode would cause the view's content to be scaled. This means that stretchable views are supported only with the UIViewContentModeScaleToFill, UIViewContentModeScaleAspectFit, and UIViewContentModeScaleAspectFill content modes. If you specify a content mode that pins the content to an edge or corner (and thus does not actually scale the content), the view ignores the stretchable area.

Note: The use of the contentStretch property is recommended over the creation of a stretchable UIImage object when specifying the background for a view. Stretchable views are handled entirely in the Core Animation layer, which typically offers better performance.
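As a rough sketch of how the normalized rectangle works, the following keeps fixed 10-point caps on all four edges of a hypothetical 44-by-44-point buttonBackgroundView and stretches only the middle:

CGFloat cap = 10.0 / 44.0; // 10-point end caps, normalized to the view's 44-point size
buttonBackgroundView.contentStretch = CGRectMake(cap, cap,
                                                 1.0 - (2.0 * cap),
                                                 1.0 - (2.0 * cap));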
Built-In Animation Support

One of the benefits of having a layer object behind every view is that you can animate many view-related changes easily. Animations are a useful way to communicate information to the user and should always be considered during the design of your application. Many properties of the UIView class are animatable—that is, semiautomatic support exists for animating from one value to another. To perform an animation for one of these animatable properties, all you have to do is:

1. Tell UIKit that you want to perform an animation.
2. Change the value of the property.

Among the properties you can animate on a UIView object are the following:

● frame—Use this to animate position and size changes for the view.
● bounds—Use this to animate changes to the size of the view.
● center—Use this to animate the position of the view.
● transform—Use this to rotate or scale the view.
● alpha—Use this to change the transparency of the view.
● backgroundColor—Use this to change the background color of the view.
● contentStretch—Use this to change how the view's contents stretch.

One place where animations are very important is when transitioning from one set of views to another. Typically, you use a view controller to manage the animations associated with major changes between parts of your user interface. For example, for interfaces that involve navigating from higher-level to lower-level information, you typically use a navigation controller to manage the transitions between the views displaying each successive level of data. However, you can also create transitions between two sets of views using animations instead of a view controller. You might do so in places where the standard view-controller animations do not yield the results you want.

In addition to the animations you create using UIKit classes, you can also create animations using Core Animation layers. Dropping down to the layer level gives you much more control over the timing and properties of your animations. For details about how to perform view-based animations, see "Animations" (page 64). For more information about creating animations using Core Animation, see Core Animation Programming Guide and Core Animation Cookbook.
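Both steps can be expressed with the block-based methods or the older begin/commit methods; this is a minimal sketch that fades out a hypothetical statusView:

// Block-based API (iOS 4 and later): the call tells UIKit an animation
// is starting, and the block changes the animatable property.
[UIView animateWithDuration:1.0 animations:^{
    statusView.alpha = 0.0;
}];

// The equivalent using the begin/commit methods:
[UIView beginAnimations:@"fadeOut" context:NULL];
statusView.alpha = 0.0;
[UIView commitAnimations];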
View Geometry and Coordinate Systems

The default coordinate system in UIKit has its origin in the top-left corner and has axes that extend down and to the right from the origin point. Coordinate values are represented using floating-point numbers, which allow for precise layout and positioning of content regardless of the underlying screen resolution. Figure 1-4 shows this coordinate system relative to the screen. In addition to the screen coordinate system, windows and views define their own local coordinate systems that allow you to specify coordinates relative to the view or window origin instead of relative to the screen.

Figure 1-4 Coordinate system orientation in UIKit (the origin (0,0) is in the top-left corner, with the x-axis extending to the right and the y-axis extending downward)

Because every view and window defines its own local coordinate system, you need to be aware of which coordinate system is in effect at any given time. Every time you draw into a view or change its geometry, you do so relative to some coordinate system. In the case of drawing, you specify coordinates relative to the view's own coordinate system. In the case of geometry changes, you specify coordinates relative to the superview's coordinate system. The UIWindow and UIView classes both include methods to help you convert from one coordinate system to another.

Important: Some iOS technologies define default coordinate systems whose origin point and orientation differ from those used by UIKit. For example, Core Graphics and OpenGL ES use a coordinate system whose origin lies in the lower-left corner of the view or window and whose y-axis points upward relative to the screen. Your code must take such differences into account when drawing or creating content and adjust coordinate values (or the default orientation of the coordinate system) as needed.

The Relationship of the Frame, Bounds, and Center Properties

A view object tracks its size and location using its frame, bounds, and center properties:

● The frame property contains the frame rectangle, which specifies the size and location of the view in its superview's coordinate system.
● The bounds property contains the bounds rectangle, which specifies the size of the view (and its content origin) in the view's own local coordinate system.
● The center property contains the known center point of the view in the superview's coordinate system.

You use the center and frame properties primarily for manipulating the geometry of the current view. For example, you use these properties when building your view hierarchy or changing the position or size of a view at runtime. If you are changing only the position of the view (and not its size), the center property is the preferred way to do so. The value in the center property is always valid, even if scaling or rotation factors have been added to the view's transform. The same is not true for the value in the frame property, which is considered invalid if the view's transform is not equal to the identity transform.

You use the bounds property primarily during drawing. The bounds rectangle is expressed in the view's own local coordinate system. The default origin of this rectangle is (0, 0) and its size matches the size of the frame rectangle. Anything you draw inside this rectangle is part of the view's visible content. If you change the origin of the bounds rectangle, anything you draw inside the new rectangle becomes part of the view's visible content.

Figure 1-5 shows the relationship between the frame and bounds rectangles for an image view. In the figure, the upper-left corner of the image view is located at the point (40, 40) in its superview's coordinate system and the size of the rectangle is 240 by 380 points. For the bounds rectangle, the origin point is (0, 0) and the size of the rectangle is similarly 240 by 380 points.

Figure 1-5 Relationship between a view's frame and bounds (the frame rectangle has origin (40,40) and center (160,230) in the superview's coordinates; the bounds rectangle has origin (0,0); both measure 240 by 380 points)

Although you can change the frame, bounds, and center properties independently of the others, changes to one property affect the others in the following ways:

● When you set the frame property, the size value in the bounds property changes to match the new size of the frame rectangle. The value in the center property similarly changes to match the new center point of the frame rectangle.
● When you set the center property, the origin value in the frame changes accordingly.
● When you set the size of the bounds property, the size value in the frame property changes to match the new size of the bounds rectangle.
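To illustrate the interplay, this sketch repositions a hypothetical imageView using the dimensions from Figure 1-5; moving the view through center changes the frame origin but leaves bounds untouched:

imageView.center = CGPointMake(160.0, 230.0); // frame.origin becomes (40, 40) for a 240 x 380 view

CGRect bounds = imageView.bounds; // {{0, 0}, {240, 380}}; unaffected by the move
CGRect frame = imageView.frame;   // {{40, 40}, {240, 380}}, valid while no transform is applied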
By default, a view's frame is not clipped to its superview's frame. Thus, any subviews that lie outside of their superview's frame are rendered in their entirety. You can change this behavior, though, by setting the superview's clipsToBounds property to YES. Regardless of whether or not subviews are clipped visually, touch events always respect the bounds rectangle of the target view's superview. In other words, touch events occurring in a part of a view that lies outside of its superview's bounds rectangle are not delivered to that view.

Coordinate System Transformations

Coordinate system transformations offer a way to alter your view (or its contents) quickly and easily. An affine transform is a mathematical matrix that specifies how points in one coordinate system map to points in a different coordinate system. You can apply affine transforms to your entire view to change the size, location, or orientation of the view relative to its superview. You can also use affine transforms in your drawing code to perform the same types of manipulations to individual pieces of rendered content. How you apply the affine transform therefore depends on context:

● To modify your entire view, modify the affine transform in the transform property of your view.
● To modify specific pieces of content in your view's drawRect: method, modify the affine transform associated with the active graphics context.

You typically modify the transform property of a view when you want to implement animations. For example, you could use this property to create an animation of your view rotating around its center point. You would not use this property to make permanent changes to your view, such as modifying its position or size within its superview's coordinate space. For that type of change, you should modify the frame rectangle of your view instead.

Note: When modifying the transform property of your view, all transformations are performed relative to the center point of the view.

In your view's drawRect: method, you use affine transforms to position and orient the items you plan to draw. Rather than fix the position of an object at some location in your view, it is simpler to create each object relative to a fixed point, typically (0, 0), and use a transform to position the object immediately prior to drawing. That way, if the position of the object changes in your view, all you have to do is modify the transform, which is much faster and less expensive than recreating the object at its new location. You can retrieve the affine transform associated with a graphics context using the CGContextGetCTM function and you can use the related Core Graphics functions to set or modify this transform during drawing.

The current transformation matrix (CTM) is the affine transform in use at any given time. When manipulating the geometry of your entire view, the CTM is the affine transform stored in your view's transform property. Inside your drawRect: method, the CTM is the affine transform associated with the active graphics context.

The coordinate system of each subview builds upon the coordinate systems of its ancestors. So when you modify a view's transform property, that change affects the view and all of its subviews. However, these changes affect only the final rendering of the views on the screen. Because each view draws its content and lays out its subviews relative to its own bounds, it can ignore its superview's transform during drawing and layout.

Figure 1-6 demonstrates how two different rotation factors combine visually when rendered. Inside the view's drawRect: method, applying a 45 degree rotation factor to a shape causes that shape to appear rotated by 45 degrees. Applying a separate 45 degree rotation factor to the view then causes the shape to appear to be rotated by 90 degrees. The shape is still rotated by only 45 degrees relative to the view that drew it, but the view rotation makes it appear to be rotated by more.

Figure 1-6 Rotating a view and its content (panels: no rotations; shape rotated 45° during drawing; shape and view each rotated 45°)

Important: If a view's transform property is not the identity transform, the value of that view's frame property is undefined and must be ignored. When applying transforms to a view, you must use the view's bounds and center properties to get the size and position of the view. The frame rectangles of any subviews are still valid because they are relative to the view's bounds.

For information about modifying your view's transform property at runtime, see "Translating, Scaling, and Rotating Views" (page 47). For information about how to use transforms to position content during drawing, see Drawing and Printing Guide for iOS.
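The two contexts look like this in practice; a rough sketch, where badgeView and the drawing code are hypothetical:

// Rotating an entire view around its center point:
badgeView.transform = CGAffineTransformMakeRotation(M_PI_4); // 45 degrees

// Rotating one piece of content inside a drawRect: implementation by
// modifying the CTM of the active graphics context:
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextRotateCTM(context, M_PI_4);
// ... draw the shape relative to (0, 0) here ...
CGContextRestoreGState(context);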
Points Versus Pixels

In iOS, all coordinate values and distances are specified using floating-point values in units referred to as points. The measurable size of a point varies from device to device and is largely irrelevant. The main thing to understand about points is that they provide a fixed frame of reference for drawing.

Table 1-1 lists the screen dimensions (measured in points) for different types of iOS-based devices in a portrait orientation. The width dimension is listed first, followed by the height dimension of the screen. As long as you design your interface to these screen sizes, your views will display correctly on the corresponding type of device.

Table 1-1 Screen dimensions for iOS-based devices

Device | Screen dimensions (in points)
iPhone and iPod touch | 320 x 480
iPad | 768 x 1024

The point-based measuring system used for each type of device defines what is known as the user coordinate space. This is the standard coordinate space you use for nearly all of your code. For example, you use points and the user coordinate space when manipulating the geometry of a view or calling Core Graphics functions to draw the contents of your view. Although coordinates in the user coordinate space sometimes map directly to the pixels on the device's screen, you should never assume that this is the case. Instead, you should always remember the following: One point does not necessarily correspond to one pixel on the screen.

At the device level, all coordinates you specify in your view must be converted to pixels at some point. However, the mapping of points in the user coordinate space to pixels in the device coordinate space is normally handled by the system. Both UIKit and Core Graphics use a primarily vector-based drawing model where all coordinate values are specified using points. Thus, if you draw a curve using Core Graphics, you specify the curve using the same values, regardless of the resolution of the underlying screen.

When you need to work with images or other pixel-based technologies such as OpenGL ES, iOS provides help in managing those pixels. For static image files stored as resources in your application bundle, iOS defines conventions for specifying your images at different pixel densities and for loading the image that best matches the current screen resolution. Views also provide information about the current scale factor so that you can adjust any pixel-based drawing code manually to accommodate higher-resolution screens. The techniques for dealing with pixel-based content at different screen resolutions are described in "Supporting High-Resolution Screens" in Drawing and Printing Guide for iOS.
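When pixel-level math is unavoidable, the scale factor provides the point-to-pixel ratio; a small sketch, with myView hypothetical:

CGFloat screenScale = [[UIScreen mainScreen] scale]; // 1.0, or 2.0 on Retina displays
CGFloat viewScale = myView.contentScaleFactor;       // points-to-pixels ratio for the view's backing store
CGFloat pixelWidth = CGRectGetWidth(myView.bounds) * viewScale;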
The Runtime Interaction Model for Views

Any time a user interacts with your user interface, or any time your own code programmatically changes something, a complex sequence of events takes place inside of UIKit to handle that interaction. At specific points during that sequence, UIKit calls out to your view classes and gives them a chance to respond on behalf of your application. Understanding these callout points is important to understanding where your views fit into the system. Figure 1-7 shows the basic sequence of events that starts with the user touching the screen and ends with the graphics system updating the screen content in response. The same sequence of events would also occur for any programmatically initiated actions.

Figure 1-7 UIKit interactions with your view objects (touches flow from the graphics hardware and touch framework through UIKit to your application's touch-handling methods; your code responds by changing attributes and geometry or calling setNeedsLayout and setNeedsDisplay, and UIKit in turn calls layoutSubviews and drawRect: before handing buffers and images to the compositor)

The following steps break the event sequence in Figure 1-7 (page 23) down even further and explain what happens at each stage and how you might want your application to react in response.

1. The user touches the screen.
2. The hardware reports the touch event to the UIKit framework.
3. The UIKit framework packages the touch into a UIEvent object and dispatches it to the appropriate view. (For a detailed explanation of how UIKit delivers events to your views, see Event Handling Guide for iOS.)
4. The event-handling code of your view responds to the event. For example, your code might:
● Change the properties (frame, bounds, alpha, and so on) of the view or its subviews.
● Call the setNeedsLayout method to mark the view (or its subviews) as needing a layout update.
● Call the setNeedsDisplay or setNeedsDisplayInRect: method to mark the view (or its subviews) as needing to be redrawn.
● Notify a controller about changes to some piece of data.
Of course, it is up to you to decide which of these things the view should do and which methods it should call.
5. If the geometry of a view changed for any reason, UIKit updates its subviews according to the following rules:
a. If you have configured autoresizing rules for your views, UIKit adjusts each view according to those rules.
For more information about how autoresizing rules work, see "Handling Layout Changes Automatically Using Autoresizing Rules" (page 52).
b. If the view implements the layoutSubviews method, UIKit calls it. You can override this method in your custom views and use it to adjust the position and size of any subviews. For example, a view that provides a large scrollable area would need to use several subviews as "tiles" rather than create one large view, which is not likely to fit in memory anyway. In its implementation of this method, the view would hide any subviews that are now offscreen or reposition them and use them to draw newly exposed content. As part of this process, the view's layout code can also invalidate any views that need to be redrawn.
6. If any part of any view was marked as needing to be redrawn, UIKit asks the view to redraw itself. For custom views that explicitly define a drawRect: method, UIKit calls that method. Your implementation of this method should redraw the specified area of the view as quickly as possible and nothing else. Do not make additional layout changes at this point and do not make other changes to your application's data model. The purpose of this method is to update the visual content of your view. Standard system views typically do not implement a drawRect: method but instead manage their drawing at this time.
7. Any updated views are composited with the rest of the application's visible content and sent to the graphics hardware for display.
8. The graphics hardware transfers the rendered content to the screen.

Note: The preceding update model applies primarily to applications that use standard system views and drawing techniques. Applications that use OpenGL ES for drawing typically configure a single full-screen view and draw directly to the associated OpenGL graphics context. In such a case, the view would still handle touch events but, because it is full-screen, it would not need to lay out subviews or implement a drawRect: method. For more information about using OpenGL ES, see OpenGL ES Programming Guide for iOS.

In the preceding set of steps, the primary integration points for your own custom views are:

● The event-handling methods:
● touchesBegan:withEvent:
● touchesMoved:withEvent:
● touchesEnded:withEvent:
● touchesCancelled:withEvent:
● The layoutSubviews method
● The drawRect: method

These are the most commonly overridden methods for views, but you may not need to override all of them. If you use gesture recognizers to handle events, you do not need to override any of the event-handling methods. Similarly, if your view does not contain subviews or its size does not change, there is no reason to override the layoutSubviews method. Finally, the drawRect: method is needed only when the contents of your view can change at runtime and you are using native technologies such as UIKit or Core Graphics to do your drawing.

It is also important to remember that these are the primary integration points but not the only ones. Several methods of the UIView class are designed to be override points for subclasses. You should look at the method descriptions in UIView Class Reference to see which methods might be appropriate for you to override in your custom implementations.
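Put together, a custom view that participates in this cycle might look like the following skeleton; this is a hedged sketch, and TileView and its drawing are hypothetical:

@interface TileView : UIView
@end

@implementation TileView

- (void)layoutSubviews
{
    [super layoutSubviews];
    // Called during step 5; adjust the position and size of subviews here.
}

- (void)drawRect:(CGRect)rect
{
    // Called during step 6; redraw only the invalidated area, quickly.
    [[UIColor whiteColor] setFill];
    UIRectFill(rect);
}

- (void)touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
    // Called during step 4; respond to the touch, then invalidate.
    [self setNeedsDisplay];
}

@end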
Tips for Using Views Effectively

Custom views are useful for situations where you need to draw something the standard system views do not provide, but it is your responsibility to ensure that the performance of your views is good enough. UIKit does everything it can to optimize view-related behaviors and help you achieve good performance in your custom views. However, you can help UIKit in this aspect by considering the following tips.

Important: Before optimizing your drawing code, you should always gather data about your view's current performance. Measuring the current performance lets you confirm whether there actually is a problem and, if there is, gives you a baseline measurement against which you can compare future optimizations.

Views Do Not Always Have a Corresponding View Controller

There is rarely a one-to-one relationship between individual views and view controllers in your application. The job of a view controller is to manage a view hierarchy, which often consists of more than one view used to implement some self-contained feature. For iPhone applications, each view hierarchy typically fills the entire screen, although for iPad applications a view hierarchy may fill only part of the screen.

As you design your application's user interface, it is important to consider the role that view controllers will play. View controllers provide a lot of important behaviors, such as coordinating the presentation of views on the screen, coordinating the removal of those views from the screen, releasing memory in response to low-memory warnings, and rotating views in response to interface orientation changes. Circumventing these behaviors could cause your application to behave incorrectly or in unexpected ways. For more information about view controllers and their role in applications, see View Controller Programming Guide for iOS.

Minimize Custom Drawing

Although custom drawing is necessary at times, it is also something you should avoid whenever possible. The only time you should truly do any custom drawing is when the existing system view classes do not provide the appearance or capabilities that you need. Any time your content can be assembled with a combination of existing views, your best bet is to combine those view objects into a custom view hierarchy.

Take Advantage of Content Modes

Content modes minimize the amount of time spent redrawing your views. By default, views use the UIViewContentModeScaleToFill content mode, which scales the view's existing contents to fit the view's frame rectangle. You can change this mode as needed to adjust your content differently, but you should avoid using the UIViewContentModeRedraw content mode if you can. Regardless of which content mode is in effect, you can always force your view to redraw its contents by calling setNeedsDisplay or setNeedsDisplayInRect:.

Declare Views as Opaque Whenever Possible

UIKit uses the opaque property of each view to determine whether the view can optimize compositing operations. Setting the value of this property to YES for a custom view tells UIKit that it does not need to render any content behind your view. Less rendering can lead to increased performance for your drawing code and is generally encouraged. Of course, if you set the opaque property to YES, your view must fill its bounds rectangle completely with fully opaque content.
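The configuration itself is brief; chartView here is hypothetical:

// Promise UIKit that every pixel of the bounds is drawn with opaque content.
chartView.opaque = YES;
chartView.backgroundColor = [UIColor whiteColor];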
Adjust Your View's Drawing Behavior When Scrolling

Scrolling can incur numerous view updates in a short amount of time. If your view's drawing code is not tuned appropriately, scrolling performance for your view could be sluggish. Rather than trying to ensure that your view's content is pristine at all times, consider changing your view's behavior when a scrolling operation begins. For example, you can reduce the quality of your rendered content temporarily or change the content mode while a scroll is in progress. When scrolling stops, you can then return your view to its previous state and update the contents as needed.

Do Not Customize Controls by Embedding Subviews

Although it is technically possible to add subviews to the standard system controls—objects that inherit from UIControl—you should never customize them in this way. Controls that support customizations do so through explicit and well-documented interfaces in the control class itself. For example, the UIButton class contains methods for setting the title and background images for the button. Using the defined customization points means that your code will always work correctly. Circumventing these methods, by embedding a custom image view or label inside the button, might cause your application to behave incorrectly now or at some point in the future if the button's implementation changes.

Windows

Every iOS application needs at least one window—an instance of the UIWindow class—and some may include more than one window. A window object has several responsibilities:

● It contains your application's visible content.
● It plays a key role in the delivery of touch events to your views and other application objects.
● It works with your application's view controllers to facilitate orientation changes.

In iOS, windows do not have title bars, close boxes, or any other visual adornments. A window is always just a blank container for one or more views. Also, applications do not change their content by showing new windows. When you want to change the displayed content, you change the frontmost views of your window instead.

Most iOS applications create and use only one window during their lifetime. This window spans the entire main screen of the device and is loaded from the application's main nib file (or created programmatically) early in the life of the application. However, if an application supports the use of an external display for video out, it can create an additional window to display content on that external display. All other windows are typically created by the system, and are usually created in response to specific events, such as an incoming phone call.

Tasks That Involve Windows

For many applications, the only time the application interacts with its window is when it creates the window at startup. However, you can use your application's window object to perform a few application-related tasks:

● Use the window object to convert points and rectangles to or from the window's local coordinate system. For example, if you are provided with a value in window coordinates, you might want to convert it to the coordinate system of a specific view before trying to use it. For information on how to convert coordinates, see "Converting Coordinates in the View Hierarchy" (page 50).
● Use window notifications to track window-related changes. Windows generate notifications when they are shown or hidden or when they accept or resign the key status. You can use these notifications to perform actions in other parts of your application. For more information, see "Monitoring Window Changes" (page 31).

Creating and Configuring a Window

You can create and configure your application's main window programmatically or using Interface Builder. In either case, you create the window at launch time and should retain it and store a reference to it in your application delegate object. If your application creates additional windows, have the application create them lazily when they are needed. For example, if your application supports displaying content on an external display, it should wait until a display is connected before creating the corresponding window.

You should always create your application's main window at launch time regardless of whether your application is being launched into the foreground or background. Creating and configuring a window is not an expensive operation by itself. However, if your application is launched straight into the background, you should avoid making the window visible until your application enters the foreground.

Creating Windows in Interface Builder

Creating your application's main window using Interface Builder is simple because the Xcode project templates do it for you. Every new Xcode application project includes a main nib file (usually with the name MainWindow.xib or some variant thereof) that includes the application's main window. In addition, these templates also define an outlet for that window in the application delegate object. You use this outlet to access the window object in your code.

Important: When creating your window in Interface Builder, it is recommended that you enable the Full Screen at Launch option in the attributes inspector. If this option is not enabled and your window is smaller than the screen of the target device, touch events will not be received by some of your views. This is because windows (like all views) do not receive touch events outside of their bounds rectangle. Because views are not clipped to the window's bounds by default, the views still appear visible but events do not reach them. Enabling the Full Screen at Launch option ensures that the window is sized appropriately for the current screen.

If you are retrofitting a project to use Interface Builder, creating a window using Interface Builder is a simple matter of dragging a window object to your nib file. Of course, you should also do the following:

● To access the window at runtime, you should connect the window to an outlet, typically one defined in your application delegate or the File's Owner of the nib file.
● If your retrofit plans include making your new nib file the main nib file of your application, you must also set the NSMainNibFile key in your application's Info.plist file to the name of your nib file. Changing the value of this key ensures that the nib file is loaded and available for use by the time the application:didFinishLaunchingWithOptions: method of your application delegate is called.

For more information about creating and configuring nib files, see Interface Builder User Guide. For information about how to load nib files into your application at runtime, see "Nib Files" in Resource Programming Guide.
Creating a Window Programmatically

If you prefer to create your application's main window programmatically, you should include code similar to the following in the application:didFinishLaunchingWithOptions: method of your application delegate:

self.window = [[[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]] autorelease];

In the preceding example, self.window is assumed to be a declared property of your application delegate that is configured to retain the window object. If you were creating a window for an external display instead, you would assign it to a different variable and you would need to specify the bounds of the UIScreen object representing that display rather than the main screen.

When creating windows, you should always set the size of the window to the full bounds of the screen. You should not reduce the size of the window to accommodate the status bar or any other items. The status bar always floats on top of the window anyway, so the only thing you should shrink to accommodate the status bar is the view you put into your window. And if you are using view controllers, the view controller should handle the sizing of your views automatically.

Adding Content to Your Window

Each window typically has a single root view object (managed by a corresponding view controller) that contains all of the other views representing your content. Using a single root view simplifies the process of changing your interface; to display new content, all you have to do is replace the root view. To install a view in your window, use the addSubview: method. For example, to install a view that is managed by a view controller, you would use code similar to the following:

[window addSubview:viewController.view];

In place of the preceding code, you can alternatively configure the rootViewController property of the window in your nib file. This property offers a convenient way to configure the root view of the window using a nib file instead of programmatically. If this property is set when the window is loaded from its nib file, UIKit automatically installs the view from the associated view controller as the root view of the window. This property is used only to install the root view and is not used by the window to communicate with the view controller.

You can use any view you want for a window's root view. Depending on your interface design, the root view can be a generic UIView object that acts as a container for one or more subviews, the root view can be a standard system view, or the root view can be a custom view that you define. Some standard system views that are commonly used as root views include scroll views, table views, and image views.

When configuring the root view of the window, you are responsible for setting its initial size and position within the window. For applications that do not include a status bar, or that display a translucent status bar, set the view size to match the size of the window. For applications that show an opaque status bar, position your view below the status bar and reduce its size accordingly. Subtracting the status bar height from the height of your view prevents the top portion of your view from being obscured.

Note: If the root view of your window is provided by a container view controller (such as a tab bar controller, navigation controller, or split-view controller), you do not need to set the initial size of the view yourself. The container view controller automatically sizes its view appropriately based on whether the status bar is visible.
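The rootViewController property can also be set in code; a minimal sketch of a programmatic launch path, assuming self.window and viewController already exist:

self.window.rootViewController = viewController; // installs viewController.view as the root view
[self.window makeKeyAndVisible];                 // show the window and make it key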
Changing the Window Level

Each UIWindow object has a configurable windowLevel property that determines how that window is positioned relative to other windows. For the most part, you should not need to change the level of your application's windows. New windows are automatically assigned to the normal window level at creation time. The normal window level indicates that the window presents application-related content. Higher window levels are reserved for information that needs to float above the application content, such as the system status bar or alert messages. And although you can assign windows to these levels yourself, the system usually does this for you when you use specific interfaces. For example, when you show or hide the status bar or display an alert view, the system automatically creates the needed windows to display those items.

Monitoring Window Changes

If you want to track the appearance or disappearance of windows inside your application, you can do so using these window-related notifications:

● UIWindowDidBecomeVisibleNotification
● UIWindowDidBecomeHiddenNotification
● UIWindowDidBecomeKeyNotification
● UIWindowDidResignKeyNotification

These notifications are delivered in response to programmatic changes in your application's windows. Thus, when your application shows or hides a window, the UIWindowDidBecomeVisibleNotification and UIWindowDidBecomeHiddenNotification notifications are delivered accordingly. These notifications are not delivered when your application moves into the background execution state. Even though your window is not displayed on the screen while your application is in the background, it is still considered visible within the context of your application.

The UIWindowDidBecomeKeyNotification and UIWindowDidResignKeyNotification notifications help your application keep track of which window is the key window—that is, which window is currently receiving keyboard events and other non-touch-related events. Whereas touch events are delivered to the window in which the touch occurred, events that do not have an associated coordinate value are delivered to the key window of your application. Only one window at a time may be key.
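Registering for one of these notifications follows the usual NSNotificationCenter pattern; a brief sketch, with the handler selector hypothetical:

[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(windowDidBecomeKey:)
                                             name:UIWindowDidBecomeKeyNotification
                                           object:nil];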
Displaying Content on an External Display

To display content on an external display, you must create an additional window for your application and associate it with the screen object representing the external display. New windows are normally associated with the main screen by default. Changing the window's associated screen object causes the contents of that window to be rerouted to the corresponding display. Once the window is associated with the correct screen, you can add views to it and show it just like you do for your application's main screen.

The UIScreen class maintains a list of screen objects representing the available hardware displays. Normally, there is only one screen object representing the main display for any iOS-based device, but devices that support connecting to an external display can have an additional screen object available. Devices that support an external display include iPhone and iPod touch devices that have Retina displays and the iPad. Older devices, such as iPhone 3GS, do not support external displays.

Note: Because external displays are essentially a video-out connection, you should not expect touch events for views and controls in a window that is associated with an external display. In addition, it is your application's responsibility to update the contents of the window as needed. Thus, to mirror the contents of your main window, your application would need to create a duplicate set of views for the external display's window and update them in tandem with the views in your main window.

The process for displaying content on an external display is described in the following sections. However, the following steps summarize the basic process:

1. At application startup, register for the screen connection and disconnection notifications.
2. When it is time to display content on the external display, create and configure a window.
● Use the screens property of UIScreen to obtain the screen object for the external display.
● Create a UIWindow object and size it appropriately for the screen (or for your content).
● Assign the UIScreen object for the external display to the screen property of the window.
● Adjust the resolution of the screen object as needed to support your content.
● Add any appropriate views to the window.
3. Show the window and update it normally.

Handling Screen Connection and Disconnection Notifications

Screen connection and disconnection notifications are crucial for handling changes to external displays gracefully. When the user connects or disconnects a display, the system sends appropriate notifications to your application. You should use these notifications to update your application state and create or release the window associated with the external display.

The important thing to remember about the connection and disconnection notifications is that they can come at any time, even when your application is suspended in the background. Therefore, it is best to observe the notifications from an object that is going to exist for the duration of your application's runtime, such as your application delegate. If your application is suspended, the notifications are queued until your application exits the suspended state and starts running in either the foreground or background.

Listing 2-1 shows the code used to register for connection and disconnection notifications. This method is called by the application delegate at initialization time but you could register for these notifications from other places in your application, too. The implementation of the handler methods is shown in Listing 2-2 (page 34).

Listing 2-1 Registering for screen connect and disconnect notifications

- (void)setupScreenConnectionNotificationHandlers
{
    NSNotificationCenter* center = [NSNotificationCenter defaultCenter];

    [center addObserver:self
               selector:@selector(handleScreenConnectNotification:)
                   name:UIScreenDidConnectNotification
                 object:nil];
    [center addObserver:self
               selector:@selector(handleScreenDisconnectNotification:)
                   name:UIScreenDidDisconnectNotification
                 object:nil];
}

If your application is active when an external display is attached to the device, it should create a second window for that display and fill it with some content. The content does not need to be the final content you want to present.
For example, if your application is not ready to use the extra screen, it can use the second window to display some placeholder content. If you do not create a window for the screen, or if you create a window but do not show it, a black field is displayed on the external display.

Listing 2-2 shows how to create a secondary window and fill it with some content. In this example, the application creates the window in the handler methods it uses to receive screen connection notifications. (For information about registering for connection and disconnection notifications, see Listing 2-1 (page 33).) The handler method for the connection notification creates a secondary window, associates it with the newly connected screen, and calls a method of the application’s main view controller to add some content to the window and show it. The handler method for the disconnection notification releases the window and notifies the main view controller so that it can adjust its presentation accordingly.

Listing 2-2 Handling connect and disconnect notifications

- (void)handleScreenConnectNotification:(NSNotification*)aNotification {
    UIScreen* newScreen = [aNotification object];
    CGRect screenBounds = newScreen.bounds;

    if (!_secondWindow) {
        _secondWindow = [[UIWindow alloc] initWithFrame:screenBounds];
        _secondWindow.screen = newScreen;

        // Set the initial UI for the window.
        [viewController displaySelectionInSecondaryWindow:_secondWindow];
    }
}

- (void)handleScreenDisconnectNotification:(NSNotification*)aNotification {
    if (_secondWindow) {
        // Hide and then delete the window.
        _secondWindow.hidden = YES;
        [_secondWindow release];
        _secondWindow = nil;

        // Update the main screen based on what is showing here.
        [viewController displaySelectionOnMainScreen];
    }
}

Configuring a Window for an External Display

To display a window on an external screen, you must associate it with the correct screen object. This process involves locating the proper UIScreen object and assigning it to the window’s screen property. You can get the list of screen objects from the screens class method of UIScreen. The array returned by this method always contains at least one object representing the main screen. If a second object is present, that object represents a connected external display.

Listing 2-3 shows a method that is called at application startup to see if an external display is already attached. If it is, the method creates a window, associates it with the external display, and adds some placeholder content before showing the window. In this case, the placeholder content is a white background and a label indicating that there is no content to display. To show the window, this method changes the value of its hidden property rather than calling makeKeyAndVisible. It does this because the window contains only static content and is not used to handle events.

Listing 2-3 Configuring a window for an external display

- (void)checkForExistingScreenAndInitializeIfPresent {
    if ([[UIScreen screens] count] > 1) {
        // Associate the window with the second screen.
        // The main screen is always at index 0.
        UIScreen* secondScreen = [[UIScreen screens] objectAtIndex:1];
        CGRect screenBounds = secondScreen.bounds;

        _secondWindow = [[UIWindow alloc] initWithFrame:screenBounds];
        _secondWindow.screen = secondScreen;

        // Add a white background to the window.
        UIView* whiteField = [[UIView alloc] initWithFrame:screenBounds];
        whiteField.backgroundColor = [UIColor whiteColor];
        [_secondWindow addSubview:whiteField];
        [whiteField release];

        // Center a label in the view.
        NSString* noContentString = [NSString stringWithFormat:@"<no content>"];
        CGSize stringSize = [noContentString sizeWithFont:[UIFont systemFontOfSize:18]];
        CGRect labelSize = CGRectMake((screenBounds.size.width - stringSize.width) / 2.0,
                                      (screenBounds.size.height - stringSize.height) / 2.0,
                                      stringSize.width, stringSize.height);
        UILabel* noContentLabel = [[UILabel alloc] initWithFrame:labelSize];
        noContentLabel.text = noContentString;
        noContentLabel.font = [UIFont systemFontOfSize:18];
        [whiteField addSubview:noContentLabel];
        [noContentLabel release];

        // Go ahead and show the window.
        _secondWindow.hidden = NO;
    }
}

Important: You should always associate a screen with a window before showing the window. While it is possible to change screens for a window that is currently visible, doing so is an expensive operation and should be avoided.

As soon as the window for an external screen is displayed, your application can begin updating it like any other window. You can add and remove subviews as needed, change the contents of subviews, animate changes to the views, and invalidate their contents as needed.

Configuring the Screen Mode of an External Display

Depending on your content, you might want to change the screen mode before associating your window with it. Many screens support multiple resolutions, some of which use different pixel aspect ratios. Screen objects use the most common screen mode by default, but you can change that mode to one that is more suitable for your content. For example, if you are implementing a game using OpenGL ES and your textures are designed for a 640 x 480 pixel screen, you might change the screen mode for screens with higher default resolutions.

If you plan to use a screen mode other than the default one, you should apply that mode to the UIScreen object before associating the screen with a window. The UIScreenMode class defines the attributes of a single screen mode. You can get a list of the modes supported by a screen from its availableModes property and iterate through the list for one that matches your needs. For more information about screen modes, see UIScreenMode Class Reference.
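For example, the following sketch (the target size is illustrative) walks the available modes of an external screen looking for one that matches 640 x 480 content and applies it before the screen is associated with a window:

UIScreen* externalScreen = [[UIScreen screens] objectAtIndex:1];
CGSize desiredSize = CGSizeMake(640.0, 480.0);

for (UIScreenMode* mode in externalScreen.availableModes) {
    if (CGSizeEqualToSize(mode.size, desiredSize)) {
        // Apply the mode before assigning the screen to a window.
        externalScreen.currentMode = mode;
        break;
    }
}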
Views

Because view objects are the main way your application interacts with the user, they have many responsibilities. Here are just a few:
● Layout and subview management
  ● A view defines its own default resizing behaviors in relation to its parent view.
  ● A view can manage a list of subviews.
  ● A view can override the size and position of its subviews as needed.
  ● A view can convert points in its coordinate system to the coordinate systems of other views or the window.
● Drawing and animation
  ● A view draws content in its rectangular area.
  ● Some view properties can be animated to new values.
● Event handling
  ● A view can receive touch events.
  ● A view participates in the responder chain.

This chapter focuses on the steps for creating, managing, and drawing views and for handling the layout and management of view hierarchies. For information about how to handle touch events (and other events) in your views, see Event Handling Guide for iOS.

Creating and Configuring View Objects

You create views as self-contained objects, either programmatically or using Interface Builder, and then you assemble them into view hierarchies for use.

Creating View Objects Using Interface Builder

The simplest way to create views is to assemble them graphically using Interface Builder. From Interface Builder, you can add views to your interface, arrange those views into hierarchies, configure each view’s settings, and connect view-related behaviors to your code. Because Interface Builder uses live view objects—that is, actual instances of the view classes—what you see at design time is what you get at runtime. You then save those live objects in a nib file, which is a resource file that preserves the state and configuration of your objects.

You usually create nib files in order to store an entire view hierarchy for one of your application’s view controllers. The top level of the nib file usually contains a single view object that represents your view controller’s view. (The view controller itself is typically represented by the File’s Owner object.) The top-level view should be sized appropriately for the target device and contain all of the other views that are to be presented. It is rare to use a nib file to store only a portion of your view controller’s view hierarchy.

When using nib files with a view controller, all you have to do is initialize the view controller with the nib file information. The view controller handles the loading and unloading of your views at the appropriate times. However, if your nib file is not associated with a view controller, you can load the nib file contents manually using an NSBundle or UINib object, which use the data in the nib file to reconstitute your view objects.

For more information about how to use Interface Builder to create and configure your views, see Interface Builder User Guide. For information about how view controllers load and manage their associated nib files, see “Custom View Controllers” in View Controller Programming Guide for iOS. For more information about how to load views programmatically from a nib file, see “Nib Files” in Resource Programming Guide.

Creating View Objects Programmatically

If you prefer to create views programmatically, you can do so using the standard allocation/initialization pattern. The default initialization method for views is the initWithFrame: method, which sets the initial size and position of the view relative to its (soon-to-be-established) parent view. For example, to create a new generic UIView object, you could use code similar to the following:

CGRect viewRect = CGRectMake(0, 0, 100, 100);
UIView* myView = [[UIView alloc] initWithFrame:viewRect];

Note: Although all views support the initWithFrame: method, some may have a preferred initialization method that you should use instead. For information about any custom initialization methods, see the reference documentation for the class.

After you create a view, you must add it to a window (or to another view in a window) before it can become visible. For information on how to add views to your view hierarchy, see “Adding and Removing Subviews” (page 43).
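As a minimal sketch of that sequence (assuming a window variable is in scope, and using the manual retain/release conventions this guide follows):

CGRect viewRect = CGRectMake(10.0, 10.0, 100.0, 100.0);
UIView* myView = [[UIView alloc] initWithFrame:viewRect];
myView.backgroundColor = [UIColor greenColor]; // give the view some visible content

[window addSubview:myView]; // the window retains the view...
[myView release];           // ...so it is safe to release our own reference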
Setting the Properties of a View

The UIView class has several declared properties for controlling the appearance and behavior of the view. These properties are for manipulating the size and position of the view, the view’s transparency, its background color, and its rendering behavior. All of these properties have appropriate default values that you can change later as needed. You can also configure many of these properties from Interface Builder using the Inspector window.

Table 3-1 lists some of the more commonly used properties (and some methods) and describes their usage. Related properties are listed together so that you can see the options you have for affecting certain aspects of the view.

Table 3-1 Usage of some key view properties

alpha, hidden, opaque: These properties affect the opacity of the view. The alpha and hidden properties change the view’s opacity directly. The opaque property tells the system how it should composite your view. Set this property to YES if your view’s content is fully opaque and therefore does not reveal any of the underlying view’s content. Setting this property to YES improves performance by eliminating unnecessary compositing operations.

bounds, frame, center, transform: These properties affect the size and position of the view. The center and frame properties represent the position of the view relative to its parent view. The frame also includes the size of the view. The bounds property defines the view’s visible content area in its own coordinate system. The transform property is used to animate or move the entire view in complex ways. For example, you would use a transform to rotate or scale the view. If the current transform is not the identity transform, the frame property is undefined and should be ignored. For information about the relationship between the bounds, frame, and center properties, see “The Relationship of the Frame, Bounds, and Center Properties” (page 18). For information about how transforms affect a view, see “Coordinate System Transformations” (page 20).

autoresizingMask, autoresizesSubviews: These properties affect the automatic resizing behavior of the view and its subviews. The autoresizingMask property controls how a view responds to changes in its parent view’s bounds. The autoresizesSubviews property controls whether the current view’s subviews are resized at all.

contentMode, contentStretch, contentScaleFactor: These properties affect the rendering behavior of content inside the view. The contentMode and contentStretch properties determine how the content is treated when the view’s width or height changes. The contentScaleFactor property is used only when you need to customize the drawing behavior of your view for high-resolution screens. For more information on how the content mode affects your view, see “Content Modes” (page 13). For information about how the content stretch rectangle affects your view, see “Stretchable Views” (page 15). For information about how to handle scale factors, see “Supporting High-Resolution Screens” in Drawing and Printing Guide for iOS.

gestureRecognizers, userInteractionEnabled, multipleTouchEnabled, exclusiveTouch: These properties affect how your view processes touch events. The gestureRecognizers property contains gesture recognizers attached to the view. The other properties control what touch events the view supports. For information about how to respond to events in your views, see Event Handling Guide for iOS.
backgroundColor, subviews, drawRect: (method), layer, layerClass (method): These properties and methods help you manage the actual content of your view. For simple views, you can set a background color and add one or more subviews. The subviews property itself contains a read-only list of subviews, but there are several methods for adding and rearranging subviews. For views with custom drawing behavior, you must override the drawRect: method. For more advanced content, you can work directly with the view’s Core Animation layer. To specify an entirely different type of layer for the view (such as a layer that supports OpenGL ES drawing calls), you must override the layerClass method.

For information about the basic properties common to all views, see UIView Class Reference. For more information about specific properties of a view, see the reference documentation for that view.

Tagging Views for Future Identification

The UIView class contains a tag property that you can use to tag individual view objects with an integer value. You can use tags to uniquely identify views inside your view hierarchy and to perform searches for those views at runtime. (Tag-based searches are faster than iterating the view hierarchy yourself.) The default value for the tag property is 0.

To search for a tagged view, use the viewWithTag: method of UIView. This method performs a depth-first search of the receiver and its subviews. It does not search superviews or other parts of the view hierarchy. Thus, calling this method from the root view of a hierarchy searches all views in the hierarchy, but calling it from a specific subview searches only a subset of views.
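For example, a small sketch (assuming a view controller context; the tag value is illustrative):

UILabel* statusLabel = [[UILabel alloc] initWithFrame:CGRectMake(20.0, 20.0, 280.0, 21.0)];
statusLabel.tag = 100; // any nonzero integer; 0 is the default for all views
[self.view addSubview:statusLabel];
[statusLabel release];

// Later, retrieve the label from anywhere that can reach the root view.
UILabel* foundLabel = (UILabel*)[self.view viewWithTag:100];
foundLabel.text = @"Ready";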
Creating and Managing a View Hierarchy

Managing view hierarchies is a crucial part of developing your application’s user interface. The organization of your views influences both the visual appearance of your application and how your application responds to changes and events. For example, the parent-child relationships in the view hierarchy determine which objects might handle a specific touch event. Similarly, parent-child relationships define how each view responds to interface orientation changes.

Figure 3-1 shows an example of how the layering of views creates the desired visual effect for an application. In the case of the Clock application, the view hierarchy is composed of a mixture of views derived from different sources. The tab bar and navigation views are special view hierarchies provided by the tab bar and navigation controller objects to manage portions of the overall user interface. Everything between those bars belongs to the custom view hierarchy that the Clock application provides.

Figure 3-1 Layered views in the Clock application (window, tab bar view, navigation view, custom view hierarchy)

There are several ways to build view hierarchies in iOS applications, including graphically in Interface Builder and programmatically in your code. The following sections show you how to assemble your view hierarchies and, having done that, how to find views in the hierarchy and convert between different view coordinate systems.

Adding and Removing Subviews

Interface Builder is the most convenient way to build view hierarchies because you assemble your views graphically, see the relationships between the views, and see exactly how those views will appear at runtime. When using Interface Builder, you save your resulting view hierarchy in a nib file, which you load at runtime as the corresponding views are needed.

If you prefer to create your views programmatically instead, you create and initialize them and then use the following methods to arrange them into hierarchies:
● To add a subview to a parent, call the addSubview: method of the parent view. This method adds the subview to the end of the parent’s list of subviews.
● To insert a subview in the middle of the parent’s list of subviews, call any of the insertSubview:... methods of the parent view. Inserting a subview in the middle of the list visually places that view behind any views that come later in the list.
● To reorder existing subviews inside their parent, call the bringSubviewToFront:, sendSubviewToBack:, or exchangeSubviewAtIndex:withSubviewAtIndex: methods of the parent view. Using these methods is faster than removing the subviews and reinserting them.
● To remove a subview from its parent, call the removeFromSuperview method of the subview (not the parent view).

When adding a subview to its parent, the subview’s current frame rectangle denotes its initial position inside the parent view. A subview whose frame lies outside of its superview’s visible bounds is not clipped by default. If you want your subview to be clipped to the superview’s bounds, you must explicitly set the clipsToBounds property of the superview to YES.

The most common example of adding a subview to another view occurs in the application:didFinishLaunchingWithOptions: method of almost every application. Listing 3-1 shows a version of this method that installs the view from the application’s main view controller into the application window. Both the window and the view controller are stored in the application’s main nib file, which is loaded before the method is called. However, the view hierarchy managed by the view controller is not actually loaded until the view property is accessed.

Listing 3-1 Adding a view to a window

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after application launch.
    // Add the view controller's view to the window and display.
    [window addSubview:viewController.view];
    [window makeKeyAndVisible];
    return YES;
}

Another common place where you might add subviews to a view hierarchy is in the loadView or viewDidLoad methods of a view controller. If you are building your views programmatically, you put your view creation code in the loadView method of your view controller. Whether you create your views programmatically or load them from a nib file, you could include additional view configuration code in the viewDidLoad method.

Listing 3-2 shows the viewDidLoad method of the TransitionsViewController class from the UICatalog sample application. The TransitionsViewController class manages the animations associated with transitioning between two views. The application’s initial view hierarchy (consisting of a root view and toolbar) is loaded from a nib file.
The code in the viewDidLoad method subsequently creates the container view and image views used to manage the transitions. The purpose of the container view is to simplify the code needed to implement the transition animations between the two image views. The container view has no real content of its own.

Listing 3-2 Adding views to an existing view hierarchy

- (void)viewDidLoad {
    [super viewDidLoad];
    self.title = NSLocalizedString(@"TransitionsTitle", @"");

    // Create the container view, which we will use for the transition
    // animation (centered horizontally).
    CGRect frame = CGRectMake(round((self.view.bounds.size.width - kImageWidth) / 2.0),
                              kTopPlacement, kImageWidth, kImageHeight);
    self.containerView = [[[UIView alloc] initWithFrame:frame] autorelease];
    [self.view addSubview:self.containerView];

    // The container view can represent the images for accessibility.
    [self.containerView setIsAccessibilityElement:YES];
    [self.containerView setAccessibilityLabel:NSLocalizedString(@"ImagesTitle", @"")];

    // Create the initial image view.
    frame = CGRectMake(0.0, 0.0, kImageWidth, kImageHeight);
    self.mainView = [[[UIImageView alloc] initWithFrame:frame] autorelease];
    self.mainView.image = [UIImage imageNamed:@"scene1.jpg"];
    [self.containerView addSubview:self.mainView];

    // Create the alternate image view (to transition between).
    CGRect imageFrame = CGRectMake(0.0, 0.0, kImageWidth, kImageHeight);
    self.flipToView = [[[UIImageView alloc] initWithFrame:imageFrame] autorelease];
    self.flipToView.image = [UIImage imageNamed:@"scene2.jpg"];
}

Important: Superviews automatically retain their subviews, so after embedding a subview it is safe to release that subview. In fact, doing so is recommended because it prevents your application from retaining the view one time too many and causing a memory leak later. Just remember that if you remove a subview from its superview and intend to reuse it, you must retain the subview again. The removeFromSuperview method autoreleases a subview before removing it from its superview. If you do not retain the view before the next event loop cycle, the view will be released. For more information about Cocoa memory management conventions, see Advanced Memory Management Programming Guide.

When you add a subview to another view, UIKit notifies both the parent and child views of the change. If you implement custom views, you can intercept these notifications by overriding one or more of the willMoveToSuperview:, willMoveToWindow:, willRemoveSubview:, didAddSubview:, didMoveToSuperview, or didMoveToWindow methods. You can use these notifications to update any state information related to your view hierarchy or to perform additional tasks.
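For instance, a custom view might override didMoveToWindow to react when it enters or leaves the screen. A minimal sketch:

- (void)didMoveToWindow {
    [super didMoveToWindow];

    if (self.window) {
        // The view is now part of an onscreen window; begin any
        // display-related work.
    } else {
        // The view was removed from its window; pause expensive activity.
    }
}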
After creating a view hierarchy, you can navigate it programmatically using the superview and subviews properties of your views. The window property of each view contains the window in which that view is currently displayed (if any). Because the root view in a view hierarchy has no parent, its superview property is set to nil. For views that are currently onscreen, the window object is the root view of the view hierarchy.

Hiding Views

To hide a view visually, you can either set its hidden property to YES or change its alpha property to 0.0. A hidden view does not receive touch events from the system. However, hidden views do participate in autoresizing and other layout operations associated with the view hierarchy. Thus, hiding a view is often a convenient alternative to removing views from your view hierarchy, especially if you plan to show the views again at some point soon.

Important: If you hide a view that is currently the first responder, the view does not automatically resign its first responder status. Events targeted at the first responder are still delivered to the hidden view. To prevent this from happening, you should force your view to resign the first responder status when you hide it. For more information about the responder chain, see Event Handling Guide for iOS.

If you want to animate a view’s transition from visible to hidden (or the reverse), you must do so using the view’s alpha property. The hidden property is not an animatable property, so any changes you make to it take effect immediately.

Locating Views in a View Hierarchy

There are two ways to locate views in a view hierarchy:
● Store pointers to any relevant views in an appropriate location, such as in the view controller that owns the views.
● Assign a unique integer to each view’s tag property and use the viewWithTag: method to locate it.

Storing references to relevant views is the most common approach to locating views and makes accessing those views very convenient. If you used Interface Builder to create your views, you can connect objects in your nib file (including the File’s Owner object that represents the managing controller object) to one another using outlets. For views you create programmatically, you can store references to those views in private member variables. Whether you use outlets or private member variables, you are responsible for retaining the views as needed and then releasing them as well. The best way to ensure objects are retained and released properly is to use declared properties.

Tags are a useful way to reduce hard-coded dependencies and support more dynamic and flexible solutions. Rather than storing a pointer to a view, you could locate it using its tag. Tags are also a more persistent way of referring to views. For example, if you wanted to save the list of views that are currently visible in your application, you would write out the tags of each visible view to a file. This is simpler than archiving the actual view objects, especially in situations where you are tracking only which views are currently visible. When your application is subsequently loaded, you would then re-create your views and use the saved list of tags to set the visibility of each view, and thereby return your view hierarchy to its previous state.

Translating, Scaling, and Rotating Views

Every view has an associated affine transform that you can use to translate, scale, or rotate the view’s content. View transforms alter the final rendered appearance of the view and are often used to implement scrolling, animations, or other visual effects.

The transform property of UIView contains a CGAffineTransform structure with the transformations to apply. By default, this property is set to the identity transform, which does not modify the appearance of the view. You can assign a new transform to this property at any time. For example, to rotate a view by 45 degrees, you could use the following code:

// M_PI/4.0 is one quarter of a half circle, or 45 degrees.
CGAffineTransform xform = CGAffineTransformMakeRotation(M_PI/4.0);
self.view.transform = xform;

Applying the transform in the preceding code to a view would rotate that view clockwise about its center point. Figure 3-2 shows how this transformation would look if it were applied to an image view embedded in an application.

Figure 3-2 Rotating a view 45 degrees (unrotated versus rotated 45˚)

When applying multiple transformations to a view, the order in which you add those transformations to the CGAffineTransform structure is significant. Rotating the view and then translating it is not the same as translating the view and then rotating it. Even if the amounts of rotation and translation are the same in each case, the sequence of the transformations affects the final results. In addition, any transformations you add are applied to the view relative to its center point. Thus, applying a rotation factor rotates the view around its center point. Scaling a view changes the width and height of the view but does not change its center point.

For more information about creating and using affine transforms, see “Transforms” in Quartz 2D Programming Guide.
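To see the ordering issue concretely, here is a brief sketch (the amounts are illustrative). CGAffineTransformConcat applies the effect of its first argument before that of its second:

CGAffineTransform rotate = CGAffineTransformMakeRotation(M_PI / 4.0);
CGAffineTransform translate = CGAffineTransformMakeTranslation(50.0, 0.0);

// Rotate first, then translate the rotated content.
CGAffineTransform rotateThenTranslate = CGAffineTransformConcat(rotate, translate);

// Translate first, then rotate. Assigning these two transforms to a view's
// transform property produces visibly different results.
CGAffineTransform translateThenRotate = CGAffineTransformConcat(translate, rotate);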
Converting Coordinates in the View Hierarchy

At various times, particularly when handling events, an application may need to convert coordinate values from one frame of reference to another. For example, touch events report the location of each touch in the window’s coordinate system, but view objects often need that information in the view’s local coordinate system. The UIView class defines the following methods for converting coordinates to and from the view’s local coordinate system:
● convertPoint:fromView:
● convertRect:fromView:
● convertPoint:toView:
● convertRect:toView:

The convert...:fromView: methods convert coordinates from some other view’s coordinate system to the local coordinate system (bounds rectangle) of the current view. Conversely, the convert...:toView: methods convert coordinates from the current view’s local coordinate system (bounds rectangle) to the coordinate system of the specified view. If you specify nil as the reference view for any of the methods, the conversions are made to and from the coordinate system of the window that contains the view.

In addition to the UIView conversion methods, the UIWindow class also defines several conversion methods. These methods are similar to the UIView versions except that instead of converting to and from a view’s local coordinate system, these methods convert to and from the window’s coordinate system:
● convertPoint:fromWindow:
● convertRect:fromWindow:
● convertPoint:toWindow:
● convertRect:toWindow:

When converting coordinates in rotated views, UIKit converts rectangles under the assumption that you want the returned rectangle to reflect the screen area covered by the source rectangle. Figure 3-3 shows an example of how rotations can cause the size of the rectangle to change during a conversion. In the figure, an outer parent view contains a rotated subview. Converting a rectangle in the subview’s coordinate system to the parent’s coordinate system yields a rectangle that is physically larger. This larger rectangle is actually the smallest rectangle in the bounds of outerView that completely encloses the rotated rectangle.

Figure 3-3 Converting values in a rotated view (a rectangle in the rotatedView coordinate system, and the same rectangle converted to the outerView coordinate system)
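As an example of the point-conversion methods, a sketch that maps a touch from window coordinates into a view’s local coordinate system (the touch and view variables are illustrative):

CGPoint windowPoint = [touch locationInView:nil]; // nil yields window coordinates
CGPoint localPoint = [myView convertPoint:windowPoint fromView:nil];

if ([myView pointInside:localPoint withEvent:nil]) {
    // The touch landed inside myView's bounds rectangle.
}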
Adjusting the Size and Position of Views at Runtime

Whenever the size of a view changes, the size and position of its subviews must change accordingly. The UIView class supports both the automatic and manual layout of views in a view hierarchy. With automatic layout, you set the rules that each view should follow when its parent view resizes, and then forget about resizing operations altogether. With manual layout, you manually adjust the size and position of views as needed.

Being Prepared for Layout Changes

Layout changes can occur whenever any of the following events happens in a view:
● The size of a view’s bounds rectangle changes.
● An interface orientation change occurs, which usually triggers a change in the root view’s bounds rectangle.
● The set of Core Animation sublayers associated with the view’s layer changes and requires layout.
● Your application forces layout to occur by calling the setNeedsLayout or layoutIfNeeded method of a view.
● Your application forces layout by calling the setNeedsLayout method of the view’s underlying layer object.

Handling Layout Changes Automatically Using Autoresizing Rules

When you change the size of a view, the position and size of any embedded subviews usually needs to change to account for the new size of their parent. The autoresizesSubviews property of the superview determines whether the subviews resize at all. If this property is set to YES, the view uses the autoresizingMask property of each subview to determine how to size and position that subview. Size changes to any subviews trigger similar layout adjustments for their embedded subviews.

For each view in your view hierarchy, setting that view’s autoresizingMask property to an appropriate value is an important part of handling automatic layout changes. Table 3-2 lists the autoresizing options you can apply to a given view and describes their effects during layout operations. You can combine constants using an OR operator or just add them together before assigning them to the autoresizingMask property. If you are using Interface Builder to assemble your views, you use the Autosizing inspector to set these properties.

Table 3-2 Autoresizing mask constants

UIViewAutoresizingNone: The view does not autoresize. (This is the default value.)

UIViewAutoresizingFlexibleHeight: The view’s height changes when the superview’s height changes. If this constant is not included, the view’s height does not change.

UIViewAutoresizingFlexibleWidth: The view’s width changes when the superview’s width changes. If this constant is not included, the view’s width does not change.

UIViewAutoresizingFlexibleLeftMargin: The distance between the view’s left edge and the superview’s left edge grows or shrinks as needed. If this constant is not included, the view’s left edge remains a fixed distance from the left edge of the superview.

UIViewAutoresizingFlexibleRightMargin: The distance between the view’s right edge and the superview’s right edge grows or shrinks as needed. If this constant is not included, the view’s right edge remains a fixed distance from the right edge of the superview.

UIViewAutoresizingFlexibleBottomMargin: The distance between the view’s bottom edge and the superview’s bottom edge grows or shrinks as needed. If this constant is not included, the view’s bottom edge remains a fixed distance from the bottom edge of the superview.

UIViewAutoresizingFlexibleTopMargin: The distance between the view’s top edge and the superview’s top edge grows or shrinks as needed. If this constant is not included, the view’s top edge remains a fixed distance from the top edge of the superview.

Figure 3-4 shows a graphical representation of how the options in the autoresizing mask apply to a view. The presence of a given constant indicates that the specified aspect of the view is flexible and may change when the superview’s bounds change. The absence of a constant indicates that the view’s layout is fixed in that aspect. When you configure a view that has more than one flexible attribute along a single axis, UIKit distributes any size changes evenly among the corresponding spaces.

Figure 3-4 View autoresizing mask constants

The easiest way to configure autoresizing rules is using the Autosizing controls in the Size inspector of Interface Builder. The flexible width and height constants from the preceding figure have the same behavior as the width and size indicators in the Autosizing controls diagram. However, the behavior and use of margin indicators is effectively reversed. In Interface Builder, the presence of a margin indicator means that the margin has a fixed size, and the absence of the indicator means the margin has a flexible size. Fortunately, Interface Builder provides an animation to show you how changes to the autoresizing behaviors affect your view.

Important: If a view’s transform property does not contain the identity transform, the frame of that view is undefined and so are the results of its autoresizing behaviors.

After the automatic autoresizing rules for all affected views have been applied, UIKit goes back and gives each view a chance to make any necessary manual adjustments to its superview. For more information about how to manage the layout of views manually, see “Tweaking the Layout of Your Views Manually” (page 54).
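For example, a sketch that combines constants from Table 3-2 so a banner-style subview (the variable name is illustrative) tracks its superview’s width while staying pinned to the top edge:

bannerView.autoresizingMask = (UIViewAutoresizingFlexibleWidth |
                               UIViewAutoresizingFlexibleBottomMargin);
// The width stretches with the superview; the flexible bottom margin
// absorbs vertical changes, so the view keeps hugging the top edge.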
Tweaking the Layout of Your Views Manually

Whenever the size of a view changes, UIKit applies the autoresizing behaviors of that view’s subviews and then calls the layoutSubviews method of the view to let it make manual changes. You can implement the layoutSubviews method in custom views when the autoresizing behaviors by themselves do not yield the results you want. Your implementation of this method can do any of the following:
● Adjust the size and position of any immediate subviews.
● Add or remove subviews or Core Animation layers.
● Force a subview to be redrawn by calling its setNeedsDisplay or setNeedsDisplayInRect: method.

One place where applications often lay out subviews manually is when implementing a large scrollable area. Because it is impractical to have a single large view for its scrollable content, applications often implement a root view that contains a number of smaller tile views. Each tile represents a portion of the scrollable content. When a scroll event happens, the root view calls its setNeedsLayout method to initiate a layout change. Its layoutSubviews method then repositions the tile views based on the amount of scrolling that occurred. As tiles scroll out of the view’s visible area, the layoutSubviews method moves the tiles to the incoming edge, replacing their contents in the process.
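As a simpler illustration of manual layout, the following sketch pins a hypothetical toolbar subview (exposed here as a toolbar property) to the bottom edge of a custom view whenever the view’s size changes:

- (void)layoutSubviews {
    [super layoutSubviews];

    CGRect bounds = self.bounds;
    CGFloat toolbarHeight = 44.0; // illustrative height
    self.toolbar.frame = CGRectMake(0.0,
                                    CGRectGetMaxY(bounds) - toolbarHeight,
                                    bounds.size.width,
                                    toolbarHeight);
}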
When writing your layout code, be sure to test your code in the following ways:
● Change the orientation of your views to make sure the layout looks correct in all supported interface orientations.
● Make sure your code responds appropriately to changes in the height of the status bar. When a phone call is active, the status bar height increases in size, and when the user ends the call, the status bar decreases in size.

For information about how autoresizing behaviors affect the size and position of your views, see “Handling Layout Changes Automatically Using Autoresizing Rules” (page 52). For an example of how to implement tiling, see the ScrollViewSuite sample.

Modifying Views at Runtime

As applications receive input from the user, they adjust their user interface in response to that input. An application might modify its views by rearranging them, changing their size or position, hiding or showing them, or loading an entirely new set of views. In iOS applications, there are several places and ways in which you perform these kinds of actions:
● In a view controller:
  ● A view controller has to create its views before showing them. It can load the views from a nib file or create them programmatically. When those views are no longer needed, it disposes of them.
  ● When a device changes orientations, a view controller might adjust the size and position of views to match. As part of its adjustment to the new orientation, it might hide some views and show others.
  ● When a view controller manages editable content, it might adjust its view hierarchy when moving to and from edit mode. For example, it might add extra buttons and other controls to facilitate editing various aspects of its content. This might also require the resizing of any existing views to accommodate the extra controls.
● In animation blocks:
  ● When you want to transition between different sets of views in your user interface, you hide some views and show others from inside an animation block.
  ● When implementing special effects, you might use an animation block to modify various properties of the view. For example, to animate changes to the size of a view, you would change the size of its frame rectangle.
● Other ways:
  ● When touch events or gestures occur, your interface might respond by loading a new set of views or changing the current set of views. For information about handling events, see Event Handling Guide for iOS.
  ● When the user interacts with a scroll view, a large scrollable area might hide and show tile subviews. For more information about supporting scrollable content, see Scroll View Programming Guide for iOS.
  ● When the keyboard appears, you might reposition or resize views so that they do not lie underneath the keyboard. For information about how to interact with the keyboard, see Text, Web, and Editing Programming Guide for iOS.

View controllers are a common place to initiate changes to your views. Because a view controller manages the view hierarchy associated with the content being displayed, it is ultimately responsible for everything that happens to those views. When loading its views or handling orientation changes, the view controller can add new views, hide or replace existing ones, and make any number of changes to make the views ready for the display. And if you implement support for editing your view’s content, the setEditing:animated: method in UIViewController gives you a place to transition your views to and from their editable versions.

Animation blocks are another common place to initiate view-related changes. The animation support built into the UIView class makes it easy to animate changes to view properties. You can also use the transitionWithView:duration:options:animations:completion: or transitionFromView:toView:duration:options:completion: methods to swap out entire sets of views for new ones.

For more information about animating views and initiating view transitions, see “Animations” (page 64). For more information on how you use view controllers to manage view-related behaviors, see View Controller Programming Guide for iOS.

Interacting with Core Animation Layers

Each view object has a dedicated Core Animation layer that manages the presentation and animation of the view’s content on the screen. Although you can do a lot with your view objects, you can also work directly with the corresponding layer objects as needed. The layer object for the view is stored in the view’s layer property.

Changing the Layer Class Associated with a View

The type of layer associated with a view cannot be changed after the view is created. Therefore, each view uses the layerClass class method to specify the class of its layer object. The default implementation of this method returns the CALayer class, and the only way to change this value is to subclass, override the method, and return a different value. You might want to change this value in the following circumstances:
● Your application uses OpenGL ES for drawing, in which case the layer must be an instance of the CAEAGLLayer class.
● Your view uses tiling to display a large scrollable area, in which case you might want to use the CATiledLayer class to back your view instead.

Implementation of the layerClass method should simply create the desired Class object and return it. For example, a view that supports OpenGL ES drawing would have the following implementation for this method:

+ (Class)layerClass {
    return [CAEAGLLayer class];
}

Each view calls its layerClass method early in its initialization process and uses the returned class to create its layer object. In addition, the view always assigns itself as the delegate of its layer object. At this point, the view owns its layer and the relationship between the view and layer must not change. You must also not assign the same view as the delegate of any other layer object. Changing the ownership or delegate relationships of the view will cause drawing problems and potential crashes in your application.

For more information about the different types of layer objects provided by Core Animation, see Core Animation Reference Collection.
Embedding Layer Objects in a View

If you prefer to work primarily with layer objects instead of views, you can incorporate custom layer objects into your view hierarchy as needed. A custom layer object is any instance of CALayer that is not owned by a view. You typically create custom layers programmatically and incorporate them using Core Animation routines. Custom layers do not receive events or participate in the responder chain but do draw themselves and respond to size changes in their parent view or layer according to the Core Animation rules.

Listing 3-3 shows an example of the viewDidLoad method from a view controller that creates a custom layer object and adds it to its root view. The layer is used to display a static image that is animated. Instead of adding the layer to the view itself, you add it to the view’s underlying layer.

Listing 3-3 Adding a custom layer to a view

- (void)viewDidLoad {
    [super viewDidLoad];

    // Create the layer.
    CALayer* myLayer = [[CALayer alloc] init];

    // Set the contents of the layer to a fixed image, and set
    // the size of the layer to match the image size.
    UIImage* layerContents = [UIImage imageNamed:@"myImage"];
    CGSize imageSize = layerContents.size;
    myLayer.bounds = CGRectMake(0, 0, imageSize.width, imageSize.height);
    myLayer.contents = (id)layerContents.CGImage;

    // Add the layer to the view.
    CALayer* viewLayer = self.view.layer;
    [viewLayer addSublayer:myLayer];

    // Center the layer in the view.
    CGRect viewBounds = self.view.bounds;
    myLayer.position = CGPointMake(CGRectGetMidX(viewBounds), CGRectGetMidY(viewBounds));

    // Release the layer, since it is retained by the view's layer.
    [myLayer release];
}

You can add any number of sublayers and arrange them into sublayer hierarchies, if you want. However, at some point, those layers must be attached to the layer object of a view. For information on how to work with layers directly, see Core Animation Programming Guide.

Defining a Custom View

If the standard system views do not do exactly what you need, you can define a custom view. Custom views give you total control over the appearance of your application’s content and how interactions with that content are handled.

Checklist for Implementing a Custom View

The job of a custom view is to present content and manage interactions with that content. The successful implementation of a custom view involves more than just drawing and handling events, though. The following checklist includes the more important methods you can override (and behaviors you can provide) when implementing a custom view:
● Define the appropriate initialization methods for your view:
  ● For views you plan to create programmatically, override the initWithFrame: method or define a custom initialization method.
  ● For views you plan to load from nib files, override the initWithCoder: method. Use this method to initialize your view and put it into a known state.
● Implement a dealloc method to handle the cleanup of any custom data.
● To handle any custom drawing, override the drawRect: method and do your drawing there.
● Set the autoresizingMask property of the view to define its autoresizing behavior.
● If your view class manages one or more integral subviews, do the following:
  ● Create those subviews during your view’s initialization sequence.
  ● Set the autoresizingMask property of each subview at creation time.
  ● If your subviews require custom layout, override the layoutSubviews method and implement your layout code there.
● To handle touch-based events, do the following:
  ● Attach any suitable gesture recognizers to the view by using the addGestureRecognizer: method.
  ● For situations where you want to process the touches yourself, override the touchesBegan:withEvent:, touchesMoved:withEvent:, touchesEnded:withEvent:, and touchesCancelled:withEvent: methods. (Remember that you should always override the touchesCancelled:withEvent: method, regardless of which other touch-related methods you override.)
● If you want the printed version of your view to look different from the onscreen version, implement the drawRect:forViewPrintFormatter: method. For detailed information about how to support printing in your views, see Drawing and Printing Guide for iOS.

In addition to overriding methods, remember that there is a lot you can do with the view’s existing properties and methods. For example, the contentMode and contentStretch properties let you change the final rendered appearance of your view and might be preferable to redrawing the content yourself. In addition to the UIView class itself, there are many aspects of a view’s underlying CALayer object that you can configure directly or indirectly. You can even change the class of the layer object itself (which you must do if you plan to use OpenGL ES to draw your view’s content).

For more information about the methods and properties of the view class, see UIView Class Reference.

Initializing Your Custom View

Every new view object you define should include a custom initWithFrame: initializer method. This method is responsible for initializing the class at creation time and putting your view object into a known state. You use this method when creating instances of your view programmatically in your code.

Listing 3-4 shows a skeletal implementation of a standard initWithFrame: method. This method calls the inherited implementation of the method first and then initializes the instance variables and state information of the class before returning the initialized object. Calling the inherited implementation is traditionally performed first so that if there is a problem, you can abort your own initialization code and return nil.

Listing 3-4 Initializing a view subclass

- (id)initWithFrame:(CGRect)aRect {
    self = [super initWithFrame:aRect];
    if (self) {
        // Set up the initial properties of the view.
        ...
    }
    return self;
}

If you plan to load instances of your custom view class from a nib file, you should be aware that in iOS, the nib-loading code does not use the initWithFrame: method to instantiate new view objects. Instead, it uses the initWithCoder: method that is part of the NSCoding protocol. Even if your view adopts the NSCoding protocol, Interface Builder does not know about your view’s custom properties and therefore does not encode those properties into the nib file. As a result, your own initWithCoder: method should perform whatever initialization code it can to put the view into a known state. You can also implement the awakeFromNib method in your view class and use that method to perform additional initialization.
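For example, a sketch that routes both initializers through a shared setup method (the commonInit name is illustrative, not a UIKit requirement):

- (id)initWithCoder:(NSCoder *)aDecoder {
    self = [super initWithCoder:aDecoder];
    if (self) {
        [self commonInit]; // shared setup, also called from initWithFrame:
    }
    return self;
}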
Implementing Your Drawing Code

For views that need to do custom drawing, you need to override the drawRect: method and do your drawing there. Custom drawing is recommended only as a last resort. In general, if you can use other views to present your content, that is preferred.

The implementation of your drawRect: method should do exactly one thing: draw your content. This method is not the place to be updating your application’s data structures or performing any tasks not related to drawing. It should configure the drawing environment, draw your content, and exit as quickly as possible. And if your drawRect: method might be called frequently, you should do everything you can to optimize your drawing code and draw as little as possible each time the method is called.

Before calling your view’s drawRect: method, UIKit configures the basic drawing environment for your view. Specifically, it creates a graphics context and adjusts the coordinate system and clipping region to match the coordinate system and visible bounds of your view. Thus, by the time your drawRect: method is called, you can begin drawing your content using native drawing technologies such as UIKit and Core Graphics. You can get a pointer to the current graphics context using the UIGraphicsGetCurrentContext function.

Important: The current graphics context is valid only for the duration of one call to your view’s drawRect: method. UIKit might create a different graphics context for each subsequent call to this method, so you should not try to cache the object and use it later.

Listing 3-5 shows a simple implementation of a drawRect: method that draws a 10-pixel-wide red border around the view. Because UIKit drawing operations use Core Graphics for their underlying implementations, you can mix drawing calls, as shown here, to get the results you expect.

Listing 3-5 A drawing method

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGRect myFrame = self.bounds;

    // Set the line width to 10 and inset the rectangle by
    // 5 pixels on all sides to compensate for the wider line.
    CGContextSetLineWidth(context, 10);
    myFrame = CGRectInset(myFrame, 5, 5); // CGRectInset returns the inset rectangle

    [[UIColor redColor] set];
    UIRectFrame(myFrame);
}

If you know that your view’s drawing code always covers the entire surface of the view with opaque content, you can improve system performance by setting the opaque property of your view to YES. When you mark a view as opaque, UIKit avoids drawing content that is located immediately behind your view. This not only reduces the amount of time spent drawing but also minimizes the work that must be done to composite your view with other content. However, you should set this property to YES only if you know your view’s content is completely opaque. If your view cannot guarantee that its contents are always opaque, you should set the property to NO.

Another way to improve drawing performance, especially during scrolling, is to set the clearsContextBeforeDrawing property of your view to NO. When this property is set to YES, UIKit automatically fills the area to be updated by your drawRect: method with transparent black before calling your method. Setting this property to NO eliminates the overhead for that fill operation but puts the burden on your application to fill the update rectangle passed to your drawRect: method with content.
Responding to Events

View objects are responder objects—instances of the UIResponder class—and are therefore capable of receiving touch events. When a touch event occurs, the window dispatches the corresponding event object to the view in which the touch occurred. If your view is not interested in an event, it can ignore it or pass it up the responder chain to be handled by a different object.

In addition to handling touch events directly, views can also use gesture recognizers to detect taps, swipes, pinches, and other types of common touch-related gestures. Gesture recognizers do the hard work of tracking touch events and making sure that they follow the right criteria to qualify them as the target gesture. Instead of your application having to track touch events, you can create the gesture recognizer, assign an appropriate target object and action method to it, and install it on your view using the addGestureRecognizer: method. The gesture recognizer then calls your action method when the corresponding gesture occurs.

If you prefer to handle touch events directly, you can implement the following methods for your view, which are described in more detail in Event Handling Guide for iOS:
● touchesBegan:withEvent:
● touchesMoved:withEvent:
● touchesEnded:withEvent:
● touchesCancelled:withEvent:

The default behavior for views is to respond to only one touch at a time. If the user puts a second finger down, the system ignores the touch event and does not report it to your view. If you plan to track multifinger gestures from your view’s event-handler methods, you need to enable multitouch events by setting the multipleTouchEnabled property of your view to YES.

Some views, such as labels and images, disable event handling altogether initially. You can control whether a view is able to receive touch events by changing the value of the view’s userInteractionEnabled property. You might temporarily set this property to NO to prevent the user from manipulating the contents of your view while a long operation is pending. To prevent events from reaching any of your views, you can also use the beginIgnoringInteractionEvents and endIgnoringInteractionEvents methods of the UIApplication object. These methods affect the delivery of events for the entire application, not just for a single view.

Note: The animation methods of UIView typically disable touch events while animations are in progress. You can override this behavior by configuring the animation appropriately. For more information about performing animations, see “Animations” (page 64).

As it handles touch events, UIKit uses the hitTest:withEvent: and pointInside:withEvent: methods of UIView to determine whether a touch event occurred inside a given view’s bounds. Although you rarely need to override these methods, you could do so to implement custom touch behaviors for your view. For example, you could override these methods to prevent subviews from handling touch events.
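A sketch of that last idea: a custom view that claims every touch inside its bounds so its subviews never receive them:

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    if ([self pointInside:point withEvent:event]) {
        return self; // swallow the touch instead of forwarding it to a subview
    }
    return [super hitTest:point withEvent:event];
}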
Cleaning Up After Your View

If your view class allocates any memory, stores references to any custom objects, or holds resources that must be released when the view is released, you must implement a dealloc method. The system calls the dealloc method when your view's retain count reaches zero and it is time to deallocate the view. Your implementation of this method should release any objects or resources held by the view and then call the inherited implementation, as shown in Listing 3-6. You should not use this method to perform any other types of tasks.

Listing 3-6 Implementing the dealloc method

- (void)dealloc
{
    // Release a retained UIColor object
    [color release];

    // Call the inherited implementation
    [super dealloc];
}

Animations

Animations provide fluid visual transitions between different states of your user interface. In iOS, animations are used extensively to reposition views, change their size, remove them from view hierarchies, and hide them. You might use animations to convey feedback to the user or to implement interesting visual effects.

In iOS, creating sophisticated animations does not require you to write any drawing code. All of the animation techniques described in this chapter use the built-in support provided by Core Animation. All you have to do is trigger the animation and let Core Animation handle the rendering of individual frames. This makes creating sophisticated animations very easy with only a few lines of code.

What Can Be Animated?

Both UIKit and Core Animation provide support for animations, but the level of support provided by each technology varies. In UIKit, animations are performed using UIView objects. Views support a basic set of animations that cover many common tasks. For example, you can animate changes to properties of views or use transition animations to replace one set of views with another.

Table 4-1 lists the animatable properties—the properties that have built-in animation support—of the UIView class. Being animatable does not mean animations happen automatically. Changing the value of these properties normally just updates the property (and the view) immediately without an animation. To animate such a change, you must change the property's value from inside an animation block, which is described in “Animating Property Changes in a View” (page 66). (A short illustration of this distinction follows the table.)

Table 4-1 Animatable UIView properties

frame
Modify this property to change the view's size and position relative to its superview's coordinate system. (If the transform property does not contain the identity transform, modify the bounds or center properties instead.)

bounds
Modify this property to change the view's size.

center
Modify this property to change the view's position relative to its superview's coordinate system.

transform
Modify this property to scale, rotate, or translate the view relative to its center point. Transformations using this property are always performed in 2D space. (To perform 3D transformations, you must animate the view's layer object using Core Animation.)

alpha
Modify this property to gradually change the transparency of the view.

backgroundColor
Modify this property to change the view's background color.

contentStretch
Modify this property to change the way the view's contents are stretched to fill the available space.
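As a minimal sketch of that distinction, consider a hypothetical view instance named myView (an assumption for this example):

// Assigning directly changes the property immediately, with no animation.
myView.alpha = 0.0;

// Wrapping the same change in an animation block animates it over half a second.
[UIView animateWithDuration:0.5 animations:^{
    myView.alpha = 0.0;
}];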
Animated view transitions are a way for you to make changes to your view hierarchy beyond those offered by view controllers. Although you should use view controllers to manage succinct view hierarchies, there may be times when you want to replace all or part of a view hierarchy. In those situations, you can use view-based transitions to animate the addition and removal of your views.

In places where you want to perform more sophisticated animations, or animations not supported by the UIView class, you can use Core Animation and the view's underlying layer to create the animation. Because view and layer objects are intricately linked together, changes to a view's layer affect the view itself. Using Core Animation, you can animate the following types of changes for your view's layer:

● The size and position of the layer
● The center point used when performing transformations
● Transformations to the layer or its sublayers in 3D space
● The addition or removal of a layer from the layer hierarchy
● The layer's Z-order relative to other sibling layers
● The layer's shadow
● The layer's border (including whether the layer's corners are rounded)
● The portion of the layer that stretches during resizing operations
● The layer's opacity
● The clipping behavior for sublayers that lie outside the layer's bounds
● The current contents of the layer
● The rasterization behavior of the layer

Note: If your view hosts custom layer objects—that is, layer objects without an associated view—you must use Core Animation to animate any changes to them.

Although this chapter addresses a few Core Animation behaviors, it does so in relation to initiating them from your view code. For more complete information about how to use Core Animation to animate layers, see Core Animation Programming Guide and Core Animation Cookbook.

Animating Property Changes in a View

In order to animate changes to a property of the UIView class, you must wrap those changes inside an animation block. The term animation block is used in the generic sense to refer to any code that designates animatable changes. In iOS 4 and later, you create an animation block using block objects. In earlier versions of iOS, you mark the beginning and end of an animation block using special class methods of the UIView class. Both techniques support the same configuration options and offer the same amount of control over the animation execution. However, the block-based methods are preferred whenever possible.

The following sections focus on the code you need in order to animate changes to view properties. For information about how to create animated transitions between sets of views, see “Creating Animated Transitions Between Views” (page 73).

Starting Animations Using the Block-Based Methods

In iOS 4 and later, you use the block-based class methods to initiate animations. There are several block-based methods that offer different levels of configuration for the animation block. These methods are:

● animateWithDuration:animations:
● animateWithDuration:animations:completion:
● animateWithDuration:delay:options:animations:completion:

Because these are class methods, the animation blocks you create with them are not tied to a single view. Thus, you can use these methods to create a single animation that involves changes to multiple views. For example, Listing 4-1 shows the code needed to fade in one view while fading out another over a one-second time period. When this code executes, the specified animations are started immediately on another thread so as to avoid blocking the current thread or your application's main thread.

Listing 4-1 Performing a simple block-based animation

[UIView animateWithDuration:1.0 animations:^{
    firstView.alpha = 0.0;
    secondView.alpha = 1.0;
}];

The animations in the preceding example run only once using an ease-in, ease-out animation curve.
If you want to change the default animation parameters, you must use the animateWithDuration:delay:options:animations:completion: method to perform your animations. This method lets you customize the following animation parameters:

● The delay to use before starting the animation
● The type of timing curve to use during the animation
● The number of times the animation should repeat
● Whether the animation should reverse itself automatically when it reaches the end
● Whether touch events are delivered to views while the animations are in progress
● Whether the animation should interrupt any in-progress animations or wait until those are complete before starting

Another thing that both the animateWithDuration:animations:completion: and animateWithDuration:delay:options:animations:completion: methods support is the ability to specify a completion handler block. You might use a completion handler to signal your application that a specific animation has finished. Completion handlers are also the way to link separate animations together.

Listing 4-2 shows an example of an animation block that uses a completion handler to initiate a new animation after the first one finishes. The first call to animateWithDuration:delay:options:animations:completion: sets up a fade-out animation and configures it with some custom options. When that animation is complete, its completion handler runs and sets up the second half of the animation, which fades the view back in after a delay. Using a completion handler is the primary way that you link multiple animations.

Listing 4-2 Creating an animation block with custom options

- (IBAction)showHideView:(id)sender
{
    // Fade out the view right away
    [UIView animateWithDuration:1.0
            delay:0.0
            options:UIViewAnimationOptionCurveEaseIn
            animations:^{
                thirdView.alpha = 0.0;
            }
            completion:^(BOOL finished){
                // Wait one second and then fade in the view
                [UIView animateWithDuration:1.0
                        delay:1.0
                        options:UIViewAnimationOptionCurveEaseOut
                        animations:^{
                            thirdView.alpha = 1.0;
                        }
                        completion:nil];
            }];
}

Important: Changing the value of a property while an animation involving that property is already in progress does not stop the current animation. Instead, the current animation continues and animates to the new value you just assigned to the property.

Starting Animations Using the Begin/Commit Methods

If your application runs in iOS 3.2 and earlier, you must use the beginAnimations:context: and commitAnimations class methods of UIView to define your animation blocks. These methods mark the beginning and end of your animation block. Any animatable properties you change between these methods are animated to their new values after you call the commitAnimations method. Execution of the animations occurs on a secondary thread so as to avoid blocking the current thread or your application's main thread.

Note: If you are writing an application for iOS 4 or later, you should use the block-based methods for animating your content instead. For information on how to use those methods, see “Starting Animations Using the Block-Based Methods” (page 66).

Listing 4-3 shows the code needed to implement the same behavior as Listing 4-1 (page 66) but using the begin/commit methods. As in Listing 4-1, this code fades one view out while fading another in over one second of time. However, in this example, you must set the duration of the animation using a separate method call.
Listing 4-3 Performing a simple begin/commit animation

[UIView beginAnimations:@"ToggleViews" context:nil];
[UIView setAnimationDuration:1.0];

// Make the animatable changes.
firstView.alpha = 0.0;
secondView.alpha = 1.0;

// Commit the changes and perform the animation.
[UIView commitAnimations];

By default, all animatable property changes within an animation block are animated. If you want to animate some changes but not others, use the setAnimationsEnabled: method to disable animations temporarily, make any changes that you do not want animated, and then call setAnimationsEnabled: again to reenable animations. You can determine if animations are currently enabled by calling the areAnimationsEnabled class method.

Note: Changing the value of a property while an animation involving that property is in progress does not stop the current animation. Instead, the animation continues and animates to the new value you just assigned to the property.

Configuring the Parameters for Begin/Commit Animations

To configure the animation parameters for a begin/commit animation block, you use any of several UIView class methods. Table 4-2 lists these methods and describes how you use them to configure your animations. Most of these methods should be called only from inside a begin/commit animation block but some may also be used with block-based animations. If you do not call one of these methods from your animation block, a default value for the corresponding attribute is used. For more information about the default value associated with each method, see the method description in UIView Class Reference.

Table 4-2 Methods for configuring animation blocks

setAnimationStartDate:
setAnimationDelay:
Use either of these methods to specify when the animations should begin executing. If the specified start date is in the past (or the delay is 0), the animations begin as soon as possible.

setAnimationDuration:
Use this method to set the period of time over which to execute the animations.

setAnimationCurve:
Use this method to set the timing curve of the animations. This controls whether animations execute linearly or change speed at certain times.

setAnimationRepeatCount:
setAnimationRepeatAutoreverses:
Use these methods to set the number of times the animation repeats and whether the animation runs in reverse at the end of each complete cycle. For more information about using these methods, see “Implementing Animations That Reverse Themselves” (page 73).

setAnimationDelegate:
setAnimationWillStartSelector:
setAnimationDidStopSelector:
Use these methods to execute code immediately before or after the animations. For more information about using a delegate, see “Configuring an Animation Delegate” (page 71).

setAnimationBeginsFromCurrentState:
Use this method to stop all previous animations immediately and start the new animations from the stopping point. If you pass NO to this method, instead of YES, the new animations do not begin executing until the previous animations stop.

Listing 4-4 shows the code needed to implement the same behavior as the code in Listing 4-2 (page 67) but using the begin/commit methods. As before, this code fades out a view, waits one second, and then fades it back in.
In order to implement the second part of the animation, the code sets up an animation delegate and implements a did-stop handler method. That handler method then sets up the second half of the animations and runs them.

Listing 4-4 Configuring animation parameters using the begin/commit methods

// This method begins the first animation.
- (IBAction)showHideView:(id)sender
{
    [UIView beginAnimations:@"ShowHideView" context:nil];
    [UIView setAnimationCurve:UIViewAnimationCurveEaseIn];
    [UIView setAnimationDuration:1.0];
    [UIView setAnimationDelegate:self];
    [UIView setAnimationDidStopSelector:@selector(showHideDidStop:finished:context:)];

    // Make the animatable changes.
    thirdView.alpha = 0.0;

    // Commit the changes and perform the animation.
    [UIView commitAnimations];
}

// Called at the end of the preceding animation.
- (void)showHideDidStop:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context
{
    [UIView beginAnimations:@"ShowHideView2" context:nil];
    [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
    [UIView setAnimationDuration:1.0];
    [UIView setAnimationDelay:1.0];

    thirdView.alpha = 1.0;

    [UIView commitAnimations];
}

Configuring an Animation Delegate

If you want to execute code immediately before or after an animation, you must associate a delegate object and a start or stop selector with your begin/commit animation block. You set your delegate object using the setAnimationDelegate: class method of UIView and you set your start and stop selectors using the setAnimationWillStartSelector: and setAnimationDidStopSelector: class methods. During the animation, the animation system calls your delegate methods at the appropriate times to give you a chance to perform your code.

The signatures of your animation delegate methods need to be similar to the following:

- (void)animationWillStart:(NSString *)animationID context:(void *)context;
- (void)animationDidStop:(NSString *)animationID finished:(NSNumber *)finished context:(void *)context;

The animationID and context parameters for both methods are the same parameters that you passed to the beginAnimations:context: method at the beginning of the animation block:

● animationID—An application-supplied string used to identify the animation.
● context—An application-supplied object that you can use to pass additional information to the delegate.

The method referenced by setAnimationDidStopSelector: has an additional parameter—a Boolean value that is YES if the animation ran to completion. If the value of this parameter is NO, the animation was either canceled or stopped prematurely by another animation.

Note: Although animation delegates can be used in the block-based methods, there is generally no need to use them there. Instead, place any code you want to run before the animations at the beginning of your block and place any code you want to run after the animations finish in a completion handler.

Nesting Animation Blocks

You can assign different timing and configuration options to parts of an animation block by nesting additional animation blocks. As the name implies, a nested animation block is a new animation block created inside an existing animation block. Nested animations are started at the same time as any parent animations but run (for the most part) with their own configuration options.
By default, nested animations do inherit the parent's duration and animation curve, but even those options can be overridden as needed. Listing 4-5 shows an example of how a nested animation is used to change the timing, duration, and behavior of some animations in the overall group. In this case, two views are being faded to total transparency, but the transparency of the anotherView object is changed back and forth several times before it is finally hidden. The UIViewAnimationOptionOverrideInheritedCurve and UIViewAnimationOptionOverrideInheritedDuration keys used in the nested animation block allow the curve and duration values from the first animation to be modified for the second animation. If these keys were not present, the duration and curve of the outer animation block would be used instead.

Listing 4-5 Nesting animations that have different configurations

[UIView animateWithDuration:1.0
        delay:1.0
        options:UIViewAnimationOptionCurveEaseOut
        animations:^{
            aView.alpha = 0.0;

            // Create a nested animation that has a different
            // duration, timing curve, and configuration.
            [UIView animateWithDuration:0.2
                    delay:0.0
                    options:UIViewAnimationOptionOverrideInheritedCurve |
                            UIViewAnimationOptionCurveLinear |
                            UIViewAnimationOptionOverrideInheritedDuration |
                            UIViewAnimationOptionRepeat |
                            UIViewAnimationOptionAutoreverse
                    animations:^{
                        [UIView setAnimationRepeatCount:2.5];
                        anotherView.alpha = 0.0;
                    }
                    completion:nil];
        }
        completion:nil];

If you are using the begin/commit methods to create your animations, nesting works in much the same way as with the block-based methods. Each successive call to beginAnimations:context: within an already open animation block creates a new nested animation block that you can configure as needed. Any configuration changes you make apply to the most recently opened animation block. All animation blocks must be closed with a call to commitAnimations before the animations are submitted and executed.

Implementing Animations That Reverse Themselves

When creating reversible animations in conjunction with a repeat count, consider specifying a noninteger value for the repeat count. For an autoreversing animation, each complete cycle of the animation involves animating from the original value to the new value and back again. If you want your animation to end on the new value, adding 0.5 to the repeat count causes the animation to complete the extra half cycle needed to end at the new value. If you do not include this half step, your animation will animate to the original value and then snap quickly to the new value, which may not be the visual effect you want. (A short sketch of this technique follows.)
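The following minimal sketch shows the half-cycle trick in isolation; the myView instance and the chosen duration are assumptions made for illustration:

// Fade the view out and back twice, then end fully faded out.
// The noninteger repeat count (2.5) adds the extra half cycle so the
// animation finishes at the new value instead of snapping back.
[UIView animateWithDuration:0.5
        delay:0.0
        options:UIViewAnimationOptionRepeat | UIViewAnimationOptionAutoreverse
        animations:^{
            [UIView setAnimationRepeatCount:2.5];
            myView.alpha = 0.0;
        }
        completion:nil];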
Creating Animated Transitions Between Views

View transitions help you hide sudden changes associated with adding, removing, hiding, or showing views in your view hierarchy. You use view transitions to implement the following types of changes:

● Change the visible subviews of an existing view. You typically choose this option when you want to make relatively small changes to an existing view.
● Replace one view in your view hierarchy with a different view. You typically choose this option when you want to replace a view hierarchy that spans all or most of the screen.

Important: View transitions should not be confused with transitions initiated by view controllers, such as the presentation of modal view controllers or the pushing of new view controllers onto a navigation stack. View transitions affect the view hierarchy only, whereas view-controller transitions change the active view controller as well. Thus, for view transitions, the view controller that was active when you initiated the transition remains active when the transition finishes. For more information about how you can use view controllers to present new content, see View Controller Programming Guide for iOS.

Changing the Subviews of a View

Changing the subviews of a view allows you to make moderate changes to the view. For example, you might add or remove subviews to toggle the superview between two different states. By the time the animations finish, the same view is displayed but its contents are now different.

In iOS 4 and later, you use the transitionWithView:duration:options:animations:completion: method to initiate a transition animation for a view. In the animations block passed to this method, the only changes that are normally animated are those associated with showing, hiding, adding, or removing subviews. Limiting animations to this set allows the view to create a snapshot image of the before and after versions of the view and animate between the two images, which is more efficient. However, if you need to animate other changes, you can include the UIViewAnimationOptionAllowAnimatedContent option when calling the method. Including that option prevents the view from creating snapshots and animates all changes directly.

Listing 4-6 is an example of how to use a transition animation to make it seem as if a new text entry page has been added. In this example, the main view contains two embedded text views. The text views are configured identically, but one is always visible while the other is always hidden. When the user taps the button to create a new page, this method toggles the visibility of the two views, resulting in a new empty page with an empty text view ready to accept text. After the transition is complete, the view saves the text from the old page using a private method and resets the now hidden text view so that it can be reused later. The view then arranges its pointers so that it can be ready to do the same thing if the user requests yet another new page.

Listing 4-6 Swapping an empty text view for an existing one

- (IBAction)displayNewPage:(id)sender
{
    [UIView transitionWithView:self.view
            duration:1.0
            options:UIViewAnimationOptionTransitionCurlUp
            animations:^{
                currentTextView.hidden = YES;
                swapTextView.hidden = NO;
            }
            completion:^(BOOL finished){
                // Save the old text and then swap the views.
                [self saveNotes:currentTextView];

                UIView *temp = currentTextView;
                currentTextView = swapTextView;
                swapTextView = temp;
            }];
}

If you need to perform view transitions in iOS 3.2 and earlier, you can use the setAnimationTransition:forView:cache: method to specify the parameters for the transition. The view you pass to that method is the same one you would pass in as the first parameter to the transitionWithView:duration:options:animations:completion: method. Listing 4-7 shows the basic structure of the animation block you need to create.
Note that to implement the completion block shown in Listing 4-6 (page 74), you would need to configure an animation delegate with a did-stop handler as described in “Configuring an Animation Delegate” (page 71).

Listing 4-7 Changing subviews using the begin/commit methods

[UIView beginAnimations:@"ToggleSiblings" context:nil];
[UIView setAnimationTransition:UIViewAnimationTransitionCurlUp forView:self.view cache:YES];
[UIView setAnimationDuration:1.0];

// Make your changes

[UIView commitAnimations];

Replacing a View with a Different View

Replacing views is something you do when you want your interface to be dramatically different. Because this technique swaps only views (and not view controllers), you are responsible for designing your application's controller objects appropriately. This technique is simply a way of presenting new views quickly using some standard transitions.

In iOS 4 and later, you use the transitionFromView:toView:duration:options:completion: method to transition between two views. This method actually removes the first view from your hierarchy and inserts the other, so you should make sure you have a reference to the first view if you want to keep it. If you want to hide views instead of removing them from your view hierarchy, pass the UIViewAnimationOptionShowHideTransitionViews key as one of the options.

Listing 4-8 shows the code needed to swap between two main views managed by a single view controller. In this example, the view controller's root view always displays one of two child views (primaryView or secondaryView). Each view presents the same content but does so in a different way. The view controller uses the displayingPrimary member variable (a Boolean value) to keep track of which view is displayed at any given time. The flip direction changes depending on which view is being displayed.

Listing 4-8 Toggling between two views in a view controller

- (IBAction)toggleMainViews:(id)sender
{
    [UIView transitionFromView:(displayingPrimary ? primaryView : secondaryView)
            toView:(displayingPrimary ? secondaryView : primaryView)
            duration:1.0
            options:(displayingPrimary ? UIViewAnimationOptionTransitionFlipFromRight :
                        UIViewAnimationOptionTransitionFlipFromLeft)
            completion:^(BOOL finished) {
                if (finished) {
                    displayingPrimary = !displayingPrimary;
                }
            }];
}

Note: In addition to swapping out views, your view controller code needs to manage the loading and unloading of both the primary and secondary views. For information on how views are loaded and unloaded by a view controller, see View Controller Programming Guide for iOS.

Linking Multiple Animations Together

The UIView animation interfaces provide support for linking separate animation blocks so that they perform sequentially instead of at the same time. The process for linking animation blocks depends on whether you are using the block-based animation methods or the begin/commit methods:

● For block-based animations, use the completion handler supported by the animateWithDuration:animations:completion: and animateWithDuration:delay:options:animations:completion: methods to execute any follow-on animations.
● For begin/commit animations, associate a delegate object and a did-stop selector with the animation.
For information about how to associate a delegate with your animations, see “Configuring an Animation Delegate” (page 71).

An alternative to linking animations together is to use nested animations with different delay factors so as to start the animations at different times. For more information on how to nest animations, see “Nesting Animation Blocks” (page 72).

Animating View and Layer Changes Together

Applications can freely mix view-based and layer-based animation code as needed, but the process for configuring your animation parameters depends on who owns the layer. Changing a view-owned layer is the same as changing the view itself, and any animations you apply to the layer's properties respect the animation parameters of the current view-based animation block. The same is not true for layers that you create yourself. Custom layer objects ignore view-based animation block parameters and use the default Core Animation parameters instead.

If you want to customize the animation parameters for layers you create, you must use Core Animation directly. Typically, animating layers using Core Animation involves creating a CABasicAnimation object or some other concrete subclass of CAAnimation. You then add that animation to the corresponding layer. You can apply the animation from either inside or outside a view-based animation block.

Listing 4-9 shows an animation that modifies a view and a custom layer at the same time. The view in this example contains a custom CALayer object at the center of its bounds. The animation rotates the view counterclockwise while rotating the layer clockwise. Because the rotations are in opposite directions, the layer maintains its original orientation relative to the screen and does not appear to rotate significantly. However, the view beneath that layer spins 360 degrees and returns to its original orientation. This example is presented primarily to demonstrate how you can mix view and layer animations. This type of mixing should not be used in situations where precise timing is needed.

Listing 4-9 Mixing view and layer animations

[UIView animateWithDuration:1.0
        delay:0.0
        options:UIViewAnimationOptionCurveLinear
        animations:^{
            // Animate the first half of the view rotation.
            CGAffineTransform xform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(-180));
            backingView.transform = xform;

            // Rotate the embedded CALayer in the opposite direction.
            CABasicAnimation *layerAnimation = [CABasicAnimation animationWithKeyPath:@"transform"];
            layerAnimation.duration = 2.0;
            layerAnimation.beginTime = 0; //CACurrentMediaTime() + 1;
            layerAnimation.valueFunction = [CAValueFunction functionWithName:kCAValueFunctionRotateZ];
            layerAnimation.timingFunction = [CAMediaTimingFunction
                    functionWithName:kCAMediaTimingFunctionLinear];
            layerAnimation.fromValue = [NSNumber numberWithFloat:0.0];
            layerAnimation.toValue = [NSNumber numberWithFloat:DEGREES_TO_RADIANS(360.0)];
            layerAnimation.byValue = [NSNumber numberWithFloat:DEGREES_TO_RADIANS(180.0)];
            [manLayer addAnimation:layerAnimation forKey:@"layerAnimation"];
        }
        completion:^(BOOL finished){
            // Now do the second half of the view rotation.
            [UIView animateWithDuration:1.0
                    delay:0.0
                    options:UIViewAnimationOptionCurveLinear
                    animations:^{
                        CGAffineTransform xform = CGAffineTransformMakeRotation(DEGREES_TO_RADIANS(-359));
                        backingView.transform = xform;
                    }
                    completion:^(BOOL finished){
                        backingView.transform = CGAffineTransformIdentity;
                    }];
        }];

Note: In Listing 4-9 (page 78), you could also create and apply the CABasicAnimation object outside of the view-based animation block to achieve the same results. All of the animations ultimately rely on Core Animation for their execution. Thus, if they are submitted at approximately the same time, they run together.

If precise timing between your view-based and layer-based animations is required, it is recommended that you create all of the animations using Core Animation. You may find that some animations are easier to perform using Core Animation anyway. For example, the view-based rotation in Listing 4-9 (page 78) requires a multistep sequence for rotations of more than 180 degrees, whereas the Core Animation portion uses a rotation value function that rotates from start to finish through a middle value.

For more information about how to create and configure animations using Core Animation, see Core Animation Programming Guide and Core Animation Cookbook.

Document Revision History

This table describes the changes to View Programming Guide for iOS.

2011-03-08
Reorganized and expanded the content of the document. Added information on how to create view-based animations. Incorporated information on how to display content on an external display. Added information about how to work with high-resolution screens.

2010-05-17
New document describing the creation and management of views, windows, and other visual interface elements.
Stream Programming Guide

Contents

Introduction to Stream Programming Guide for Cocoa
    Organization of This Document
    See Also
Cocoa Streams
Reading From Input Streams
    Preparing the Stream Object
    Handling Stream Events
    Disposing of the Stream Object
Writing To Output Streams
    Preparing the Stream Object
    Handling Stream Events
    Disposing of the Stream Object
Polling Versus Run-Loop Scheduling
Handling Stream Errors
Setting Up Socket Streams
    Basic Procedure
    Securing and Configuring the Connection
    Initiating an HTTP Request
    For More Information
Document Revision History

Figures and Listings

Cocoa Streams
    Figure 1 Sources and destinations of stream objects
Reading From Input Streams
    Listing 1 Creating and initializing an NSInputStream object
    Listing 2 Handling a bytes-available event
    Listing 3 Closing and releasing the NSInputStream object
Writing To Output Streams
    Listing 1 Creating and initializing an NSOutputStream object for memory
    Listing 2 Handling a space-available event
    Listing 3 Closing and releasing the NSOutputStream object
Polling Versus Run-Loop Scheduling
    Listing 1 Writing to an output stream using polling
Handling Stream Errors
    Listing 1 Handling stream errors
Setting Up Socket Streams
    Listing 1 Setting up a network socket stream
    Listing 2 Making an HTTP GET request

Introduction to Stream Programming Guide for Cocoa

A stream is a fundamental abstraction in programming: a sequence of bits transmitted serially from one point to another point. Cocoa provides three classes to represent streams and facilitate their use in your programs: NSStream, NSInputStream, and NSOutputStream. With the instances of these classes you can read data from, and write data to, files and application memory. You can also use these objects in socket-based connections to exchange data with remote hosts. You can also subclass the stream classes to obtain specialized stream behavior.

Organization of This Document

This document includes the following articles:

● “Cocoa Streams” (page 6) gives an overview of the Cocoa stream classes, describing architecture, capabilities, and general usage.
● “Reading From Input Streams” (page 8) explains how to create and prepare a (non-socket) input-stream object. It also describes how to handle stream events generated by all types of NSInputStream objects.
● “Writing To Output Streams” (page 12) explains how to create and prepare a (non-socket) output-stream object. It also describes how to handle stream events generated by all types of NSOutputStream objects.
● “Polling Versus Run-Loop Scheduling” (page 17) discusses the relative merits of the two techniques used to avoid blocking when reading and writing to streams. It also illustrates how to poll for stream data using the API of the stream classes.
● “Handling Stream Errors” (page 20) describes how to handle errors that occur in stream processing.
● “Setting Up Socket Streams” (page 22) explains how to set up stream objects used to communicate with remote hosts via sockets.

See Also

You may find the following external resources helpful if you are implementing socket-based network streams:

● OpenSSL — http://www.openssl.org/
● Apache SSL — http://www.apache-ssl.org/
● SOCKS — http://tools.ietf.org/html/rfc1928

Cocoa Streams

Streams provide an easy way for a program to exchange data with a variety of media in a device-independent way. A stream is a contiguous sequence of bits transmitted serially over a communications path. It is unidirectional and hence, from the perspective of a program, a stream can be an input (or read) stream or an output (or write) stream. Except for ones that are file-based, streams are non-seekable; once stream data has been provided or consumed, it cannot be retrieved again from the stream.

Cocoa includes three stream-related classes: NSStream, NSInputStream, and NSOutputStream. NSStream is an abstract class that defines the fundamental interface and properties for all stream objects. NSInputStream and NSOutputStream are subclasses of NSStream and implement default input-stream and output-stream behavior. You can create NSOutputStream instances for stream data located in memory or written to a file or C buffer; you can create NSInputStream instances for stream data read from an NSData object or a file. You can also have NSInputStream and NSOutputStream objects at the end points of a socket-based network connection and you can use stream objects without loading all of the stream data into memory at once. Figure 1 illustrates the types of input-stream and output-stream objects in terms of their sources or destinations.

Figure 1 Sources and destinations of stream objects (a diagram showing a client program reading through an NSInputStream from a file, an NSData object, or a network socket, and writing through an NSOutputStream to a file, a buffer, memory, or a network socket)

Because they deal with such a basic computing abstraction (streams), NSStream and its subclasses are intended for lower-level programming tasks. If there is a higher-level Cocoa API that is more suited for a particular task (for example, NSURL or NSFileHandle) use it instead.

Stream objects have properties associated with them. Most properties have to do with network security and configuration, namely secure-socket (SSL) levels and SOCKS proxy information. Two important additional properties are NSStreamDataWrittenToMemoryStreamKey, which permits retrieval of data written to memory for an output stream, and NSStreamFileCurrentOffsetKey, which allows you to manipulate the current read or write position in file-based streams.
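As a brief sketch of that second property, the following lines reposition a file-based stream; the fileStream variable (an open NSInputStream created from a file) and the byte offset are assumptions made for illustration:

// Move the read position of an open file-based stream to byte 1024.
NSNumber *offset = [NSNumber numberWithUnsignedLongLong:1024];
BOOL moved = [fileStream setProperty:offset forKey:NSStreamFileCurrentOffsetKey];
if (!moved) {
    NSLog(@"Could not reposition the stream.");
}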
A stream object also has a delegate associated with it. If a delegate is not explicitly set, the stream object itself becomes the delegate (a useful convention for custom subclasses). A stream object invokes the sole delegation method stream:handleEvent: for each stream-related event it handles. Of particular importance are the events that indicate when bytes are available to read from an input stream and when an output stream signals that it's ready to accept bytes. For these two events, the delegate sends the stream the appropriate message—read:maxLength: or write:maxLength:, depending on the type of stream—to get the bytes from the stream or to put bytes on the stream.

NSStream is built on the CFStream layer of Core Foundation. This close relationship means that the concrete subclasses of NSStream, NSOutputStream and NSInputStream, are toll-free bridged with their Core Foundation counterparts CFWriteStream and CFReadStream. Although there are strong similarities between the Cocoa and Core Foundation stream APIs, their implementations are not exactly coincident. The Cocoa stream classes use the delegation model for asynchronous behavior (assuming run-loop scheduling) while Core Foundation uses client callbacks. The Core Foundation stream types set the client (termed a context in Core Foundation) differently than NSStream sets the delegate; calls to set the delegate should not be mixed with calls to set the context. Otherwise you can freely intermix calls from the two APIs in your code.

Despite their strong similarities, NSStream does give you a major advantage over CFStream. Because of its Objective-C underpinnings, it is extensible. You can subclass NSStream, NSInputStream, or NSOutputStream to customize stream attributes and behavior. For example, you could create an input stream that maintains statistics on the bytes it reads; or you could make an NSStream subclass whose instances can seek through their stream, putting back bytes that have been read. NSStream has its own set of required overrides, as do NSInputStream and NSOutputStream. See the reference documentation for NSStream, NSInputStream, and NSOutputStream for details on subclassing these classes.

Reading From Input Streams

In Cocoa, reading from an NSInputStream instance consists of several steps:

1. Create and initialize an instance of NSInputStream from a source of data.
2. Schedule the stream object on a run loop and open the stream.
3. Handle the events that the stream object reports to its delegate.
4. When there is no more data to read, dispose of the stream object.

The following discussion goes into each of these steps in more detail.

Note: The examples in this document show the strategy of scheduling stream objects on run loops and setting a delegate to handle stream events. You may use polling instead of run-loop scheduling if you prefer that approach. However, run-loop scheduling with delegation is the preferred approach for various reasons (described in “Polling Versus Run-Loop Scheduling” (page 17)), and that is why it is highlighted in this document.

Preparing the Stream Object

To begin using an NSInputStream object you must have (after first locating, if necessary) a source of data for the stream. The source of data can be a file, an NSData object, or a network socket.

Note: The procedure for initializing input-stream objects from network sockets is different from the procedure for the other two data sources, and is not covered in this article. To learn about initializing an NSInputStream instance for a network connection, see “Setting Up Socket Streams” (page 22).
The initializers and factory methods for NSInputStream allow you to create and initialize the instance from an NSData object or a file. Listing 1 shows an NSInputStream instance created from a file.

Listing 1 Creating and initializing an NSInputStream object

- (void)setUpStreamForFile:(NSString *)path {
    // iStream is an NSInputStream instance variable
    iStream = [[NSInputStream alloc] initWithFileAtPath:path];
    [iStream setDelegate:self];
    [iStream scheduleInRunLoop:[NSRunLoop currentRunLoop]
                       forMode:NSDefaultRunLoopMode];
    [iStream open];
}

As this example shows, after you create the object you should set the delegate (more often than not to self). The delegate receives stream:handleEvent: messages from the NSInputStream object when that object is scheduled on the run loop and has stream-related events to report, such as when there are bytes on the stream to be read.

Before you open the stream to begin the streaming of data, send a scheduleInRunLoop:forMode: message to the stream object to schedule it to receive stream events on a run loop. By doing this, you are helping the delegate to avoid blocking when there is no data on the stream to read. If streaming is taking place on another thread, be sure to schedule the stream object on that thread's run loop. You should never attempt to access a scheduled stream from a thread different than the one owning the stream's run loop. Finally, send the NSInputStream instance an open message to start the streaming of data from the input source.

Handling Stream Events

After a stream object is sent open, you can find out about its status, whether it has bytes available to read, and the nature of any error with the following messages:

streamStatus
hasBytesAvailable
streamError

The returned status is an NSStreamStatus constant indicating that the stream is opening, reading, at the end of the stream, and so on. The returned error is an NSError object encapsulating information about any error that took place. (See the reference documentation for NSStream for descriptions of NSStreamStatus and other stream types.)

More importantly, once the stream object has been opened, it keeps sending stream:handleEvent: messages to its delegate until it encounters the end of the stream. These messages include a parameter with an NSStreamEvent constant that indicates the type of event. For NSInputStream objects, the most common types of events are NSStreamEventOpenCompleted, NSStreamEventHasBytesAvailable, and NSStreamEventEndEncountered. The delegate is typically most interested in NSStreamEventHasBytesAvailable events. Listing 2 illustrates a good approach for handling this type of event.

Listing 2 Handling a bytes-available event

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {
    switch(eventCode) {
        case NSStreamEventHasBytesAvailable:
        {
            if(!_data) {
                _data = [[NSMutableData data] retain];
            }
            uint8_t buf[1024];
            unsigned int len = 0;
            len = [(NSInputStream *)stream read:buf maxLength:1024];
            if(len) {
                [_data appendBytes:(const void *)buf length:len];
                // bytesRead is an instance variable of type NSNumber.
                [bytesRead setIntValue:[bytesRead intValue]+len];
            } else {
                NSLog(@"no buffer!");
            }
            break;
        }
        // continued
    }
}

In this implementation of stream:handleEvent: the delegate uses a switch statement to identify the passed-in NSStreamEvent constant.
If the constant is NSStreamEventHasBytesAvailable, the delegate first lazily creates (if necessary) an NSMutableData object (_data) to hold the retrieved bytes. Then it declares a buffer of a certain size (1024 bytes, in this case) and invokes the stream object's read:maxLength: method, which fills up the buffer with the specified number of bytes. If the read operation successfully fetched bytes from the stream, the delegate appends these bytes to the NSMutableData object.

There is no firm guideline on how many bytes to read at one time. Although it may be possible to read all the data in the stream in one event, this depends on the length of the stream (that is, the number of bytes in it) as well as the behavior of the kernel, including device and socket characteristics. The best approach is to use some reasonable buffer size, such as 512 bytes, one kilobyte (as in the example above), or a page size (four kilobytes).

When the NSInputStream object experiences errors processing the stream, it stops streaming and notifies its delegate with an NSStreamEventErrorOccurred event. The delegate should handle the error in its stream:handleEvent: method as described in “Handling Stream Errors” (page 20).

Disposing of the Stream Object

When an NSInputStream object reaches the end of a stream, it sends the delegate an NSStreamEventEndEncountered event in a stream:handleEvent: message. The delegate should dispose of the object by doing the mirror-opposite of what it did to prepare the object. In other words, it should first close the stream object, remove it from the run loop, and finally release it. Listing 3 gives an example of how you might do this.

Listing 3 Closing and releasing the NSInputStream object

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {
    switch(eventCode) {
        case NSStreamEventEndEncountered:
        {
            [stream close];
            [stream removeFromRunLoop:[NSRunLoop currentRunLoop]
                              forMode:NSDefaultRunLoopMode];
            [stream release];
            stream = nil; // stream is ivar, so reinit it
            break;
        }
        // continued ...
    }
}

Writing To Output Streams

Using an NSOutputStream instance to write to an output stream requires several steps:

1. Create and initialize an instance of NSOutputStream with a repository for the written data. Also set a delegate.
2. Schedule the stream object on a run loop and open the stream.
3. Handle the events that the stream object reports to its delegate.
4. If the stream object has written data to memory, obtain the data by requesting the NSStreamDataWrittenToMemoryStreamKey property.
5. When there is no more data to write, dispose of the stream object.

The following discussion goes into each of these steps in more detail.

Note: The examples in this document show the strategy of scheduling stream objects on run loops and setting a delegate to handle stream events. You may use polling instead of run-loop scheduling if you prefer that approach. However, run-loop scheduling with delegation is the preferred approach for various reasons (described in “Polling Versus Run-Loop Scheduling” (page 17)), and that is why it is highlighted in this document.

Preparing the Stream Object

To begin using an NSOutputStream object you must specify a destination for the data written to the stream. The destination for an output-stream object can be a file, a C buffer, application memory, or a network socket.
Note: The procedure for initializing output-stream objects from network sockets is different from the procedure for the other data destinations, and is not covered in this article. To learn about initializing an NSOutputStream instance for a network connection, see “Setting Up Socket Streams” (page 22).

The initializers and factory methods for NSOutputStream allow you to create and initialize the instance with a file, a buffer, or memory. Listing 1 shows the creation of an NSOutputStream instance that will write data to application memory.

Listing 1 Creating and initializing an NSOutputStream object for memory

- (void)createOutputStream {
    NSLog(@"Creating and opening NSOutputStream...");
    // oStream is an instance variable
    oStream = [[NSOutputStream alloc] initToMemory];
    [oStream setDelegate:self];
    [oStream scheduleInRunLoop:[NSRunLoop currentRunLoop]
                       forMode:NSDefaultRunLoopMode];
    [oStream open];
}

As the code in Listing 1 shows, after you create the object you should set the delegate (more often than not to self). The delegate receives stream:handleEvent: messages from the NSOutputStream object when that object has stream-related events to report, such as when the stream has space for bytes.

Before you open the stream to begin the streaming of data, send a scheduleInRunLoop:forMode: message to the stream object to schedule it to receive stream events on a run loop. By doing this, you are helping the delegate to avoid blocking when the stream is unable to accept more bytes. If streaming is taking place on another thread, be sure to schedule the stream object on that thread's run loop. You should never attempt to access a scheduled stream from a thread different than the one owning the stream's run loop. Finally, send the NSOutputStream instance an open message to start the streaming of data to the output container.

Handling Stream Events

After a stream object is sent open, you can find out about its status, whether it has space for writing data, and the nature of any error with the following messages:

streamStatus
hasSpaceAvailable
streamError

The returned status is an NSStreamStatus constant indicating that the stream is opening, writing, at the end of the stream, and so on. The returned error is an NSError object encapsulating information about any error that took place. (See the reference documentation for NSStream for descriptions of NSStreamStatus and other stream types.)

More importantly, once the stream object has been opened, it keeps sending stream:handleEvent: messages to its delegate (as long as the delegate continues to put bytes on the stream) until it encounters the end of the stream. These messages include a parameter with an NSStreamEvent constant that indicates the type of event. For NSOutputStream objects, the most common types of events are NSStreamEventOpenCompleted, NSStreamEventHasSpaceAvailable, and NSStreamEventEndEncountered. The delegate is typically most interested in NSStreamEventHasSpaceAvailable events. Listing 2 illustrates one approach you could take to handle this type of event.
Listing 2 Handling a space-available event

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {
    switch(eventCode) {
        case NSStreamEventHasSpaceAvailable:
        {
            uint8_t *readBytes = (uint8_t *)[_data mutableBytes];
            readBytes += byteIndex; // instance variable to move pointer
            int data_len = [_data length];
            unsigned int len = ((data_len - byteIndex >= 1024) ?
                1024 : (data_len - byteIndex));
            uint8_t buf[len];
            (void)memcpy(buf, readBytes, len);
            len = [stream write:(const uint8_t *)buf maxLength:len];
            byteIndex += len;
            break;
        }
        // continued ...
    }
}

In this implementation of stream:handleEvent: the delegate uses a switch statement to identify the passed-in NSStreamEvent constant. If the constant is NSStreamEventHasSpaceAvailable, the delegate gets the bytes held by an NSMutableData object (_data) and advances the pointer for the current write operation. It next determines the byte capacity of the impending write operation (1024 or the remaining bytes to write), declares a buffer of that size, and copies that amount of data to the buffer. Next the delegate invokes the output-stream object's write:maxLength: method to put the buffer's contents onto the output stream. Finally it advances the index used to advance the readBytes pointer for the next operation.

If the delegate receives an NSStreamEventHasSpaceAvailable event and does not write anything to the stream, it does not receive further space-available events from the run loop until the NSOutputStream object receives more bytes. When this happens, the run loop is restarted for space-available events. If this scenario is likely in your implementation, you can have the delegate set a flag when it doesn't write to the stream upon receiving an NSStreamEventHasSpaceAvailable event. Later, when your program has more bytes to write, it can check this flag and, if set, write to the output-stream instance directly.

There is no firm guideline on how many bytes to write at one time. Although it may be possible to write all the data to the stream in one event, this depends on external factors, such as the behavior of the kernel and device and socket characteristics. The best approach is to use some reasonable buffer size, such as 512 bytes, one kilobyte (as in the example above), or a page size (four kilobytes).

When the NSOutputStream object experiences errors writing to the stream, it stops streaming and notifies its delegate with an NSStreamEventErrorOccurred event. The delegate should handle the error in its stream:handleEvent: method as described in “Handling Stream Errors” (page 20).

Disposing of the Stream Object

When an NSOutputStream object concludes writing data to an output stream, it sends the delegate an NSStreamEventEndEncountered event in a stream:handleEvent: message. At this point the delegate should dispose of the stream object by doing the mirror-opposite of what it did to prepare the object. In other words, it should first close the stream object, remove it from the run loop, and finally release it. Furthermore, if the destination for the NSOutputStream object is application memory (that is, you created the instance using initToMemory or the factory method outputStreamToMemory), you might now want to retrieve the data held in memory. Listing 3 illustrates how you might do all of these things.
There is no firm guideline on how many bytes to write at one time. Although it may be possible to write all the data to the stream in one event, this depends on external factors, such as the behavior of the kernel and the characteristics of the device and socket. The best approach is to use some reasonable buffer size, such as 512 bytes, one kilobyte (as in the example above), or a page size (four kilobytes).

When the NSOutputStream object experiences errors writing to the stream, it stops streaming and notifies its delegate with an NSStreamEventErrorOccurred event. The delegate should handle the error in its stream:handleEvent: method as described in "Handling Stream Errors."

Disposing of the Stream Object

When an NSOutputStream object concludes writing data to an output stream, it sends the delegate an NSStreamEventEndEncountered event in a stream:handleEvent: message. At this point the delegate should dispose of the stream object by doing the mirror-opposite of what it did to prepare the object. In other words, it should first close the stream object, remove it from the run loop, and finally release it. Furthermore, if the destination for the NSOutputStream object is application memory (that is, you created the instance using initToMemory or the factory method outputStreamToMemory), you might now want to retrieve the data held in memory. Listing 3 illustrates how you might do all of these things.

Listing 3  Closing and releasing the NSOutputStream object

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {
    switch(eventCode) {
        case NSStreamEventEndEncountered:
        {
            NSData *newData = [oStream propertyForKey:
                NSStreamDataWrittenToMemoryStreamKey];
            if (!newData) {
                NSLog(@"No data written to memory!");
            } else {
                [self processData:newData];
            }
            [stream close];
            [stream removeFromRunLoop:[NSRunLoop currentRunLoop]
                              forMode:NSDefaultRunLoopMode];
            [stream release];
            oStream = nil; // oStream is an instance variable
            break;
        }
        // continued ...
    }
}

You get the stream data written to memory by sending the NSOutputStream object a propertyForKey: message, specifying a key of NSStreamDataWrittenToMemoryStreamKey. The stream object returns the data in an NSData object.
Polling Versus Run-Loop Scheduling

A potential problem with stream processing is blocking. A thread that is writing to or reading from a stream might have to wait indefinitely until there is (respectively) space on the stream to put bytes or bytes on the stream that can be read. In effect, the thread is at the mercy of the stream, and that can spell trouble for an application. Blocking can especially be a problem with socket streams because they are dependent on responses from a remote host.

With Cocoa streams you have two ways to handle stream events:

● Run-loop scheduling. You schedule a stream object on a run loop so that the delegate receives messages reporting stream-related events only when blocking is unlikely to take place. For read and write operations, the pertinent NSStreamEvent constants are NSStreamEventHasBytesAvailable and NSStreamEventHasSpaceAvailable.

● Polling. In a closed loop broken only at the end of the stream or upon error, you keep asking the stream object whether it has (for read streams) bytes available to read or (for write streams) space available for writing. The pertinent methods are hasBytesAvailable (NSInputStream) and hasSpaceAvailable (NSOutputStream).

Run-loop scheduling is almost always preferable to polling, and that is why the code examples in "Reading From Input Streams" and "Writing To Output Streams" exclusively show the use of run loops. With polling, your program is locked in a tight loop, waiting for stream events that might or might not be imminent. With run-loop scheduling, your program can go off and do other things, knowing that it will be notified when there is a stream event to handle. Moreover, run loops save you from having to manage state and are more efficient than polling. Polling is also CPU-intensive; there are other things you can be doing with your processing time.

That said, there can be situations where polling is a viable option. For example, if you are porting legacy code, you might choose to use polling because it is better suited to the threading model in the legacy code. Listing 1 illustrates a method that writes data to an output stream using polling.

Listing 1  Writing to an output stream using polling

- (void)createNewFile {
    oStream = [[NSOutputStream alloc] initToMemory];
    [oStream open];
    uint8_t *readBytes = (uint8_t *)[data mutableBytes];
    uint8_t buf[1024];
    NSUInteger len = (([data length] >= 1024) ? 1024 : [data length]);
    while (1) {
        if (len == 0) break;
        if ( [oStream hasSpaceAvailable] ) {
            // memcpy rather than strncpy, because the data may
            // contain NUL bytes
            (void)memcpy(buf, readBytes, len);
            readBytes += len;
            if ([oStream write:(const uint8_t *)buf maxLength:len] == -1) {
                [self handleError:[oStream streamError]];
                break;
            }
            [bytesWritten setIntValue:[bytesWritten intValue]+len];
            len = (([data length] - [bytesWritten intValue] >= 1024) ?
                1024 : [data length] - [bytesWritten intValue]);
        }
    }
    NSData *newData = [oStream propertyForKey:
        NSStreamDataWrittenToMemoryStreamKey];
    if (!newData) {
        NSLog(@"No data written to memory!");
    } else {
        [self processData:newData];
    }
    [oStream close];
    [oStream release];
    oStream = nil;
}
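The same pattern works for reading. A polling read loop might look like this (a minimal sketch, assuming iStream is an opened NSInputStream instance variable):

NSMutableData *received = [NSMutableData data];
uint8_t buf[1024];
while ([iStream streamStatus] != NSStreamStatusAtEnd) {
    if ([iStream hasBytesAvailable]) {
        NSInteger bytesRead = [iStream read:buf maxLength:1024];
        if (bytesRead == -1) {
            [self handleError:[iStream streamError]];
            break;
        }
        if (bytesRead == 0) break; // end of stream
        [received appendBytes:buf length:bytesRead]; // accumulate the bytes read
    }
}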
It should be pointed out that neither the polling nor the run-loop scheduling approach is an airtight defense against blocking. If the NSInputStream hasBytesAvailable method or the NSOutputStream hasSpaceAvailable method returns NO, it means in both cases that the stream definitely has no available bytes or space. However, if either of these methods returns YES, it can mean either that bytes or space are available or that the only way to find out is to attempt a read or a write operation (which could lead to a momentary block). The NSStreamEventHasBytesAvailable and NSStreamEventHasSpaceAvailable stream events have the same semantics.

Handling Stream Errors

Occasionally, and especially with sockets, streams can experience errors that prevent further processing of stream data. Generally, errors indicate the absence of something at one end of a stream, such as the crash of a remote host or the deletion of a file being streamed. There is little that a client of a stream can do when most errors occur except report the error to the user. Although a stream object that has reported an error can be queried for state before it is closed, it cannot be reused for read or write operations.

The NSStream and NSOutputStream classes inform you that an error occurred in several ways:

● If the stream object is scheduled on a run loop, the object reports an NSStreamEventErrorOccurred event to its delegate in a stream:handleEvent: message.

● At any time, the client can send a streamStatus message to a stream object and see if it returns NSStreamStatusError.

● If you attempt to write to an NSOutputStream object by sending it write:maxLength: and it returns -1, a write error has occurred.

Once you have determined that a stream object experienced an error, you can query the object with a streamError message to get more information about the error (in the form of an NSError object). Next, inform the user about the error. Listing 1 shows how the delegate of a run loop-scheduled stream object might handle an error.

Listing 1  Handling stream errors

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {
    NSLog(@"stream:handleEvent: is invoked...");
    switch(eventCode) {
        case NSStreamEventErrorOccurred:
        {
            NSError *theError = [stream streamError];
            NSAlert *theAlert = [[NSAlert alloc] init];
            [theAlert setMessageText:@"Error reading stream!"];
            [theAlert setInformativeText:[NSString stringWithFormat:
                @"Error %ld: %@", (long)[theError code],
                [theError localizedDescription]]];
            [theAlert addButtonWithTitle:@"OK"];
            [theAlert beginSheetModalForWindow:[NSApp mainWindow]
                                 modalDelegate:self
                                didEndSelector:
                    @selector(alertDidEnd:returnCode:contextInfo:)
                                   contextInfo:nil];
            [stream close];
            [stream release];
            break;
        }
        // continued ...
    }
}

For some errors, you can attempt to do more than inform the user. For example, if you try to set an SSL security level on a socket connection but the remote host is not secure, the stream object reports an error. You can then release the old stream object and create a new one for a non-secure socket connection.

Setting Up Socket Streams

You can use the CFStream API to establish a socket connection and, with the stream object (or objects) created as a result, send data to and receive data from a remote host. You can also configure the connection for security.

Basic Procedure

The NSStream class does not support connecting to a remote host on iOS. CFStream does support this behavior, however, and once you have created your streams with the CFStream API, you can take advantage of the toll-free bridge between CFStream and NSStream to cast your CFStreams to NSStreams. Just call the CFStreamCreatePairWithSocketToHost function, providing a host name and a port number, to receive both a CFReadStreamRef and a CFWriteStreamRef for the given host. You can then cast these objects to an NSInputStream and an NSOutputStream and proceed.

Listing 1 illustrates the use of CFStreamCreatePairWithSocketToHost. This example shows the creation of both a CFReadStreamRef object and a CFWriteStreamRef object. If you want to receive only one of these objects, just specify NULL as the parameter value for the unwanted object.

Listing 1  Setting up a network socket stream

- (IBAction)searchForSite:(id)sender {
    NSString *urlStr = [sender stringValue];
    if (![urlStr isEqualToString:@""]) {
        NSURL *website = [NSURL URLWithString:urlStr];
        if (!website) {
            NSLog(@"%@ is not a valid URL", urlStr);
            return;
        }
        CFReadStreamRef readStream;
        CFWriteStreamRef writeStream;
        CFStreamCreatePairWithSocketToHost(NULL, (CFStringRef)[website host],
            80, &readStream, &writeStream);
        NSInputStream *inputStream = (__bridge_transfer NSInputStream *)readStream;
        NSOutputStream *outputStream = (__bridge_transfer NSOutputStream *)writeStream;
        [inputStream setDelegate:self];
        [outputStream setDelegate:self];
        [inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop]
                               forMode:NSDefaultRunLoopMode];
        [outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop]
                                forMode:NSDefaultRunLoopMode];
        [inputStream open];
        [outputStream open];
        /* Store a reference to the input and output streams so that
           they don't go away.... */
        ...
    }
}

If you pass in invalid parameters, one or both of the requested CFReadStreamRef and CFWriteStreamRef objects are NULL. Once you have cast the CFStreams to NSStreams, set the delegate, schedule the stream on a run loop, and open the stream as usual. The delegate should begin to receive stream-event messages (stream:handleEvent:). See "Reading From Input Streams" and "Writing To Output Streams" for more information.
Securing and Configuring the Connection

Before you open a stream object, you might want to set security and other features for the connection to the remote host (which might be, for example, an HTTPS server). NSStream defines properties that affect the security of TCP/IP socket connections in two ways:

● Secure Sockets Layer (SSL). A security protocol using digital certificates to provide data encryption, server authentication, message integrity, and (optionally) client authentication for TCP/IP connections.

● SOCKS proxy server. A server that sits between a client application and a real server over a TCP/IP connection. It intercepts requests to the real server and, if it cannot fulfill them from a cache of recently requested files, forwards them to the real server. SOCKS proxy servers help improve performance over a network and can also be used to filter requests.

For SSL security, NSStream defines various security-level properties (for example, NSStreamSocketSecurityLevelSSLv2). You set these properties by sending setProperty:forKey: to the stream object using the key NSStreamSocketSecurityLevelKey, as in this sample message:

[inputStream setProperty:NSStreamSocketSecurityLevelTLSv1
                  forKey:NSStreamSocketSecurityLevelKey];

You must set the property before you open the stream. Once the stream opens, it goes through a handshake protocol to find out what level of SSL security the other side of the connection is using. If the security level is not compatible with the specified property, the stream object generates an error event. However, if you request a negotiated security level (NSStreamSocketSecurityLevelNegotiatedSSL), the security level becomes the highest that both sides of the connection can implement. Still, if you try to set an SSL security level when the remote host is not secure, an error is generated.

To configure a SOCKS proxy server for a connection, you need to construct a dictionary whose keys have the form NSStreamSOCKSProxyNameKey, where Name identifies a particular setting (for example, NSStreamSOCKSProxyHostKey). The value of each key is the SOCKS proxy setting that Name refers to. Then, using setProperty:forKey:, set the dictionary as the value of the NSStreamSOCKSProxyConfigurationKey property.
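Put together, the configuration might look like this (a minimal sketch; the proxy host and port are placeholder values):

NSDictionary *proxyConfig = [NSDictionary dictionaryWithObjectsAndKeys:
    @"socks.example.com", NSStreamSOCKSProxyHostKey,    // placeholder host
    [NSNumber numberWithInt:1080], NSStreamSOCKSProxyPortKey,
    NSStreamSOCKSProxyVersion5, NSStreamSOCKSProxyVersionKey,
    nil];
[inputStream setProperty:proxyConfig
                  forKey:NSStreamSOCKSProxyConfigurationKey];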
Initiating an HTTP Request

If you are opening a connection to an HTTP server (that is, a website), then you may have to initiate a transaction with that server by sending it an HTTP request. A good time to make this request is when the delegate of the NSOutputStream object receives an NSStreamEventHasSpaceAvailable event via a stream:handleEvent: message. Listing 2 shows the delegate creating an HTTP GET request and writing it to the output stream, after which it immediately closes the stream object.

Listing 2  Making an HTTP GET request

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {
    NSLog(@"stream:handleEvent: is invoked...");
    switch(eventCode) {
        case NSStreamEventHasSpaceAvailable:
        {
            if (stream == oStream) {
                NSString *str = [NSString stringWithFormat:
                    @"GET / HTTP/1.0\r\n\r\n"];
                const uint8_t *rawstring = (const uint8_t *)[str UTF8String];
                [oStream write:rawstring
                     maxLength:strlen((const char *)rawstring)];
                [oStream close];
            }
            break;
        }
        // continued ...
    }
}

For More Information

To learn more about using streams for networking, read Networking Overview.

Document Revision History

This table describes the changes to Stream Programming Guide.

Date        Notes
2012-09-19  Clarified behavior of CFStreamCreatePairWithSocketToHost.
2009-12-16  Updated code listings in the Setting Up Socket Streams chapter.
2009-08-28  Added links to related concepts.
2009-05-06  Added a missing comment to a code sample.
2008-10-15  Fixed broken links.
2006-10-03  Fixed a broken link.
2006-04-04  Changed event in code listing on writing to a network stream to NSStreamEventHasSpaceAvailable.
2005-07-07  Fixed bugs and changed title from "Streams."
2004-07-21  Fixed bug in code example (Radar 3597799).
2004-02-20  First version of Streams.
URL Loading System Programming Guide

Contents

Introduction
  Organization of This Document
  See Also
URL Loading System Overview
  URL Loading
  Cache Management
  Authentication and Credentials
  Cookie Storage
  Protocol Support
Using NSURLConnection
  Creating a Connection
  Controlling Response Caching
  Estimating Upload Progress
  Downloading Data Synchronously
Using NSURLDownload
  Downloading to a Predetermined Destination
  Downloading a File Using the Suggested Filename
  Displaying Download Progress
  Resuming Downloads
  Decoding Encoded Files
Handling Redirects and Other Request Changes
Authentication Challenges
  Deciding How to Respond to an Authentication Challenge
  Responding to an Authentication Challenge
    Providing Credentials
    Continuing Without Credentials
    Canceling the Connection
Understanding Cache Access
  Using the Cache for a Request
  Cache Use Semantics for the http Protocol
Document Revision History

Introduction

This guide describes the Foundation framework classes available for interacting with URLs and communicating with servers using standard Internet protocols. Together these classes are referred to as the URL loading system.

The NSURL class provides the ability to manipulate URLs and the resources they refer to. The Foundation framework also provides a rich collection of classes that include support for URL loading, cookie storage, response caching, credential storage and authentication, and writing custom protocol extensions. The URL loading system provides support for accessing resources using the following protocols:

● File Transfer Protocol (ftp://)
● Hypertext Transfer Protocol (http://)
● Secure 128-bit Hypertext Transfer Protocol (https://)
● Local file URLs (file:///)

It also transparently supports both proxy servers and SOCKS gateways using the user's system preferences.

Organization of This Document

This guide includes the following articles:

● "URL Loading System Overview" describes the classes of the URL loading system and their interaction.
● "Using NSURLConnection" describes using NSURLConnection for asynchronous connections.
● "Using NSURLDownload" describes using NSURLDownload to download files asynchronously to disk.
● "Handling Redirects and Other Request Changes" describes the options you have for responding to a change to your URL request.
● "Authentication Challenges" describes the process for authenticating your connection against a secure server.
● "Understanding Cache Access" describes how a connection uses the cache during a request.

See Also

The following sample code is available through Apple Developer Connection:

● SpecialPictureProtocol implements a custom NSURLProtocol that creates JPEG images in memory as data is downloaded.
● AutoUpdater demonstrates how to check for, and download, an application update using NSURLConnection and NSURLDownload.

URL Loading System Overview

The URL loading system is a set of classes and protocols that provide the underlying capability for an application to access the data specified by a URL. These classes fall into five categories: URL loading, cache management, authentication and credentials, cookie storage, and protocol support.

Figure 1  The URL loading system class hierarchy (a diagram rooted at NSObject, grouping the classes into the five categories: URL Loading (NSURLConnection, NSURLRequest, NSMutableURLRequest, NSURLResponse, NSHTTPURLResponse, NSURLDownload); Cache Management (NSCachedURLResponse, NSURLCache); Cookie Storage (NSHTTPCookie, NSHTTPCookieStorage); Protocol Support (NSURLProtocolClient, NSURLProtocol); and Authentication and Credentials (NSURLCredential, NSURLCredentialStorage, NSURLProtectionSpace, NSURLAuthenticationChallenge, NSURLAuthenticationChallengeSender))

URL Loading

The most commonly used classes in the URL loading system allow an application to create a request for the content of a URL and download it from the source.

A request for the contents of a URL is represented by an NSURLRequest object. The NSURLRequest class encapsulates a URL and any protocol-specific properties in a protocol-independent manner. It also provides an interface to set the timeout for a connection and specifies the policy regarding the use of any locally cached data. The NSMutableURLRequest class is a mutable subclass of NSURLRequest that allows a client application to alter an existing request.

Note: When a client application initiates a connection or download using an instance of NSMutableURLRequest, a deep copy is made of the request. Changes made to the initiating request have no effect once a download has been initialized.

Protocols, such as HTTP, that support protocol-specific properties must create categories on the NSURLRequest and NSMutableURLRequest classes to provide accessors for those properties. As an example, the HTTP protocol adds methods to NSURLRequest to return the HTTP request body, headers, and transfer method. It also adds methods to NSMutableURLRequest to set the corresponding values. Methods for setting and getting property values in those accessors are exposed in the NSURLProtocol class.
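For example, the HTTP accessors on NSMutableURLRequest can be used along these lines (a minimal sketch; the URL and header values are placeholders):

NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
    [NSURL URLWithString:@"http://www.apple.com/"]];
[request setHTTPMethod:@"POST"];
[request setValue:@"application/x-www-form-urlencoded"
    forHTTPHeaderField:@"Content-Type"];
[request setHTTPBody:[@"key=value" dataUsingEncoding:NSUTF8StringEncoding]];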
The response from a server to a request can be viewed as two parts: metadata describing the contents and the URL content data. The metadata that is common to most protocols is encapsulated by the NSURLResponse class and consists of the MIME type, expected content length, text encoding (where applicable), and the URL that provided the response. Protocols can create subclasses of NSURLResponse to store protocol-specific metadata. NSHTTPURLResponse, for example, stores the headers and the status code returned by the web server.

Note: It's important to remember that only the metadata for the response is stored in an NSURLResponse object. An NSCachedURLResponse instance is used to encapsulate an NSURLResponse, the URL content data, and any application-provided information. See "Cache Management" for details.

The NSURLConnection and NSURLDownload classes provide the interface to make a connection specified by an NSURLRequest object and download the contents. An NSURLConnection object provides data to the delegate as it is received from the originating source, whereas an NSURLDownload object writes the request data directly to disk. Both classes provide extensive delegate support for responding to redirects, authentication challenges, and error conditions.

The NSURLConnection class provides a delegate method that allows an application to control the caching of a response on a per-request basis. Downloads initiated by an NSURLDownload instance are not cached.

Cache Management

The URL loading system provides a composite on-disk and in-memory cache, allowing an application to reduce its dependence on a network connection and provide faster turnaround for previously cached responses. The cache is stored on a per-application basis.

The cache is queried by NSURLConnection according to the cache policy specified by the initiating NSURLRequest. The NSURLCache class provides methods to configure the cache size and its location on disk. It also provides methods to manage the collection of NSCachedURLResponse objects that contain the cached responses.

An NSCachedURLResponse object encapsulates the NSURLResponse and the URL content data. NSCachedURLResponse also provides a user info dictionary that can be used by an application to cache any custom data.

Not all protocol implementations support response caching. Currently only http and https requests are cached, and https requests are never cached to disk.

An NSURLConnection can control whether a response is cached and whether the response should be cached only in memory by implementing the connection:willCacheResponse: delegate method.

Authentication and Credentials

Some servers restrict access to certain content, requiring a user to authenticate with a valid user name and password in order to gain access. In the case of a web server, restricted content is grouped together into a realm that requires a single set of credentials. The URL loading system provides classes that model credentials and protected areas as well as providing secure credential persistence. Credentials can be specified to persist for a single request, for the duration of an application's launch, or permanently in the user's keychain.

Note: Credentials stored in persistent storage are kept in the user's keychain and shared among all applications.

The NSURLCredential class encapsulates a credential consisting of the user name, password, and the type of persistence to use. The NSURLProtectionSpace class represents an area that requires a specific credential. A protection space can be limited to a single URL, encompass a realm on a web server, or refer to a proxy.
A shared instance of the NSURLCredentialStorage class manages credential storage and provides the mapping of an NSURLCredential object to the corresponding NSURLProtectionSpace object for which it provides authentication.

The NSURLAuthenticationChallenge class encapsulates the information required by an NSURLProtocol implementation to authenticate a request: a proposed credential, the protection space involved, the error or response that the protocol used to determine that authentication is required, and the number of authentication attempts that have been made. An NSURLAuthenticationChallenge instance also specifies the object that initiated the authentication. The initiating object, referred to as the sender, must conform to the NSURLAuthenticationChallengeSender protocol.

NSURLAuthenticationChallenge instances are used by NSURLProtocol subclasses to inform the URL loading system that authentication is required. They are also provided to the delegate methods of NSURLConnection and NSURLDownload that facilitate customized authentication handling.

Cookie Storage

Due to the stateless nature of the HTTP protocol, cookies are often used to provide persistent storage of data across URL requests. The URL loading system provides interfaces to create and manage cookies as well as to send and receive cookies from web servers.

The NSHTTPCookie class encapsulates a cookie, providing accessors for many of the common cookie attributes. It also provides methods to convert HTTP cookie headers to NSHTTPCookie instances and to convert an NSHTTPCookie instance to headers suitable for use with an NSURLRequest. The URL loading system automatically sends any stored cookies appropriate for an NSURLRequest, unless the request specifies not to send cookies. Likewise, cookies returned in an NSURLResponse are accepted in accordance with the current cookie acceptance policy.

The NSHTTPCookieStorage class provides the interface for managing the collection of NSHTTPCookie objects shared by all applications.

iOS Note: Cookies are not shared by applications in iOS.

NSHTTPCookieStorage allows an application to specify a cookie acceptance policy. The cookie acceptance policy controls whether cookies should always be accepted, never be accepted, or accepted only from the same domain as the main document URL.

Note: Changing the cookie acceptance policy in an application affects the cookie acceptance policy for all other running applications.

When another application changes the cookie storage or the cookie acceptance policy, NSHTTPCookieStorage notifies an application by posting the NSHTTPCookieStorageCookiesChangedNotification and NSHTTPCookieStorageAcceptPolicyChangedNotification notifications.

Protocol Support

The URL loading system design allows a client application to extend the protocols that are supported for transferring data. The URL loading system natively supports the http, https, file, and ftp protocols.

Custom protocols are implemented by subclassing NSURLProtocol and then registering the new class with the URL loading system using the NSURLProtocol class method registerClass:. When an NSURLConnection or NSURLDownload object initiates a connection for an NSURLRequest, the URL loading system consults each of the registered classes in the reverse order of their registration. The first class that returns YES for a canInitWithRequest: message is used to handle the request.
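For instance, a custom subclass might claim requests for its own URL scheme along these lines (a sketch; MySpecialProtocol and the special: scheme are hypothetical, and a complete subclass must also implement canonicalRequestForRequest:, startLoading, and stopLoading):

@interface MySpecialProtocol : NSURLProtocol
@end

@implementation MySpecialProtocol
+ (BOOL)canInitWithRequest:(NSURLRequest *)request {
    // Claim only requests whose URL uses our custom scheme.
    return [[[request URL] scheme] isEqualToString:@"special"];
}
// ... canonicalRequestForRequest:, startLoading, stopLoading ...
@end

// At launch time, register the class with the URL loading system:
[NSURLProtocol registerClass:[MySpecialProtocol class]];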
The URL loading system is responsible for creating and releasing NSURLProtocol instances when connections start and complete. An application should never create an instance of NSURLProtocol directly.

When an NSURLProtocol subclass is initialized by the URL loading system, it is provided a client object that conforms to the NSURLProtocolClient protocol. The NSURLProtocol subclass sends messages from the NSURLProtocolClient protocol to the client object to inform the URL loading system of its actions as it creates a response, receives data, redirects to a new URL, requires authentication, and completes the load. If the custom protocol supports authentication, then it must conform to the NSURLAuthenticationChallengeSender protocol.

Using NSURLConnection

NSURLConnection provides the most flexible method of downloading the contents of a URL. It provides a simple interface for creating and canceling a connection, and supports a collection of delegate methods that provide feedback and control over many aspects of the connection.

Creating a Connection

In order to download the contents of a URL, an application needs to provide a delegate object that, at a minimum, implements the following delegate methods: connection:didReceiveResponse:, connection:didReceiveData:, connection:didFailWithError:, and connectionDidFinishLoading:.

The example in Listing 1 initiates a connection for a URL. It begins by creating an NSURLRequest instance for the URL, specifying the cache access policy and the timeout interval for the connection. It then creates an NSURLConnection instance, specifying the request and a delegate. If NSURLConnection can't create a connection for the request, initWithRequest:delegate: returns nil. If the connection is successful, an instance of NSMutableData is created to store the data that is provided to the delegate incrementally.

Listing 1  Creating a connection using NSURLConnection

// Create the request.
NSURLRequest *theRequest = [NSURLRequest requestWithURL:
        [NSURL URLWithString:@"http://www.apple.com/"]
    cachePolicy:NSURLRequestUseProtocolCachePolicy
    timeoutInterval:60.0];
// Create the connection with the request and start loading the data.
NSURLConnection *theConnection = [[NSURLConnection alloc]
    initWithRequest:theRequest delegate:self];
if (theConnection) {
    // Create the NSMutableData to hold the received data.
    // receivedData is an instance variable declared elsewhere.
    receivedData = [[NSMutableData data] retain];
} else {
    // Inform the user that the connection failed.
}

The download starts immediately upon receiving the initWithRequest:delegate: message. It can be canceled any time before the delegate receives a connectionDidFinishLoading: or connection:didFailWithError: message by sending the connection a cancel message.
When the server has provided sufficient data to create an NSURLResponse object, the delegate receives a connection:didReceiveResponse: message. The delegate method can examine the provided NSURLResponse and determine the expected content length of the data, MIME type, suggested filename, and other metadata provided by the server.

You should be prepared for your delegate to receive the connection:didReceiveResponse: message multiple times for a single connection. This message can be sent due to server redirects or, in rare cases, multi-part MIME documents. Each time the delegate receives the connection:didReceiveResponse: message, it should reset any progress indication and discard all previously received data. The example implementation in Listing 2 simply resets the length of the received data to 0 each time it is called.

Listing 2  Example connection:didReceiveResponse: implementation

- (void)connection:(NSURLConnection *)connection
    didReceiveResponse:(NSURLResponse *)response {
    // This method is called when the server has determined that it
    // has enough information to create the NSURLResponse.
    // It can be called multiple times, for example in the case of a
    // redirect, so each time we reset the data.
    // receivedData is an instance variable declared elsewhere.
    [receivedData setLength:0];
}

The delegate is periodically sent connection:didReceiveData: messages as the data is received. The delegate implementation is responsible for storing the newly received data. In the example implementation in Listing 3, the new data is appended to the NSMutableData object created in Listing 1.

Listing 3  Example connection:didReceiveData: implementation

- (void)connection:(NSURLConnection *)connection
    didReceiveData:(NSData *)data {
    // Append the new data to receivedData.
    // receivedData is an instance variable declared elsewhere.
    [receivedData appendData:data];
}

You can also use the connection:didReceiveData: method to provide an indication of the connection's progress to the user.

If an error is encountered during the download, the delegate receives a connection:didFailWithError: message. The NSError object passed as the parameter specifies the details of the error. It also provides the URL of the request that failed in the user info dictionary under the key NSURLErrorFailingURLStringErrorKey. After the delegate receives a connection:didFailWithError: message, it receives no further delegate messages for the specified connection. The example in Listing 4 releases the connection, as well as any received data, and logs the error.

Listing 4  Example connection:didFailWithError: implementation

- (void)connection:(NSURLConnection *)connection
    didFailWithError:(NSError *)error {
    // Release the connection and the data object.
    [connection release];
    // receivedData is an instance variable declared elsewhere.
    [receivedData release];
    // Inform the user.
    NSLog(@"Connection failed! Error - %@ %@",
        [error localizedDescription],
        [[error userInfo] objectForKey:NSURLErrorFailingURLStringErrorKey]);
}

Finally, if the connection succeeds in downloading the request, the delegate receives the connectionDidFinishLoading: message. The delegate will receive no further messages for the connection, and the NSURLConnection object can be released. The example implementation in Listing 5 logs the length of the received data and releases both the connection object and the received data.

Listing 5  Example connectionDidFinishLoading: implementation

- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
    // Do something with the data.
    // receivedData is an instance variable declared elsewhere.
    NSLog(@"Succeeded! Received %lu bytes of data",
        (unsigned long)[receivedData length]);
    // Release the connection and the data object.
    [connection release];
    [receivedData release];
}
This represents the simplest implementation of a client using NSURLConnection. Additional delegate methods provide the ability to customize the handling of server redirects, authorization requests, and caching of the response.

Controlling Response Caching

By default, the data for a connection is cached according to the support provided by the NSURLProtocol subclass that handles the request. An NSURLConnection delegate can further refine that behavior by implementing connection:willCacheResponse:. This delegate method can examine the provided NSCachedURLResponse object and change how the response is cached, for example restricting its storage to memory only or preventing it from being cached altogether. It is also possible to insert objects in an NSCachedURLResponse's user info dictionary, causing them to be stored in the cache as part of the response.

Note: The delegate receives connection:willCacheResponse: messages only for protocols that support caching.

The example in Listing 6 prevents the caching of https responses. It also adds the current date to the user info dictionary for responses that are cached.

Listing 6  Example connection:willCacheResponse: implementation

- (NSCachedURLResponse *)connection:(NSURLConnection *)connection
    willCacheResponse:(NSCachedURLResponse *)cachedResponse {
    NSCachedURLResponse *newCachedResponse = cachedResponse;
    if ([[[[cachedResponse response] URL] scheme] isEqual:@"https"]) {
        newCachedResponse = nil;
    } else {
        NSDictionary *newUserInfo;
        newUserInfo = [NSDictionary dictionaryWithObject:[NSCalendarDate date]
                                                  forKey:@"Cached Date"];
        newCachedResponse = [[[NSCachedURLResponse alloc]
            initWithResponse:[cachedResponse response]
                        data:[cachedResponse data]
                    userInfo:newUserInfo
               storagePolicy:[cachedResponse storagePolicy]] autorelease];
    }
    return newCachedResponse;
}

Estimating Upload Progress

You can estimate the progress of an HTTP POST upload with the connection:didSendBodyData:totalBytesWritten:totalBytesExpectedToWrite: delegate method. Note that this is not an exact measurement of upload progress, because the connection may fail or may encounter an authentication challenge.
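An implementation might look like the following (a minimal sketch that simply logs the percentage):

- (void)connection:(NSURLConnection *)connection
    didSendBodyData:(NSInteger)bytesWritten
    totalBytesWritten:(NSInteger)totalBytesWritten
    totalBytesExpectedToWrite:(NSInteger)totalBytesExpectedToWrite {
    // Report how much of the request body has been sent so far.
    float percentComplete =
        (totalBytesWritten / (float)totalBytesExpectedToWrite) * 100.0f;
    NSLog(@"Upload is %f percent complete", percentComplete);
}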
Downloading Data Synchronously

NSURLConnection provides support for downloading the contents of an NSURLRequest in a synchronous manner using the class method sendSynchronousRequest:returningResponse:error:. Using this method is not recommended, because it has severe limitations:

● The client application blocks until the data has been completely received, an error is encountered, or the request times out.
● Minimal support is provided for requests that require authentication.
● There is no means of modifying the default behavior of response caching or accepting server redirects.

If the download succeeds, the contents of the request are returned as an NSData object, and an NSURLResponse for the request is returned by reference. If NSURLConnection is unable to download the URL, the method returns nil and any available NSError instance by reference in the appropriate parameter.

If the request requires authentication in order to make the connection, valid credentials must already be available in the NSURLCredentialStorage or must be provided as part of the requested URL. If the credentials are not available or fail to authenticate, the URL loading system responds by sending the NSURLProtocol subclass handling the connection a continueWithoutCredentialForAuthenticationChallenge: message.

When a synchronous connection attempt encounters a server redirect, the redirect is always honored. Likewise, the response data is stored in the cache according to the default support provided by the protocol implementation.
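If you do need a synchronous load, for example on a worker thread, the call looks like this (a minimal sketch):

NSURLResponse *response = nil;
NSError *error = nil;
NSURLRequest *request = [NSURLRequest requestWithURL:
    [NSURL URLWithString:@"http://www.apple.com/"]];
NSData *data = [NSURLConnection sendSynchronousRequest:request
                                     returningResponse:&response
                                                 error:&error];
if (!data) {
    // nil data means the download failed; error describes why.
    NSLog(@"Synchronous request failed - %@", [error localizedDescription]);
}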
NSLog(@"%@",@"downloadDidFinish"); } Using NSURLDownload Downloading to a Predetermined Destination 2010-09-01 | © 2003, 2010 Apple Inc. All Rights Reserved. 19Additional methods can be implemented by the delegate to customize the handling of authentication, server redirects and file decoding. Downloading a File Using the Suggested Filename Sometimesthe application must derive the destination filename from the downloaded data itself. Thisrequires you to implement the delegate method download:decideDestinationWithSuggestedFilename: and call setDestination:allowOverwrite: with the suggested filename. The example in Listing 2 saves the downloaded file to the desktop using the suggested filename. Listing 2 Using NSURLDownload with a filename derived from the download - (void)startDownloadingURL:sender { // Create the request. NSURLRequest *theRequest = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://www.apple.com/index.html"] cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:60.0]; // Create the download with the request and start loading the data. NSURLDownload *theDownload = [[NSURLDownload alloc] initWithRequest:theRequest delegate:self]; if (!theDownload) { // Inform the user that the download failed. } } - (void)download:(NSURLDownload *)download decideDestinationWithSuggestedFilename:(NSString *)filename { NSString *destinationFilename; NSString *homeDirectory = NSHomeDirectory(); destinationFilename = [[homeDirectory stringByAppendingPathComponent:@"Desktop"] stringByAppendingPathComponent:filename]; Using NSURLDownload Downloading a File Using the Suggested Filename 2010-09-01 | © 2003, 2010 Apple Inc. All Rights Reserved. 20[download setDestination:destinationFilename allowOverwrite:NO]; } - (void)download:(NSURLDownload *)download didFailWithError:(NSError *)error { // Release the download. [download release]; // Inform the user. NSLog(@"Download failed! Error - %@ %@", [error localizedDescription], [[error userInfo] objectForKey:NSURLErrorFailingURLStringErrorKey]); } - (void)downloadDidFinish:(NSURLDownload *)download { // Release the download. [download release]; // Do something with the data. NSLog(@"%@",@"downloadDidFinish"); } The downloaded file is stored on the user's desktop with the name index.html, which was derived from the downloaded content. Passing NO to setDestination:allowOverwrite: prevents an existing file from being overwritten by the download. Instead a unique filename is created by inserting a sequential number after the filename, for example, index-1.html. The delegate is informed when a file is created on disk if it implements the download:didCreateDestination: method. This method also gives the application the opportunity to determine the finalized filename with which the download is saved. The example in Listing 3 logs the finalized filename. Using NSURLDownload Downloading a File Using the Suggested Filename 2010-09-01 | © 2003, 2010 Apple Inc. All Rights Reserved. 21Listing 3 Logging the finalized filename using download:didCreateDestination: -(void)download:(NSURLDownload *)download didCreateDestination:(NSString *)path { // path now contains the destination path // of the download, taking into account any // unique naming caused by -setDestination:allowOverwrite: NSLog(@"Final file destination: %@",path); } This message is sent to the delegate after it has been given an opportunity to respond to the download:shouldDecodeSourceDataOfMIMEType: and download:decideDestinationWithSuggestedFilename: messages. 
Displaying Download Progress

The progress of the download can be determined by implementing the delegate methods download:didReceiveResponse: and download:didReceiveDataOfLength:. The download:didReceiveResponse: method provides the delegate an opportunity to determine the expected content length from the NSURLResponse. The delegate should reset the progress each time this message is received. The example implementation in Listing 4 demonstrates using these methods to provide progress feedback to the user.

Listing 4  Displaying the download progress

- (void)setDownloadResponse:(NSURLResponse *)aDownloadResponse {
    [aDownloadResponse retain];
    // downloadResponse is an instance variable defined elsewhere.
    [downloadResponse release];
    downloadResponse = aDownloadResponse;
}

- (void)download:(NSURLDownload *)download
    didReceiveResponse:(NSURLResponse *)response {
    // Reset the progress; this method may be called multiple times.
    // bytesReceived is an instance variable defined elsewhere.
    bytesReceived = 0;
    // Retain the response to use later.
    [self setDownloadResponse:response];
}

- (void)download:(NSURLDownload *)download
    didReceiveDataOfLength:(unsigned)length {
    long long expectedLength = [[self downloadResponse] expectedContentLength];
    bytesReceived = bytesReceived + length;
    if (expectedLength != NSURLResponseUnknownLength) {
        // If the expected content length is available,
        // display percent complete.
        float percentComplete = (bytesReceived/(float)expectedLength)*100.0;
        NSLog(@"Percent complete - %f", percentComplete);
    } else {
        // If the expected content length is unknown,
        // just log the progress.
        NSLog(@"Bytes received - %d", bytesReceived);
    }
}

The delegate receives a download:didReceiveResponse: message before it begins receiving download:didReceiveDataOfLength: messages.

Resuming Downloads

In some cases, you can resume a download that was canceled or that failed while in progress. To do so, first make sure your original download doesn't delete its data upon failure by passing NO to the download's setDeletesFileUponFailure: method. If the original download fails, you can obtain its data with the resumeData method. You can then initialize a new download with the initWithResumeData:delegate:path: method. When the download resumes, the download's delegate receives the download:willResumeWithResponse:fromByte: message.

You can resume a download only if both the protocol of the connection and the MIME type of the file being downloaded support resuming. You can determine whether your file's MIME type is supported with the canResumeDownloadDecodedWithEncodingMIMEType: method.
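Pieced together, resuming might look like the following (a sketch; resumePath is a hypothetical destination path, and savedResumeData is a hypothetical instance variable captured when the original download failed):

// When creating the original download:
[theDownload setDeletesFileUponFailure:NO];

// In download:didFailWithError:, capture the partial data:
savedResumeData = [[download resumeData] retain]; // nil if resuming is impossible

// Later, start a new download where the old one left off:
if (savedResumeData) {
    NSURLDownload *resumedDownload = [[NSURLDownload alloc]
        initWithResumeData:savedResumeData
                  delegate:self
                      path:resumePath];
    // The delegate then receives download:willResumeWithResponse:fromByte:.
}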
Decoding Encoded Files

NSURLDownload provides support for decoding selected file formats: MacBinary, BinHex, and gzip. If NSURLDownload determines that a file is encoded in a supported format, it attempts to send the delegate a download:shouldDecodeSourceDataOfMIMEType: message. If the delegate implements this method, it should examine the passed MIME type and return YES if the file should be decoded. The example in Listing 5 compares the MIME type of the file and allows decoding of MacBinary and BinHex encoded content.

Listing 5  Example implementation of the download:shouldDecodeSourceDataOfMIMEType: method

- (BOOL)download:(NSURLDownload *)download
    shouldDecodeSourceDataOfMIMEType:(NSString *)encodingType {
    BOOL shouldDecode = NO;
    if ([encodingType isEqual:@"application/macbinary"]) {
        shouldDecode = YES;
    } else if ([encodingType isEqual:@"application/binhex"]) {
        shouldDecode = YES;
    } else if ([encodingType isEqual:@"application/gzip"]) {
        shouldDecode = NO;
    }
    return shouldDecode;
}

Handling Redirects and Other Request Changes

A server may redirect a request for one URL to another URL. The delegates for NSURLConnection and NSURLDownload can be notified when this occurs for their connection. To handle a redirect for an instance of NSURLConnection, implement the connection:willSendRequest:redirectResponse: delegate method (for NSURLDownload, implement download:willSendRequest:redirectResponse:).

If the delegate implements this method, it can examine the new request and the response that caused the redirect, and respond in one of four ways:

● The delegate can allow the redirect by simply returning the provided request.
● The delegate can create a new request, pointing to a different URL, and return that request.
● The delegate can reject the redirect and receive any existing data from the connection by returning nil.
● The delegate can cancel both the redirect and the connection by sending the cancel message to the NSURLConnection or NSURLDownload.

The delegate also receives the connection:willSendRequest:redirectResponse: message if the NSURLProtocol subclass that handles the request has changed the NSURLRequest in order to standardize its format, for example, changing a request for http://www.apple.com to http://www.apple.com/. This occurs because the standardized, or canonical, version of the request is used for cache management. In this special case, the response passed to the delegate is nil, and the delegate should simply return the provided request.

The example implementation in Listing 1 allows canonical changes and denies all server redirects.

Listing 1  Example of an implementation of connection:willSendRequest:redirectResponse:

- (NSURLRequest *)connection:(NSURLConnection *)connection
    willSendRequest:(NSURLRequest *)request
    redirectResponse:(NSURLResponse *)redirectResponse {
    NSURLRequest *newRequest = request;
    if (redirectResponse) {
        newRequest = nil;
    }
    return newRequest;
}

If the delegate doesn't implement connection:willSendRequest:redirectResponse:, all canonical changes and server redirects are allowed.
Authentication Challenges

An NSURLRequest object often encounters an authentication challenge, or a request for credentials from the server it is connecting to. The delegates for NSURLConnection and NSURLDownload can be notified when their request encounters an authentication challenge, so that they can act accordingly.

Deciding How to Respond to an Authentication Challenge

If an NSURLRequest object requires authentication, the delegate of the NSURLConnection (or NSURLDownload) object associated with the request first receives a connection:canAuthenticateAgainstProtectionSpace: (or download:canAuthenticateAgainstProtectionSpace:) message. This allows the delegate to analyze properties of the server, including its protocol and authentication method, before attempting to authenticate against it. If your delegate is not prepared to authenticate against the server's protection space, you can return NO, and the system attempts to authenticate with information from the user's keychain.

Note: If your delegate does not implement the connection:canAuthenticateAgainstProtectionSpace: method and the protection space uses client certificate authentication or server trust authentication, the system behaves as if you returned NO. The system behaves as if you returned YES for all other authentication methods.

If your delegate returns YES from connection:canAuthenticateAgainstProtectionSpace: or doesn't implement it, and there are no valid credentials available, either as part of the requested URL or in the shared NSURLCredentialStorage, the delegate receives a connection:didReceiveAuthenticationChallenge: message. In order for the connection to continue, the delegate has three options:

● Provide authentication credentials
● Attempt to continue without credentials
● Cancel the authentication challenge

To help determine the correct course of action, the NSURLAuthenticationChallenge instance passed to the method contains information about what triggered the authentication challenge, how many attempts were made for the challenge, any previously attempted credentials, the NSURLProtectionSpace that requires the credentials, and the sender of the challenge.

If the authentication challenge has tried to authenticate previously and failed, you can obtain the attempted credentials by calling proposedCredential on the authentication challenge. The delegate can then use these credentials to populate a dialog that it presents to the user. Calling previousFailureCount on the authentication challenge returns the total number of previous authentication attempts, including those from different authentication protocols. The delegate can provide this information to the end user, to determine whether the credentials it supplied previously are failing, or to limit the maximum number of authentication attempts.

Responding to an Authentication Challenge

The following are the three ways you can respond to the connection:didReceiveAuthenticationChallenge: delegate method.

Providing Credentials

To attempt to authenticate, the application should create an NSURLCredential object with authentication information of the form expected by the server. You can determine the server's authentication method by calling authenticationMethod on the protection space of the provided authentication challenge. Some authentication methods supported by NSURLCredential are:

HTTP Basic Authentication (NSURLAuthenticationMethodHTTPBasic)
    The basic authentication method requires a user name and password. Prompt the user for the necessary information and create an NSURLCredential object with credentialWithUser:password:persistence:.

HTTP Digest Authentication (NSURLAuthenticationMethodHTTPDigest)
    Like basic authentication, digest authentication just requires a user name and password (the digest is generated automatically). Prompt the user for the necessary information and create an NSURLCredential object with credentialWithUser:password:persistence:.

Client Certificate Authentication (NSURLAuthenticationMethodClientCertificate)
    Client certificate authentication requires the system identity and all certificates needed to authenticate with the server. Create an NSURLCredential object with credentialWithIdentity:certificates:persistence:.
Responding to an Authentication Challenge

The following are the three ways you can respond to the connection:didReceiveAuthenticationChallenge: delegate method.

Providing Credentials

To attempt to authenticate, the application should create an NSURLCredential object with authentication information of the form expected by the server. You can determine the server's authentication method by calling authenticationMethod on the protection space of the provided authentication challenge. Some authentication methods supported by NSURLCredential are:

HTTP Basic Authentication (NSURLAuthenticationMethodHTTPBasic)
The basic authentication method requires a user name and password. Prompt the user for the necessary information and create an NSURLCredential object with credentialWithUser:password:persistence:.

HTTP Digest Authentication (NSURLAuthenticationMethodHTTPDigest)
Like basic authentication, digest authentication just requires a user name and password (the digest is generated automatically). Prompt the user for the necessary information and create an NSURLCredential object with credentialWithUser:password:persistence:.

Client Certificate Authentication (NSURLAuthenticationMethodClientCertificate)
Client certificate authentication requires the system identity and all certificates needed to authenticate with the server. Create an NSURLCredential object with credentialWithIdentity:certificates:persistence:.

Server Trust Authentication (NSURLAuthenticationMethodServerTrust)
Server trust authentication requires a trust provided by the protection space of the authentication challenge. Create an NSURLCredential object with credentialForTrust:.

After you've created the NSURLCredential object, pass it to the authentication challenge's sender with useCredential:forAuthenticationChallenge:.

Continuing Without Credentials

If the delegate chooses not to provide a credential for the authentication challenge, it can attempt to continue without one by calling continueWithoutCredentialForAuthenticationChallenge: on [challenge sender]. Depending on the protocol implementation, continuing without credentials may either cause the connection to fail, resulting in a connectionDidFailWithError: message, or return alternate URL contents that don't require authentication.

Canceling the Connection

The delegate may also choose to cancel the authentication challenge by calling cancelAuthenticationChallenge: on [challenge sender]. The delegate receives a connection:didCancelAuthenticationChallenge: message, providing the opportunity to give the user feedback.

The implementation shown in Listing 1 attempts to authenticate the challenge by creating an NSURLCredential instance with a user name and password supplied by the application's preferences. If the authentication has failed previously, it cancels the authentication challenge and informs the user.

Listing 1  An example of using the connection:didReceiveAuthenticationChallenge: delegate method

- (void)connection:(NSURLConnection *)connection
        didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
{
    if ([challenge previousFailureCount] == 0) {
        NSURLCredential *newCredential;
        newCredential = [NSURLCredential credentialWithUser:[self preferencesName]
                                                   password:[self preferencesPassword]
                                                persistence:NSURLCredentialPersistenceNone];
        [[challenge sender] useCredential:newCredential
               forAuthenticationChallenge:challenge];
    } else {
        [[challenge sender] cancelAuthenticationChallenge:challenge];
        // inform the user that the user name and password
        // in the preferences are incorrect
        [self showPreferencesCredentialsAreIncorrectPanel:self];
    }
}

If the delegate doesn't implement connection:didReceiveAuthenticationChallenge: and the request requires authentication, valid credentials must already be available in the URL credential storage or must be provided as part of the requested URL. If the credentials are not available or if they fail to authenticate, a continueWithoutCredentialForAuthenticationChallenge: message is sent by the underlying implementation.

Understanding Cache Access

The URL loading system provides a composite on-disk and in-memory cache of responses to requests. This cache allows an application to reduce its dependency on a network connection and increase its performance.
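The composite cache is represented by the NSURLCache class, and an application can replace the shared instance if the defaults don't suit it. A minimal sketch, with arbitrary illustrative capacities and an arbitrary on-disk directory name:

NSURLCache *cache =
    [[NSURLCache alloc] initWithMemoryCapacity: 512 * 1024        // 512 KB in memory
                                  diskCapacity: 10 * 1024 * 1024  // 10 MB on disk
                                      diskPath: @"MyAppURLCache"];
[NSURLCache setSharedURLCache: cache];
[cache release];

How individual requests consult this cache is governed by their cache policy, described next.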
Using the Cache for a Request

An NSURLRequest instance specifies how the local cache is used by setting the cache policy to one of the NSURLRequestCachePolicy values: NSURLRequestUseProtocolCachePolicy, NSURLRequestReloadIgnoringCacheData, NSURLRequestReturnCacheDataElseLoad, or NSURLRequestReturnCacheDataDontLoad.

The default cache policy for an NSURLRequest instance is NSURLRequestUseProtocolCachePolicy. The NSURLRequestUseProtocolCachePolicy behavior is protocol specific and is defined as being the best conforming policy for the protocol.

Setting the cache policy to NSURLRequestReloadIgnoringCacheData causes the URL loading system to load the data from the originating source, ignoring the cache completely.

The NSURLRequestReturnCacheDataElseLoad cache policy causes the URL loading system to use cached data, ignoring its age or expiration date, if it exists, and to load the data from the originating source only if there is no cached version.

The NSURLRequestReturnCacheDataDontLoad policy allows an application to specify that only data in the cache should be returned. Attempting to create an NSURLConnection or NSURLDownload instance with this cache policy returns nil immediately if the response is not in the local cache. This is similar in function to an "offline" mode and never brings up a network connection.

Note: Currently, only responses to http and https requests are cached. The ftp and file protocols attempt to access the originating source as allowed by the cache policy. Custom NSURLProtocol classes can provide caching if they choose.

Cache Use Semantics for the http Protocol

The most complicated cache use situation is when a request uses the http protocol and has set the cache policy to NSURLRequestUseProtocolCachePolicy.

If an NSCachedURLResponse does not exist for the request, then the data is fetched from the originating source. If there is a cached response for the request, the URL loading system checks the response to determine if it specifies that the contents must be revalidated. If the contents must be revalidated, a connection is made to the originating source to see if it has changed. If it has not changed, then the response is returned from the local cache. If it has changed, the data is fetched from the originating source.

If the cached response doesn't specify that the contents must be revalidated, the maximum age or expiration specified in the response is examined. If the cached response is recent enough, then the response is returned from the local cache. If the response is determined to be stale, the originating source is checked for newer data. If newer data is available, the data is fetched from the originating source; otherwise, it is returned from the cache. RFC 2616, Section 13 (http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13) specifies the semantics involved in detail.
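For example, a cache-only load can be sketched as follows; someCachedResourceURL stands in for a hypothetical NSURL your application supplies. Because the policy is NSURLRequestReturnCacheDataDontLoad, the connection is created only if a cached response exists:

NSURLRequest *request =
    [NSURLRequest requestWithURL: someCachedResourceURL   // hypothetical URL
                     cachePolicy: NSURLRequestReturnCacheDataDontLoad
                 timeoutInterval: 60.0];
NSURLConnection *connection =
    [[NSURLConnection alloc] initWithRequest: request delegate: self];
if (!connection) {
    // The response was not in the local cache; no network access was attempted.
}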
Document Revision History

This table describes the changes to URL Loading System Programming Guide.

2010-09-01: Fixed typos and removed deprecated symbols from code examples.
2010-03-24: Restructured content and added discussions of new authentication functionality.
2009-08-14: Added links to Cocoa Core Competencies.
2008-05-20: Updated to include content about NSURLDownload availability in iOS.
2008-05-06: Made minor editorial changes.
2007-07-10: Corrected minor typos.
2006-05-23: Added links to sample code.
2006-03-08: Updated sample code.
2005-09-08: Corrected connectionDidFinishLoading: method signature.
2005-04-08: Added accessor method to sample code. Corrected minor typos.
2004-08-31: Corrected minor typos. Corrected table of contents ordering.
2003-07-03: Corrected willSendRequest:redirectResponse: method signature throughout topic.
2003-06-11: Added additional article outlining differences in behavior between NSURLDownload and NSURLConnection.
2003-06-06: First release of conceptual and task material covering the usage of new classes in Mac OS X v10.2 with Safari 1.0 for downloading content from the Internet.

External Accessory Programming Topics

Contents

About External Accessories
    At a Glance
    Including the External Accessory Framework in Your Project
    Declaring the Protocols Your App Supports
    Communicating with an Accessory
    See Also
Connecting to an Accessory
Monitoring Accessory-Related Events
Document Revision History
Listings

Connecting to an Accessory
    Listing 1  Creating a communications session for an accessory
    Listing 2  Processing stream events

About External Accessories

The External Accessory framework (ExternalAccessory.framework) provides a conduit for communicating with accessories attached to any iOS-based device. App developers can use this conduit to integrate accessory-level features into their apps.

Communicating with an external accessory requires you to work closely with the accessory manufacturer to understand the services provided by that accessory. Manufacturers must build explicit support into their accessory hardware for communicating with iOS. As part of this support, an accessory must support at least one command protocol, which is a custom scheme for sending data back and forth between the accessory and an attached app. Apple does not maintain a registry of protocols; it is up to the manufacturer to decide which protocols to support and whether to use custom protocols or standard protocols supported by other manufacturers.

As part of your communication with the accessory manufacturer, you must find out what protocols a given accessory supports. To prevent namespace conflicts, protocol names are specified as reverse-DNS strings of the form com.apple.myProtocol. This allows each manufacturer to define as many protocols as needed to support their line of accessories.

Note: If you are interested in becoming a developer of accessories for iPad, iPhone, or iPod touch, you can find information about how to do so on http://developer.apple.com.

At a Glance

Communicating with accessories requires information about the accessory itself, which you must obtain from the hardware manufacturer. From there, you use the classes of the External Accessory framework to create the bridge between the hardware and your app.

Including the External Accessory Framework in Your Project

To use the features of the External Accessory framework, you must add ExternalAccessory.framework to your Xcode project and link against it in any relevant targets. To access the classes and headers of the framework, include an #import <ExternalAccessory/ExternalAccessory.h> statement at the top of any relevant source files.

Declaring the Protocols Your App Supports

Apps that are able to communicate with an external accessory must declare the protocols they support in their Info.plist file. Declaring support for specific protocols lets the system know that your app can be launched when that accessory is connected. If no app supports the connected accessory, the system may choose to launch the App Store and point out apps that do.

To declare the protocols your app supports, include the UISupportedExternalAccessoryProtocols key in your app's Info.plist file. This key contains an array of strings that identify the communications protocols that your app supports. Your app can include any number of protocols in this list, and the protocols can be in any order. The system does not use this list to determine which protocol your app should choose; it uses it only to determine whether your app is capable of communicating with the accessory. It is up to your code to choose an appropriate communications protocol when it begins talking to the accessory.

For more information about the keys you put into your app's Info.plist file, see Information Property List Key Reference.
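A sketch of the resulting Info.plist entry follows; the protocol string here is a hypothetical example, and you would list the reverse-DNS names your accessory actually implements:

<key>UISupportedExternalAccessoryProtocols</key>
<array>
    <string>com.example.thermometer.datastream</string>
</array>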
Communicating with an Accessory

An app communicates with an accessory by creating an EASession object to manage the accessory interactions. Session objects work with the underlying system to transfer data packets to and from the accessory. Data transfer in your app occurs through NSInputStream and NSOutputStream objects, which are vended by the session object once the connection is made. To receive data from the accessory, monitor the input stream using a custom delegate object. To send data to the accessory, write data packets to the output stream. The format of the incoming and outgoing data packets is determined by the protocol you use to communicate with the accessory.

Relevant Articles: "Connecting to an Accessory" and "Monitoring Accessory-Related Events"

See Also

For information about the classes of the External Accessory framework, see External Accessory Framework Reference.

Connecting to an Accessory

Accessories are not visible through the External Accessory framework until they have been connected by the system and made ready for use. When an accessory does become visible, your app can get the appropriate accessory object and open a session using one or more of the protocols supported by the accessory.

The shared EAAccessoryManager object provides the main entry point for apps looking to communicate with accessories. This class contains an array of already connected accessory objects that you can enumerate to see if there is one your app supports. Most of the information in an EAAccessory object (such as the name, manufacturer, and model information) is intended for display purposes only. To determine whether your app can connect to an accessory, you must look at the accessory's protocols and see if there is one your app supports.

Note: It is possible for more than one accessory object to support the same protocol. If that happens, your code is responsible for choosing which accessory object to use.

For a given accessory object, only one session at a time is allowed for a specific protocol. The protocolStrings property of each EAAccessory object contains an array of the protocol strings the accessory supports. If you attempt to create a session using a protocol that is already in use, the External Accessory framework closes the existing session before opening the new one.

Listing 1 shows a method that checks the list of connected accessories and grabs the first one that the app supports. It creates a session for the designated protocol and configures the input and output streams of the session. By the time this method returns the session object, it is connected to the accessory and ready to begin sending and receiving data.

Listing 1  Creating a communications session for an accessory

- (EASession *)openSessionForProtocol:(NSString *)protocolString
{
    NSArray *accessories = [[EAAccessoryManager sharedAccessoryManager]
                               connectedAccessories];
    EAAccessory *accessory = nil;
    EASession *session = nil;
    for (EAAccessory *obj in accessories) {
        if ([[obj protocolStrings] containsObject:protocolString]) {
            accessory = obj;
            break;
        }
    }

    if (accessory) {
        session = [[EASession alloc] initWithAccessory:accessory
                                           forProtocol:protocolString];
        if (session) {
            [[session inputStream] setDelegate:self];
            [[session inputStream] scheduleInRunLoop:[NSRunLoop currentRunLoop]
                                             forMode:NSDefaultRunLoopMode];
            [[session inputStream] open];
            [[session outputStream] setDelegate:self];
            [[session outputStream] scheduleInRunLoop:[NSRunLoop currentRunLoop]
                                              forMode:NSDefaultRunLoopMode];
            [[session outputStream] open];
            [session autorelease];
        }
    }
    return session;
}

After the input and output streams are configured, the final step is to process the stream-related data. Listing 2 shows the fundamental structure of a delegate's stream processing code. This method responds to events from both input and output streams of the accessory. As the accessory sends data to your app, an event arrives indicating there are bytes available to be read. Similarly, when the accessory is ready to receive data from your app, events arrive indicating that fact. (Of course, your app does not always have to wait for an event to arrive before it can write bytes to the stream. It can also call the stream's hasSpaceAvailable method to see if the accessory is still able to receive data.) For more information on streams and handling stream-related events, see Stream Programming Guide.

Listing 2  Processing stream events

// Handle communications from the streams.
- (void)stream:(NSStream *)theStream handleEvent:(NSStreamEvent)streamEvent
{
    switch (streamEvent) {
        case NSStreamEventHasBytesAvailable:
            // Process the incoming stream data.
            break;
        case NSStreamEventHasSpaceAvailable:
            // Send the next queued command.
            break;
        default:
            break;
    }
}

Monitoring Accessory-Related Events

The External Accessory framework is capable of sending notifications whenever a hardware accessory is connected or disconnected, but it does not do so automatically. Your app must specifically request that notifications be generated by calling the registerForLocalNotifications method of the EAAccessoryManager class. When an accessory is connected, authenticated, and ready to interact with your app, the framework sends an EAAccessoryDidConnectNotification notification. When an accessory is disconnected, it sends an EAAccessoryDidDisconnectNotification notification. You can register to receive these notifications using the default NSNotificationCenter, and both notifications include information about which accessory was affected.

In addition to receiving notifications through the default notification center, an app that is currently interacting with an accessory can assign a delegate to the corresponding EAAccessory object and be notified of changes. Delegate objects must conform to the EAAccessoryDelegate protocol, which currently contains the optional accessoryDidDisconnect: method. You can use this method to receive disconnection notices without first setting up a notification observer.

If your app is suspended in the background when an accessory notification arrives, that notification is put in a queue. When your app begins running again (either in the foreground or background), notifications in the queue are delivered to your app. Notifications are also coalesced and filtered wherever possible to eliminate any irrelevant events. For example, if an accessory was connected and subsequently disconnected while your app was suspended, your app would ultimately not receive any indication that such events took place.
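Putting these pieces together, a minimal registration sketch might look like the following; accessoryDidConnect: and accessoryDidDisconnect: are hypothetical handler methods you would implement yourself:

// Opt in to accessory connect/disconnect notifications.
[[EAAccessoryManager sharedAccessoryManager] registerForLocalNotifications];

NSNotificationCenter *center = [NSNotificationCenter defaultCenter];
[center addObserver: self
           selector: @selector(accessoryDidConnect:)
               name: EAAccessoryDidConnectNotification
             object: nil];
[center addObserver: self
           selector: @selector(accessoryDidDisconnect:)
               name: EAAccessoryDidDisconnectNotification
             object: nil];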
For more information about how to register to receive notifications, see Notification Programming Topics.

Document Revision History

This table describes the changes to External Accessory Programming Topics.

2012-02-24: Clarified that protocols must be declared in an app's Info.plist file.
2011-09-22: Corrected information about what happens when you connect to an existing session.
2010-05-26: New document describing how to attach to external hardware devices.
Multimedia Programming Guide

Contents

About Audio and Video
    Organization of This Document
Using Audio
    The Basics: Audio Codecs, Supported Audio Formats, and Audio Sessions
        iOS Hardware and Software Audio Codecs
        Audio Sessions
    Playing Audio
        Playing Audio Items with iPod Library Access
        Playing UI Sound Effects or Invoking Vibration Using System Sound Services
        Playing Sounds Easily with the AVAudioPlayer Class
        Playing Sounds with Control Using Audio Queue Services
        Playing Sounds with Positioning Using OpenAL
    Recording Audio
        Recording with the AVAudioRecorder Class
        Recording with Audio Queue Services
    Parsing Streamed Audio
    Audio Unit Support in iOS
    Best Practices for iOS Audio
        Tips for Using Audio
        Preferred Audio Formats in iOS
Using Video
    Recording and Editing Video
    Playing Video Files
Document Revision History

Figures, Tables, and Listings

Using Audio
    Figure 1-1  Using iPod library access
    Table 1-1  Audio playback formats and codecs
    Table 1-2  Audio recording formats and codecs
    Table 1-3  Features provided by the audio session APIs
    Table 1-4  Handling audio interruptions
    Table 1-5  System-supplied audio units
    Table 1-6  Audio tips
    Listing 1-1  Creating a sound ID object
    Listing 1-2  Playing a system sound
    Listing 1-3  Triggering vibration
    Listing 1-4  Configuring an AVAudioPlayer object
    Listing 1-5  Implementing an AVAudioPlayer delegate method
    Listing 1-6  Controlling an AVAudioPlayer object
    Listing 1-7  Creating an audio queue object
    Listing 1-8  Setting the playback level directly
    Listing 1-9  The AudioQueueLevelMeterState structure
    Listing 1-10  Setting up the audio session and the sound file URL
    Listing 1-11  A record/stop method using the AVAudioRecorder class
Using Video
    Figure 2-1  Media player interface with transport controls
    Listing 2-1  Playing full-screen movies

About Audio and Video

Whether multimedia features are central or incidental to your application, iPhone, iPod touch, and iPad users expect high quality. When presenting video content, take advantage of the device's high-resolution screen and high frame rates. When designing the audio portion of your application, keep in mind that compelling sound adds immeasurably to a user's overall experience.

You can take advantage of the iOS multimedia frameworks for adding features like:

● High-quality audio recording, playback, and streaming
● Immersive game sounds
● Live voice chat
● Playback of content from a user's iPod library
● Video playback and recording on supported devices

In iOS 4.0 and later, the AV Foundation framework gives you fine-grained control over inspecting, editing, and presenting audio-visual assets.

Organization of This Document

This document contains the following chapters:

● "Using Audio" shows how to use the system's audio technologies to play and record audio.
● "Using Video" shows how to use the system's video technologies to play and capture video.

Important: This document contains information that used to be in iOS App Programming Guide. The information in this document has not been updated specifically for iOS 4.0.

Using Audio

iOS offers a rich set of tools for working with sound in your application.
These tools are arranged into frameworks according to the features they provide, as follows:

● Use the Media Player framework to play songs, audio books, or audio podcasts from a user's iPod library. For details, see Media Player Framework Reference, iPod Library Access Programming Guide, and the AddMusic sample code project.
● Use the AV Foundation framework to play and record audio using a simple Objective-C interface. For details, see AV Foundation Framework Reference and the avTouch sample code project.
● Use the Audio Toolbox framework to play audio with synchronization capabilities, access packets of incoming audio, parse audio streams, convert audio formats, and record audio with access to individual packets. For details, see Audio Toolbox Framework Reference and the SpeakHere sample code project.
● Use the Audio Unit framework to connect to and use audio processing plug-ins. For details, see Audio Unit Hosting Guide for iOS.
● Use the OpenAL framework to provide positional audio playback in games and other applications. iOS supports OpenAL 1.1. For information on OpenAL, see the OpenAL website, OpenAL FAQ for iPhone OS, and the oalTouch sample code project.

To allow your code to use the features of an audio framework, add that framework to your Xcode project, link against it in any relevant targets, and add an appropriate #import statement near the top of relevant source files. For example, to provide access to the AV Foundation framework in a source file, add a #import <AVFoundation/AVFoundation.h> statement near the top of the file. For detailed information on how to add frameworks to your project, see "Files in Projects" in Xcode Project Management Guide.

Important: To use the features of the Audio Unit framework, add the Audio Toolbox framework to your Xcode project and link against it in any relevant targets. Then add a #import statement near the top of relevant source files.

This section on sound provides a quick introduction to implementing iOS audio features, as listed here:

● To play songs, audio podcasts, and audio books from a user's iPod library, see "Playing Audio Items with iPod Library Access".
● To play and record audio in the fewest lines of code, use the AV Foundation framework. See "Playing Sounds Easily with the AVAudioPlayer Class" and "Recording with the AVAudioRecorder Class".
● To provide full-featured audio playback including stereo positioning, level control, and simultaneous sounds, use OpenAL. See "Playing Sounds with Positioning Using OpenAL".
● To provide lowest-latency audio, especially when doing simultaneous input and output (such as for a VoIP application), use the I/O unit or the Voice Processing I/O unit. See "Audio Unit Support in iOS".
● To play sounds with the highest degree of control, including support for synchronization, use Audio Queue Services. See "Playing Sounds with Control Using Audio Queue Services". Audio Queue Services also supports recording and provides access to incoming audio packets, as described in "Recording with Audio Queue Services".
● To parse audio streamed from a network connection, use Audio File Stream Services. See "Parsing Streamed Audio".
● To play user-interface sound effects, or to invoke vibration on devices that provide that feature, use System Sound Services. See "Playing UI Sound Effects or Invoking Vibration Using System Sound Services".
Be sure to read the next section, "The Basics: Audio Codecs, Supported Audio Formats, and Audio Sessions", for critical information on how audio works in iOS. Also read "Best Practices for iOS Audio", which offers guidelines and lists the audio and file formats to use for best performance and best user experience.

When you're ready to dig deeper, the iOS Dev Center contains guides, reference books, sample code, and more. For tips on how to perform common audio tasks, see Audio & Video Coding How-To's. For in-depth explanations of audio development in iOS, see Core Audio Overview, Audio Session Programming Guide, Audio Queue Services Programming Guide, Audio Unit Hosting Guide for iOS, and iPod Library Access Programming Guide.

The Basics: Audio Codecs, Supported Audio Formats, and Audio Sessions

To get oriented toward iOS audio development, it's important to understand a few critical things about the hardware and software architecture of iOS devices, described in this section.

iOS Hardware and Software Audio Codecs

To ensure optimum performance and quality, you need to pick the right audio format and audio codec type. Starting in iOS 3.0, most audio formats can use software-based encoding (for recording) and decoding (for playback). Software codecs support simultaneous playback of multiple sounds, but may entail significant CPU overhead. Hardware-assisted decoding provides excellent performance, but does not support simultaneous playback of multiple sounds. If you need to maximize video frame rate in your application, minimize the CPU impact of your audio playback by using uncompressed audio or the IMA4 format, or use hardware-assisted decoding of your compressed audio assets. For best-practice advice on picking an audio format, see "Preferred Audio Formats in iOS".

Table 1-1 describes the playback audio codecs available on iOS devices.

Table 1-1  Audio playback formats and codecs
(audio decoder/playback format: hardware-assisted decoding | software-based decoding)

AAC (MPEG-4 Advanced Audio Coding): Yes | Yes, starting in iOS 3.0
ALAC (Apple Lossless): Yes | Yes, starting in iOS 3.0
HE-AAC (MPEG-4 High Efficiency AAC): Yes | -
iLBC (internet Low Bitrate Codec, another format for speech): - | Yes
IMA4 (IMA/ADPCM): - | Yes
Linear PCM (uncompressed, linear pulse-code modulation): - | Yes
MP3 (MPEG-1 audio layer 3): Yes | Yes, starting in iOS 3.0
µ-law and a-law: - | Yes

When using hardware-assisted decoding, the device can play only a single instance of one of the supported formats at a time. For example, if you are playing a stereo MP3 sound using the hardware codec, a second simultaneous MP3 sound will use software decoding. Similarly, you cannot simultaneously play an AAC and an ALAC sound using hardware. If the iPod application is playing an AAC or MP3 sound in the background, it has claimed the hardware codec; your application then plays AAC, ALAC, and MP3 audio using software decoding.

To play multiple sounds with best performance, or to efficiently play sounds while the iPod is playing in the background, use linear PCM (uncompressed) or IMA4 (compressed) audio.
To learn how to check at runtime which hardware and software codecs are available on a device, read the discussion for the kAudioFormatProperty_HardwareCodecCapabilities constant in Audio Format Services Reference and read Technical Q&A QA1663, "Determining the availability of the AAC hardware encoder at runtime."

To summarize how iOS supports audio formats for single or multiple playback:

● Linear PCM and IMA4 (IMA/ADPCM). You can play multiple linear PCM or IMA4 sounds simultaneously in iOS without incurring CPU resource problems. The same is true for the iLBC speech-quality format, and for the µ-law and a-law compressed formats. When using compressed formats, check the sound quality to ensure it meets your needs.
● AAC, HE-AAC, MP3, and ALAC (Apple Lossless). Playback for AAC, HE-AAC, MP3, and ALAC sounds can use efficient hardware-assisted decoding on iOS devices, but these codecs all share a single hardware path. The device can play only a single instance of one of these formats at a time using hardware-assisted decoding. This single hardware path has implications for "play along" style applications, such as a virtual piano. If the user is playing a song in one of these formats in the iPod application, then your application, to play along over that audio, will employ software decoding.

Table 1-2 describes the recording audio codecs available on iOS devices.

Table 1-2  Audio recording formats and codecs
(audio encoder/recording format: hardware-assisted encoding | software-based encoding)

AAC (MPEG-4 Advanced Audio Coding): Yes, starting in iOS 3.1 for iPhone 3GS and iPod touch (2nd generation), and starting in iOS 3.2 for iPad | Yes, starting in iOS 4.0 for iPhone 3GS and iPod touch (2nd generation)
ALAC (Apple Lossless): - | Yes
iLBC (internet Low Bitrate Codec, for speech): - | Yes
IMA4 (IMA/ADPCM): - | Yes
Linear PCM (uncompressed, linear pulse-code modulation): - | Yes
µ-law and a-law: - | Yes

Audio Sessions

The iOS audio session APIs let you define your application's general audio behavior and design it to work well within the larger audio context of the device it's running on. These APIs are described in Audio Session Services Reference and AVAudioSession Class Reference. Using these APIs, you can specify such behaviors as:

● Whether or not your audio should be silenced by the Silent switch (on iPhone, this is called the Ring/Silent switch)
● Whether or not your audio should stop upon screen lock
● Whether other audio, such as from the iPod, should continue playing or be silenced when your audio starts

The audio session APIs also let you respond to user actions, such as the plugging in or unplugging of headsets, and to events that use the device's sound hardware, such as Clock and Calendar alarms and incoming phone calls.

The audio session APIs provide three programmatic features, described in Table 1-3.

Table 1-3  Features provided by the audio session APIs

Setting categories: A category is a key that identifies a set of audio behaviors for your application. By setting a category, you indicate your audio intentions to iOS, such as whether your audio should continue when the screen locks. There are six categories, described in "Audio Session Categories". You can fine-tune the behavior of some categories, as explained in "Fine-Tuning the Category" in Audio Session Programming Guide.
Handling interruptions and route changes: Your audio session posts messages when your audio is interrupted, when an interruption ends, and when the hardware audio route changes. These messages let you respond gracefully to changes in the larger audio environment, such as an interruption due to an incoming phone call. For details, see "Handling Audio Hardware Route Changes" and "Handling Audio Interruptions".

Optimizing for hardware characteristics: You can query the audio session to discover characteristics of the device your application is running on, such as hardware sample rate, number of hardware channels, and whether audio input is available. For details, see "Optimizing for Device Hardware".

There are two interfaces for working with the audio session:

● A streamlined Objective-C interface that gives you access to the core audio session features and is described in AVAudioSession Class Reference and AVAudioSessionDelegate Protocol Reference.
● A C-based interface that provides comprehensive access to all basic and advanced audio session features and is described in Audio Session Services Reference.

You can mix and match audio session code from AV Foundation and Audio Session Services; the interfaces are completely compatible.

An audio session comes with some default behavior that you can use to get started in development. However, except for certain special cases, the default behavior is unsuitable for a shipping application that uses audio. For example, when using the default audio session, audio in your application stops when the Auto-Lock period times out and the screen locks. If you want to ensure that playback continues with the screen locked, include the following lines in your application's initialization code:

NSError *setCategoryErr = nil;
NSError *activationErr = nil;
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback
                                       error: &setCategoryErr];
[[AVAudioSession sharedInstance] setActive: YES
                                     error: &activationErr];

The AVAudioSessionCategoryPlayback category ensures that playback continues when the screen locks. Activating the audio session puts the specified category into effect.
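As a small sketch of the hardware queries mentioned in Table 1-3 (how your application responds is up to you), you can check whether audio input is available before enabling a recording control:

AVAudioSession *session = [AVAudioSession sharedInstance];
if (!session.inputIsAvailable) {
    // No audio input hardware is currently available (for example, an iPod
    // touch without an attached headset mic); disable any recording UI.
}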
How you handle the interruption caused by an incoming phone call or Clock or Calendar alarm depends on the audio technology you are using, as shown in Table 1-4.

Table 1-4  Handling audio interruptions

AV Foundation framework: The AVAudioPlayer and AVAudioRecorder classes provide delegate methods for interruption start and end. Implement these methods to update your user interface and, optionally, to resume paused playback after the interruption ends. The system automatically pauses playback or recording upon interruption, and reactivates your audio session when you resume playback or recording. If you want to save and restore playback position between application launches, save playback position on interruption as well as on application quit.

Audio Queue Services, I/O audio unit: These technologies put your application in control of handling interruptions. You are responsible for saving playback or recording position and reactivating your audio session after the interruption ends. Implement the AVAudioSession interruption delegate methods or write an interruption listener callback function.

OpenAL: When using OpenAL for playback, implement the AVAudioSession interruption delegate methods or write an interruption listener callback function, as when using Audio Queue Services. However, the delegate or callback must additionally manage the OpenAL context.

System Sound Services: Sounds played using System Sound Services go silent when an interruption starts. They can automatically be used again when the interruption ends. Applications cannot influence the interruption behavior for sounds that use this playback technology.

Every iOS application, with rare exception, should actively manage its audio session. For a complete explanation of how to do this, read Audio Session Programming Guide. To ensure that your application conforms to Apple recommendations for audio session behavior, read "Sound" in iOS Human Interface Guidelines.

Playing Audio

This section introduces you to playing sounds in iOS using iPod library access, System Sound Services, Audio Queue Services, the AV Foundation framework, and OpenAL.

Playing Audio Items with iPod Library Access

Starting in iOS 3.0, iPod library access lets your application play a user's songs, audio books, and audio podcasts. The API design makes basic playback very simple while also supporting advanced searching and playback control.

As shown in Figure 1-1, your application has two ways to retrieve media items. The media item picker, shown on the left, is an easy-to-use, pre-packaged view controller that behaves like the built-in iPod application's music selection interface. For many applications, this is sufficient. If the picker doesn't provide the specialized access control you want, the media query interface will. It supports predicate-based specification of items from the iPod library.

Figure 1-1  Using iPod library access
(Figure: the Media Picker and Media Query paths both retrieve items from the iPod Library; your application plays the items it retrieves through the Music Player.)

As depicted in the figure, your application then plays the retrieved media items using the music player provided by this API.

For a complete explanation of how to add media item playback to your application, see iPod Library Access Programming Guide. For a code example, see the AddMusic sample code project.
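As a taste of the query-based route, the following minimal sketch queues every song in the user's library and starts playback; it assumes the Media Player framework is linked and omits error handling:

#import <MediaPlayer/MediaPlayer.h>

// Build a query that matches all songs, then hand it to the music player.
MPMediaQuery *everySong = [MPMediaQuery songsQuery];
MPMusicPlayerController *musicPlayer =
    [MPMusicPlayerController applicationMusicPlayer];
[musicPlayer setQueueWithQuery: everySong];
[musicPlayer play];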
Playing UI Sound Effects or Invoking Vibration Using System Sound Services

To play user-interface sound effects (such as button clicks), or to invoke vibration on devices that support it, use System Sound Services. This compact interface is described in System Sound Services Reference. You can find sample code in the Audio UI Sounds (SysSound) sample in the iOS Dev Center.

Note: Sounds played with System Sound Services are not subject to configuration using your audio session. As a result, you cannot keep the behavior of System Sound Services audio in line with other audio behavior in your application. This is the most important reason to avoid using System Sound Services for any audio apart from its intended uses.

The AudioServicesPlaySystemSound function lets you very simply play short sound files. The simplicity carries with it a few restrictions. Your sound files must be:

● No longer than 30 seconds in duration
● In linear PCM or IMA4 (IMA/ADPCM) format
● Packaged in a .caf, .aif, or .wav file

In addition, when you use the AudioServicesPlaySystemSound function:

● Sounds play at the current system audio volume, with no programmatic volume control available
● Sounds play immediately
● Looping and stereo positioning are unavailable
● Simultaneous playback is unavailable: you can play only one sound at a time

The similar AudioServicesPlayAlertSound function plays a short sound as an alert. If a user has configured their device to vibrate in Ring Settings, calling this function invokes vibration in addition to playing the sound file.

Note: System-supplied alert sounds and system-supplied user-interface sound effects are not available to your application. For example, using the kSystemSoundID_UserPreferredAlert constant as a parameter to the AudioServicesPlayAlertSound function will not play anything.

To play a sound with the AudioServicesPlaySystemSound or AudioServicesPlayAlertSound function, first create a sound ID object, as shown in Listing 1-1.

Listing 1-1  Creating a sound ID object

// Get the main bundle for the app
CFBundleRef mainBundle = CFBundleGetMainBundle ();

// Get the URL to the sound file to play. The file in this case
// is "tap.aif"
Using an audio player you can: ● Play sounds of any duration ● Play sounds from files or memory buffers ● Loop sounds ● Play multiple sounds simultaneously (although not with precise synchronization) ● Control relative playback level for each sound you are playing ● Seek to a particular point in a sound file, which supports application features such as fast forward and rewind ● Obtain audio power data that you can use for audio level metering The AVAudioPlayer class lets you play sound in any audio format available in iOS, as described in Table 1-1 (page 7). For a complete description of this class’s interface, see AVAudioPlayer Class Reference . To configure an audio player: 1. Assign a sound file to the audio player. 2. Prepare the audio player for playback, which acquires the hardware resources it needs. 3. Designate an audio player delegate object, which handlesinterruptions as well asthe playback-completed event. The code in Listing 1-4 illustratesthese steps. It would typically go into an initialization method of the controller class for your application. (In production code, you’d include appropriate error handling.) Listing 1-4 Configuring an AVAudioPlayer object // in the corresponding .h file: // @property (nonatomic, retain) AVAudioPlayer *player; // in the .m file: @synthesize player; // the player object Using Audio Playing Audio 2010-09-01 | © 2010 Apple Inc. All Rights Reserved. 15NSString *soundFilePath = [[NSBundle mainBundle] pathForResource: @"sound" ofType: @"wav"]; NSURL *fileURL = [[NSURL alloc] initFileURLWithPath: soundFilePath]; AVAudioPlayer *newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL: fileURL error: nil]; [fileURL release]; self.player = newPlayer; [newPlayer release]; [player prepareToPlay]; [player setDelegate: self]; The delegate (which can be your controller object) handles interruptions and updates the user interface when a sound has finished playing. The delegate methods for the AVAudioPlayer class are described in AVAudioPlayerDelegate Protocol Reference . Listing 1-5 shows a simple implementation of one delegatemethod. This code updates the title of a Play/Pause toggle button when a sound has finished playing. Listing 1-5 Implementing an AVAudioPlayer delegate method - (void) audioPlayerDidFinishPlaying: (AVAudioPlayer *) player successfully: (BOOL) completed { if (completed == YES) { [self.button setTitle: @"Play" forState: UIControlStateNormal]; } } To play, pause, or stop an AVAudioPlayer object, call one of its playback control methods. You can test whether or not playback is in progress by using the playing property. Listing 1-6 shows a basic play/pause toggle method that controls playback and updates the title of a UIButton object. Using Audio Playing Audio 2010-09-01 | © 2010 Apple Inc. All Rights Reserved. 
16Listing 1-6 Controlling an AVAudioPlayer object - (IBAction) playOrPause: (id) sender { // if already playing, then pause if (self.player.playing) { [self.button setTitle: @"Play" forState: UIControlStateHighlighted]; [self.button setTitle: @"Play" forState: UIControlStateNormal]; [self.player pause]; // if stopped or paused, start playing } else { [self.button setTitle: @"Pause" forState: UIControlStateHighlighted]; [self.button setTitle: @"Pause" forState: UIControlStateNormal]; [self.player play]; } } The AVAudioPlayer class uses the Objective-C declared properties feature for managing information about a sound—such as the playback point within the sound’s timeline, and for accessing playback options—such as volume and looping. For example, you can set the playback volume for an audio player as shown here: [self.player setVolume: 1.0]; // available range is 0.0 through 1.0 For more information on the AVAudioPlayer class, see AVAudioPlayer Class Reference . Playing Sounds with Control Using Audio Queue Services Audio Queue Services adds playback capabilities beyond those available with the AVAudioPlayer class. Using Audio Queue Services for playback lets you: ● Precisely schedule when a sound plays, allowing synchronization ● Precisely control volume on a buffer-by-buffer basis ● Play audio that you have captured from a stream using Audio File Stream Services Audio Queue Services lets you play sound in any audio format available in iOS, as described in Table 1-1 (page 7). You can also use this technology for recording, as explained in “Recording Audio” (page 21). Using Audio Playing Audio 2010-09-01 | © 2010 Apple Inc. All Rights Reserved. 17For detailed information on using this technology, see Audio Queue Services Programming Guide and Audio Queue Services Reference . For sample code, see the SpeakHere sample. Creating an Audio Queue Object To create an audio queue object for playback, perform these three steps: 1. Create a data structure to manage information needed by the audio queue, such as the audio format for the data you want to play. 2. Define a callback function for managing audio queue buffers. The callback uses Audio File Servicesto read the file you want to play. (In iOS 2.1 and later, you can also use Extended Audio File Services to read the file.) 3. Instantiate the playback audio queue using the AudioQueueNewOutput function. Listing 1-7 illustrates these steps using ANSI C. (In production code, you’d include appropriate error handling.) The SpeakHere sample project shows these same steps in the context of a C++ program. Listing 1-7 Creating an audio queue object static const int kNumberBuffers = 3; // Create a data structure to manage information needed by the audio queue struct myAQStruct { AudioFileID mAudioFile; CAStreamBasicDescription mDataFormat; AudioQueueRef mQueue; AudioQueueBufferRef mBuffers[kNumberBuffers]; SInt64 mCurrentPacket; UInt32 mNumPacketsToRead; AudioStreamPacketDescription *mPacketDescs; bool mDone; }; // Define a playback audio queue callback function static void AQTestBufferCallback( void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer ) { Using Audio Playing Audio 2010-09-01 | © 2010 Apple Inc. All Rights Reserved. 
18myAQStruct *myInfo = (myAQStruct *)inUserData; if (myInfo->mDone) return; UInt32 numBytes; UInt32 nPackets = myInfo->mNumPacketsToRead; AudioFileReadPackets ( myInfo->mAudioFile, false, &numBytes, myInfo->mPacketDescs, myInfo->mCurrentPacket, &nPackets, inCompleteAQBuffer->mAudioData ); if (nPackets > 0) { inCompleteAQBuffer->mAudioDataByteSize = numBytes; AudioQueueEnqueueBuffer ( inAQ, inCompleteAQBuffer, (myInfo->mPacketDescs ? nPackets : 0), myInfo->mPacketDescs ); myInfo->mCurrentPacket += nPackets; } else { AudioQueueStop ( myInfo->mQueue, false ); myInfo->mDone = true; } } // Instantiate an audio queue object AudioQueueNewOutput ( &myInfo.mDataFormat, AQTestBufferCallback, Using Audio Playing Audio 2010-09-01 | © 2010 Apple Inc. All Rights Reserved. 19&myInfo, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &myInfo.mQueue ); Controlling the Playback Level Audio queue objects give you two ways to control playback level. To set playback level directly, use the AudioQueueSetParameter function with the kAudioQueueParam_Volume parameter, as shown in Listing 1-8. Level change takes effect immediately. Listing 1-8 Setting the playback level directly Float32 volume = 1; // linear scale, range from 0.0 through 1.0 AudioQueueSetParameter ( myAQstruct.audioQueueObject, kAudioQueueParam_Volume, volume ); You can also set playback level for an audio queue buffer by using the AudioQueueEnqueueBufferWithParameters function. This lets you assign audio queue settings that are, in effect, carried by an audio queue buffer as you enqueue it. Such changes take effect when the buffer begins playing. In both cases, level changes for an audio queue remain in effect until you change them again. Indicating Playback Level You can obtain the current playback level from an audio queue object by: 1. Enabling metering for the audio queue object by setting its kAudioQueueProperty_EnableLevelMetering property to true 2. Querying the audio queue object’s kAudioQueueProperty_CurrentLevelMeter property Using Audio Playing Audio 2010-09-01 | © 2010 Apple Inc. All Rights Reserved. 20The value of this property is an array of AudioQueueLevelMeterState structures, one per channel. Listing 1-9 shows this structure: Listing 1-9 The AudioQueueLevelMeterState structure typedef struct AudioQueueLevelMeterState { Float32 mAveragePower; Float32 mPeakPower; }; AudioQueueLevelMeterState; Playing Multiple Sounds Simultaneously To play multiple sounds simultaneously, create one playback audio queue object for each sound. For each audio queue, schedule the first buffer of audio to start at the same time using the AudioQueueEnqueueBufferWithParameters function. Starting in iOS 3.0, nearly all supported audio formats can be used for simultaneous playback—namely, all those that can be played using software decoding, as described in Table 1-1 (page 7). For the most processor-efficient multiple playback, use linear PCM (uncompressed) or IMA4 (compressed) audio. Playing Sounds with Positioning Using OpenAL The open-sourced OpenAL audio API, available in iOS in the OpenAL framework, provides an interface optimized for positioning sounds in a stereo field during playback. Playing, positioning, and moving sounds works just asit does on other platforms. OpenAL also lets you mix sounds. OpenAL usesthe I/O unit for playback, resulting in the lowest latency. For all of these reasons, OpenAL is your best choice for playing sounds in game applications on iOS-based devices. 
Playing Multiple Sounds Simultaneously

To play multiple sounds simultaneously, create one playback audio queue object for each sound. For each audio queue, schedule the first buffer of audio to start at the same time using the AudioQueueEnqueueBufferWithParameters function. Starting in iOS 3.0, nearly all supported audio formats can be used for simultaneous playback, namely all those that can be played using software decoding, as described in Table 1-1 (page 7). For the most processor-efficient multiple playback, use linear PCM (uncompressed) or IMA4 (compressed) audio.

Playing Sounds with Positioning Using OpenAL

The open-source OpenAL audio API, available in iOS in the OpenAL framework, provides an interface optimized for positioning sounds in a stereo field during playback. Playing, positioning, and moving sounds works just as it does on other platforms. OpenAL also lets you mix sounds. OpenAL uses the I/O unit for playback, resulting in the lowest latency. For all of these reasons, OpenAL is your best choice for playing sounds in game applications on iOS-based devices. However, OpenAL is also a good choice for general iOS application audio playback needs.
OpenAL 1.1 support in iOS is built on top of Core Audio. For more information, see OpenAL FAQ for iPhone OS. For OpenAL documentation, see the OpenAL website at http://openal.org. For sample code, see oalTouch.

Recording Audio

iOS supports audio recording using the AVAudioRecorder class and Audio Queue Services. These interfaces do the work of connecting to the audio hardware, managing memory, and employing codecs as needed. You can record audio in any of the formats listed in Table 1-2 (page 8).
Recording takes place at a system-defined input level in iOS. The system takes input from the audio source that the user has chosen: the built-in microphone or, if connected, the headset microphone or other input source.

Recording with the AVAudioRecorder Class

The easiest way to record sound in iOS is with the AVAudioRecorder class, described in AVAudioRecorder Class Reference. This class provides a highly streamlined, Objective-C interface that makes it easy to provide sophisticated features like pausing and resuming recording and handling audio interruptions. At the same time, you retain complete control over the recording format.
To prepare for recording using an audio recorder:
1. Specify a sound file URL.
2. Set up the audio session.
3. Configure the audio recorder's initial state.
Application launch is a good time to do this part of the setup, as shown in Listing 1-10. Variables such as soundFileURL and recording in this example are declared in the class interface. (In production code, you would include appropriate error handling.)

Listing 1-10 Setting up the audio session and the sound file URL

- (void) viewDidLoad {
    [super viewDidLoad];

    NSString *tempDir = NSTemporaryDirectory ();
    NSString *soundFilePath = [tempDir stringByAppendingString: @"sound.caf"];
    NSURL *newURL = [[NSURL alloc] initFileURLWithPath: soundFilePath];
    self.soundFileURL = newURL;
    [newURL release];

    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    audioSession.delegate = self;
    [audioSession setActive: YES error: nil];

    recording = NO;
    playing = NO;
}

To handle interruptions and the completion of recording, add the AVAudioSessionDelegate and AVAudioRecorderDelegate protocol names to the interface declaration for your implementation. If your application also does playback, also adopt the AVAudioPlayerDelegate protocol.
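For example, a view controller that both records and plays back might declare its protocol adoptions like this; MyAVViewController and its instance variables are hypothetical names for illustration:

@interface MyAVViewController : UIViewController
    <AVAudioSessionDelegate, AVAudioRecorderDelegate, AVAudioPlayerDelegate> {
    AVAudioRecorder *soundRecorder; // the recorder created in the record method
    NSURL *soundFileURL;            // set up in viewDidLoad (Listing 1-10)
    BOOL recording;
    BOOL playing;
}
@end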
To implement a record method, you can use code such as that shown in Listing 1-11. (In production code, you would include appropriate error handling.)

Listing 1-11 A record/stop method using the AVAudioRecorder class

- (IBAction) recordOrStop: (id) sender {
    if (recording) {
        [soundRecorder stop];
        recording = NO;
        self.soundRecorder = nil;
        [recordOrStopButton setTitle: @"Record" forState: UIControlStateNormal];
        [recordOrStopButton setTitle: @"Record" forState: UIControlStateHighlighted];
        [[AVAudioSession sharedInstance] setActive: NO error: nil];
    } else {
        [[AVAudioSession sharedInstance]
            setCategory: AVAudioSessionCategoryRecord
                  error: nil];

        NSDictionary *recordSettings =
            [[NSDictionary alloc] initWithObjectsAndKeys:
                [NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
                [NSNumber numberWithInt: kAudioFormatAppleLossless], AVFormatIDKey,
                [NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
                [NSNumber numberWithInt: AVAudioQualityMax], AVEncoderAudioQualityKey,
                nil];

        AVAudioRecorder *newRecorder =
            [[AVAudioRecorder alloc] initWithURL: soundFileURL
                                        settings: recordSettings
                                           error: nil];
        [recordSettings release];
        self.soundRecorder = newRecorder;
        [newRecorder release];

        soundRecorder.delegate = self;
        [soundRecorder prepareToRecord];
        [soundRecorder record];
        [recordOrStopButton setTitle: @"Stop" forState: UIControlStateNormal];
        [recordOrStopButton setTitle: @"Stop" forState: UIControlStateHighlighted];

        recording = YES;
    }
}

For more information on the AVAudioRecorder class, see AVAudioRecorder Class Reference.

Recording with Audio Queue Services

To set up for recording with Audio Queue Services, your application instantiates a recording audio queue object and provides a callback function. The callback stores incoming audio data in memory for immediate use or writes it to a file for long-term storage.
Just as with playback, you can obtain the current recording audio level from an audio queue object by querying its kAudioQueueProperty_CurrentLevelMeter property, as described in "Indicating Playback Level" (page 20).
For detailed examples of how to use Audio Queue Services to record audio, see "Recording Audio" in Audio Queue Services Programming Guide. For sample code, see the SpeakHere sample.

Parsing Streamed Audio

To play streamed audio content, such as from a network connection, use Audio File Stream Services in concert with Audio Queue Services. Audio File Stream Services parses audio packets and metadata from common audio file container formats in a network bitstream. You can also use it to parse packets and metadata from on-disk files.
In iOS, you can parse the same audio file and bitstream formats that you can in Mac OS X, as follows:
● MPEG-1 Audio Layer 3, used for .mp3 files
● MPEG-2 ADTS, used for the .aac audio data format
● AIFC
● AIFF
● CAF
● MPEG-4, used for .m4a, .mp4, and .3gp files
● NeXT
● WAVE
Having retrieved audio packets, you can play back the recovered sound in any of the formats supported in iOS, as listed in Table 1-1 (page 7).
For best performance, network streaming applications should use data from Wi-Fi connections. iOS lets you determine which networks are reachable and available through its System Configuration framework and its SCNetworkReachabilityRef opaque type, described in SCNetworkReachability Reference. For sample code, see the Reachability sample in the iOS Dev Center.
To connect to a network stream, use interfaces from Core Foundation, such as the one described in CFHTTPMessage Reference. Parse the network packets to recover audio packets using Audio File Stream Services. Then buffer the audio packets and send them to a playback audio queue object.
Audio File Stream Services relies on interfaces from Audio File Services, such as the AudioFramePacketTranslation structure and the AudioFilePacketTableInfo structure. These are described in Audio File Services Reference. For more information on using streams, refer to Audio File Stream Services Reference.
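As a rough sketch of that flow, the calls below open a parser and feed it bytes as they arrive from the network. The two callback names, the myParserContext struct, and the byte-buffer variables are hypothetical placeholders; your packets callback would enqueue the recovered packets on a playback audio queue:

// Hypothetical callbacks you implement: one receives parsed properties,
// the other receives parsed audio packets ready for enqueuing.
AudioFileStreamID audioFileStream;
AudioFileStreamOpen (
    &myParserContext,          // your user-data struct (assumption)
    MyPropertyListenerProc,    // an AudioFileStream_PropertyListenerProc
    MyPacketsProc,             // an AudioFileStream_PacketsProc
    kAudioFileMP3Type,         // file type hint; pass 0 if unknown
    &audioFileStream
);

// Each time a chunk of network data arrives, hand it to the parser.
// bytesReceived and networkBuffer stand in for your networking code's output.
AudioFileStreamParseBytes (
    audioFileStream,
    bytesReceived,             // number of bytes in this chunk
    networkBuffer,             // pointer to the chunk
    0                          // parse flags
);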
Audio Unit Support in iOS

iOS provides a set of audio processing plug-ins, known as audio units, that you can use in any application. The interfaces in the Audio Unit framework let you open, connect, and use these audio units.
To use the features of the Audio Unit framework, add the Audio Toolbox framework to your Xcode project and link against it in any relevant targets. Then add a #import statement near the top of relevant source files. For detailed information on how to add frameworks to your project, see "Files in Projects" in Xcode Project Management Guide.
Table 1-5 lists the audio units provided in iOS.

Table 1-5 System-supplied audio units

● iPod Equalizer unit. The iPod EQ unit, of type kAudioUnitSubType_AUiPodEQ, provides a simple, preset-based equalizer you can use in your application. For a demonstration of how to use this audio unit, see the sample code project iPhoneMixerEQGraphTest.
● 3D Mixer unit. The 3D Mixer unit, of type kAudioUnitSubType_AU3DMixerEmbedded, lets you mix multiple audio streams, specify stereo output panning, manipulate playback rate, and more. OpenAL is built on top of this audio unit and provides a higher-level API well suited for game apps.
● Multichannel Mixer unit. The Multichannel Mixer unit, of type kAudioUnitSubType_MultiChannelMixer, lets you mix multiple mono or stereo audio streams to a single stereo stream. It also supports left/right panning for each input. For a demonstration of how to use this audio unit, see the sample code project Audio Mixer (MixerHost).
● Remote I/O unit. The Remote I/O unit, of type kAudioUnitSubType_RemoteIO, connects to audio input and output hardware and supports realtime I/O. For demonstrations of how to use this audio unit, see the sample code project aurioTouch.
● Voice Processing I/O unit. The Voice Processing I/O unit, of type kAudioUnitSubType_VoiceProcessingIO, has the characteristics of the I/O unit and adds echo suppression and other features for two-way communication.
● Generic Output unit. The Generic Output unit, of type kAudioUnitSubType_GenericOutput, supports converting to and from linear PCM format and can be used to start and stop a graph.
● Converter unit. The Converter unit, of type kAudioUnitSubType_AUConverter, lets you convert audio data from one format to another. You typically obtain the features of this audio unit by using the Remote I/O unit, which incorporates a Converter unit.

For more information on using system audio units, see Audio Unit Hosting Guide for iOS. For reference documentation, see Audio Unit Framework Reference and Audio Unit Processing Graph Services Reference. The iOS Dev Center provides two sample-code projects that demonstrate use of system audio units: aurioTouch and iPhoneMultichannelMixerTest.
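To make the table concrete, here is a minimal sketch of locating and instantiating one of these units (the Remote I/O unit) using the audio component interfaces; error handling is omitted, and in a real application you would go on to configure the unit's stream formats and callbacks before starting it:

#include <AudioUnit/AudioUnit.h>

// Describe the Remote I/O unit from Table 1-5.
AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType         = kAudioUnitType_Output;
ioUnitDescription.componentSubType      = kAudioUnitSubType_RemoteIO;
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags        = 0;
ioUnitDescription.componentFlagsMask    = 0;

// Find the matching component and create an instance of the audio unit.
AudioComponent ioComponent = AudioComponentFindNext (NULL, &ioUnitDescription);
AudioUnit ioUnitInstance;
AudioComponentInstanceNew (ioComponent, &ioUnitInstance);
AudioUnitInitialize (ioUnitInstance);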
Best Practices for iOS Audio

This section lists some important tips for using audio in iOS and describes the best audio data formats for various uses.

Tips for Using Audio

Table 1-6 lists some important tips to keep in mind when using audio in iOS.

Table 1-6 Audio tips

● Use compressed audio appropriately. For AAC, MP3, and ALAC (Apple Lossless) audio, decoding can take place using hardware-assisted codecs. While efficient, this is limited to one audio stream at a time. If you need to play multiple sounds simultaneously, store those sounds using the IMA4 (compressed) or linear PCM (uncompressed) format.
● Convert to the data format and file format you need. The afconvert tool in Mac OS X lets you convert to a wide range of audio data formats and file types. See "Preferred Audio Formats in iOS" (page 28) and the afconvert man page.
● Evaluate audio memory issues. When playing sound with Audio Queue Services, you write a callback that sends short segments of audio data to audio queue buffers. In some cases, loading an entire sound file to memory for playback, which minimizes disk access, is best. In other cases, loading just enough data at a time to keep the buffers full is best. Test and evaluate which strategy works best for your application.
● Reduce audio file sizes by limiting sample rates, bit depths, and channels. Sample rate and the number of bits per sample have a direct impact on the size of your audio files. If you need to play many such sounds, or long-duration sounds, consider reducing these values to reduce the memory footprint of the audio data. For example, rather than using a 44.1 kHz sample rate for sound effects, you could use a 32 kHz (or possibly lower) sample rate and still provide reasonable quality. Using monophonic (single-channel) audio instead of stereo (two-channel) reduces file size. For each sound asset, consider whether mono could suit your needs.
● Pick the appropriate technology. Use OpenAL when you want a convenient, high-level interface for positioning sounds in a stereo field or when you need low-latency playback. To parse audio packets from a file or a network stream, use Audio File Stream Services. For simple playback of single or multiple sounds, use the AVAudioPlayer class. For recording to a file, use the AVAudioRecorder class. For audio chat, use the Voice Processing I/O unit. To play audio resources synced from a user's iTunes library, use iPod Library Access. When your sole audio need is to play alerts and user-interface sound effects, use Core Audio's System Sound Services. For other audio applications, including playback of streamed audio, precise synchronization, and access to packets of incoming audio, use Audio Queue Services.
● Code for low latency. For the lowest possible playback latency, use OpenAL or use the I/O unit directly.

Preferred Audio Formats in iOS

For uncompressed (highest quality) audio, use 16-bit, little endian, linear PCM audio data packaged in a CAF file. You can convert an audio file to this format in Mac OS X using the afconvert command-line tool, as shown here:

/usr/bin/afconvert -f caff -d LEI16 {INPUT} {OUTPUT}

The afconvert tool lets you convert to a wide range of audio data formats and file types. See the afconvert man page, and enter afconvert -h at a shell prompt, for more information.
For compressed audio when playing one sound at a time, and when you don't need to play audio simultaneously with the iPod application, use the AAC format packaged in a CAF or m4a file.
For less memory usage when you need to play multiple sounds simultaneously, use IMA4 (IMA/ADPCM) compression. This reduces file size but entails minimal CPU impact during decompression. As with linear PCM data, package IMA4 data in a CAF file.
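For instance, a plausible conversion of a sound effect to IMA4 in a CAF container might look like the following; the file names are placeholders, and you should check afconvert -h for the exact options your system supports:

/usr/bin/afconvert -f caff -d ima4 mySoundEffect.aiff mySoundEffect.caf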
Important: This document contains information that used to be in iOS App Programming Guide. The information in this document has not been updated specifically for iOS 4.0.

Recording and Editing Video

Starting in iOS 3.0, you can record video, with included audio, on supported devices. To display the video recording interface, create and push a UIImagePickerController object, just as for displaying the still-camera interface. To record video, you must first check that the camera source type (UIImagePickerControllerSourceTypeCamera) is available and that the movie media type (kUTTypeMovie) is available for the camera. Depending on the media types you assign to the mediaTypes property, the picker can directly display the still camera or the video camera, or a selection interface that lets the user choose. (A sketch of these availability checks appears at the end of this section.)
Using the UIImagePickerControllerDelegate protocol, register as a delegate of the image picker. Your delegate object receives a completed video recording by way of the imagePickerController:didFinishPickingMediaWithInfo: method.
On supported devices, you can also pick previously recorded videos from a user's photo library. For more information on using the image picker class, see UIImagePickerController Class Reference. For information on trimming recorded videos, see UIVideoEditorController Class Reference and UIVideoEditorControllerDelegate Protocol Reference.
In iOS 4.0 and later, you can record from a device's camera and display the incoming data live on screen. You use AVCaptureSession to manage data flow from inputs represented by AVCaptureInput objects (which mediate input from an AVCaptureDevice) to outputs represented by AVCaptureOutput.
In iOS 4.0 and later, you can also edit, assemble, and compose video using existing assets or with new raw materials. Assets are represented by AVAsset, which you can inspect asynchronously for better performance. You use AVMutableComposition to compose media from one or more sources, then AVAssetExportSession to encode the output of a composition for delivery.
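The following minimal sketch shows the availability checks described above; it assumes the MobileCoreServices framework is linked (for kUTTypeMovie), and the presenting view controller adopts UIImagePickerControllerDelegate:

#import <MobileCoreServices/MobileCoreServices.h> // for kUTTypeMovie

BOOL cameraIsAvailable = [UIImagePickerController
    isSourceTypeAvailable: UIImagePickerControllerSourceTypeCamera];
NSArray *cameraMediaTypes = [UIImagePickerController
    availableMediaTypesForSourceType: UIImagePickerControllerSourceTypeCamera];
BOOL movieIsAvailable = [cameraMediaTypes containsObject: (NSString *) kUTTypeMovie];

if (cameraIsAvailable && movieIsAvailable) {
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.mediaTypes = [NSArray arrayWithObject: (NSString *) kUTTypeMovie];
    picker.delegate = self; // adopts UIImagePickerControllerDelegate
    // Present the picker, for example with presentModalViewController:animated:
}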
Using Video

Playing Video Files

Important: The information in this section currently reflects the usage of the Media Player framework in iOS 3.1 and earlier. Please see the headers for information about changes to this framework in iOS 4.0.

iOS supports the ability to play back video files directly from your application using the Media Player framework, described in Media Player Framework Reference. Video playback is supported in full-screen mode only and can be used by game developers who want to play short animations or by any developers who want to play media files. When you start a video from your application, the media player interface takes over, fading the screen to black and then fading in the video content.
You can play a video with or without user controls for adjusting playback. Enabling some or all of these controls (shown in Figure 2-1) gives the user the ability to change the volume, change the playback point, or start and stop the video. If you disable all of these controls, the video plays until completion.

Figure 2-1 Media player interface with transport controls

To initiate video playback, you must know the URL of the file you want to play. For files your application provides, this would typically be a pointer to a file in your application's bundle; however, it can also be a pointer to a file on a remote server. Use this URL to instantiate a new instance of the MPMoviePlayerController class. This class presides over the playback of your video file and manages user interactions, such as user taps in the transport controls (if shown). To start playback, call the play method described in MPMediaPlayback Protocol Reference. Listing 2-1 shows a sample method that plays back the video at a specified URL.
The play method is an asynchronous call that returns control to the caller while the movie plays. The movie controller loads the movie in a full-screen view and animates the movie into place on top of the application's existing content. When playback is finished, the movie controller sends a notification received by the application controller object, which releases the movie controller now that it is no longer needed.

Listing 2-1 Playing full-screen movies

-(void) playMovieAtURL: (NSURL*) theURL {
    MPMoviePlayerController* theMovie =
        [[MPMoviePlayerController alloc] initWithContentURL: theURL];
    theMovie.scalingMode = MPMovieScalingModeAspectFill;
    theMovie.movieControlMode = MPMovieControlModeHidden;

    // Register for the playback finished notification
    [[NSNotificationCenter defaultCenter]
        addObserver: self
           selector: @selector(myMovieFinishedCallback:)
               name: MPMoviePlayerPlaybackDidFinishNotification
             object: theMovie];

    // Movie playback is asynchronous, so this method returns immediately.
    [theMovie play];
}

// When the movie is done, release the controller.
-(void) myMovieFinishedCallback: (NSNotification*) aNotification {
    MPMoviePlayerController* theMovie = [aNotification object];
    [[NSNotificationCenter defaultCenter]
        removeObserver: self
                  name: MPMoviePlayerPlaybackDidFinishNotification
                object: theMovie];

    // Release the movie instance created in playMovieAtURL:
    [theMovie release];
}

For a list of supported video formats, see iOS Technology Overview.
In iOS 4.0 and later, you can play video using AVPlayer in conjunction with an AVPlayerLayer or an AVSynchronizedLayer object. You can use AVAudioMix and AVVideoComposition to customize the audio and video parts of playback, respectively. You can also use AVCaptureVideoPreviewLayer to display video as it is being captured by an input device.
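As a brief sketch of the AVPlayer approach, assuming the AVFoundation framework is linked and reusing the theURL variable from Listing 2-1, playback into a view's layer might look like this:

AVPlayer *player = [AVPlayer playerWithURL: theURL];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer: player];
playerLayer.frame = self.view.bounds;        // size the video layer to the view
[self.view.layer addSublayer: playerLayer];  // display the video in the view
[player play];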
Document Revision History

This table describes the changes to Multimedia Programming Guide.

2010-09-01: Clarified usage of the AudioServicesPlaySystemSound function in "Playing UI Sound Effects or Invoking Vibration Using System Sound Services" (page 12).
2010-05-27: Updated Table 1-2 (page 8) for iOS 4.0 by clarifying support for AAC encoding.

Concepts in Objective-C Programming
About the Basic Programming Concepts for Cocoa and Cocoa Touch

Many of the programmatic interfaces of the Cocoa and Cocoa Touch frameworks make sense only if you are aware of the concepts on which they are based. These concepts express the rationale for many of the core designs of the frameworks. Knowledge of these concepts will illuminate your software-development practices.
(Figure: the layers of an application, from top to bottom: Model layer, View layer, Controller layer, application delegate, and system frameworks.)

At a Glance

This document contains articles that explain central concepts, design patterns, and mechanisms of the Cocoa and Cocoa Touch frameworks. The articles are arranged in alphabetical order.

How to Use This Document

If you read this document cover to cover, you learn important information about Cocoa and Cocoa Touch application development. However, most readers come to the articles in this document in one of two ways:
● Other documents, especially those that are intended for novice iOS and OS X developers, which link to these articles.
● In-line mini-articles (which appear when you click a dash-underlined word or phrase) that have a link to an article as a "Definitive Discussion."

Prerequisites

Prior programming experience, especially with object-oriented languages, is recommended.

See Also

The Objective-C Programming Language offers further discussion of many of the language-related concepts covered in this document.

Class Clusters

Class clusters are a design pattern that the Foundation framework makes extensive use of. Class clusters group a number of private concrete subclasses under a public abstract superclass. The grouping of classes in this way simplifies the publicly visible architecture of an object-oriented framework without reducing its functional richness. Class clusters are based on the Abstract Factory design pattern.

Without Class Clusters: Simple Concept but Complex Interface

To illustrate the class cluster architecture and its benefits, consider the problem of constructing a class hierarchy that defines objects to store numbers of different types (char, int, float, double). Because numbers of different types have many features in common (they can be converted from one type to another and can be represented as strings, for example), they could be represented by a single class. However, their storage requirements differ, so it's inefficient to represent them all by the same class. Taking this fact into consideration, one could design the class architecture depicted in Figure 1-1 to solve the problem.

Figure 1-1 A simple hierarchy for number classes

Number is the abstract superclass that declares in its methods the operations common to its subclasses. However, it doesn't declare an instance variable to store a number. The subclasses declare such instance variables and share in the programmatic interface declared by Number.
So far, this design is relatively simple. However, if the commonly used modifications of these basic C types are taken into account, the class hierarchy diagram looks more like Figure 1-2.

Figure 1-2 A more complete number class hierarchy

The simple concept (creating a class to hold number values) can easily burgeon to over a dozen classes. The class cluster architecture presents a design that reflects the simplicity of the concept.

With Class Clusters: Simple Concept and Simple Interface

Applying the class cluster design pattern to this problem yields the class hierarchy in Figure 1-3 (private classes are in gray).
Figure 1-3 Class cluster architecture applied to number classes

Users of this hierarchy see only one public class, Number, so how is it possible to allocate instances of the proper subclass? The answer is in the way the abstract superclass handles instantiation.

Creating Instances

The abstract superclass in a class cluster must declare methods for creating instances of its private subclasses. It's the superclass's responsibility to dispense an object of the proper subclass based on the creation method that you invoke; you don't, and can't, choose the class of the instance.
In the Foundation framework, you generally create an object by invoking a +className... method or the alloc... and init... methods. Taking the Foundation framework's NSNumber class as an example, you could send these messages to create number objects:

NSNumber *aChar = [NSNumber numberWithChar:'a'];
NSNumber *anInt = [NSNumber numberWithInt:1];
NSNumber *aFloat = [NSNumber numberWithFloat:1.0];
NSNumber *aDouble = [NSNumber numberWithDouble:1.0];

You are not responsible for releasing the objects returned from factory methods. Many classes also provide the standard alloc... and init... methods to create objects that require you to manage their deallocation.
Each object returned (aChar, anInt, aFloat, and aDouble) may belong to a different private subclass (and in fact does). Although each object's class membership is hidden, its interface is public, being the interface declared by the abstract superclass, NSNumber. Although it is not precisely correct, it's convenient to consider the aChar, anInt, aFloat, and aDouble objects to be instances of the NSNumber class, because they're created by NSNumber class methods and accessed through instance methods declared by NSNumber.

Class Clusters with Multiple Public Superclasses

In the example above, one abstract public class declares the interface for multiple private subclasses. This is a class cluster in the purest sense. It's also possible, and often desirable, to have two (or possibly more) abstract public classes that declare the interface for the cluster. This is evident in the Foundation framework, which includes the clusters listed in Table 1-1.

Table 1-1 Class clusters and their public superclasses

● NSData cluster: NSData, NSMutableData
● NSArray cluster: NSArray, NSMutableArray
● NSDictionary cluster: NSDictionary, NSMutableDictionary
● NSString cluster: NSString, NSMutableString

Other clusters of this type also exist, but these clearly illustrate how two abstract nodes cooperate in declaring the programmatic interface to a class cluster. In each of these clusters, one public node declares methods that all cluster objects can respond to, and the other node declares methods that are only appropriate for cluster objects that allow their contents to be modified.
This factoring of the cluster's interface helps make an object-oriented framework's programmatic interface more expressive. For example, imagine an object representing a book that declares this method:

- (NSString *)title;

The book object could return its own instance variable or create a new string object and return that; it doesn't matter. It's clear from this declaration that the returned string can't be modified. Any attempt to modify the returned object will elicit a compiler warning.
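For instance, sending a mutation message to the returned string draws a warning at compile time, because NSString declares no mutation methods; the book variable here is hypothetical:

NSString *bookTitle = [book title];
// Warning: 'NSString' may not respond to '-appendString:'
// (appendString: is declared by NSMutableString, not NSString)
[bookTitle appendString: @", Second Edition"];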
Creating Subclasses Within a Class Cluster

The class cluster architecture involves a trade-off between simplicity and extensibility: Having a few public classes stand in for a multitude of private ones makes it easier to learn and use the classes in a framework but somewhat harder to create subclasses within any of the clusters. However, if it's rarely necessary to create a subclass, then the cluster architecture is clearly beneficial. Clusters are used in the Foundation framework in just these situations.
If you find that a cluster doesn't provide the functionality your program needs, then a subclass may be in order. For example, imagine that you want to create an array object whose storage is file-based rather than memory-based, as in the NSArray class cluster. Because you are changing the underlying storage mechanism of the class, you'd have to create a subclass.
On the other hand, in some cases it might be sufficient (and easier) to define a class that embeds within it an object from the cluster. Let's say that your program needs to be alerted whenever some data is modified. In this case, creating a simple class that wraps a data object that the Foundation framework defines may be the best approach. An object of this class could intervene in messages that modify the data, intercepting the messages, acting on them, and then forwarding them to the embedded data object.
In summary, if you need to manage your object's storage, create a true subclass. Otherwise, create a composite object, one that embeds a standard Foundation framework object in an object of your own design. The following sections give more detail on these two approaches.

A True Subclass

A new class that you create within a class cluster must:
Your subclass must override inherited primitives, but having done so can be sure that all derived methods that it inherits will operate properly. The primitive-derived distinction applies to the interface of a fully initialized object. The question of how init... methods should be handled in a subclass also needs to be addressed. In general, a cluster’s abstract superclass declares a number of init... and + className methods. As described in “Creating Instances” (page 10), the abstract class decides which concrete subclass to instantiate based your choice of init... or + className method. You can consider that the abstract class declares these methods for the convenience of the subclass. Since the abstract class has no instance variables, it has no need of initialization methods. Class Clusters Creating Subclasses Within a Class Cluster 2012-01-09 | © 2012 Apple Inc. All Rights Reserved. 13Your subclass should declare its own init... (if it needs to initialize its instance variables) and possibly + className methods. It should not rely on any of those that it inherits. To maintain its link in the initialization chain, it should invoke its superclass’s designated initializer within its own designated initializer method. It should also override all other inherited initializer methods and implement them to behave in a reasonable manner. (See ““The Runtime System”“ in The Objective-C Programming Language for a discussion of designated initializers.) Within a class cluster, the designated initializer of the abstract superclass is always init. True Subclasses: An Example Let’s say that you want to create a subclass of NSArray, named MonthArray, that returns the name of a month given its index position. However, a MonthArray object won’t actually store the array of month names as an instance variable. Instead, the method that returns a name given an index position (objectAtIndex:) will return constantstrings. Thus, only twelve string objects will be allocated, no matter how many MonthArray objects exist in an application. The MonthArray class is declared as: #import @interface MonthArray : NSArray { } + monthArray; - (unsigned)count; - (id)objectAtIndex:(unsigned)index; @end Note that the MonthArray class doesn’t declare an init... method because it has no instance variables to initialize. The count and objectAtIndex: methodssimply cover the inherited primitive methods, as described above. The implementation of the MonthArray class looks like this: #import "MonthArray.h" @implementation MonthArray Class Clusters Creating Subclasses Within a Class Cluster 2012-01-09 | © 2012 Apple Inc. All Rights Reserved. 14static MonthArray *sharedMonthArray = nil; static NSString *months[] = { @"January", @"February", @"March", @"April", @"May", @"June", @"July", @"August", @"September", @"October", @"November", @"December" }; + monthArray { if (!sharedMonthArray) { sharedMonthArray = [[MonthArray alloc] init]; } return sharedMonthArray; } - (unsigned)count { return 12; } - objectAtIndex:(unsigned)index { if (index >= [self count]) [NSException raise:NSRangeException format:@"***%s: index (%d) beyond bounds (%d)", sel_getName(_cmd), index, [self count] - 1]; else return months[index]; } @end Because MonthArray overridesthe inherited primitive methods, the derived methodsthat it inherits will work properly without being overridden. NSArray’s lastObject, containsObject:, sortedArrayUsingSelector:, objectEnumerator, and other methods work without problems for MonthArray objects. 
Class Clusters Creating Subclasses Within a Class Cluster 2012-01-09 | © 2012 Apple Inc. All Rights Reserved. 15A Composite Object By embedding a private cluster object in an object of your own design, you create a composite object. This composite object can rely on the cluster object for its basic functionality, only intercepting messages that the composite object wants to handle in some particular way. This architecture reduces the amount of code you must write and lets you take advantage of the tested code provided by the Foundation Framework. Figure 1-4 depicts this architecture. Figure 1-4 An object that embeds a cluster object The composite object must declare itself to be a subclass of the cluster’s abstract superclass. As a subclass, it must override the superclass’s primitive methods. It can also override derived methods, but this isn’t necessary because the derived methods work through the primitive ones. The count method of the NSArray class is an example; the intervening object’s implementation of a method it overrides can be as simple as: - (unsigned)count { return [embeddedObject count]; } However, your object could put code for its own purposes in the implementation of any method it overrides. A Composite Object: An Example To illustrate the use of a composite object, imagine you want a mutable array object that tests changes against some validation criteria before allowing any modification to the array’s contents. The example that follows describes a class called ValidatingArray, which contains a standardmutable array object. ValidatingArray overrides all of the primitive methods declared in its superclasses, NSArray and NSMutableArray. It also declares the array, validatingArray, and init methods, which can be used to create and initialize an instance: #import Class Clusters Creating Subclasses Within a Class Cluster 2012-01-09 | © 2012 Apple Inc. All Rights Reserved. 16@interface ValidatingArray : NSMutableArray { NSMutableArray *embeddedArray; } + validatingArray; - init; - (unsigned)count; - objectAtIndex:(unsigned)index; - (void)addObject:object; - (void)replaceObjectAtIndex:(unsigned)index withObject:object; - (void)removeLastObject; - (void)insertObject:object atIndex:(unsigned)index; - (void)removeObjectAtIndex:(unsigned)index; @end The implementation file shows how, in an init method of the ValidatingArrayclass, the embedded object is created and assigned to the embeddedArray variable. Messages that simply access the array but don’t modify its contents are relayed to the embedded object. Messagesthat could change the contents are scrutinized (here in pseudocode) and relayed only if they pass the hypothetical validation test. #import "ValidatingArray.h" @implementation ValidatingArray - init { self = [super init]; if (self) { embeddedArray = [[NSMutableArray allocWithZone:[self zone]] init]; } return self; } Class Clusters Creating Subclasses Within a Class Cluster 2012-01-09 | © 2012 Apple Inc. All Rights Reserved. 
+ validatingArray {
    return [[[self alloc] init] autorelease];
}

- (unsigned)count {
    return [embeddedArray count];
}

- objectAtIndex:(unsigned)index {
    return [embeddedArray objectAtIndex:index];
}

- (void)addObject:object {
    if (/* modification is valid */) {
        [embeddedArray addObject:object];
    }
}

- (void)replaceObjectAtIndex:(unsigned)index withObject:object {
    if (/* modification is valid */) {
        [embeddedArray replaceObjectAtIndex:index withObject:object];
    }
}

- (void)removeLastObject {
    if (/* modification is valid */) {
        [embeddedArray removeLastObject];
    }
}

- (void)insertObject:object atIndex:(unsigned)index {
    if (/* modification is valid */) {
        [embeddedArray insertObject:object atIndex:index];
    }
}

- (void)removeObjectAtIndex:(unsigned)index {
    if (/* modification is valid */) {
        [embeddedArray removeObjectAtIndex:index];
    }
}

Class Factory Methods

Class factory methods are implemented by a class as a convenience for clients. They combine allocation and initialization in one step and return the created object. However, the client receiving this object does not own the object and thus (per the object-ownership policy) is not responsible for releasing it. These methods are of the form + (type)className... (where className excludes any prefix).
Cocoa provides plenty of examples, especially among the "value" classes. NSDate includes the following class factory methods:

+ (id)dateWithTimeIntervalSinceNow:(NSTimeInterval)secs;
+ (id)dateWithTimeIntervalSinceReferenceDate:(NSTimeInterval)secs;
+ (id)dateWithTimeIntervalSince1970:(NSTimeInterval)secs;

And NSData offers the following factory methods:

+ (id)dataWithBytes:(const void *)bytes length:(unsigned)length;
+ (id)dataWithBytesNoCopy:(void *)bytes length:(unsigned)length;
+ (id)dataWithBytesNoCopy:(void *)bytes length:(unsigned)length
    freeWhenDone:(BOOL)b;
+ (id)dataWithContentsOfFile:(NSString *)path;
+ (id)dataWithContentsOfURL:(NSURL *)url;
+ (id)dataWithContentsOfMappedFile:(NSString *)path;

Factory methods can be more than a simple convenience. They can not only combine allocation and initialization, but the allocation can inform the initialization. As an example, let's say you must initialize a collection object from a property-list file that encodes any number of elements for the collection (NSString objects, NSData objects, NSNumber objects, and so on). Before the factory method can know how much memory to allocate for the collection, it must read the file and parse the property list to determine how many elements there are and what object type these elements are.
Another purpose for a class factory method is to ensure that a certain class (NSWorkspace, for example) vends a singleton instance. Although an init... method could verify that only one instance exists at any one time in a program, it would require the prior allocation of a "raw" instance and then, in memory-managed code, would have to release that instance.
A factory method, on the other hand, gives you a way to avoid blindly allocating memory for an object that you might not use, as in the following example:

static AccountManager *DefaultManager = nil;

+ (AccountManager *)defaultManager {
    if (!DefaultManager) DefaultManager = [[self allocWithZone:NULL] init];
    return DefaultManager;
}

Delegates and Data Sources

A delegate is an object that acts on behalf of, or in coordination with, another object when that object encounters an event in a program. The delegating object is often a responder object (that is, an object inheriting from NSResponder in AppKit or UIResponder in UIKit) that is responding to a user event. The delegate is an object that is delegated control of the user interface for that event, or is at least asked to interpret the event in an application-specific manner.
To better appreciate the value of delegation, it helps to consider an off-the-shelf Cocoa object such as a text field (an instance of NSTextField or UITextField) or a table view (an instance of NSTableView or UITableView). These objects are designed to fulfill a specific role in a generic fashion; a window object in the AppKit framework, for example, responds to mouse manipulations of its controls and handles such things as closing, resizing, and moving the physical window. This restricted and generic behavior necessarily limits what the object can know about how an event affects (or will affect) something elsewhere in the application, especially when the affected behavior is specific to your application. Delegation provides a way for your custom object to communicate application-specific behavior to the off-the-shelf object.
The programming mechanism of delegation gives objects a chance to coordinate their appearance and state with changes occurring elsewhere in a program, changes usually brought about by user actions. More importantly, delegation makes it possible for one object to alter the behavior of another object without the need to inherit from it. The delegate is almost always one of your custom objects, and by definition it incorporates application-specific logic that the generic and delegating object cannot possibly know itself.

How Delegation Works

The design of the delegation mechanism is simple; see Figure 3-1 (page 23). The delegating class has an outlet or property, usually one that is named delegate; if it is an outlet, it includes methods for setting and accessing the value of the outlet. It also declares, without implementing, one or more methods that constitute a formal protocol or an informal protocol. A formal protocol that uses optional methods (a feature of Objective-C 2.0) is the preferred approach, but both kinds of protocols are used by the Cocoa frameworks for delegation.
In the informal protocol approach, the delegating class declares methods on a category of NSObject, and the delegate implements only those methods in which it has an interest in coordinating itself with the delegating object or affecting that object's default behavior. If the delegating class declares a formal protocol, the delegate may choose to implement those methods marked optional, but it must implement the required ones.
Delegation follows a common design, illustrated by Figure 3-1.

Figure 3-1 The mechanism of delegation (the user clicks a window's close button; the window asks its windowDelegate windowShouldClose:, and the delegate answers "Don't close; the window has unsaved data")
The methods of the protocol mark significant events handled or anticipated by the delegating object. This object wants either to communicate these events to the delegate or, for impending events, to request input or approval from the delegate. For example, when a user clicks the close button of a window in OS X, the window object sends the windowShouldClose: message to its delegate; this gives the delegate the opportunity to veto or defer the closing of the window if, for example, the window has associated data that must be saved (see Figure 3-2).

Figure 3-2 A more realistic sequence involving a delegate (aWindow sends windowShouldClose: to aDelegate and receives YES)

The delegating object sends a message only if the delegate implements the method. It makes this discovery by invoking the NSObject method respondsToSelector: in the delegate first.

The Form of Delegation Messages

Delegation methods have a conventional form. They begin with the name of the AppKit or UIKit object doing the delegating (application, window, control, and so on); this name is in lowercase and without the "NS" or "UI" prefix. Usually (but not always) this object name is followed by an auxiliary verb indicative of the temporal status of the reported event. This verb, in other words, indicates whether the event is about to occur ("Should" or "Will") or whether it has just occurred ("Did" or "Has"). This temporal distinction helps to categorize those messages that expect a return value and those that don't. Listing 3-1 includes a few AppKit delegation methods that expect a return value.

Listing 3-1 Sample delegation methods with return values

- (BOOL)application:(NSApplication *)sender openFile:(NSString *)filename;  // NSApplication
- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url;  // UIApplicationDelegate
- (UITableRowIndexSet *)tableView:(NSTableView *)tableView willSelectRows:(UITableRowIndexSet *)selection;  // UITableViewDelegate
- (NSRect)windowWillUseStandardFrame:(NSWindow *)window defaultFrame:(NSRect)newFrame;  // NSWindow

The delegate that implements these methods can block the impending event (by returning NO in the first two methods) or alter a suggested value (the index set and the frame rectangle in the last two methods). It can even defer an impending event; for example, the delegate implementing the applicationShouldTerminate: method can delay application termination by returning NSTerminateLater.
Other delegation methods are invoked by messages that don't expect a return value and so are typed to return void. These messages are purely informational, and the method names often contain "Did", "Will", or some other indication of a transpired or impending event. Listing 3-2 shows a few examples of these kinds of delegation method.

Listing 3-2 Sample delegation methods returning void

- (void)tableView:(NSTableView *)tableView mouseDownInHeaderOfTableColumn:(NSTableColumn *)tableColumn;  // NSTableView
- (void)windowDidMove:(NSNotification *)notification;  // NSWindow
- (void)application:(UIApplication *)application willChangeStatusBarFrame:(CGRect)newStatusBarFrame;  // UIApplication
- (void)applicationWillBecomeActive:(NSNotification *)notification;  // NSApplication

There are a couple of things to note about this last group of methods.
The first is that an auxiliary verb of "Will" (as in the third method) does not necessarily mean that a return value is expected. In this case, the event is imminent and cannot be blocked, but the message gives the delegate an opportunity to prepare the program for the event.
The other point of interest concerns the second and last method declarations in Listing 3-2. The sole parameter of each of these methods is an NSNotification object, which means that these methods are invoked as the result of the posting of a particular notification. For example, the windowDidMove: method is associated with the NSWindow notification NSWindowDidMoveNotification. It's important to understand the relationship of notifications to delegation messages in AppKit. The delegating object automatically makes its delegate an observer of all notifications it posts. All the delegate needs to do is implement the associated method to get the notification.
To make an instance of your custom class the delegate of an AppKit object, simply connect the instance to the delegate outlet or property in Interface Builder. Or you can set it programmatically through the delegating object's setDelegate: method or delegate property, preferably early on, such as in the awakeFromNib or applicationDidFinishLaunching: method.

Delegation and the Application Frameworks

The delegating object in a Cocoa or Cocoa Touch application is often a responder object such as a UIApplication, NSWindow, or NSTableView object. The delegate object itself is typically, but not necessarily, an object, often a custom object, that controls some part of the application (that is, a coordinating controller object). The following AppKit classes define a delegate:
● NSApplication
● NSBrowser
● NSControl
● NSDrawer
● NSFontManager
● NSFontPanel
● NSMatrix
● NSOutlineView
● NSSplitView
For an instance of your class to function as the delegate of a framework object, it must do the following:

● Set your object as the delegate (by assigning it to the delegate property). You can do this programmatically or through Interface Builder.
● If the protocol is formal, declare that your class adopts the protocol in the class definition. For example:

@interface MyControllerClass : UIViewController <UITableViewDelegate> {

● Implement all required methods of the protocol and any optional methods that you want to participate in.

Locating Objects Through the delegate Property

The existence of delegates has other programmatic uses. For example, with delegates it is easy for two coordinating controllers in the same program to find and communicate with each other. The object controlling the application overall, for instance, can find the controller of the application's inspector window (assuming it's the current key window) using code similar to the following:

id winController = [[NSApp keyWindow] delegate];

And your code can find the application-controller object—by definition, the delegate of the global application instance—by doing something similar to the following:

id appController = [NSApp delegate];

Data Sources

A data source is like a delegate except that, instead of being delegated control of the user interface, it is delegated control of data. A data source is an outlet held by NSView and UIView objects such as table views and outline views that require a source from which to populate their rows of visible data. The data source for a view is usually the same object that acts as its delegate, but it can be any object. As with the delegate, the data source must implement one or more methods of an informal protocol to supply the view with the data it needs and, in more advanced implementations, to handle data that users directly edit in such views.

As with delegates, data sources are objects that must be present to receive messages from the objects requesting data. The application that uses them must ensure their persistence, retaining them if necessary in memory-managed code. Data sources are responsible for the persistence of the objects they hand out to user-interface objects. In other words, they are responsible for the memory management of those objects. However, whenever a view object such as an outline view or table view accesses the data from a data source, it retains the objects as long as it uses the data. But it does not use the data for very long. Typically it holds on to the data only long enough to display it.

Implementing a Delegate for a Custom Class

To implement a delegate for your custom class, complete the following steps:

● Declare the delegate accessor methods in your class header file.

- (id)delegate;
- (void)setDelegate:(id)newDelegate;

● Implement the accessor methods. In a memory-managed program, to avoid retain cycles, the setter method should not retain or copy your delegate.

- (id)delegate {
    return delegate;
}

- (void)setDelegate:(id)newDelegate {
    delegate = newDelegate;
}

In a garbage-collected environment, where retain cycles are not a problem, you should not make the delegate a weak reference (by using the __weak type modifier). For more on retain cycles, see Advanced Memory Management Programming Guide.
For more on weak references in garbage collection, see "Garbage Collection for Cocoa Essentials" in Garbage Collection Programming Guide.

● Declare a formal or informal protocol containing the programmatic interface for the delegate. Informal protocols are categories on the NSObject class. If you declare a formal protocol for your delegate, make sure you mark groups of optional methods with the @optional directive. "The Form of Delegation Messages" gives advice for naming your own delegation methods.

● Before invoking a delegation method, make sure the delegate implements it by sending it a respondsToSelector: message.

- (void)someMethod {
    if ( [delegate respondsToSelector:@selector(operationShouldProceed)] ) {
        if ( [delegate operationShouldProceed] ) {
            // do something appropriate
        }
    }
}

The precaution is necessary only for optional methods in a formal protocol or methods of an informal protocol.

Introspection

Introspection is a powerful feature of object-oriented languages and environments, and introspection in Objective-C and Cocoa is no exception. Introspection refers to the capability of objects to divulge details about themselves as objects at runtime. Such details include an object's place in the inheritance tree, whether it conforms to a specific protocol, and whether it responds to a certain message. The NSObject protocol and class define many introspection methods that you can use to query the runtime in order to characterize objects.

Used judiciously, introspection makes an object-oriented program more efficient and robust. It can help you avoid message-dispatch errors, erroneous assumptions of object equality, and similar problems. The following sections show how you might effectively use the NSObject introspection methods in your code.

Evaluating Inheritance Relationships

Once you know the class an object belongs to, you probably know quite a bit about the object. You might know what its capabilities are, what attributes it represents, and what kinds of messages it can respond to. Even if, after introspection, you are unfamiliar with the class to which an object belongs, you now know enough to not send it certain messages.

The NSObject protocol declares several methods for determining an object's position in the class hierarchy. These methods operate at different granularities. The class and superclass instance methods, for example, return the Class objects representing the class and superclass, respectively, of the receiver. These methods require you to compare one Class object with another. Listing 4-1 gives a simple (one might say trivial) example of their use.

Listing 4-1  Using the class and superclass methods

// ...
id anObject;
while ( (anObject = [objectEnumerator nextObject]) ) {
    if ( [self class] == [anObject superclass] ) {
        // do something appropriate...
    }
}

Note: Sometimes you use the class or superclass methods to obtain an appropriate receiver for a class message.

More commonly, to check an object's class affiliation, you would send it an isKindOfClass: or isMemberOfClass: message. The former method returns whether the receiver is an instance of a given class or an instance of any class that inherits from that class. An isMemberOfClass: message, on the other hand, tells you if the receiver is an instance of the specified class.
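As a quick sketch of the distinction (hypothetical code, not from the original listings; NSMutableData is a subclass of NSData):

NSMutableData *buffer = [NSMutableData dataWithLength:16];
BOOL isKind   = [buffer isKindOfClass:[NSData class]];    // YES: NSMutableData inherits from NSData
BOOL isMember = [buffer isMemberOfClass:[NSData class]];  // NO: buffer is not an instance of NSData itself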
The isKindOfClass: method is generally more useful because from it you can know at once the complete range of messages you can send to an object. Consider the code snippet in Listing 4-2.

Listing 4-2  Using isKindOfClass:

if ([item isKindOfClass:[NSData class]]) {
    const unsigned char *bytes = [item bytes];
    NSUInteger length = [item length];
    // ...
}

By learning that the object item inherits from the NSData class, this code knows it can send it the NSData bytes and length messages. The difference between isKindOfClass: and isMemberOfClass: becomes apparent if you assume that item is an instance of NSMutableData. If you use isMemberOfClass: instead of isKindOfClass:, the code in the conditionalized block is never executed because item is not an instance of NSData but rather of NSMutableData, a subclass of NSData.

Method Implementation and Protocol Conformance

Two of the more powerful introspection methods of NSObject are respondsToSelector: and conformsToProtocol:. These methods tell you, respectively, whether an object implements a certain method and whether an object conforms to a specified formal protocol (that is, adopts the protocol, if necessary, and implements all the required methods of the protocol).

You use these methods in similar situations in your code. They enable you to discover whether some potentially anonymous object can respond appropriately to a particular message or set of messages before you send it any of those messages. By making this check before sending a message, you can avoid the risk of runtime exceptions resulting from unrecognized selectors. The AppKit framework implements informal protocols—the basis of delegation—by checking whether delegates implement a delegation method (using respondsToSelector:) prior to invoking that method.

Listing 4-3 illustrates how you might use the respondsToSelector: method in your code.

Listing 4-3  Using respondsToSelector:

- (void)doCommandBySelector:(SEL)aSelector {
    if ([self respondsToSelector:aSelector]) {
        [self performSelector:aSelector withObject:nil];
    } else {
        [_client doCommandBySelector:aSelector];
    }
}

Listing 4-4 illustrates how you might use the conformsToProtocol: method in your code.

Listing 4-4  Using conformsToProtocol:

// ...
if (!([((id)testObject) conformsToProtocol:@protocol(NSMenuItem)])) {
    NSLog(@"Custom MenuItem, '%@', not loaded; it must conform to the 'NSMenuItem' protocol.", [testObject class]);
    [testObject release];
    testObject = nil;
}

Object Comparison

Although they are not strictly introspection methods, the hash and isEqual: methods fulfill a similar role. They are indispensable runtime tools for identifying and comparing objects. But instead of querying the runtime for information about an object, they rely on class-specific comparison logic.

The hash and isEqual: methods, both declared by the NSObject protocol, are closely related. The hash method must be implemented to return an integer that can be used as a table address in a hash table structure. If two objects are equal (as determined by the isEqual: method), they must have the same hash value. If your object could be included in collections such as NSSet objects, you need to define hash and verify the invariant that if two objects are equal, they return the same hash value. The default NSObject implementation of isEqual: simply checks for pointer equality.
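Because of this invariant, a class that overrides isEqual:—such as the MyWidget class discussed below—should override hash as well. A minimal sketch (an assumption on this document's part, not code from the original guide) derives the hash only from the attributes that isEqual: compares:

- (NSUInteger)hash {
    // Equal objects must return equal hash values, so combine only the
    // attributes that isEqual: examines (here, name and data).
    return [[self name] hash] ^ [[self data] hash];
}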
Using the isEqual: method is straightforward; it compares the receiver against the object supplied as a parameter. Object comparison frequently informs runtime decisions about what should be done with an object. As Listing 4-5 illustrates, you can use isEqual: to decide whether to perform an action, in this case to save user preferences that have been modified.

Listing 4-5  Using isEqual:

- (void)saveDefaults {
    NSDictionary *prefs = [self preferences];
    if (![origValues isEqual:prefs])
        [Preferences savePreferencesToDefaults:prefs];
}

If you are creating a subclass, you might need to override isEqual: to add further checks for points of equality. The subclass might define an extra attribute that has to be the same value in two instances for them to be considered equal. For example, say you create a subclass of NSObject called MyWidget that contains two instance variables, name and data. Both of these must be the same value for two instances of MyWidget to be considered equal. Listing 4-6 illustrates how you might implement isEqual: for the MyWidget class.

Listing 4-6  Overriding isEqual:

- (BOOL)isEqual:(id)other {
    if (other == self)
        return YES;
    if (!other || ![other isKindOfClass:[self class]])
        return NO;
    return [self isEqualToWidget:other];
}

- (BOOL)isEqualToWidget:(MyWidget *)aWidget {
    if (self == aWidget)
        return YES;
    if (![[self name] isEqual:[aWidget name]])
        return NO;
    if (![[self data] isEqualToData:[aWidget data]])
        return NO;
    return YES;
}

This isEqual: method first checks for pointer equality, then class equality, and finally invokes an object comparator whose name indicates the class of object involved in the comparison. This type of comparator, which forces type checking of the object passed in, is a common convention in Cocoa; the isEqualToString: method of the NSString class and the isEqualToTimeZone: method of the NSTimeZone class are but two examples. The class-specific comparator—isEqualToWidget: in this case—performs the checks for name and data equality.

In all isEqualToType: methods of the Cocoa frameworks, nil is not a valid parameter and implementations of these methods may raise an exception upon receiving a nil. However, for backward compatibility, isEqual: methods of the Cocoa frameworks do accept nil, returning NO.

Object Allocation

When you allocate an object, part of what happens is what you might expect, given the term. Cocoa allocates enough memory for the object from a region of application virtual memory. To calculate how much memory to allocate, it takes the object's instance variables into account—including their types and order—as specified by the object's class.

To allocate an object, you send the message alloc or allocWithZone: to the object's class. In return, you get a "raw" (uninitialized) instance of the class. The alloc variant of the method uses the application's default zone. A zone is a page-aligned area of memory for holding related objects and data allocated by an application. See Advanced Memory Management Programming Guide for more information on zones.

An allocation message does other important things besides allocating memory:

● It sets the object's retain count to one.
● It initializes the object's isa instance variable to point to the object's class, a runtime object in its own right that is compiled from the class definition.
● It initializes all other instance variables to zero (or to the equivalent type for zero, such as nil, NULL, and 0.0).

An object's isa instance variable is inherited from NSObject, so it is common to all Cocoa objects. After allocation sets isa to the object's class, the object is integrated into the runtime's view of the inheritance hierarchy and the current network of objects (class and instance) that constitute a program. Consequently an object can find whatever information it needs at runtime, such as another object's place in the inheritance hierarchy, the protocols that other objects conform to, and the location of the method implementations it can perform in response to messages.

In summary, allocation not only allocates memory for an object but initializes two small but very important attributes of any object: its isa instance variable and its retain count. It also sets all remaining instance variables to zero. But the resulting object is not yet usable. Initializing methods such as init must yet initialize objects with their particular characteristics and return a functional object.

Object Initialization

Initialization sets the instance variables of an object to reasonable and useful initial values. It can also allocate and prepare other global resources needed by the object, loading them if necessary from an external source such as a file. Every object that declares instance variables should implement an initializing method—unless the default set-everything-to-zero initialization is sufficient. If an object does not implement an initializer, Cocoa invokes the initializer of the nearest ancestor instead.

The Form of Initializers

NSObject declares the init prototype for initializers; it is an instance method typed to return an object of type id. Overriding init is fine for subclasses that require no additional data to initialize their objects. But often initialization depends on external data to set an object to a reasonable initial state. For example, say you have an Account class; to initialize an Account object appropriately requires a unique account number, and this must be supplied to the initializer. Thus initializers can take one or more parameters; the only requirement is that the initializing method begins with the letters "init". (The stylistic convention init... is sometimes used to refer to initializers.)

Note: Instead of implementing an initializer with parameters, a subclass can implement only a simple init method and then use "set" accessor methods immediately after initialization to set the object to a useful initial state. (Accessor methods enforce encapsulation of object data by setting and getting the values of instance variables.) Or, if the subclass uses properties and the related access syntax, it may assign values to the properties immediately after initialization.

Cocoa has plenty of examples of initializers with parameters. Here are a few (with the defining class in parentheses):

- (id)initWithArray:(NSArray *)array;  (from NSSet)
- (id)initWithTimeInterval:(NSTimeInterval)secsToBeAdded sinceDate:(NSDate *)anotherDate;  (from NSDate)
- (id)initWithContentRect:(NSRect)contentRect styleMask:(unsigned int)aStyle backing:(NSBackingStoreType)bufferingType defer:(BOOL)flag;  (from NSWindow)
- (id)initWithFrame:(NSRect)frameRect;  (from NSControl and NSView)
These initializers are instance methods that begin with "init" and return an object of the dynamic type id. Other than that, they follow the Cocoa conventions for multiparameter methods, often using WithType: or FromSource: before the first and most important parameter.

Issues with Initializers

Although init... methods are required by their method signature to return an object, that object is not necessarily the one that was most recently allocated—the receiver of the init... message. In other words, the object you get back from an initializer might not be the one you thought was being initialized.

Two conditions prompt the return of something other than the just-allocated object. The first involves two related situations: when there must be a singleton instance or when the defining attribute of an object must be unique. Some Cocoa classes—NSWorkspace, for instance—allow only one instance in a program; a class in such a case must ensure (in an initializer or, more likely, in a class factory method) that only one instance is created, returning this instance if there is any further request for a new one.

A similar situation arises when an object is required to have an attribute that makes it unique. Recall the hypothetical Account class mentioned earlier. An account of any sort must have a unique identifier. If the initializer for this class—say, initWithAccountID:—is passed an identifier that has already been associated with an object, it must do two things:

● Release the newly allocated object (in memory-managed code)
● Return the Account object previously initialized with this unique identifier

By doing this, the initializer ensures the uniqueness of the identifier while providing what was asked for: an Account instance with the requested identifier.

Sometimes an init... method cannot perform the initialization requested. For example, an initFromFile: method expects to initialize an object from the contents of a file, the path to which is passed as a parameter. But if no file exists at that location, the object cannot be initialized. A similar problem happens if an initWithArray: initializer is passed an NSDictionary object instead of an NSArray object. When an init... method cannot initialize an object, it should:

● Release the newly allocated object (in memory-managed code)
● Return nil

Returning nil from an initializer indicates that the requested object cannot be created. When you create an object, you should generally check whether the returned value is nil before proceeding:

id anObject = [[MyClass alloc] init];
if (anObject) {
    [anObject doSomething];
    // more messages...
} else {
    // handle error
}

Because an init... method might return nil or an object other than the one explicitly allocated, it is dangerous to use the instance returned by alloc or allocWithZone: instead of the one returned by the initializer. Consider the following code:

id myObject = [MyClass alloc];
[myObject init];
[myObject doSomething];

The init method in this example could have returned nil or could have substituted a different object. Because you can send a message to nil without raising an exception, nothing would happen in the former case except (perhaps) a debugging headache. But you should always rely on the initialized instance instead of the "raw" just-allocated one.
Therefore, you should nest the allocation message inside the initialization message and test the object returned from the initializer before proceeding.

id myObject = [[MyClass alloc] init];
if ( myObject ) {
    [myObject doSomething];
} else {
    // error recovery...
}

Once an object is initialized, you should not initialize it again. If you attempt a reinitialization, the framework class of the instantiated object often raises an exception. For example, the second initialization in this example would result in NSInvalidArgumentException being raised.

NSString *aStr = [[NSString alloc] initWithString:@"Foo"];
aStr = [aStr initWithString:@"Bar"];

Implementing an Initializer

There are several critical rules to follow when implementing an init... method that serves as a class's sole initializer or, if there are multiple initializers, its designated initializer (described in "Multiple Initializers and the Designated Initializer"):

● Always invoke the superclass (super) initializer first.
● Check the object returned by the superclass. If it is nil, then initialization cannot proceed; return nil to the receiver.
● When initializing instance variables that are references to objects, retain or copy the object as necessary (in memory-managed code).
● After setting instance variables to valid initial values, return self, unless:
  ● It was necessary to return a substituted object, in which case release the freshly allocated object first (in memory-managed code).
  ● A problem prevented initialization from succeeding, in which case return nil.

- (id)initWithAccountID:(NSString *)identifier {
    if ( self = [super init] ) {
        Account *ac = [accountDictionary objectForKey:identifier];
        if (ac) { // object with that ID already exists
            [self release];
            return [ac retain];
        }
        if (identifier) {
            accountID = [identifier copy]; // accountID is an instance variable
            [accountDictionary setObject:self forKey:identifier];
            return self;
        } else {
            [self release];
            return nil;
        }
    } else
        return nil;
}

Note: Although, for the sake of simplicity, this example returns nil if the parameter is nil, the better Cocoa practice is to raise an exception.

It isn't necessary to initialize all instance variables of an object explicitly, just those that are necessary to make the object functional. The default set-to-zero initialization performed on an instance variable during allocation is often sufficient. Make sure that you retain or copy instance variables, as required for memory management.

The requirement to invoke the superclass's initializer as the first action is important. Recall that an object encapsulates not only the instance variables defined by its class but the instance variables defined by all of its ancestor classes. By invoking the initializer of super first, you help to ensure that the instance variables defined by classes up the inheritance chain are initialized first. The immediate superclass, in its initializer, invokes the initializer of its superclass, which invokes the main init... method of its superclass, and so on (see Figure 6-1). The proper order of initialization is critical because the later initializations of subclasses may depend on superclass-defined instance variables being initialized to reasonable values.
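As a brief sketch of the chain that Figure 6-1 depicts (using the figure's hypothetical classes, not code from the original guide), the designated initializer of class B invokes the designated initializer of its superclass, class A, before initializing its own instance variable:

- (id)initWithName:(NSString *)aName birthday:(NSDate *)aDate {
    self = [super initWithName:aName];  // class A initializes the name instance variable
    if (self) {
        dob = [aDate copy];             // dob is an instance variable added by class B
    }
    return self;
}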
Figure 6-1  Initialization up the inheritance chain (Class A declares initWithName: and a name instance variable; Class B inherits from A, adding initWithName:birthday: and a dob instance variable; Class C inherits from B, adding initWithName:birthday:ssn: and an ssn instance variable; each initializer sets its own class's instance variables and invokes the initializer of super)

Inherited initializers are a concern when you create a subclass. Sometimes a superclass init... method sufficiently initializes instances of your class. But because it is more likely it won't, you should override the superclass's initializer. If you don't, the superclass's implementation is invoked, and because the superclass knows nothing about your class, your instances may not be correctly initialized.

Multiple Initializers and the Designated Initializer

A class can define more than one initializer. Sometimes multiple initializers let clients of the class provide the input for the same initialization in different forms. The NSSet class, for example, offers clients several initializers that accept the same data in different forms; one takes an NSArray object, another a counted list of elements, and another a nil-terminated list of elements:

- (id)initWithArray:(NSArray *)array;
- (id)initWithObjects:(id *)objects count:(unsigned)count;
- (id)initWithObjects:(id)firstObj, ...;

Some subclasses provide convenience initializers that supply default values to an initializer that takes the full complement of initialization parameters. This initializer is usually the designated initializer, the most important initializer of a class. For example, assume there is a Task class and it declares a designated initializer with this signature:

- (id)initWithTitle:(NSString *)aTitle date:(NSDate *)aDate;

The Task class might include secondary, or convenience, initializers that simply invoke the designated initializer, passing it default values for those parameters the secondary initializer doesn't explicitly request. This example shows a designated initializer and a secondary initializer.

- (id)initWithTitle:(NSString *)aTitle {
    return [self initWithTitle:aTitle date:[NSDate date]];
}

- (id)init {
    return [self initWithTitle:@"Task"];
}

The designated initializer plays an important role for a class. It ensures that inherited instance variables are initialized by invoking the designated initializer of the superclass. It is typically the init... method that has the most parameters and that does most of the initialization work, and it is the initializer that secondary initializers of the class invoke with messages to self.

When you define a subclass, you must be able to identify the designated initializer of the superclass and invoke it in your subclass's designated initializer through a message to super. You must also make sure that inherited initializers are covered in some way. And you may provide as many convenience initializers as you deem necessary. When designing the initializers of your class, keep in mind that designated initializers are chained to each other through messages to super, whereas other initializers are chained to the designated initializer of their class through messages to self. An example will make this clearer.
Let's say there are three classes, A, B, and C; class B inherits from class A, and class C inherits from class B. Each subclass adds an attribute as an instance variable and implements an init... method—the designated initializer—to initialize this instance variable. They also define secondary initializers and ensure that inherited initializers are overridden, if necessary. Figure 6-2 illustrates the initializers of all three classes and their relationships.

Figure 6-2  Interactions of secondary and designated initializers (Class A declares init; in Class B, the secondary initializer init chains via self to the designated initializer initWithTitle:, which invokes class A's init via super; in Class C, the secondary initializer initWithTitle: chains via self to the designated initializer initWithTitle:date:, which invokes class B's initWithTitle: via super)

The designated initializer for each class is the initializer with the most coverage; it is the method that initializes the attribute added by the subclass. The designated initializer is also the init... method that invokes the designated initializer of the superclass in a message to super. In this example, the designated initializer of class C, initWithTitle:date:, invokes the designated initializer of its superclass, initWithTitle:, which in turn invokes the init method of class A. When creating a subclass, it's always important to know the designated initializer of the superclass.

Although designated initializers are thus connected up the inheritance chain through messages to super, secondary initializers are connected to their class's designated initializer through messages to self. Secondary initializers (as in this example) are frequently overridden versions of inherited initializers. Class C overrides initWithTitle: to invoke its designated initializer, passing it a default date. This designated initializer, in turn, invokes the designated initializer of class B, which is the overridden method, initWithTitle:. If you sent an initWithTitle: message to objects of class B and class C, you'd be invoking different method implementations. On the other hand, if class C did not override initWithTitle: and you sent the message to an instance of class C, the class B implementation would be invoked. Consequently, the C instance would be incompletely initialized (since it would lack a date). When creating a subclass, it's important to make sure that all inherited initializers are adequately covered.

Sometimes the designated initializer of a superclass may be sufficient for the subclass, and so there is no need for the subclass to implement its own designated initializer. Other times, a class's designated initializer may be an overridden version of its superclass's designated initializer. This is frequently the case when the subclass needs to supplement the work performed by the superclass's designated initializer, even though the subclass does not add any instance variables of its own (or the instance variables it does add don't require explicit initialization).

Model-View-Controller

The Model-View-Controller design pattern (MVC) is quite old. Variations of it have been around at least since the early days of Smalltalk. It is a high-level pattern in that it concerns itself with the global architecture of an application and classifies objects according to the general roles they play in an application. It is also a compound pattern in that it comprises several, more elemental patterns.
Object-oriented programs benefit in several ways by adapting the MVC design pattern for their designs. Many objects in these programs tend to be more reusable and their interfaces tend to be better defined. The programs overall are more adaptable to changing requirements—in other words, they are more easily extensible than programs that are not based on MVC. Moreover, many technologies and architectures in Cocoa—such as bindings, the document architecture, and scriptability—are based on MVC and require that your custom objects play one of the roles defined by MVC.

Roles and Relationships of MVC Objects

The MVC design pattern considers there to be three types of objects: model objects, view objects, and controller objects. The MVC pattern defines the roles that these types of objects play in the application and their lines of communication. When designing an application, a major step is choosing—or creating custom classes for—objects that fall into one of these three groups. Each of the three types of objects is separated from the others by abstract boundaries and communicates with objects of the other types across those boundaries.

Model Objects Encapsulate Data and Basic Behaviors

Model objects represent special knowledge and expertise. They hold an application's data and define the logic that manipulates that data. A well-designed MVC application has all its important data encapsulated in model objects. Any data that is part of the persistent state of the application (whether that persistent state is stored in files or databases) should reside in the model objects once the data is loaded into the application. Because they represent knowledge and expertise related to a specific problem domain, they tend to be reusable.

Ideally, a model object has no explicit connection to the user interface used to present and edit it. For example, if you have a model object that represents a person (say you are writing an address book), you might want to store a birthdate. That's a good thing to store in your Person model object. However, storing a date format string or other information on how that date is to be presented is probably better off somewhere else.

In practice, this separation is not always the best thing, and there is some room for flexibility here, but in general a model object should not be concerned with interface and presentation issues. One example where a bit of an exception is reasonable is a drawing application that has model objects that represent the graphics displayed. It makes sense for the graphic objects to know how to draw themselves because the main reason for their existence is to define a visual thing. But even in this case, the graphic objects should not rely on living in a particular view or any view at all, and they should not be in charge of knowing when to draw themselves. They should be asked to draw themselves by the view object that wants to present them.

View Objects Present Information to the User

A view object knows how to display, and might allow users to edit, the data from the application's model. The view should not be responsible for storing the data it is displaying. (This does not mean the view never actually stores data it's displaying, of course. A view can cache data or do similar tricks for performance reasons.) A view object can be in charge of displaying just one part of a model object, or a whole model object, or even many different model objects.
Views come in many different varieties. View objects tend to be reusable and configurable, and they provide consistency between applications. In Cocoa, the AppKit framework defines a large number of view objects and provides many of them in the Interface Builder library. By reusing the AppKit's view objects, such as NSButton objects, you guarantee that buttons in your application behave just like buttons in any other Cocoa application, assuring a high level of consistency in appearance and behavior across applications.

A view should ensure it is displaying the model correctly. Consequently, it usually needs to know about changes to the model. Because model objects should not be tied to specific view objects, they need a generic way of indicating that they have changed.

Controller Objects Tie the Model to the View

A controller object acts as the intermediary between the application's view objects and its model objects. Controllers are often in charge of making sure the views have access to the model objects they need to display and act as the conduit through which views learn about changes to the model. Controller objects can also perform setup and coordinating tasks for an application and manage the life cycles of other objects.

In a typical Cocoa MVC design, when users enter a value or indicate a choice through a view object, that value or choice is communicated to a controller object. The controller object might interpret the user input in some application-specific way and then either may tell a model object what to do with this input—for example, "add a new value" or "delete the current record"—or it may have the model object reflect a changed value in one of its properties. Based on this same user input, some controller objects might also tell a view object to change an aspect of its appearance or behavior, such as telling a button to disable itself. Conversely, when a model object changes—say, a new data source is accessed—the model object usually communicates that change to a controller object, which then requests one or more view objects to update themselves accordingly.

Controller objects can be either reusable or nonreusable, depending on their general type. "Types of Cocoa Controller Objects" describes the different types of controller objects in Cocoa.

Combining Roles

One can merge the MVC roles played by an object, making an object, for example, fulfill both the controller and view roles—in which case, it would be called a view controller. In the same way, you can also have model-controller objects. For some applications, combining roles like this is an acceptable design.

A model controller is a controller that concerns itself mostly with the model layer. It "owns" the model; its primary responsibilities are to manage the model and communicate with view objects. Action methods that apply to the model as a whole are typically implemented in a model controller. The document architecture provides a number of these methods for you; for example, an NSDocument object (which is a central part of the document architecture) automatically handles action methods related to saving files.

A view controller is a controller that concerns itself mostly with the view layer. It "owns" the interface (the views); its primary responsibilities are to manage the interface and communicate with the model.
Action methods concerned with data displayed in a view are typically implemented in a view controller. An NSWindowController object (also part of the document architecture) is an example of a view controller. "Design Guidelines for MVC Applications" offers some design advice concerning objects with merged MVC roles.

Further Reading: Document-Based Applications Overview discusses the distinction between a model controller and a view controller from another perspective.

Types of Cocoa Controller Objects

"Controller Objects Tie the Model to the View" sketches the abstract outline of a controller object, but in practice the picture is far more complex. In Cocoa there are two general kinds of controller objects: mediating controllers and coordinating controllers. Each kind of controller object is associated with a different set of classes, and each provides a different range of behaviors.

A mediating controller is typically an object that inherits from the NSController class. Mediating controller objects are used in the Cocoa bindings technology. They facilitate—or mediate—the flow of data between view objects and model objects.

iOS Note: AppKit implements the NSController class and its subclasses. These classes and the bindings technology are not available in iOS.

Mediating controllers are typically ready-made objects that you drag from the Interface Builder library. You can configure these objects to establish the bindings between properties of view objects and properties of the controller object, and then between those controller properties and specific properties of a model object. As a result, when users change a value displayed in a view object, the new value is automatically communicated to a model object for storage—via the mediating controller; and when a property of a model changes its value, that change is communicated to a view for display. The abstract NSController class and its concrete subclasses—NSObjectController, NSArrayController, NSUserDefaultsController, and NSTreeController—provide supporting features such as the ability to commit and discard changes and the management of selections and placeholder values.

A coordinating controller is typically an NSWindowController or NSDocumentController object (available only in AppKit), or an instance of a custom subclass of NSObject. Its role in an application is to oversee—or coordinate—the functioning of the entire application or of part of the application, such as the objects unarchived from a nib file. A coordinating controller provides services such as:

● Responding to delegation messages and observing notifications
● Responding to action messages
● Managing the life cycle of owned objects (for example, releasing them at the proper time)
● Establishing connections between objects and performing other setup tasks

NSWindowController and NSDocumentController are classes that are part of the Cocoa architecture for document-based applications. Instances of these classes provide default implementations for several of the services listed above, and you can create subclasses of them to implement more application-specific behavior. You can even use NSWindowController objects to manage windows in an application that is not based on the document architecture. A coordinating controller frequently owns the objects archived in a nib file.
As File's Owner, the coordinating controller is external to the objects in the nib file and manages those objects. These owned objects include mediating controllers as well as window objects and view objects. See "MVC as a Compound Design Pattern" for more on coordinating controllers as File's Owner.

Instances of custom NSObject subclasses can be entirely suitable as coordinating controllers. These kinds of controller objects combine both mediating and coordinating functions. For their mediating behavior, they make use of mechanisms such as target-action, outlets, delegation, and notifications to facilitate the movement of data between view objects and model objects. They tend to contain a lot of glue code and, because that code is exclusively application-specific, they are the least reusable kind of object in an application.

Further Reading: For more on the Cocoa bindings technology, see Cocoa Bindings Programming Topics.

MVC as a Compound Design Pattern

Model-View-Controller is a design pattern that is composed of several more basic design patterns. These basic patterns work together to define the functional separation and paths of communication that are characteristic of an MVC application. However, the traditional notion of MVC assigns a set of basic patterns different from those that Cocoa assigns. The difference primarily lies in the roles given to the controller and view objects of an application. In the original (Smalltalk) conception, MVC is made up of the Composite, Strategy, and Observer patterns.

● Composite—The view objects in an application are actually a composite of nested views that work together in a coordinated fashion (that is, the view hierarchy). These display components range from a window to compound views, such as a table view, to individual views, such as buttons. User input and display can take place at any level of the composite structure.
● Strategy—A controller object implements the strategy for one or more view objects. The view object confines itself to maintaining its visual aspects, and it delegates to the controller all decisions about the application-specific meaning of the interface behavior.
● Observer—A model object keeps interested objects in an application—usually view objects—advised of changes in its state.

The traditional way the Composite, Strategy, and Observer patterns work together is depicted by Figure 7-1: The user manipulates a view at some level of the composite structure and, as a result, an event is generated. A controller object receives the event and interprets it in an application-specific way—that is, it applies a strategy. This strategy can be to request (via message) a model object to change its state or to request a view object (at some level of the composite structure) to change its behavior or appearance. The model object, in turn, notifies all objects who have registered as observers when its state changes; if the observer is a view object, it may update its appearance accordingly.
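A rough sketch of the Observer leg of this flow, using Cocoa's notification center as one possible implementation (all names here are hypothetical, not code from the original guide):

// The model posts a notification when its state changes; it does not
// know or care which objects, if any, are observing it.
[[NSNotificationCenter defaultCenter]
        postNotificationName:@"ModelDidChangeNotification" object:self];

// An observer (in the traditional pattern, a view) registers its interest...
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(modelDidChange:)
                                             name:@"ModelDidChangeNotification"
                                           object:myModel];

// ...and, when notified, updates its appearance from the model's new state.
- (void)modelDidChange:(NSNotification *)notification {
    [self setNeedsDisplay:YES]; // redraw this view using the changed model
}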
Figure 7-1  Traditional version of MVC as a compound pattern (the user acts on the composite view; the controller, applying a strategy, updates the model or the view; the model notifies its observers, and an observing view gets the changed state and updates itself)

The Cocoa version of MVC as a compound pattern has some similarities to the traditional version, and in fact it is quite possible to construct a working application based on the diagram in Figure 7-1. By using the bindings technology, you can easily create a Cocoa MVC application whose views directly observe model objects to receive notifications of state changes. However, there is a theoretical problem with this design. View objects and model objects should be the most reusable objects in an application. View objects represent the "look and feel" of an operating system and the applications that system supports; consistency in appearance and behavior is essential, and that requires highly reusable objects. Model objects by definition encapsulate the data associated with a problem domain and perform operations on that data. Design-wise, it's best to keep model and view objects separate from each other, because that enhances their reusability.

In most Cocoa applications, notifications of state changes in model objects are communicated to view objects through controller objects. Figure 7-2 shows this different configuration, which appears much cleaner despite the involvement of two more basic design patterns.

Figure 7-2  Cocoa version of MVC as a compound design pattern (the controller, acting as mediator and strategy, stands between the composite view—which also incorporates the Command pattern—and the observable model; user actions flow from view to controller to model, and change notifications flow from model to controller to view)

The controller object in this compound design pattern incorporates the Mediator pattern as well as the Strategy pattern; it mediates the flow of data between model and view objects in both directions. Changes in model state are communicated to view objects through the controller objects of an application. In addition, view objects incorporate the Command pattern through their implementation of the target-action mechanism.

Note: The target-action mechanism, which enables view objects to communicate user input and choices, can be implemented in both coordinating and mediating controller objects. However, the design of the mechanism differs in each controller type. For coordinating controllers, you connect the view object to its target (the controller object) in Interface Builder and specify an action selector that must conform to a certain signature. Coordinating controllers, by virtue of being delegates of windows and the global application object, can also be in the responder chain. The bindings mechanism used by mediating controllers also connects view objects to targets and allows action signatures with a variable number of parameters of arbitrary types. Mediating controllers, however, aren't in the responder chain.

There are practical reasons as well as theoretical ones for the revised compound design pattern depicted in Figure 7-2, especially when it comes to the Mediator design pattern. Mediating controllers derive from concrete subclasses of NSController, and these classes, besides implementing the Mediator pattern, offer many features that applications should take advantage of, such as the management of selections and placeholder values. And if you opt not to use the bindings technology, your view object could use a mechanism such as the Cocoa notification center to receive notifications from a model object.
But this would require you to create a custom view subclass to add the knowledge of the notifications posted by the model object.

In a well-designed Cocoa MVC application, coordinating controller objects often own mediating controllers, which are archived in nib files. Figure 7-3 shows the relationships between the two types of controller objects.

Figure 7-3  Coordinating controller as the owner of a nib file (the coordinating controller owns the nib file; within it, data flows between the view and the model through a mediating controller)

Design Guidelines for MVC Applications

The following guidelines apply to Model-View-Controller considerations in the design of applications:

● Although you can use an instance of a custom subclass of NSObject as a mediating controller, there's no reason to go through all the work required to make it one. Use instead one of the ready-made NSController objects designed for the Cocoa bindings technology; that is, use an instance of NSObjectController, NSArrayController, NSUserDefaultsController, or NSTreeController—or a custom subclass of one of these concrete NSController subclasses. However, if the application is very simple and you feel more comfortable writing the glue code needed to implement mediating behavior using outlets and target-action, feel free to use an instance of a custom NSObject subclass as a mediating controller. In a custom NSObject subclass, you can also implement a mediating controller in the NSController sense, using key-value coding, key-value observing, and the editor protocols.

● Although you can combine MVC roles in an object, the best overall strategy is to keep the separation between roles. This separation enhances the reusability of objects and the extensibility of the program they're used in. If you are going to merge MVC roles in a class, pick a predominant role for that class and then (for maintenance purposes) use categories in the same implementation file to extend the class to play other roles.

● A goal of a well-designed MVC application should be to use as many objects as possible that are (theoretically, at least) reusable. In particular, view objects and model objects should be highly reusable. (The ready-made mediating controller objects, of course, are reusable.) Application-specific behavior is frequently concentrated as much as possible in controller objects.

● Although it is possible to have views directly observe models to detect changes in state, it is best not to do so. A view object should always go through a mediating controller object to learn about changes in a model object. The reason is twofold:
  ● If you use the bindings mechanism to have view objects directly observe the properties of model objects, you bypass all the advantages that NSController and its subclasses give your application: selection and placeholder management as well as the ability to commit and discard changes.
  ● If you don't use the bindings mechanism, you have to subclass an existing view class to add the ability to observe change notifications posted by a model object.

● Strive to limit code dependency in the classes of your application. The greater the dependency a class has on another class, the less reusable it is.
Specific recommendations vary by the MVC roles of the two classes involved:

  ● A view class shouldn't depend on a model class (although this may be unavoidable with some custom views).
  ● A view class shouldn't have to depend on a mediating controller class.
  ● A model class shouldn't depend on anything other than other model classes.
  ● A mediating controller class shouldn't depend on a model class (although, like views, this may be necessary if it's a custom controller class).
  ● A mediating controller class shouldn't depend on view classes or on coordinating controller classes.
  ● A coordinating controller class depends on classes of all MVC role types.

● If Cocoa offers an architecture that solves a programming problem, and this architecture assigns MVC roles to objects of specific types, use that architecture. It will be much easier to put your project together if you do. The document architecture, for example, includes an Xcode project template that configures an NSDocument object (per-nib model controller) as File's Owner.

Model-View-Controller in Cocoa (OS X)

The Model-View-Controller design pattern is fundamental to many Cocoa mechanisms and technologies. As a consequence, the importance of using MVC in object-oriented design goes beyond attaining greater reusability and extensibility for your own applications. If your application is to incorporate a Cocoa technology that is MVC-based, your application will work best if its design also follows the MVC pattern. It should be relatively painless to use these technologies if your application has a good MVC separation, but it will take more effort to use such a technology if you don't have a good separation.

Cocoa in OS X includes the following architectures, mechanisms, and technologies that are based on Model-View-Controller:

● Document architecture. In this architecture, a document-based application consists of a controller object for the entire application (NSDocumentController), a controller object for each document window (NSWindowController), and an object that combines controller and model roles for each document (NSDocument).

● Bindings. MVC is central to the bindings technology of Cocoa. The concrete subclasses of the abstract NSController provide ready-made controller objects that you can configure to establish bindings between view objects and properly designed model objects.

● Application scriptability. When designing an application to make it scriptable, it is essential not only that it follow the MVC design pattern but that your application's model objects are properly designed. Scripting commands that access application state and request application behavior should usually be sent to model objects or controller objects.

● Core Data. The Core Data framework manages graphs of model objects and ensures the persistence of those objects by saving them to (and retrieving them from) a persistent store. Core Data is tightly integrated with the Cocoa bindings technology. The MVC and object modeling design patterns are essential determinants of the Core Data architecture.

● Undo. In the undo architecture, model objects once again play a central role. The primitive methods of model objects (which are usually its accessor methods) are often where you implement undo and redo operations.
The view and controller objects of an action may also be involved in these operations; for example, you might have such objects give specific titles to the undo and redo menu items, or you might have them undo selections in a text view.

Object Modeling

This section defines terms and presents examples of object modeling and key-value coding that are specific to Cocoa bindings and the Core Data framework. Understanding terms such as key paths is fundamental to using these technologies effectively. This section is recommended reading if you are new to object-oriented design or key-value coding.

When using the Core Data framework, you need a way to describe your model objects that does not depend on views and controllers. In a good reusable design, views and controllers need a way to access model properties without imposing dependencies between them. The Core Data framework solves this problem by borrowing concepts and terms from database technology—specifically, the entity-relationship model.

Entity-relationship modeling is a way of representing objects typically used to describe a data source's data structures in a way that allows those data structures to be mapped to objects in an object-oriented system. Note that entity-relationship modeling isn't unique to Cocoa; it's a popular discipline with a set of rules and terms that are documented in database literature. It is a representation that facilitates storage and retrieval of objects in a data source. A data source can be a database, a file, a web service, or any other persistent store. Because it is not dependent on any type of data source, it can also be used to represent any kind of object and its relationship to other objects.

In the entity-relationship model, the objects that hold data are called entities, the components of an entity are called attributes, and the references to other data-bearing objects are called relationships. Together, attributes and relationships are known as properties. With these three simple components (entities, attributes, and relationships), you can model systems of any complexity.

Cocoa uses a modified version of the traditional rules of entity-relationship modeling, referred to in this document as object modeling. Object modeling is particularly useful in representing model objects in the Model-View-Controller (MVC) design pattern. This is not surprising because even in a simple Cocoa application, models are typically persistent—that is, they are stored in a data container such as a file.

Entities

Entities are model objects. In the MVC design pattern, model objects are the objects in your application that encapsulate specified data and provide methods that operate on that data. They are usually persistent but, more importantly, model objects are not dependent on how the data is displayed to the user.
In this model, Department models a department and Employee models an employee.

Figure 8-1 Employee management application object diagram (a Department entity with name and budget attributes, and an Employee entity with firstName, lastName, and salary attributes)

Attributes

Attributes represent structures that contain data. An attribute of an object may be a simple value, such as a scalar (for example, an integer, float, or double value), but can also be a C structure (for example, an array of char values or an NSPoint structure) or an instance of a primitive class (such as NSNumber, NSData, or NSColor in Cocoa). Immutable objects such as NSColor are usually considered attributes too. (Note that Core Data natively supports only a specific set of attribute types, as described in NSAttributeDescription Class Reference. You can, however, use additional attribute types, as described in "Non-Standard Persistent Attributes" in Core Data Programming Guide.)

In Cocoa, an attribute typically corresponds to a model's instance variable or accessor method. For example, Employee has firstName, lastName, and salary instance variables. In an employee management application, you might implement a table view to display a collection of Employee objects and some of their attributes, as shown in Figure 8-2. Each row in the table corresponds to an instance of Employee, and each column corresponds to an attribute of Employee.

Figure 8-2 Employees table view

Relationships

Not all properties of a model are attributes—some properties are relationships to other objects. Your application is typically modeled by multiple classes. At runtime, your object model is a collection of related objects that make up an object graph. These are typically the persistent objects that your users create and save to some data container or file before terminating the application (as in a document-based application). The relationships between these model objects can be traversed at runtime to access the properties of the related objects.

For example, in the employee management application, there are relationships between an employee and the department in which the employee works, and between an employee and the employee's manager. Because a manager is also an employee, the employee–manager relationship is an example of a reflexive relationship—a relationship from an entity to itself.

Relationships are inherently bidirectional, so conceptually at least there are also relationships between a department and the employees that work in the department, and between an employee and the employee's direct reports. Figure 8-3 (page 56) illustrates the relationships between the Department and Employee entities, and the Employee reflexive relationship. In this example, the Department entity's employees relationship is the inverse of the Employee entity's department relationship. It is possible, however, for relationships to be navigable in only one direction—for there to be no inverse relationship. If, for example, you are never interested in finding out from a department object what employees are associated with it, then you do not have to model that relationship. (Note that although this is true in the general case, Core Data may impose additional constraints over general Cocoa object modeling—not modeling the inverse should be considered an extremely advanced option.)

Figure 8-3 Relationships in the employee management application (Department and Employee connected by the department and employees relationships; Employee connected to itself by the manager and directReports relationships)
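To make the entities above concrete, here is a minimal sketch of how the Department and Employee entities might be declared as ordinary Cocoa model classes. The property names follow the figures; the declarations themselves are illustrative and are not code from this document.

@class Employee;

@interface Department : NSObject
@property (copy) NSString *name;            // attribute
@property (copy) NSNumber *budget;          // attribute
@property (retain) NSSet *employees;        // to-many relationship to Employee
@end

@interface Employee : NSObject
@property (copy) NSString *firstName;       // attributes
@property (copy) NSString *lastName;
@property (copy) NSNumber *salary;
@property (assign) Department *department;  // to-one relationship (nonretained to avoid a retain cycle)
@property (assign) Employee *manager;       // reflexive to-one relationship
@property (retain) NSSet *directReports;    // reflexive to-many relationship
@end

Keeping the back-pointing relationships (department, manager) nonretained is one conventional way to avoid retain cycles in a memory-managed object graph.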
Relationship Cardinality and Ownership

Every relationship has a cardinality; the cardinality tells you how many destination objects can (potentially) resolve the relationship. If the destination object is a single object, the relationship is called a to-one relationship. If there may be more than one object in the destination, the relationship is called a to-many relationship.

Relationships can be mandatory or optional. A mandatory relationship is one where the destination is required—for example, every employee must be associated with a department. An optional relationship is, as the name suggests, optional—for example, not every employee has direct reports. So the directReports relationship depicted in Figure 8-4 (page 56) is optional.

It is also possible to specify a range for the cardinality. An optional to-one relationship has the range 0-1. A to-many relationship can permit any number of destination objects, or a range that specifies a minimum and a maximum—for example, 0-15, which also describes an optional to-many relationship.

Figure 8-4 illustrates the cardinalities in the employee management application. The relationship between an Employee object and a Department object is a mandatory to-one relationship—an employee must belong to one, and only one, department. The relationship between a Department and its Employee objects is an optional to-many relationship (represented by a "*"). The relationship between an employee and a manager is an optional to-one relationship (denoted by the range 0..1)—top-ranking employees do not have managers.

Figure 8-4 Relationship cardinality (department: 1; employees: *; manager: 0..1; directReports: *)

Note also that destination objects of relationships are sometimes owned and sometimes shared.

Accessing Properties

In order for models, views, and controllers to be independent of each other, you need to be able to access properties in a way that is independent of a model's implementation. This is accomplished by using key-value pairs.

Keys

You specify properties of a model using a simple key, often a string. The corresponding view or controller uses the key to look up the corresponding attribute value. This design enforces the notion that the attribute itself doesn't necessarily contain the data—the value can be indirectly obtained or derived.

Key-value coding is used to perform this lookup; it is a mechanism for accessing an object's properties indirectly and, in certain contexts, automatically. Key-value coding works by using the names of the object's properties—typically its instance variables or accessor methods—as keys to access the values of those properties. For example, you might obtain the name of a Department object using a name key. If the Department object either has an instance variable or a method called name, a value for the key can be returned (if it has neither, an error results). Similarly, you might obtain Employee attributes using the firstName, lastName, and salary keys.
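For illustration, the following sketch shows keys in action with the hypothetical Employee class declared earlier; valueForKey: and setValue:forKey: are the standard NSKeyValueCoding methods.

// Assuming the illustrative Employee class sketched above.
Employee *employee = [[Employee alloc] init];
[employee setValue:@"Toni" forKey:@"firstName"];   // equivalent to employee.firstName = @"Toni"

NSString *first = [employee valueForKey:@"firstName"];  // @"Toni"
NSNumber *salary = [employee valueForKey:@"salary"];    // nil until a salary is set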
Values

All values for a particular attribute of a given entity are of the same data type. The data type of an attribute is specified in the declaration of its corresponding instance variable or in the return value of its accessor method. For example, the data type of the Department object's name attribute may be an NSString object in Objective-C.

Note that key-value coding returns only object values. If the return type or the data type for the specific accessor method or instance variable used to supply the value for a specified key is not an object, an NSNumber or NSValue object is created for that value and returned in its place. If the name attribute of Department is of type NSString, then, using key-value coding, the value returned for the name key of a Department object is an NSString object. If the budget attribute of Department is of type float, then, using key-value coding, the value returned for the budget key of a Department object is an NSNumber object. Similarly, when you set a value using key-value coding, if the data type required by the appropriate accessor or instance variable for the specified key is not an object, the value is extracted from the passed object using the appropriate -typeValue method.

The value of a to-one relationship is simply the destination object of that relationship. For example, the value of the department property of an Employee object is a Department object. The value of a to-many relationship is a collection object—a set or an array. (If you use Core Data, it is a set; otherwise, it is typically an array.) The collection contains the destination objects of that relationship. For example, the value of the employees property of a Department object is a collection containing Employee objects. Figure 8-5 shows an example object graph for the employee management application.

Figure 8-5 Object graph for the employee management application (a Department named "Marketing" whose employees collection contains two Employee objects)

Key Paths

A key path is a string of dot-separated keys that specifies a sequence of object properties to traverse. The property of the first key is determined by, and each subsequent key is evaluated relative to, the previous property. Key paths allow you to specify the properties of related objects in a way that is independent of the model implementation. Using key paths you can specify the path through an object graph, of whatever depth, to a specific attribute of a related object.

The key-value coding mechanism implements the lookup of a value given a key path much as it does for a simple key. For example, in the employee management application you might access the name of a Department via an Employee object using the department.name key path, where department is a relationship of Employee and name is an attribute of Department. Key paths are useful if you want to display an attribute of a destination entity. For example, the employee table view in Figure 8-6 is configured to display the name of the employee's department object, not the department object itself. Using Cocoa bindings, the value of the Department column is bound to department.name of the Employee objects in the displayed array.

Figure 8-6 Employees table view showing department name

Not every relationship in a key path necessarily has a value. For example, the manager relationship can be nil if the employee is the CEO. In this case, the key-value coding mechanism does not break—it simply stops traversing the path and returns an appropriate value, such as nil.
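Continuing with the illustrative model classes from the earlier sketches, the following shows a key-path lookup with valueForKeyPath:, including the nil-tolerant traversal just described.

NSString *departmentName = [employee valueForKeyPath:@"department.name"];

// A nil to-one relationship along the path does not raise an error;
// the traversal simply stops and nil is returned.
NSString *managersName = [employee valueForKeyPath:@"manager.firstName"]; // nil for the CEO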
Object Mutability

Cocoa objects are either mutable or immutable. You cannot change the encapsulated values of immutable objects; once such an object is created, the value it represents remains the same throughout the object's life. But you can change the encapsulated value of a mutable object at any time. The following sections explain the reasons for having mutable and immutable variants of an object type, describe the characteristics and side effects of object mutability, and recommend how best to handle objects when their mutability is an issue.

Why Mutable and Immutable Object Variants?

Objects by default are mutable. Most objects allow you to change their encapsulated data through setter accessor methods. For example, you can change the size, positioning, title, buffering behavior, and other characteristics of an NSWindow object. A well-designed model object—say, an object representing a customer record—requires setter methods to change its instance data.

The Foundation framework adds some nuance to this picture by introducing classes that have mutable and immutable variants. The mutable classes are typically subclasses of their immutable superclass and have "Mutable" embedded in the class name. These classes include the following:

NSMutableArray
NSMutableDictionary
NSMutableSet
NSMutableIndexSet
NSMutableCharacterSet
NSMutableData
NSMutableString
NSMutableAttributedString
NSMutableURLRequest

Note: Except for NSMutableParagraphStyle in the AppKit framework, the Foundation framework currently defines all explicitly named mutable classes. However, any Cocoa framework can potentially have its own mutable and immutable class variants.

Although these classes have atypical names, they are closer to the mutable norm than their immutable counterparts. Why this complexity? What purpose does having an immutable variant of a mutable object serve?

Consider a scenario where all objects are capable of being mutated. In your application you invoke a method and are handed back a reference to an object representing a string. You use this string in your user interface to identify a particular piece of data. Now another subsystem in your application gets its own reference to that same string and decides to mutate it. Suddenly your label has changed out from under you. Things can become even more dire if, for instance, you get a reference to an array that you use to populate a table view. The user selects a row corresponding to an object in the array that has been removed by some code elsewhere in the program, and problems ensue. Immutability is a guarantee that an object won't unexpectedly change in value while you're using it.
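The string scenario above is easy to reproduce. The following sketch, with invented values, shows a supposedly stable string changing out from under the code that holds it; it is illustrative, not code from this document.

// One subsystem stores the reference under the immutable type, assuming stability.
NSMutableString *title = [NSMutableString stringWithString:@"Quarterly Report"];
NSString *windowTitle = title;

// Another subsystem mutates the same object...
[title setString:@"DRAFT"];

// ...and the first subsystem's value has silently changed.
NSLog(@"%@", windowTitle);                        // logs "DRAFT", not "Quarterly Report"

// Taking an immutable snapshot with copy avoids the problem.
NSString *safeTitle = [[title copy] autorelease];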
Objects that are good candidates for immutability are ones that encapsulate collections of discrete values or contain values that are stored in buffers (which are themselves kinds of collections, either of characters or bytes). But not all such value objects necessarily benefit from having mutable versions. Objects that contain a single simple value, such as instances of NSNumber or NSDate, are not good candidates for mutability. When the represented value changes in these cases, it makes more sense to replace the old instance with a new instance.

Performance is also a reason for immutable versions of objects representing things such as strings and dictionaries. Mutable objects for basic entities such as strings and dictionaries bring some overhead with them. Because they must dynamically manage a changeable backing store—allocating and deallocating chunks of memory as needed—mutable objects can be less efficient than their immutable counterparts.

Although in theory immutability guarantees that an object's value is stable, in practice this guarantee isn't always assured. A method may choose to hand out a mutable object under the return type of its immutable variant; later, it may decide to mutate the object, possibly violating assumptions and choices the recipient has made based on the earlier value. The mutability of an object itself may change as it undergoes various transformations. For example, serializing a property list (using the NSPropertyListSerialization class) does not preserve the mutability aspect of objects, only their general kind—a dictionary, an array, and so on. Thus, when you deserialize this property list, the resulting objects might not be of the same class as the original objects. For instance, what was once an NSMutableDictionary object might now be an NSDictionary object.

Programming with Mutable Objects

When the mutability of objects is an issue, it's best to adopt some defensive programming practices. Here are a few general rules or guidelines:

● Use a mutable variant of an object when you need to modify its contents frequently and incrementally after it has been created.
● Sometimes it's preferable to replace one immutable object with another; for example, most instance variables that hold string values should be assigned immutable NSString objects that are replaced with setter methods.
● Rely on the return type for indications of mutability.
● If you have any doubts about whether an object is, or should be, mutable, go with immutable.

This section explores the gray areas in these guidelines, discussing typical choices you have to make when programming with mutable objects. It also gives an overview of methods in the Foundation framework for creating mutable objects and for converting between mutable and immutable object variants.

Creating and Converting Mutable Objects

You can create a mutable object through the standard nested alloc-init message—for example:

NSMutableDictionary *mutDict = [[NSMutableDictionary alloc] init];

However, many mutable classes offer initializers and factory methods that let you specify the initial or probable capacity of the object, such as the arrayWithCapacity: class method of NSMutableArray:

NSMutableArray *mutArray = [NSMutableArray arrayWithCapacity:[timeZones count]];

The capacity hint enables more efficient storage of the mutable object's data. (Because the convention for class factory methods is to return autoreleased instances, be sure to retain the object if you wish to keep it viable in your code.)

You can also create a mutable object by making a mutable copy of an existing object of that general type. To do so, invoke the mutableCopy method that each immutable superclass of a Foundation mutable class implements:

NSMutableSet *mutSet = [aSet mutableCopy];

In the other direction, you can send copy to a mutable object to make an immutable copy of the object. Many Foundation classes with immutable and mutable variants also include methods for converting between the variants, including:

● typeWithType:—for example, arrayWithArray:
● setType:—for example, setString: (mutable classes only)
● initWithType:copyItems:—for example, initWithDictionary:copyItems:
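Putting these conversions together, here is a brief sketch; the array contents are invented. By convention, copy sent to a Foundation collection returns an immutable snapshot, and mutableCopy returns a new mutable object.

NSArray *immutableArray = [NSArray arrayWithObjects:@"a", @"b", nil];

// Immutable -> mutable.
NSMutableArray *workingCopy = [immutableArray mutableCopy];
[workingCopy addObject:@"c"];

// Mutable -> immutable snapshot.
NSArray *frozen = [workingCopy copy];

// Convenience conversion between variants.
NSMutableArray *another = [NSMutableArray arrayWithArray:frozen];

// In a memory-managed (non-ARC) program, objects returned by copy and
// mutableCopy must eventually be released.
[workingCopy release];
[frozen release];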
Storing and Returning Mutable Instance Variables

In Cocoa development you often have to decide whether to make an instance variable mutable or immutable. For an instance variable whose value can change, such as a dictionary or string, when is it appropriate to make the object mutable? And when is it better to make the object immutable and replace it with another object when its represented value changes?

Generally, when you have an object whose contents change wholesale, it's better to use an immutable object. Strings (NSString) and data objects (NSData) usually fall into this category. If an object is likely to change incrementally, it is a reasonable approach to make it mutable. Collections such as arrays and dictionaries fall into this category. However, the frequency of changes and the size of the collection should be factors in this decision. For example, if you have a small array that seldom changes, it's better to make it immutable.

There are a couple of other considerations when deciding on the mutability of a collection held as an instance variable:

● If you have a mutable collection that is frequently changed and that you frequently hand out to clients (that is, you return it directly in a getter accessor method), you run the risk of mutating something that your clients might have a reference to. If this risk is probable, the instance variable should be immutable.
● If the value of the instance variable frequently changes but you rarely return it to clients in getter methods, you can make the instance variable mutable but return an immutable copy of it in your accessor method; in memory-managed programs, this object would be autoreleased (Listing 9-1).

Listing 9-1 Returning an immutable copy of a mutable instance variable

@interface MyClass : NSObject {
    // ...
    NSMutableSet *widgets;
}
// ...
@end

@implementation MyClass
- (NSSet *)widgets {
    return (NSSet *)[[widgets copy] autorelease];
}
// ...
@end

One sophisticated approach for handling mutable collections that are returned to clients is to maintain a flag that records whether the object is currently mutable or immutable. If there is a change, make the object mutable and apply the change. When handing out the collection, make the object immutable (if necessary) before returning it, as in the sketch below.
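Here is a minimal sketch of that flag technique, assuming manual memory management; the class and method names are invented for illustration. Unlike Listing 9-1, this avoids copying the collection on every read when nothing has changed.

@interface WidgetStore : NSObject {
    NSSet *widgets;          // may actually hold an NSMutableSet
    BOOL widgetsAreMutable;
}
- (void)addWidget:(id)widget;
- (NSSet *)widgets;
@end

@implementation WidgetStore
- (id)init {
    self = [super init];
    if (self) {
        widgets = [[NSSet alloc] init];
        widgetsAreMutable = NO;
    }
    return self;
}

- (void)addWidget:(id)widget {
    // On change, switch to the mutable variant if necessary, then mutate.
    if (!widgetsAreMutable) {
        NSMutableSet *mutable = [widgets mutableCopy];
        [widgets release];
        widgets = mutable;
        widgetsAreMutable = YES;
    }
    [(NSMutableSet *)widgets addObject:widget];
}

- (NSSet *)widgets {
    // Before handing the collection out, freeze it if necessary.
    if (widgetsAreMutable) {
        NSSet *immutable = [widgets copy];
        [widgets release];
        widgets = immutable;
        widgetsAreMutable = NO;
    }
    return [[widgets retain] autorelease];
}
@end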
Receiving Mutable Objects

The invoker of a method is interested in the mutability of a returned object for two reasons:

● It wants to know if it can change the object's value.
● It wants to know if the object's value will change unexpectedly while it has a reference to it.

Use Return Type, Not Introspection

To determine whether it can change a received object, the receiver of a message must rely on the formal type of the return value. If it receives, for example, an array object typed as immutable, it should not attempt to mutate it. It is not an acceptable programming practice to determine whether an object is mutable based on its class membership—for example:

if ( [anArray isKindOfClass:[NSMutableArray class]] ) {
    // add, remove objects from anArray
}

For reasons related to implementation, what isKindOfClass: returns in this case may not be accurate. But for reasons other than this, you should not make assumptions about whether an object is mutable based on class membership. Your decision should be guided solely by what the signature of the method vending the object says about its mutability. If you are not sure whether an object is mutable or immutable, assume it's immutable.

A couple of examples might help clarify why this guideline is important:

● You read a property list from a file. When the Foundation framework processes the list, it notices that various subsets of the property list are identical, so it creates a set of objects that it shares among all those subsets. Afterward, you look at the created property list objects and decide to mutate one subset. Suddenly, and without being aware of it, you've changed the tree in multiple places.
● You ask NSView for its subviews (with the subviews method) and it returns an object that is declared to be an NSArray but which could be an NSMutableArray internally. Then you pass that array to some other code that, through introspection, determines it to be mutable and changes it. By changing this array, the code is mutating internal data structures of the NSView class.

So don't make an assumption about object mutability based on what introspection tells you about an object. Treat objects as mutable or not based on what you are handed at the API boundaries (that is, based on the return type). If you need to unambiguously mark an object as mutable or immutable when you pass it to clients, pass that information as a flag along with the object.

Make Snapshots of Received Objects

If you want to ensure that a supposedly immutable object received from a method does not mutate without your knowing about it, you can make snapshots of the object by copying it locally. Then occasionally compare the stored version of the object with the most recent version. If the object has mutated, you can adjust anything in your program that is dependent on the previous version of the object. Listing 9-2 shows a possible implementation of this technique.

Listing 9-2 Making a snapshot of a potentially mutable object

static NSArray *snapshot = nil;

- (void)myFunction {
    NSArray *thingArray = [otherObj things];
    if (snapshot) {
        if ( ![thingArray isEqualToArray:snapshot] ) {
            [self updateStateWith:thingArray];
        }
        [snapshot release]; // release the previous snapshot before replacing it
    }
    snapshot = [thingArray copy];
}

A problem with making snapshots of objects for later comparison is that it is expensive. You're required to make multiple copies of the same object. A more efficient alternative is to use key-value observing. See Key-Value Observing Programming Guide for a description of this protocol.

Mutable Objects in Collections

Storing mutable objects in collection objects can cause problems. Certain collections can become invalid or even corrupt if objects they contain mutate, because, by mutating, these objects can affect the way they are placed in the collection. First, the properties of objects that are keys in hashing collections such as NSDictionary objects or NSSet objects will, if changed, corrupt the collection if the changed properties affect the results of the object's hash or isEqual: methods. (If the hash method of the objects in the collection does not depend on their internal state, corruption is less likely.) Second, if an object in an ordered collection such as a sorted array has its properties changed, this might affect how the object compares to other objects in the array, thus rendering the ordering invalid.
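The first hazard can be demonstrated directly. This sketch, with invented values, mutates a set member so that its hash no longer matches the bucket the set filed it under.

NSMutableString *member = [NSMutableString stringWithString:@"alpha"];
NSMutableSet *set = [NSMutableSet set];
[set addObject:member];

BOOL foundBefore = [set containsObject:member]; // YES

// Mutating the member changes its hash; the set still stores it under the
// old hash, so lookups typically fail from now on.
[member setString:@"beta"];
BOOL foundAfter = [set containsObject:member];  // typically NO—the set is corrupted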
Outlets

An outlet is a property of an object that references another object. The reference is archived through Interface Builder. The connections between the containing object and its outlets are reestablished every time the containing object is unarchived from its nib file. The containing object holds an outlet declared as a property with the type qualifier IBOutlet and the weak option. For example:

@interface AppController : NSObject
@property (weak) IBOutlet NSArray *keywords;
@end

Because it is a property, an outlet becomes part of an object's encapsulated data and is backed by an instance variable. But an outlet is more than a simple property. The connection between an object and its outlets is archived in a nib file; when the nib file is loaded, each connection is unarchived and reestablished, and is thus always available whenever it becomes necessary to send messages to the other object. The type qualifier IBOutlet is a tag applied to a property declaration so that the Interface Builder application can recognize the property as an outlet and synchronize the display and connection of it with Xcode. An outlet is declared as a weak reference (weak) to prevent strong reference cycles.

You create and connect an outlet in the Interface Builder feature of Xcode. The property declaration for the outlet must be tagged with the IBOutlet qualifier. An application typically sets outlet connections between its custom controller objects and objects on the user interface, but they can be made between any objects that can be represented as instances in Interface Builder, even between two custom objects.

As with any item of object state, you should be able to justify its inclusion in a class; the more outlets an object has, the more memory it takes up. If there are other ways to obtain a reference to an object, such as finding it through its index position in a matrix, through its inclusion as a function parameter, or through use of a tag (an assigned numeric identifier), you should do that instead.

Outlets are a form of object composition, which is a dynamic pattern that requires an object to somehow acquire references to its constituent objects so that it can send messages to them. It typically holds these other objects as properties backed by instance variables. These variables must be initialized with the appropriate references at some point during the execution of the program.

Receptionist Pattern

The Receptionist design pattern addresses the general problem of redirecting an event occurring in one execution context of an application to another execution context for handling. It is a hybrid pattern. Although it doesn't appear in the "Gang of Four" book, it combines elements of the Command, Memento, and Proxy design patterns described in that book. It is also a variant of the Trampoline pattern (which also doesn't appear in the book); in this pattern, an event initially is received by a trampoline object, so called because it immediately bounces, or redirects, the event to a target object for handling.

The Receptionist Design Pattern in Practice

A KVO notification invokes the observeValueForKeyPath:ofObject:change:context: method implemented by an observer.
If the change to the property occurs on a secondary thread, the observeValueForKeyPath:ofObject:change:context: code executes on that same thread. The central object in this pattern, the receptionist, acts as a thread intermediary. As Figure 11-1 illustrates, a receptionist object is assigned as the observer of a model object's property. The receptionist implements observeValueForKeyPath:ofObject:change:context: to redirect the notification received on a secondary thread to another execution context—the main operation queue, in this case. When the property changes, the receptionist receives a KVO notification. The receptionist immediately adds a block operation to the main operation queue; the block contains code—specified by the client—that updates the user interface appropriately.

Figure 11-1 Bouncing KVO updates to the main operation queue (a model object's change on a secondary thread is redirected by the receptionist to a task on the main operation queue)

You define a receptionist class so that it has the elements it needs to add itself as an observer of a property and then convert a KVO notification into an update task. Thus it must know what object it's observing, the property of the object that it's observing, what update task to execute, and what queue to execute it on. Listing 11-1 shows the initial declaration of the RCReceptionist class and its instance variables.

Listing 11-1 Declaring the receptionist class

@interface RCReceptionist : NSObject {
    id observedObject;
    NSString *observedKeyPath;
    RCTaskBlock task;
    NSOperationQueue *queue;
}
// ...
@end

The RCTaskBlock instance variable is a block object of the following declared type:

typedef void (^RCTaskBlock)(NSString *keyPath, id object, NSDictionary *change);

These parameters are similar to those of the observeValueForKeyPath:ofObject:change:context: method. Next, the receptionist class declares a single class factory method in which an RCTaskBlock object is a parameter:

+ (id)receptionistForKeyPath:(NSString *)path
                      object:(id)obj
                       queue:(NSOperationQueue *)queue
                        task:(RCTaskBlock)task;

It implements this method to assign the passed-in values to the instance variables of the created receptionist object and to add that object as an observer of the model object's property, as shown in Listing 11-2.

Listing 11-2 The class factory method for creating a receptionist object

+ (id)receptionistForKeyPath:(NSString *)path
                      object:(id)obj
                       queue:(NSOperationQueue *)queue
                        task:(RCTaskBlock)task {
    RCReceptionist *receptionist = [RCReceptionist new];
    receptionist->task = [task copy];
    receptionist->observedKeyPath = [path copy];
    receptionist->observedObject = [obj retain];
    receptionist->queue = [queue retain];
    [obj addObserver:receptionist
          forKeyPath:path
             options:NSKeyValueObservingOptionNew | NSKeyValueObservingOptionOld
             context:0];
    return [receptionist autorelease];
}

Note that the code copies the block object instead of retaining it. Because the block was probably created on the stack, it must be copied to the heap so it exists in memory when the KVO notification is delivered. Finally, the receptionist class implements the observeValueForKeyPath:ofObject:change:context: method. The implementation (see Listing 11-3) is simple.
Listing 11-3 Handling the KVO notification

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    [queue addOperationWithBlock:^{
        task(keyPath, object, change);
    }];
}

This code simply enqueues the task onto the given operation queue, passing the task block the key path for the changed property, the observed object, and the dictionary containing the new value. The task is encapsulated in an NSBlockOperation object that executes it on the queue.

The client object supplies the block code that updates the user interface when it creates a receptionist object, as shown in Listing 11-4. Note that when it creates the receptionist object, the client passes in the operation queue on which the block is to be executed—in this case, the main operation queue.

Listing 11-4 Creating a receptionist object

RCReceptionist *receptionist = [RCReceptionist
    receptionistForKeyPath:@"value"
                    object:model
                     queue:mainQueue
                      task:^(NSString *keyPath, id object, NSDictionary *change) {
    NSView *viewForModel = [modelToViewMap objectForKey:model];
    NSColor *newColor = [change objectForKey:NSKeyValueChangeNewKey];
    [[[viewForModel subviews] objectAtIndex:0] setFillColor:newColor];
}];

When to Use the Receptionist Pattern

You can adopt the Receptionist design pattern whenever you need to bounce work to another execution context for handling. When you observe a notification, or implement a block handler, or respond to an action message and you want to ensure that your code executes in the appropriate execution context, you can implement the Receptionist pattern to redirect the work that must be done to that execution context. With the Receptionist pattern, you might even perform some filtering or coalescing of the incoming data before you bounce a task off to process the data. For example, you could collect data into batches, and then at intervals dispatch those batches elsewhere for processing.

One common situation where the Receptionist pattern is useful is key-value observing. In key-value observing, changes to the value of a model object's property are communicated to observers via KVO notifications. However, changes to a model object can occur on a background thread. This results in a thread mismatch, because changes to a model object's state typically result in updates to the user interface, and these must occur on the main thread. In this case, you want to redirect the KVO notifications to the main thread, where the updates to an application's user interface can occur.
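As a rough illustration of the coalescing idea above, a receptionist task might accumulate incoming values and let a timer drain them periodically. This is an assumption-laden variation on the sample, not part of it: mainQueue is taken to be [NSOperationQueue mainQueue], and controller and flushPendingChanges: are invented names.

NSMutableArray *pendingChanges = [[NSMutableArray alloc] init];

RCReceptionist *batcher = [RCReceptionist
    receptionistForKeyPath:@"value"
                    object:model
                     queue:mainQueue
                      task:^(NSString *keyPath, id object, NSDictionary *change) {
    // Runs on the main queue: collect the new value instead of acting on it now.
    [pendingChanges addObject:[change objectForKey:NSKeyValueChangeNewKey]];
}];

// A repeating main-thread timer processes whatever has accumulated.
[NSTimer scheduledTimerWithTimeInterval:1.0
                                 target:controller
                               selector:@selector(flushPendingChanges:)
                               userInfo:pendingChanges
                                repeats:YES];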
Target-Action

Although delegation, bindings, and notification are useful for handling certain forms of communication between objects in a program, they are not particularly suitable for the most visible sort of communication. A typical application's user interface consists of a number of graphical objects, and perhaps the most common of these objects are controls. A control is a graphical analog of a real-world or logical device (button, slider, checkbox, and so on); as with a real-world control, such as a radio tuner, you use it to convey your intent to some system of which it is a part—that is, an application.

The role of a control on a user interface is simple: it interprets the intent of the user and instructs some other object to carry out that request. When a user acts on the control by, say, clicking it or pressing the Return key, the hardware device generates a raw event. The control accepts the event (as appropriately packaged for Cocoa) and translates it into an instruction that is specific to the application. However, events by themselves don't give much information about the user's intent; they merely tell you that the user clicked a mouse button or pressed a key. So some mechanism must be called upon to provide the translation between event and instruction. This mechanism is called target-action.

Cocoa uses the target-action mechanism for communication between a control and another object. This mechanism allows the control and, in OS X, its cell or cells to encapsulate the information necessary to send an application-specific instruction to the appropriate object. The receiving object—typically an instance of a custom class—is called the target. The action is the message that the control sends to the target. The object that is interested in the user event—the target—is the one that imparts significance to it, and this significance is usually reflected in the name it gives to the action.

The Target

A target is a receiver of an action message. A control or, more frequently, its cell holds the target of its action message as an outlet (see "Outlets" (page 67)). The target usually is an instance of one of your custom classes, although it can be any Cocoa object whose class implements the appropriate action method.

You can also set a cell's or control's target outlet to nil and let the target object be determined at runtime. When the target is nil, the application object (NSApplication or UIApplication) searches for an appropriate receiver in a prescribed order:

1. It begins with the first responder in the key window and follows nextResponder links up the responder chain to the window object's (NSWindow or UIWindow) content view.
2. It tries the window object and then the window object's delegate.
3. If the main window is different from the key window, it then starts over with the first responder in the main window and works its way up the main window's responder chain to the window object and its delegate.
4. Next, the application object tries to respond. If it can't respond, it tries its delegate. The application object and its delegate are the receivers of last resort.

Note: A key window in OS X responds to key presses for an application and is the receiver of messages from menus and dialogs. An application's main window is the principal focus of user actions and often has key status as well.

Control objects do not (and should not) retain their targets. However, clients of controls sending action messages (applications, usually) are responsible for ensuring that their targets are available to receive action messages. To do this, they may have to retain their targets in memory-managed environments. This precaution applies equally to delegates and data sources.
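For instance, here is a brief sketch (not from this document) of giving a menu item a nil target so the action is resolved along the responder chain at dispatch time.

// With a nil target, the copy: action travels the responder chain
// until some object implements it.
NSMenuItem *copyItem = [[NSMenuItem alloc] initWithTitle:@"Copy"
                                                  action:@selector(copy:)
                                           keyEquivalent:@"c"];
[copyItem setTarget:nil]; // nil target: let the application find a receiver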
The Action

An action is the message a control sends to the target or, from the perspective of the target, the method the target implements to respond to the action message. A control or—as is frequently the case in AppKit—a control's cell stores an action as an instance variable of type SEL. SEL is an Objective-C data type used to specify the signature of a message. An action message must have a simple, distinct signature. The method it invokes returns nothing and usually has a sole parameter of type id. This parameter, by convention, is named sender. Here is an example from the NSResponder class, which defines a number of action methods:

- (void)capitalizeWord:(id)sender;

Action methods declared by some Cocoa classes can also have the equivalent signature:

- (IBAction)deleteRecord:(id)sender;

In this case, IBAction does not designate a data type for a return value; no value is returned. IBAction is a type qualifier that Interface Builder notices during application development to synchronize actions added programmatically with its internal list of action methods defined for a project.

iOS Note: In UIKit, action selectors can also take two other forms. See "Target-Action in UIKit" (page 78) for details.

The sender parameter usually identifies the control sending the action message (although it can be another object substituted by the actual sender). The idea behind this is similar to a return address on a postcard. The target can query the sender for more information if it needs to. If the actual sending object substitutes another object as sender, you should treat that object in the same way. For example, say you have a text field, and when the user enters text, the action method nameEntered: is invoked in the target:

- (void)nameEntered:(id)sender {
    NSString *name = [sender stringValue];
    if (![name isEqualToString:@""]) {
        NSMutableArray *names = [self nameList];
        [names addObject:name];
        [sender setStringValue:@""];
    }
}

Here the responding method extracts the contents of the text field, adds the string to an array cached as an instance variable, and clears the field. Other possible queries to the sender would be asking an NSMatrix object for its selected row ([sender selectedRow]), asking an NSButton object for its state ([sender state]), and asking any cell associated with a control for its tag ([[sender cell] tag]), a tag being a numeric identifier.

Target-Action in the AppKit Framework

The AppKit framework uses specific architectures and conventions in implementing target-action.

Controls, Cells, and Menu Items

Most controls in AppKit are objects that inherit from the NSControl class. Although a control has the initial responsibility for sending an action message to its target, it rarely carries the information needed to send the message. For this, it usually relies on its cell or cells.

A control almost always has one or more cells—objects that inherit from NSCell—associated with it. Why is there this association? A control is a relatively "heavy" object because it inherits all the combined instance variables of its ancestors, which include the NSView and NSResponder classes. Because controls are expensive, cells are used to subdivide the screen real estate of a control into various functional areas. Cells are lightweight objects that can be thought of as overlaying all or part of the control. But it's not only a division of area, it's a division of labor. Cells do some of the drawing that controls would otherwise have to do, and cells hold some of the data that controls would otherwise have to carry. Two items of this data are the instance variables for target and action. Figure 12-1 (page 76) depicts the control-cell architecture.
Being abstract classes, NSControl and NSCell both handle the setting of the target and action instance variables only incompletely. By default, NSControl simply sets the information in its associated cell, if one exists. (NSControl itself supports only a one-to-one mapping between itself and a cell; subclasses of NSControl such as NSMatrix support multiple cells.) In its default implementation, NSCell simply raises an exception. You must go one step further down the inheritance chain to find the class that really implements the setting of target and action: NSActionCell.

Objects derived from NSActionCell provide target and action values to their controls so the controls can compose and send an action message to the proper receiver. An NSActionCell object handles mouse (cursor) tracking by highlighting its area and assisting its control in sending action messages to the specified target. In most cases, the responsibility for an NSControl object's appearance and behavior is completely given over to a corresponding NSActionCell object. (NSMatrix, and its subclass NSForm, are subclasses of NSControl that don't follow this rule.)

Figure 12-1 How the target-action mechanism works in the control-cell architecture (an NSMatrix control whose NSButtonCell cells each store their own target and action—for example, washIt: sent to washerObject and dryIt: sent to dryerObject)

When users choose an item from a menu, an action is sent to a target. Yet menus (NSMenu objects) and their items (NSMenuItem objects) are completely separate, in an architectural sense, from controls and cells. The NSMenuItem class implements the target-action mechanism for its own instances; an NSMenuItem object has both target and action instance variables (and related accessor methods) and sends the action message to the target when a user chooses it.

Note: See Control and Cell Programming Topics for Cocoa and Application Menu and Pop-up List Programming Topics for more information about the control-cell architecture.

Setting the Target and Action

You can set the targets and actions of cells and controls programmatically or by using Interface Builder. For most developers and most situations, Interface Builder is the preferred approach. When you use it to make these connections, Interface Builder provides visual confirmation, allows you to lock the connections, and archives the connections to a nib file. The procedure is simple:

1. Declare an action method in the header file of your custom class that has the IBAction qualifier.
2. In Interface Builder, connect the control sending the message to the action method of the target.

If the action is handled by a superclass of your custom class or by an off-the-shelf AppKit or UIKit class, you can make the connection without declaring any action method. Of course, if you declare an action method yourself, you must be sure to implement it.

To set the action and the target programmatically, use the following methods to send messages to a control or cell object:

- (void)setTarget:(id)anObject;
- (void)setAction:(SEL)aSelector;

The following example shows how you might use these methods:

[aCell setTarget:myController];
[aControl setAction:@selector(deleteRecord:)];
[aMenuItem setAction:@selector(showGuides:)];
Programmatically setting the target and action does have its advantages, and in certain situations it is the only possible approach. For example, you might want the target or action to vary according to some runtime condition, such as whether a network connection exists or whether an inspector window has been loaded. Another example is when you are dynamically populating the items of a pop-up menu, and you want each pop-up item to have its own action.

Actions Defined by AppKit

The AppKit framework not only includes many NSActionCell-based controls for sending action messages, it defines action methods in many of its classes. Some of these actions are connected to default targets when you create a Cocoa application project. For example, the Quit command in the application menu is connected to the terminate: method in the global application object (NSApp).

The NSResponder class also defines many default action messages (also known as standard commands) for common operations on text. This allows the Cocoa text system to send these action messages up an application's responder chain—a hierarchical sequence of event-handling objects—where they can be handled by the first NSView, NSWindow, or NSApplication object that implements the corresponding method.

Target-Action in UIKit

The UIKit framework also declares and implements a suite of control classes; the control classes in this framework inherit from the UIControl class, which defines most of the target-action mechanism for iOS. However, there are some fundamental differences in how the AppKit and UIKit frameworks implement target-action. One of these differences is that UIKit does not have any true cell classes. Controls in UIKit do not rely upon their cells for target and action information.

A larger difference in how the two frameworks implement target-action lies in the nature of the event model. In the AppKit framework, the user typically uses a mouse and keyboard to register events for handling by the system. These events—such as clicking a button—are limited and discrete. Consequently, a control object in AppKit usually recognizes a single physical event as the trigger for the action it sends to its target. (In the case of buttons, this is a mouse-up event.) In iOS, the user's fingers are what originate events instead of mouse clicks, mouse drags, or physical keystrokes. There can be more than one finger touching an object on the screen at one time, and these touches can even be going in different directions.

To account for this multitouch event model, UIKit declares a set of control-event constants in UIControl.h that specify various physical gestures that users can make on controls, such as lifting a finger from a control, dragging a finger into a control, and touching down within a text field. You can configure a control object so that it responds to one or more of these touch events by sending an action message to a target. Many of the control classes in UIKit are implemented to generate certain control events; for example, instances of the UISlider class generate a UIControlEventValueChanged control event, which you can use to send an action message to a target object.

You set up a control so that it sends an action message to a target object by associating both target and action with one or more control events. To do this, send addTarget:action:forControlEvents: to the control for each target-action pair you want to specify. When the user touches the control in a designated fashion, the control forwards the action message to the global UIApplication object in a sendAction:to:from:forEvent: message. As in AppKit, the global application object is the centralized dispatch point for action messages. If the control specifies a nil target for an action message, the application queries objects in the responder chain until it finds one that is willing to handle the action message—that is, one implementing a method corresponding to the action selector.

In contrast to the AppKit framework, where an action method may have only one or perhaps two valid signatures, the UIKit framework allows three different forms of action selector:

- (void)action
- (void)action:(id)sender
- (void)action:(id)sender forEvent:(UIEvent *)event

To learn more about the target-action mechanism in UIKit, read UIControl Class Reference.
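For example, here is a brief sketch of wiring a slider to a value-changed action; myController and sliderValueChanged: are invented names, not identifiers from this document.

UISlider *slider = [[UISlider alloc] initWithFrame:CGRectMake(20.0, 20.0, 280.0, 44.0)];
[slider addTarget:myController
           action:@selector(sliderValueChanged:)
 forControlEvents:UIControlEventValueChanged];

// In the controller, the one-parameter selector form receives the slider as sender.
- (void)sliderValueChanged:(id)sender {
    float newValue = [(UISlider *)sender value];
    // Update the model or interface with newValue...
}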
Toll-Free Bridging

There are a number of data types in the Core Foundation framework and the Foundation framework that can be used interchangeably. This capability, called toll-free bridging, means that you can use the same data type as the parameter to a Core Foundation function call or as the receiver of an Objective-C message. For example, NSLocale (see NSLocale Class Reference) is interchangeable with its Core Foundation counterpart, CFLocale (see CFLocale Reference). Therefore, in a method where you see an NSLocale * parameter, you can pass a CFLocaleRef, and in a function where you see a CFLocaleRef parameter, you can pass an NSLocale instance. You cast one type to the other to suppress compiler warnings, as illustrated in the following example.

NSLocale *gbNSLocale = [[NSLocale alloc] initWithLocaleIdentifier:@"en_GB"];
CFLocaleRef gbCFLocale = (CFLocaleRef)gbNSLocale;
CFStringRef cfIdentifier = CFLocaleGetIdentifier(gbCFLocale);
NSLog(@"cfIdentifier: %@", (NSString *)cfIdentifier);
// logs: "cfIdentifier: en_GB"
CFRelease((CFLocaleRef)gbNSLocale);

CFLocaleRef myCFLocale = CFLocaleCopyCurrent();
NSLocale *myNSLocale = (NSLocale *)myCFLocale;
[myNSLocale autorelease];
NSString *nsIdentifier = [myNSLocale localeIdentifier];
CFShow((CFStringRef)[@"nsIdentifier: " stringByAppendingString:nsIdentifier]);
// logs identifier for current locale

Note from the example that the memory management functions and methods are also interchangeable—you can use CFRelease with a Cocoa object, and release and autorelease with a Core Foundation object.

Note: When using garbage collection, there are important differences in how memory management works for Cocoa objects and Core Foundation objects. See "Using Core Foundation with Garbage Collection" for details.

Toll-free bridging has been available since OS X v10.0. Table 13-1 provides a list of the data types that are interchangeable between Core Foundation and Foundation. For each pair, the table also lists the version of OS X in which toll-free bridging between them became available.
Table 13-1 Data types that can be used interchangeably between Core Foundation and Foundation

Core Foundation type            Foundation class            Availability
CFArrayRef                      NSArray                     OS X v10.0
CFAttributedStringRef           NSAttributedString          OS X v10.4
CFCalendarRef                   NSCalendar                  OS X v10.4
CFCharacterSetRef               NSCharacterSet              OS X v10.0
CFDataRef                       NSData                      OS X v10.0
CFDateRef                       NSDate                      OS X v10.0
CFDictionaryRef                 NSDictionary                OS X v10.0
CFErrorRef                      NSError                     OS X v10.5
CFLocaleRef                     NSLocale                    OS X v10.4
CFMutableArrayRef               NSMutableArray              OS X v10.0
CFMutableAttributedStringRef    NSMutableAttributedString   OS X v10.4
CFMutableCharacterSetRef        NSMutableCharacterSet       OS X v10.0
CFMutableDataRef                NSMutableData               OS X v10.0
CFMutableDictionaryRef          NSMutableDictionary         OS X v10.0
CFMutableSetRef                 NSMutableSet                OS X v10.0
CFMutableStringRef              NSMutableString             OS X v10.0
CFNumberRef                     NSNumber                    OS X v10.0
CFReadStreamRef                 NSInputStream               OS X v10.0
CFRunLoopTimerRef               NSTimer                     OS X v10.0
CFSetRef                        NSSet                       OS X v10.0
CFStringRef                     NSString                    OS X v10.0
CFTimeZoneRef                   NSTimeZone                  OS X v10.0
CFURLRef                        NSURL                       OS X v10.0
CFWriteStreamRef                NSOutputStream              OS X v10.0

Note: Not all data types are toll-free bridged, even though their names might suggest that they are. For example, NSRunLoop is not toll-free bridged to CFRunLoop, NSBundle is not toll-free bridged to CFBundle, and NSDateFormatter is not toll-free bridged to CFDateFormatter.

Document Revision History

This table describes the changes to Concepts in Objective-C Programming.

Date          Notes
2012-01-09    Descriptions of design patterns, architectures, and other concepts important in Cocoa and Cocoa Touch development.
Local and Push Notification Programming Guide

Contents

About Local Notifications and Push Notifications 5
  At a Glance 6
    The Problem That Local and Push Notifications Solve 6
    Local and Push Notifications Are Different in Origination 6
    You Schedule a Local Notification, Register a Push Notification, and Handle Both 6
    The Apple Push Notification Service Is the Gateway for Push Notifications 7
    You Must Obtain Security Credentials for Push Notifications 7
    The Provider Communicates with APNs over a Binary Interface 7
  Prerequisites 8
  See Also 8
Local and Push Notifications in Depth 9
  Push and Local Notifications Appear the Same to Users 9
  More About Local Notifications 12
  More About Push Notifications 13
Scheduling, Registering, and Handling Notifications 15
  Preparing Custom Alert Sounds 15
  Scheduling Local Notifications 16
  Registering for Remote Notifications 19
  Handling Local and Remote Notifications 21
  Passing the Provider the Current Language Preference (Remote Notifications) 26
Apple Push Notification Service 28
  A Push Notification and Its Path 28
  Feedback Service 29
  Quality of Service 30
  Security Architecture 30
    Service-to-Device Connection Trust 31
    Provider-to-Service Connection Trust 31
    Token Generation and Dispersal 32
    Token Trust (Notification) 34
    Trust Components 34
  The Notification Payload 35
    Localized Formatted Strings 37
    Examples of JSON Payloads 39
Provisioning and Development 42
  Sandbox and Production Environments 42
  Provisioning Procedures 43
    Creating the SSL Certificate and Keys 43
    Creating and Installing the Provisioning Profile 44
    Installing the SSL Certificate and Key on the Server 45
Provider Communication with Apple Push Notification Service 47
  General Provider Requirements 47
  The Binary Interface and Notification Formats 48
  The Feedback Service 53
Document Revision History 55
Figures, Tables, and Listings

Local and Push Notifications in Depth
    Figure 1-1 A notification alert
    Figure 1-2 An application icon with a badge number (iOS)
    Figure 1-3 A notification alert message with the action button suppressed
Scheduling, Registering, and Handling Notifications
    Listing 2-1 Creating, configuring, and scheduling a local notification
    Listing 2-2 Presenting a local notification immediately while running in the background
    Listing 2-3 Registering for remote notifications
    Listing 2-4 Handling a local notification when an application is launched
    Listing 2-5 Downloading data from a provider
    Listing 2-6 Handling a local notification when an application is already running
    Listing 2-7 Getting the current supported language and sending it to the provider
Apple Push Notification Service
    Figure 3-1 A push notification from a provider to a client application
    Figure 3-2 Push notifications from multiple providers to multiple devices
    Figure 3-3 Sharing the device token
    Table 3-1 Keys and values of the aps dictionary
    Table 3-2 Child properties of the alert property
Provider Communication with Apple Push Notification Service
    Figure 5-1 Simple notification format
    Figure 5-2 Enhanced notification format
    Figure 5-3 Format of error-response packet
    Figure 5-4 Binary format of a feedback tuple
    Table 5-1 Codes in error-response packet
    Listing 5-1 Sending a notification in the simple format via the binary interface
    Listing 5-2 Sending a notification in the enhanced format via the binary interface

About Local Notifications and Push Notifications

Local notifications and push notifications are ways for an application that isn't running in the foreground to let its users know it has information for them. The information could be a message, an impending calendar event, or new data on a remote server. When presented by the operating system, local and push notifications look and sound the same. They can display an alert message or they can badge the application icon. They can also play a sound when the alert or badge number is shown.

Push notifications were introduced in iOS 3.0 and in OS X version 10.7. Local notifications were introduced in iOS 4.0; they are not available in OS X.

When users are notified that the application has a message, event, or other data for them, they can launch the application and see the details. They can also choose to ignore the notification, in which case the application is not activated.

Note: Push notifications and local notifications are not related to broadcast notifications (NSNotificationCenter) or key-value observing notifications.

At a Glance

Local notifications and push notifications have several important aspects you should be aware of.

The Problem That Local and Push Notifications Solve

Only one application can be active in the foreground at any time. Many applications operate in a time-based or interconnected environment where events of interest to users can occur when the application is not in the foreground. Local and push notifications allow these applications to notify their users when these events occur.

Relevant Chapter: "Local and Push Notifications in Depth" (page 9)

Local and Push Notifications Are Different in Origination

Local and push notifications serve different design needs. A local notification is local to an application on an iPhone, iPad, or iPod touch.
Push notifications—also known as remote notifications—arrive from outside a device. They originate on a remote server—the application's provider—and are pushed to applications on devices (via the Apple Push Notification service) when there are messages to see or data to download.

Relevant Chapter: "Local and Push Notifications in Depth" (page 9)

You Schedule a Local Notification, Register a Push Notification, and Handle Both

To have iOS deliver a local notification at a later time, an application creates a UILocalNotification object, assigns it a delivery date and time, specifies presentation details, and schedules it. To receive push notifications, an application must register to receive the notifications and then pass to its provider a device token it gets from the operating system.

When the operating system delivers a local notification (iOS only) or push notification (iOS or OS X) and the target application is not running in the foreground, it presents the notification (alert, icon badge number, sound). If there is a notification alert and the user taps or clicks the action button (or moves the action slider), the application launches and calls a method to pass in the local-notification object or remote-notification payload. If the application is running in the foreground when the notification is delivered, the application delegate receives a local or push notification.

Relevant Chapter: "Scheduling, Registering, and Handling Notifications" (page 15)

The Apple Push Notification Service Is the Gateway for Push Notifications

Apple Push Notification service (APNs) propagates push notifications to devices having applications registered to receive those notifications. Each device establishes an accredited and encrypted IP connection with the service and receives notifications over this persistent connection. Providers connect with APNs through a persistent and secure channel while monitoring incoming data intended for their client applications. When new data for an application arrives, the provider prepares and sends a notification through the channel to APNs, which pushes the notification to the target device.

Related Chapter: "Apple Push Notification Service" (page 28)

You Must Obtain Security Credentials for Push Notifications

To develop and deploy the provider side of an application for push notifications, you must get SSL certificates from the appropriate Dev Center. Each certificate is limited to a single application, identified by its bundle ID; it is also limited to one of two environments, sandbox (for development and testing) and production. These environments have their own assigned IP address and require their own certificates. You must also obtain provisioning profiles for each of these environments.

Related Chapter: "Provisioning and Development" (page 42)

The Provider Communicates with APNs over a Binary Interface

The binary interface is asynchronous and uses a streaming TCP socket design for sending push notifications as binary content to APNs. There is a separate interface for the sandbox and production environments, each with its own address and port. For each interface, you need to use TLS (or SSL) and the SSL certificate you obtained to establish a secured communications channel. The provider composes each outgoing notification and sends it over this channel to APNs.
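As a rough sketch of what this composition can look like for the simple notification format described in "The Binary Interface and Notification Formats" (page 48), a provider might pack the fields as follows. The function name PackSimpleNotification is illustrative only, and the resulting packet would be written to the TLS channel already established with APNs:

#import <Foundation/Foundation.h>
#include <arpa/inet.h>  // htons

// Packs a notification in the simple format: command byte (0), token length,
// the 32-byte device token, payload length, then the JSON payload bytes.
NSData *PackSimpleNotification(NSData *deviceToken, NSData *payload) {
    NSMutableData *packet = [NSMutableData data];
    uint8_t command = 0;                                       // simple format
    uint16_t tokenLength = htons((uint16_t)[deviceToken length]);
    uint16_t payloadLength = htons((uint16_t)[payload length]);
    [packet appendBytes:&command length:1];
    [packet appendBytes:&tokenLength length:2];                // network byte order
    [packet appendData:deviceToken];
    [packet appendBytes:&payloadLength length:2];              // network byte order
    [packet appendData:payload];
    return packet;
}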
APNs has a feedback service that maintains a per-application list of devices for which there were failed-delivery attempts (that is, APNs was unable to deliver a push notification to an application on a device). Periodically, the provider should connect with the feedback service to see which devices have persistent failures so that it can refrain from sending push notifications to them.

Related Chapters: "Apple Push Notification Service" (page 28), "Provider Communication with Apple Push Notification Service" (page 47)

Prerequisites

For local notifications and the client-side implementation of push notifications, familiarity with application development for iOS is assumed. For the provider side of the implementation, knowledge of TLS/SSL and streaming sockets is helpful.

See Also

You might find these additional sources of information useful for understanding and implementing local and push notifications:
● The reference documentation for UILocalNotification, UIApplication, and UIApplicationDelegate describes the local- and push-notification API for client applications in iOS.
● The reference documentation for NSApplication and NSApplicationDelegate Protocol describes the push-notification API for client applications in OS X.
● Security Overview describes the security technologies and techniques used for iOS and Macs.
● RFC 5246 is the standard for the TLS protocol. Secure communication between data providers and Apple Push Notification service requires knowledge of Transport Layer Security (TLS) or its predecessor, Secure Sockets Layer (SSL). Refer to one of the many online or printed descriptions of these cryptographic protocols for further information.

Local and Push Notifications in Depth

The essential purpose of both local and push notifications is to enable an application to inform its users that it has something for them—for example, a message or an upcoming appointment—when the application isn't running in the foreground. The essential difference between local notifications and push notifications is simple:
● Local notifications are scheduled by an application and delivered by iOS on the same device. Local notifications are available in iOS only.
● Push notifications, also known as remote notifications, are sent by an application's remote server (its provider) to Apple Push Notification service, which pushes the notification to devices on which the application is installed. Push notifications are available in both iOS and, beginning with OS X v10.7 (Lion), OS X.

The following sections describe what local and push notifications have in common and then examine their differences.

Note: For usage guidelines for push and local notifications in iOS, see "Enabling Push Notifications" in iOS Human Interface Guidelines.

Push and Local Notifications Appear the Same to Users

From a user's perspective, a push notification and a local notification appear to be the same thing. But that's because the purpose is the same: to notify users of an application—which might not currently be running in the foreground—that there is something of interest for them.

Let's say you're using your iPhone—making phone calls, surfing the Internet, listening to music. You have a chess application installed on your iPhone, and you decide to start a game with a friend who is playing remotely.
You make the first move (which is duly noted by the game's provider), and then quit the client application to read some email. In the meantime, your friend counters your move. The provider for the chess application learns about this move and, seeing that the chess application on your device is no longer connected, sends a push notification to Apple Push Notification service (APNs).

Almost immediately, your device—or more precisely, the operating system on your device—receives the notification over the Wi-Fi or cellular connection from APNs. Because your chess application is not currently running, iOS displays an alert similar to Figure 1-1. The message consists of the application name, a short message, and (in this case) two buttons: Close and View. The button on the right is called the action button and its default title is "View". An application can customize the title of the action button and can internationalize the button title and the message so that they are in the user's preferred language.

Figure 1-1 A notification alert

If you tap the View button, the chess application launches, connects with its provider, downloads the new data, and adjusts the chessboard user interface to show your friend's move. (Pressing Close dismisses the alert.)

OS X Note: Currently, the only type of push notification in OS X for non-running applications is icon badging. In other words, an application's icon in the Dock is badged only if the application isn't running. If users have not already placed the icon in the Dock, the system inserts the icon into the Dock so that it can badge it (and removes it after the application next terminates). Running applications may examine the notification payload for other types of notifications (alerts and sounds) and handle them appropriately.

Let's consider a type of application with another requirement. This application manages a to-do list, and each item in the list has a date and time when the item must be completed. The user can ask the application to deliver a notification at a specific interval before this due date expires. To effect this, the application schedules a local notification for that date and time. Instead of specifying an alert message, this time the application chooses to specify a badge number (1). At the appointed time, iOS displays a badge number in the upper-right corner of the icon of the application, as illustrated in Figure 1-2.

For both local and push notifications, the badge number is specific to an application and can indicate any number of things, such as the number of impending calendar events, the number of data items to download, or the number of unread (but already downloaded) email messages. The user sees the badge and taps the application icon—or, in OS X, clicks the icon in the Dock—to launch the application, which then displays the to-do item or whatever else is of interest to the user.

Figure 1-2 An application icon with a badge number (iOS)

In iOS, an application can specify a sound file along with an alert message or badge number. The sound file should contain a short, distinctive sound. At the same moment iOS displays the alert or badges the icon, it plays the sound to alert the user to the incoming notification. A notification alert message can have one button instead of two.
In the latter case, the action button is suppressed, as illustrated in Figure 1-3. The user can only dismiss these kinds of alerts.

Figure 1-3 A notification alert message with the action button suppressed

The operating system delivers a local or push notification to an application whether the application is running or not. If the application is running when the notification arrives, no alert is displayed, no icon is badged, and no sound is played, even if (in iOS) the device screen is locked. Instead, the application delegate is informed of the notification and can handle it directly. ("Scheduling, Registering, and Handling Notifications" (page 15) discusses the various delivery scenarios in detail.)

Users of iPhone, iPad, and iPod touch devices can control whether the device or specific applications installed on the device should receive push notifications. They can also selectively enable or disable push notification types (that is, icon badging, alert messages, and sounds) for specific applications. They set these restrictions in the Notifications preference of the Settings application. The UIKit framework provides a programming interface to detect this user preference for a given application.

More About Local Notifications

Local notifications (available only in iOS) are ideally suited for applications with time-based behaviors, including simple calendar or to-do list applications. Applications that run in the background for the limited period allowed by iOS might also find local notifications useful. For example, applications that depend on servers for messages or data can poll their servers for incoming items while running in the background; if a message is ready to view or an update is ready to download, they can then present a local notification immediately to inform their users.

A local notification is an instance of UILocalNotification with three general kinds of properties:
● Scheduled time. You must specify the date and time the operating system delivers the notification; this is known as the fire date. You may qualify the fire date with a specific time zone so that the system can make adjustments to the fire date when the user travels. You can also request the operating system to reschedule the notification on some regular interval (weekly, monthly, and so on).
● Notification type. This category includes the alert message, the title of the action button, the application icon badge number, and a sound to play.
● Custom data. Local notifications can include a dictionary of custom data.

"Scheduling Local Notifications" (page 16) describes these properties in programmatic detail. Once an application has created a local-notification object, it can either schedule it with the operating system or present it immediately. Each application on a device is limited to the 64 soonest-firing scheduled local notifications. The operating system discards notifications that exceed this limit; it considers a recurring notification to be a single notification.

More About Push Notifications

An iOS application or a Mac app is often only a part of a larger application based on the client/server model.
The client side of the application is installed on the device or computer; the server side of the application has the main function of providing data to its many client applications. (Hence it is termed a provider.) A client application occasionally connects with its provider and downloads the data that is waiting for it. Email and social-networking applications are examples of this client/server model.

But what if the application is not connected to its provider, or not even running on the device or computer, when the provider has new data for it to download? How does it learn about this waiting data? Push notifications are the solution to this dilemma. A push notification is a short message that a provider delivers to the operating system of a device or computer; the operating system, in turn, informs the user of a client application that there is data to be downloaded, a message to be viewed, and so on. If the user enables this feature (on iOS) and the application is properly registered, the notification is delivered to the operating system and possibly to the application. Apple Push Notification service is the primary technology for the push-notification feature.

Push notifications serve much the same purpose as a background application on a desktop system, but without the additional overhead. For an application that is not currently running—or, in the case of iOS, not running in the foreground—the notification occurs indirectly. The operating system receives a push notification on behalf of the application and alerts the user. Once alerted, users may choose to launch the application, which then downloads the data from its provider. If an application is running when a notification comes in, the application can choose to handle the notification directly.

iOS Note: Beginning with iOS 4.0, applications can run in the background, but only for a limited period. Only one application may be executing in the foreground at a time.

As its name suggests, Apple Push Notification service (APNs) uses a push design to deliver notifications to devices and computers. A push design differs from its opposite, a pull design, in that the immediate recipient of the notification—in this case, the operating system—passively listens for updates rather than actively polling for them. A push design makes possible a wide and timely dissemination of information with few of the scalability problems inherent in pull designs. APNs uses a persistent IP connection for implementing push notifications.

Most of a push notification consists of a payload: a property list containing APNs-defined properties specifying how the user is to be notified. For performance reasons, the payload is deliberately small. Although you may define custom properties for the payload, you should never use the remote-notification mechanism for data transport, because delivery of push notifications is not guaranteed. For more on the payload, see "The Notification Payload" (page 35).

APNs retains the last notification it receives from a provider for an application on a device; so, if a device or computer comes online and has not received the notification, APNs pushes the stored notification to it. A device running iOS receives push notifications over both Wi-Fi and cellular connections; a computer running OS X receives push notifications over both Wi-Fi and Ethernet connections.
Important: In iOS, Wi-Fi is used for push notifications only if there is no cellular connection or if the device is an iPod touch. For some devices to receive notifications via Wi-Fi, the device's display must be on (that is, it cannot be sleeping) or it must be plugged in. The iPad, on the other hand, remains associated with the Wi-Fi access point while asleep, thus permitting the delivery of push notifications. The Wi-Fi radio wakes the host processor for any incoming traffic.

Adding the remote-notification feature to your application requires that you obtain the proper certificates from the Dev Center for either iOS or OS X and then write the requisite code for the client and provider sides of the application. "Provisioning and Development" (page 42) explains the provisioning and setup steps, and "Provider Communication with Apple Push Notification Service" (page 47) and "Scheduling, Registering, and Handling Notifications" (page 15) describe the details of implementation.

Apple Push Notification service continually monitors providers for irregular behavior, looking for sudden spikes of activity, rapid connect-disconnect cycles, and similar activity. Apple seeks to notify providers when it detects this behavior, and if the behavior continues, it may put the provider's certificate on a revocation list and refuse further connections. Any continued irregular or problematic behavior may result in the termination of a provider's access to APNs.

Scheduling, Registering, and Handling Notifications

This chapter describes the tasks that an iPhone, iPad, or iPod touch application should (or might) do to schedule local notifications, register for remote notifications, and handle both local and remote notifications. Because the client-side API for push notifications refers to push notifications as remote notifications, that terminology is used in this chapter.

Preparing Custom Alert Sounds

For remote notifications in iOS, you can specify a custom sound that iOS plays when it presents a local or remote notification for an application. The sound files must be in the main bundle of the client application. Because custom alert sounds are played by the iOS system-sound facility, they must be in one of the following audio data formats:
● Linear PCM
● IMA4 (IMA/ADPCM)
● µLaw
● aLaw

You can package the audio data in an aiff, wav, or caf file. Then, in Xcode, add the sound file to your project as a nonlocalized resource of the application bundle. You may use the afconvert tool to convert sounds. For example, to convert the 16-bit linear PCM system sound Submarine.aiff to IMA4 audio in a CAF file, use the following command in the Terminal application:

afconvert /System/Library/Sounds/Submarine.aiff ~/Desktop/sub.caf -d ima4 -f caff -v

You can inspect a sound to determine its data format by opening it in QuickTime Player and choosing Show Movie Inspector from the Movie menu. Custom sounds must be under 30 seconds when played. If a custom sound is over that limit, the default system sound is played instead.

Scheduling Local Notifications

Creating and scheduling local notifications in iOS requires that you perform a few simple steps:

1. Allocate and initialize a UILocalNotification object.

2. Set the date and time that the operating system should deliver the notification. This is the fireDate property.
If you set the timeZone property to the NSTimeZone object for the current locale, the system automatically adjusts the fire date when the device travels across (and is reset for) different time zones. (Time zones affect the values of date components—that is, day, month, hour, year, and minute—that the system calculates for a given calendar and date value.) You can also schedule the notification for delivery on a recurring basis (daily, weekly, monthly, and so on).

3. Configure the substance of the notification: alert, icon badge number, and sound.
● The alert has a property for the message (the alertBody property) and for the title of the action button or slider (alertAction); both of these string values can be internationalized for the user's current language preference.
● You set the badge number to display on the application icon through the applicationIconBadgeNumber property.
● You can assign the filename of a nonlocalized custom sound in the application's main bundle to the soundName property; to get the default system sound, assign UILocalNotificationDefaultSoundName. Sounds should always accompany an alert message or icon badging; they should not be played otherwise.

4. Optionally, you can attach custom data to the notification through the userInfo property. Keys and values in the userInfo dictionary must be property-list objects.

5. Schedule the local notification for delivery.

You schedule a local notification by calling the UIApplication method scheduleLocalNotification:. The application uses the fire date specified in the UILocalNotification object for the moment of delivery. Alternatively, you can present the notification immediately by calling the presentLocalNotificationNow: method.

The method in Listing 2-1 creates and schedules a notification to inform the user of a hypothetical to-do list application about the impending due date of a to-do item. There are a couple of things to note about it. For the alertBody and alertAction properties, it fetches from the main bundle (via the NSLocalizedString macro) strings localized to the user's preferred language. It also adds the name of the relevant to-do item to a dictionary assigned to the userInfo property.
Listing 2-1 Creating, configuring, and scheduling a local notification

- (void)scheduleNotificationWithItem:(ToDoItem *)item interval:(int)minutesBefore {
    NSCalendar *calendar = [NSCalendar autoupdatingCurrentCalendar];
    NSDateComponents *dateComps = [[NSDateComponents alloc] init];
    [dateComps setDay:item.day];
    [dateComps setMonth:item.month];
    [dateComps setYear:item.year];
    [dateComps setHour:item.hour];
    [dateComps setMinute:item.minute];
    NSDate *itemDate = [calendar dateFromComponents:dateComps];
    [dateComps release];

    UILocalNotification *localNotif = [[UILocalNotification alloc] init];
    if (localNotif == nil)
        return;
    // Fire the notification the requested number of minutes before the item is due.
    localNotif.fireDate = [itemDate addTimeInterval:-(minutesBefore * 60)];
    localNotif.timeZone = [NSTimeZone defaultTimeZone];

    localNotif.alertBody = [NSString stringWithFormat:NSLocalizedString(@"%@ in %i minutes.", nil),
        item.eventName, minutesBefore];
    localNotif.alertAction = NSLocalizedString(@"View Details", nil);
    localNotif.soundName = UILocalNotificationDefaultSoundName;
    localNotif.applicationIconBadgeNumber = 1;

    NSDictionary *infoDict = [NSDictionary dictionaryWithObject:item.eventName forKey:ToDoItemKey];
    localNotif.userInfo = infoDict;

    [[UIApplication sharedApplication] scheduleLocalNotification:localNotif];
    [localNotif release];
}

You can cancel a specific scheduled notification by calling cancelLocalNotification: on the application object, and you can cancel all scheduled notifications by calling cancelAllLocalNotifications. Both of these methods also programmatically dismiss a currently displayed notification alert.

Applications might also find local notifications useful when they run in the background and some message, data, or other item arrives that might be of interest to the user. In this case, they should present the notification immediately using the UIApplication method presentLocalNotificationNow: (iOS gives an application a limited time to run in the background). Listing 2-2 illustrates how you might do this.

Listing 2-2 Presenting a local notification immediately while running in the background

- (void)applicationDidEnterBackground:(UIApplication *)application {
    NSLog(@"Application entered background state.");
    // bgTask is an instance variable.
    NSAssert(self->bgTask == UIBackgroundTaskInvalid, nil);

    bgTask = [application beginBackgroundTaskWithExpirationHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            [application endBackgroundTask:self->bgTask];
            self->bgTask = UIBackgroundTaskInvalid;
        });
    }];

    dispatch_async(dispatch_get_main_queue(), ^{
        while ([application backgroundTimeRemaining] > 1.0) {
            NSString *friend = [self checkForIncomingChat];
            if (friend) {
                UILocalNotification *localNotif = [[UILocalNotification alloc] init];
                if (localNotif) {
                    localNotif.alertBody = [NSString stringWithFormat:
                        NSLocalizedString(@"%@ has a message for you.", nil), friend];
                    localNotif.alertAction = NSLocalizedString(@"Read Message", nil);
                    localNotif.soundName = @"alarmsound.caf";
                    localNotif.applicationIconBadgeNumber = 1;
                    [application presentLocalNotificationNow:localNotif];
                    [localNotif release];
                    friend = nil;
                    break;
                }
            }
        }
        [application endBackgroundTask:self->bgTask];
        self->bgTask = UIBackgroundTaskInvalid;
    });
}

Registering for Remote Notifications

An application must register with Apple Push Notification service for the operating system on a device or computer to receive remote notifications sent by the application's provider. Registration has three stages:

1. The application calls the registerForRemoteNotificationTypes: method.
2. The delegate implements the application:didRegisterForRemoteNotificationsWithDeviceToken: method to receive the device token.
3. The application passes the device token to its provider as a non-object, binary value.

Note: Unless otherwise noted, all methods cited in this section are declared with identical signatures by both UIApplication and NSApplication and, for delegates, by both NSApplicationDelegate Protocol and UIApplicationDelegate.

What happens between the application, the device, Apple Push Notification service, and the provider during this sequence is illustrated by Figure 3-3 in "Token Generation and Dispersal" (page 32).

An application should register every time it launches and give its provider the current token. It calls the registerForRemoteNotificationTypes: method to kick off the registration process. The parameter of this method takes a UIRemoteNotificationType (or, for OS X, an NSRemoteNotificationType) bit mask that specifies the initial types of notifications that the application wishes to receive—for example, icon badging and sounds, but not alert messages. In iOS, users can thereafter modify the enabled notification types in the Notifications preference of the Settings application. In both iOS and OS X, you can retrieve the currently enabled notification types by calling the enabledRemoteNotificationTypes method. The operating system does not badge icons, display alert messages, or play alert sounds if any of these notification types are not enabled, even if they are specified in the notification payload.

OS X Note: Because the only notification type supported for non-running applications is icon badging, simply pass NSRemoteNotificationTypeBadge as the parameter of registerForRemoteNotificationTypes:.

If registration is successful, APNs returns a device token to the device and iOS passes the token to the application delegate in the application:didRegisterForRemoteNotificationsWithDeviceToken: method. The application should connect with its provider and pass it this token, encoded in binary format. If there is a problem in obtaining the token, the operating system informs the delegate by calling the application:didFailToRegisterForRemoteNotificationsWithError: method. The NSError object passed into this method clearly describes the cause of the error. The error might be, for instance, an erroneous aps-environment value in the provisioning profile. You should view the error as a transient state and not attempt to parse it. (See "Creating and Installing the Provisioning Profile" (page 44) for details.)

iOS Note: If a cellular or Wi-Fi connection is not available, neither the application:didRegisterForRemoteNotificationsWithDeviceToken: method nor the application:didFailToRegisterForRemoteNotificationsWithError: method is called. For Wi-Fi connections, this sometimes occurs when the device cannot connect with APNs over port 5223.
If this happens, the user can move to another Wi-Fi network that isn't blocking this port or, on an iPhone or iPad, wait until the cellular data service becomes available. In either case, the connection should then succeed and one of the delegation methods is called.

By requesting the device token and passing it to the provider every time your application launches, you help to ensure that the provider has the current token for the device. If a user restores a backup to a device or computer other than the one that the backup was created for (for example, the user migrates data to a new device or computer), he or she must launch the application at least once for it to receive notifications again. If the user restores backup data to a new device or computer, or reinstalls the operating system, the device token changes. Moreover, never cache a device token and give that to your provider; always get the token from the system whenever you need it. If your application has previously registered, calling registerForRemoteNotificationTypes: results in the operating system passing the device token to the delegate immediately, without incurring additional overhead.

Listing 2-3 gives a simple example of how you might register for remote notifications in an iOS application. The code would be nearly identical for a Mac app. (sendProviderDeviceToken: is a hypothetical method defined by the client in which it connects with its provider and passes it the device token.)

Listing 2-3 Registering for remote notifications

- (void)applicationDidFinishLaunching:(UIApplication *)app {
    // other setup tasks here....
    [[UIApplication sharedApplication] registerForRemoteNotificationTypes:
        (UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeSound)];
}

// Delegation methods
- (void)application:(UIApplication *)app
        didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)devToken {
    const void *devTokenBytes = [devToken bytes];
    self.registered = YES;
    [self sendProviderDeviceToken:devTokenBytes];  // custom method
}

- (void)application:(UIApplication *)app
        didFailToRegisterForRemoteNotificationsWithError:(NSError *)err {
    NSLog(@"Error in registration. Error: %@", err);
}

Handling Local and Remote Notifications

Let's review the possible scenarios when the operating system delivers a local notification or a remote notification for an application.

● The notification is delivered when the application isn't running in the foreground. In this case, the system presents the notification, displaying an alert, badging an icon, perhaps playing a sound.

● As a result of the presented notification, the user taps the action button of the alert or taps (or clicks) the application icon. If the action button is tapped (on a device running iOS), the system launches the application and the application calls its delegate's application:didFinishLaunchingWithOptions: method (if implemented); it passes in the notification payload (for remote notifications) or the local-notification object (for local notifications). If the application icon is tapped on a device running iOS, the application calls the same method, but furnishes no information about the notification.
If the application icon is clicked on a computer running OS X, the application calls the delegate's applicationDidFinishLaunching: method, in which the delegate can obtain the remote-notification payload.

iOS Note: The application delegate could implement applicationDidFinishLaunching: rather than application:didFinishLaunchingWithOptions:, but that is strongly discouraged. The latter method allows the application to receive information related to the reason for its launching, which can include things other than notifications.

● The notification is delivered when the application is running in the foreground. The application calls its delegate's application:didReceiveRemoteNotification: method (for remote notifications) or application:didReceiveLocalNotification: method (for local notifications) and passes in the notification payload or the local-notification object.

Note: The delegate methods cited in this section that have "RemoteNotification" in their name are declared with identical signatures by both NSApplicationDelegate Protocol and UIApplicationDelegate.

An application can use the passed-in remote-notification payload or, in iOS, the UILocalNotification object to help set the context for processing the item related to the notification. Ideally, the delegate does the following on each platform to handle the delivery of remote and local notifications in all situations:

● For OS X, it should adopt the NSApplicationDelegate protocol and implement both the applicationDidFinishLaunching: method and the application:didReceiveRemoteNotification: method.

● For iOS, it should adopt the UIApplicationDelegate protocol and implement both the application:didFinishLaunchingWithOptions: method and the application:didReceiveRemoteNotification: or application:didReceiveLocalNotification: method.

iOS Note: In iOS, you can determine whether an application is launched as a result of the user tapping the action button or whether the notification was delivered to the already-running application by examining the application state. In the delegate's implementation of the application:didReceiveRemoteNotification: or application:didReceiveLocalNotification: method, get the value of the applicationState property and evaluate it. If the value is UIApplicationStateInactive, the user tapped the action button; if the value is UIApplicationStateActive, the application was frontmost when it received the notification.

The delegate for an iOS application in Listing 2-4 implements the application:didFinishLaunchingWithOptions: method to handle a local notification. It gets the associated UILocalNotification object from the launch-options dictionary using the UIApplicationLaunchOptionsLocalNotificationKey key. From the UILocalNotification object's userInfo dictionary, it accesses the to-do item that is the reason for the notification and uses it to set the application's initial context. As shown in this example, you should appropriately reset the badge number on the application icon—or remove it if there are no outstanding items—as part of handling the notification.
Listing 2-4 Handling a local notification when an application is launched

- (BOOL)application:(UIApplication *)app
        didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    UILocalNotification *localNotif =
        [launchOptions objectForKey:UIApplicationLaunchOptionsLocalNotificationKey];
    if (localNotif) {
        NSString *itemName = [localNotif.userInfo objectForKey:ToDoItemKey];
        [viewController displayItem:itemName];  // custom method
        app.applicationIconBadgeNumber = localNotif.applicationIconBadgeNumber - 1;
    }
    [window addSubview:viewController.view];
    [window makeKeyAndVisible];
    return YES;
}

The implementation for a remote notification would be similar, except that you would use a specially declared constant in each platform as a key to access the notification payload:

● In iOS, the delegate, in its implementation of the application:didFinishLaunchingWithOptions: method, uses the UIApplicationLaunchOptionsRemoteNotificationKey key to access the payload from the launch-options dictionary.

● In OS X, the delegate, in its implementation of the applicationDidFinishLaunching: method, uses the NSApplicationLaunchRemoteNotificationKey key to access the payload dictionary from the userInfo dictionary of the NSNotification object that is passed into the method. (A minimal sketch of this OS X case appears after Listing 2-5.)

The payload itself is an NSDictionary object that contains the elements of the notification—alert message, badge number, sound, and so on. It can also contain custom data the application can use to provide context when setting up the initial user interface. See "The Notification Payload" (page 35) for details about the remote-notification payload.

Important: You should never define custom properties in the notification payload for the purpose of transporting customer data or any other sensitive data. Delivery of remote notifications is not guaranteed. One example of an appropriate usage for a custom payload property is a string identifying an email account from which messages are downloaded to an email client; the application can incorporate this string in its download user interface. Another example of a custom payload property is a timestamp for when the provider first sent the notification; the client application can use this value to gauge how old the notification is.

When handling remote notifications in application:didFinishLaunchingWithOptions: or applicationDidFinishLaunching:, the application delegate might perform one major additional task. Just after the application launches, the delegate should connect with its provider and fetch the waiting data. Listing 2-5 gives a schematic illustration of this procedure.

Listing 2-5 Downloading data from a provider

- (BOOL)application:(UIApplication *)app
        didFinishLaunchingWithOptions:(NSDictionary *)opts {
    // check launchOptions for notification payload and custom data, set UI context
    [self startDownloadingDataFromProvider];  // custom method
    app.applicationIconBadgeNumber = 0;
    // other setup tasks here....
    return YES;
}

Note: A client application should always communicate with its provider asynchronously or on a secondary thread.
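For the OS X case noted above, the launch-time handling might look like the following minimal sketch; handlePayload: is a hypothetical method standing in for whatever context setup the application performs:

- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    // The remote-notification payload, if any, arrives in the userInfo
    // dictionary of the launch notification.
    NSDictionary *payload = [[notification userInfo]
        objectForKey:NSApplicationLaunchRemoteNotificationKey];
    if (payload) {
        [self handlePayload:payload];  // hypothetical custom method
    }
}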
The code in Listing 2-6 shows an implementation of the application:didReceiveLocalNotification: method which, as you'll recall, is called when the application is running in the foreground. Here the application delegate does the same work as it does in Listing 2-4. It can access the UILocalNotification object directly this time because this object is an argument of the method.

Listing 2-6 Handling a local notification when an application is already running

- (void)application:(UIApplication *)app
        didReceiveLocalNotification:(UILocalNotification *)notif {
    NSString *itemName = [notif.userInfo objectForKey:ToDoItemKey];
    [viewController displayItem:itemName];  // custom method
    app.applicationIconBadgeNumber = notif.applicationIconBadgeNumber - 1;
}

If you want your application to catch remote notifications that the system delivers while it is running in the foreground, the application delegate should implement the application:didReceiveRemoteNotification: method. The delegate should begin the procedure for downloading the waiting data, message, or other item and, after this concludes, it should remove the badge from the application icon. (If your application frequently checks with its provider for new data, implementing this method might not be necessary.) The dictionary passed in the second parameter of this method is the notification payload; you should not use any custom properties it contains to alter your application's current context.

Even though the only supported notification type for nonrunning applications in OS X is icon badging, the delegate can implement application:didReceiveRemoteNotification: to examine the notification payload for other types of notifications and handle them appropriately (that is, display an alert or play a sound).

iOS Note: If the user unlocks the device shortly after a remote-notification alert is displayed, the operating system automatically triggers the action associated with the alert. (This behavior is consistent with SMS and calendar alerts.) This makes it even more important that actions related to remote notifications do not have destructive consequences. A user should always make decisions that result in the destruction of data in the context of the application that stores the data.

Passing the Provider the Current Language Preference (Remote Notifications)

If an application doesn't use the loc-key and loc-args properties of the aps dictionary for client-side fetching of localized alert messages, the provider needs to localize the text of alert messages it puts in the notification payload. To do this, however, the provider needs to know the language that the device user has selected as the preferred language. (The user sets this preference in the General > International > Language view of the Settings application.) The client application should send its provider an identifier of the preferred language; this could be a canonicalized IETF BCP 47 language identifier such as "en" or "fr".

Note: For more information about the loc-key and loc-args properties and client-side message localizations, see "The Notification Payload" (page 35).

Listing 2-7 illustrates a technique for obtaining the currently selected language and communicating it to the provider. In iOS, the array returned by the preferredLanguages method of NSLocale contains one object: an NSString object encapsulating the language code identifying the preferred language. The UTF8String method converts the string object to a C string encoded as UTF-8.
Listing 2-7 Getting the current supported language and sending it to the provider

NSString *preferredLang = [[NSLocale preferredLanguages] objectAtIndex:0];
const char *langStr = [preferredLang UTF8String];
[self sendProviderCurrentLanguage:langStr];  // custom method

The application might send its provider the preferred language every time the user changes something in the current locale. To do this, you can listen for the notification named NSCurrentLocaleDidChangeNotification and, in your notification-handling method, get the code identifying the preferred language and send that to your provider.

If the preferred language is not one the application supports, the provider should localize the message text in a widely spoken fallback language such as English or Spanish.

Apple Push Notification Service

Apple Push Notification service (APNs for short) is the centerpiece of the push notifications feature. It is a robust and highly efficient service for propagating information to devices such as iPhone, iPad, and iPod touch devices. Each device establishes an accredited and encrypted IP connection with the service and receives notifications over this persistent connection. If a notification for an application arrives when that application is not running, the device alerts the user that the application has data waiting for it.

Software developers ("providers") originate the notifications in their server software. The provider connects with APNs through a persistent and secure channel while monitoring incoming data intended for their client applications. When new data for an application arrives, the provider prepares and sends a notification through the channel to APNs, which pushes the notification to the target device.

In addition to being a simple but efficient and high-capacity transport service, APNs includes a default quality-of-service component that provides store-and-forward capabilities. See "Quality of Service" (page 30) for more information. "Provider Communication with Apple Push Notification Service" (page 47) and "Scheduling, Registering, and Handling Notifications" (page 15) discuss the specific implementation requirements for providers and iOS applications, respectively.

A Push Notification and Its Path

Apple Push Notification service transports and routes a notification from a given provider to a given device. A notification is a short message consisting of two major pieces of data: the device token and the payload. The device token is analogous to a phone number; it contains information that enables APNs to locate the device on which the client application is installed. APNs also uses it to authenticate the routing of a notification. The payload is a JSON-defined property list that specifies how the user of an application on a device is to be alerted.

Note: For more information about the device token, see "Security Architecture" (page 30); for further information about the notification payload, see "The Notification Payload" (page 35).

The flow of remote-notification data is one-way. The provider composes a notification package that includes the device token for a client application and the payload.
The provider sends the notification to APNs, which in turn pushes the notification to the device.

When it authenticates itself to APNs, a provider furnishes the service with its topic, which identifies the application for which it's providing data. The topic is currently the bundle identifier of the target application on an iOS device.

Figure 3-1 A push notification from a provider to a client application
[Diagram: a notification travels from the provider to APNs, and from APNs to the client application on an iPhone.]

Figure 3-1 is a greatly simplified depiction of the virtual network APNs makes possible among providers and devices. The device-facing and provider-facing sides of APNs both have multiple points of connection; on the provider-facing side, these are called gateways. There are typically multiple providers, each making one or more persistent and secure connections with APNs through these gateways. And these providers are sending notifications through APNs to many devices on which their client applications are installed. Figure 3-2 is a slightly more realistic depiction.

Figure 3-2 Push notifications from multiple providers to multiple devices
[Diagram: multiple providers (Provider A, Provider B) connect to APNs, which pushes notifications to multiple devices.]

Feedback Service

Sometimes APNs might attempt to deliver notifications for an application on a device, but the device may repeatedly refuse delivery because there is no target application. This often happens when the user has uninstalled the application. In these cases, APNs informs the provider through a feedback service that the provider connects with. The feedback service maintains a list of devices per application for which there were recent, repeated failed attempts to deliver notifications. The provider should obtain this list of devices and stop sending notifications to them. For more on this service, see "The Feedback Service" (page 53).

Quality of Service

Apple Push Notification service includes a default Quality of Service (QoS) component that performs a store-and-forward function. If APNs attempts to deliver a notification but the device is offline, the QoS stores the notification. It retains only one notification per application on a device: the last notification received from a provider for that application. When the offline device later reconnects, the QoS forwards the stored notification to the device. The QoS retains a notification for a limited period before deleting it.

Security Architecture

To enable communication between a provider and a device, Apple Push Notification service must expose certain entry points to them. But then to ensure security, it must also regulate access to these entry points. For this purpose, APNs requires two different levels of trust for providers, devices, and their communications. These are known as connection trust and token trust.

Connection trust establishes certainty that, on one side, the APNs connection is with an authorized provider with whom Apple has agreed to deliver notifications. At the device side of the connection, APNs must validate that the connection is with a legitimate device.

After APNs has established trust at the entry points, it must then ensure that it conveys notifications to legitimate end points only. To do this, it must validate the routing of messages traveling through the transport; only the device that is the intended target of a notification should receive it.
In APNs, assurance of accurate message routing—or token trust—is made possible through the device token. A device token is an opaque identifier of a device that APNs gives to the device when it first connects with it. The device shares the device token with its provider. Thereafter, this token accompanies each notification from the provider. It is the basis for establishing trust that the routing of a particular notification is legitimate. (In a metaphorical sense, it has the same function as a phone number, identifying the destination of a communication.)

Note: A device token is not the same thing as the device UDID returned by the uniqueIdentifier property of UIDevice.

The following sections discuss the requisite components for connection trust and token trust as well as the four procedures for establishing trust.

Service-to-Device Connection Trust

APNs establishes the identity of a connecting device through TLS peer-to-peer authentication. (Note that iOS takes care of this stage of connection trust; you do not need to implement anything yourself.) In the course of this procedure, a device initiates a TLS connection with APNs, which returns its server certificate. The device validates this certificate and then sends its device certificate to APNs, which validates that certificate.

[Diagram: the device initiates TLS; APNs returns its server certificate, which the device validates; the device sends its device certificate, which APNs validates; the TLS connection is then established.]

Provider-to-Service Connection Trust

Connection trust between a provider and APNs is also established through TLS peer-to-peer authentication. The procedure is similar to that described in "Service-to-Device Connection Trust" (page 31). The provider initiates a TLS connection, gets the server certificate from APNs, and validates that certificate. Then the provider sends its provider certificate to APNs, which validates it on its end. Once this procedure is complete, a secure TLS connection has been established; APNs is now satisfied that the connection has been made by a legitimate provider.

[Diagram: the provider initiates TLS; APNs returns its server certificate, which the provider validates; the provider sends its provider certificate, which APNs validates; the TLS connection is then established.]

Note that a provider connection is valid for delivery to only one specific application, identified by the topic (bundle ID) specified in the certificate. APNs also maintains a certificate revocation list; if a provider's certificate is on this list, APNs may revoke provider trust (that is, refuse the connection).

Token Generation and Dispersal

An iOS-based application must register to receive push notifications; it typically does this right after it is installed on a device. (This procedure is described in "Scheduling, Registering, and Handling Notifications" (page 15).) iOS receives the registration request from an application, connects with APNs, and forwards the request. APNs generates a device token using information contained in the unique device certificate. The device token contains an identifier of the device. It then encrypts the device token with a token key and returns it to the device.

[Diagram: the device connects to APNs with its token request; APNs generates a device ID from the device certificate, packages the token, encrypts it with the token key, and returns it to the device.]
Token Trust (Notification)

After iOS obtains a device token from APNs, as described in "Token Generation and Dispersal", it must provide APNs with the token every time it connects with it. APNs decrypts the device token and validates that the token was generated for the connecting device. To validate, APNs ensures that the device identifier contained in the token matches the device identifier in the device certificate.

Every notification that a provider sends to APNs for delivery to a device must be accompanied by the device token it obtained from an application on that device. APNs decrypts the token using the token key, thereby ensuring that the notification is valid. It then uses the device ID contained in the device token to determine the destination device for the notification.

[Figure: Token trust: the device connects with its token and APNs decrypts and validates it against the device certificate; the provider sends the token and payload, APNs decrypts the token with the token key, responds (OK), and routes the payload to the device]

Trust Components

To support the security model for APNs, providers and devices must possess certain certificates, certificate authority (CA) certificates, or tokens.

● Provider: Each provider requires a unique provider certificate and private cryptographic key for validating its connection with APNs. This certificate, provisioned by Apple, must identify the particular topic published by the provider; the topic is the bundle ID of the client application. For each notification, the provider must furnish APNs with a device token identifying the target device. The provider may optionally wish to validate the service it is connecting to using the public server certificate provided by the APNs server.

● Device: iOS uses the public server certificate passed to it by APNs to authenticate the service that it has connected to. It has a unique private key and certificate that it uses to authenticate itself to the service and establish the TLS connection. It obtains the device certificate and key during device activation and stores them in the keychain. iOS also holds its particular device token, which it receives during the service connection process. Each registered client application is responsible for delivering this token to its content provider.

APNs servers also have the necessary certificates, CA certificates, and cryptographic keys (private and public) for validating connections and the identities of providers and devices.
The Notification Payload

Each push notification carries with it a payload. The payload specifies how users are to be alerted to the data waiting to be downloaded to the client application. The maximum size allowed for a notification payload is 256 bytes; Apple Push Notification Service refuses any notification that exceeds this limit. Remember that delivery of notifications is "best effort" and is not guaranteed.

For each notification, providers must compose a JSON dictionary object that strictly adheres to RFC 4627. This dictionary must contain another dictionary identified by the key aps. The aps dictionary contains one or more properties that specify the following actions:

● An alert message to display to the user
● A number to badge the application icon with
● A sound to play

Note: Although you can combine an alert message, icon badging, and a sound in a single notification, you should consider the human-interface implications of push notifications. For example, a user might find frequent alert messages with accompanying sound more annoying than useful, especially when the data to be downloaded is not critical.

If the target application isn't running when the notification arrives, the alert message, sound, or badge value is played or shown. If the application is running, iOS delivers the notification to the application delegate as an NSDictionary object. The dictionary contains the corresponding Cocoa property-list objects (plus NSNull).

Providers can specify custom payload values outside the Apple-reserved aps namespace. Custom values must use the JSON structured and primitive types: dictionary (object), array, string, number, and Boolean. You should not include customer information as custom payload data. Instead, use it for such purposes as setting context (for the user interface) or internal metrics. For example, a custom payload value might be a conversation identifier for use by an instant-message client application or a timestamp identifying when the provider sent the notification. Any action associated with an alert message should not be destructive—for example, deleting data on the device.

Important: Because delivery is not guaranteed, you should not depend on the remote-notifications facility for delivering critical data to an application via the payload. And never include sensitive data in the payload. You should use it only to notify the user that new data is available.

Table 3-1 lists the keys and expected values of the aps payload.

Table 3-1 Keys and values of the aps dictionary

alert (string or dictionary): If this property is included, iOS displays a standard alert. You may specify a string or a dictionary as the value of alert. If you specify a string, it becomes the message text of an alert with two buttons: Close and View. If the user taps View, the application is launched. Alternatively, you can specify a dictionary as the value of alert; see Table 3-2 for descriptions of the keys of this dictionary.

badge (number): The number to display as the badge of the application icon. If this property is absent, the badge is not changed. To remove the badge, set the value of this property to 0.

sound (string): The name of a sound file in the application bundle. The sound in this file is played as an alert. If the sound file doesn't exist or default is specified as the value, the default alert sound is played. The audio must be in one of the audio data formats that are compatible with system sounds; see "Preparing Custom Alert Sounds" for details.

Table 3-2 Child properties of the alert property

body (string): The text of the alert message.

action-loc-key (string or null): If a string is specified, the system displays an alert with two buttons, whose behavior is described in Table 3-1. However, iOS uses the string as a key to get a localized string in the current localization to use for the right button's title instead of "View". If the value is null, the system displays an alert with a single OK button that simply dismisses the alert when tapped. See "Localized Formatted Strings" for more information.

loc-key (string): A key to an alert-message string in a Localizable.strings file for the current localization (which is set by the user's language preference). The key string can be formatted with %@ and %n$@ specifiers to take the variables specified in loc-args. See "Localized Formatted Strings" for more information.

loc-args (array of strings): Variable string values to appear in place of the format specifiers in loc-key. See "Localized Formatted Strings" for more information.

launch-image (string): The filename of an image file in the application bundle; it may include the extension or omit it. The image is used as the launch image when users tap the action button or move the action slider. If this property is not specified, the system either uses the previous snapshot, uses the image identified by the UILaunchImageFile key in the application's Info.plist file, or falls back to Default.png. This property was added in iOS 4.0.

Note: If you want the iPhone, iPad, or iPod touch device to display the message text as-is in an alert that has both the Close and View buttons, then specify a string as the direct value of alert. Don't specify a dictionary as the value of alert if the dictionary only has the body property.

Localized Formatted Strings

You can display localized alert messages in two ways. The server originating the notification can localize the text; to do this, it must discover the current language preference selected for the device (see "Passing the Provider the Current Language Preference (Remote Notifications)"). Or the client application can store in its bundle the alert-message strings translated for each localization it supports. The provider specifies the loc-key and loc-args properties in the aps dictionary of the notification payload. When the device receives the notification (assuming the application isn't running), it uses these aps-dictionary properties to find and format the string localized for the current language, which it then displays to the user.

Here's how that second option works in a little more detail. An iOS application can internationalize resources such as images, sounds, and text for each language that it supports. Internationalization collects the resources and puts them in a subdirectory of the bundle with a two-part name: a language code and an extension of .lproj (for example, fr.lproj). Localized strings that are programmatically displayed are put in a file called Localizable.strings.
Each entry in this file has a key and a localized string value; the string can have format specifiers for the substitution of variable values. When an application asks for a particular resource—say a localized string—it gets the resource that is localized for the language currently selected by the user. For example, if the preferred language is French, the corresponding string value for an alert message would be fetched from Localizable.strings in the fr.lproj directory in the application bundle. (iOS makes this request through the NSLocalizedString macro.)

Note: This general pattern is also followed when the value of the action-loc-key property is a string. This string is a key into the Localizable.strings file in the localization directory for the currently selected language. iOS uses this key to get the title of the button on the right side of an alert message (the "action" button).

To make this clearer, let's consider an example. The provider specifies the following dictionary as the value of the alert property:

    "alert" : { "loc-key" : "GAME_PLAY_REQUEST_FORMAT", "loc-args" : [ "Jenna", "Frank"] },

When the device receives the notification, it uses "GAME_PLAY_REQUEST_FORMAT" as a key to look up the associated string value in the Localizable.strings file in the .lproj directory for the current language. Assuming the current localization has a Localizable.strings entry such as this:

    "GAME_PLAY_REQUEST_FORMAT" = "%@ and %@ have invited you to play Monopoly";

the device displays an alert with the message "Jenna and Frank have invited you to play Monopoly".

In addition to the format specifier %@, you can use %n$@ format specifiers for positional substitution of string variables. The n is the index (starting with 1) of the array value in loc-args to substitute. (There's also the %% specifier for expressing a percentage sign.) So if the entry in Localizable.strings is this:

    "GAME_PLAY_REQUEST_FORMAT" = "%2$@ and %1$@ have invited you to play Monopoly";

the device displays an alert with the message "Frank and Jenna have invited you to play Monopoly".

For a full example of a notification payload that uses the loc-key and loc-args properties, see the last example of "Examples of JSON Payloads". To learn more about internationalization in iOS, see "Advanced App Tricks" in iOS App Programming Guide; for general information about internationalization, see Internationalization Programming Topics. String formatting is discussed in "Formatting String Objects" in String Programming Guide.

Note: You should use the loc-key and loc-args properties—and the alert dictionary in general—only if you absolutely need to. The values of these properties, especially if they are long strings, might use up more bandwidth than is good for performance. Many if not most applications may not need these properties because their message strings are originated by users and thus are implicitly "localized."

Examples of JSON Payloads

The following examples of the payload portion of notifications illustrate the practical use of the properties listed in Table 3-1. Properties with "acme" in the key name are examples of custom payload data. The examples include whitespace and newline characters for readability; for better performance, providers should omit whitespace and newline characters.
Example 1: The following payload has an aps dictionary with a simple, recommended form for alert messages with the default alert buttons (Close and View). It uses a string as the value of alert rather than a dictionary. This payload also has a custom array property.

    {
        "aps" : { "alert" : "Message received from Bob" },
        "acme2" : [ "bang", "whiz" ]
    }

Example 2: The payload in this example uses an aps dictionary to request that the device display an alert message with a Close button on the left and a localized title for the "action" button on the right side of the alert. In this case, "PLAY" is used as a key into the Localizable.strings file for the currently selected language to get the localized equivalent of "Play". The aps dictionary also requests that the application icon be badged with 5.

    {
        "aps" : {
            "alert" : {
                "body" : "Bob wants to play poker",
                "action-loc-key" : "PLAY"
            },
            "badge" : 5
        },
        "acme1" : "bar",
        "acme2" : [ "bang", "whiz" ]
    }

Example 3: The payload in this example specifies that the device should display an alert message with both Close and View buttons. It also requests that the application icon be badged with 9 and that a bundled alert sound be played when the notification is delivered.

    {
        "aps" : {
            "alert" : "You got your emails.",
            "badge" : 9,
            "sound" : "bingbong.aiff"
        },
        "acme1" : "bar",
        "acme2" : 42
    }

Example 4: The interesting thing about the payload in this example is that it uses the loc-key and loc-args child properties of the alert dictionary to fetch a formatted localized string from the application's bundle and substitute the variable string values (loc-args) in the appropriate places. It also specifies a custom sound and includes a custom property.

    {
        "aps" : {
            "alert" : {
                "loc-key" : "GAME_PLAY_REQUEST_FORMAT",
                "loc-args" : [ "Jenna", "Frank" ]
            },
            "sound" : "chime"
        },
        "acme" : "foo"
    }

Example 5: The following example shows an empty aps dictionary; because the badge property is missing, any current badge number shown on the application icon is removed. The acme2 custom property is an array of two integers.

    {
        "aps" : { },
        "acme2" : [ 5, 8 ]
    }

Remember, for better performance, you should strip all whitespace and newline characters from the payload before including it in the notification.
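Because APNs refuses any payload over 256 bytes, a provider typically validates the serialized payload before queuing it for transmission. The following is a minimal sketch of that check, assuming the payload has already been serialized to UTF-8; the function name payloadIsValid is hypothetical, and 256 matches the limit stated above.

    #include <string.h>

    #define MAXPAYLOAD_SIZE 256  /* APNs refuses payloads larger than this */

    /* Hypothetical check applied before transmission. */
    int payloadIsValid(const char *payloadUTF8)
    {
        return strlen(payloadUTF8) <= MAXPAYLOAD_SIZE;
    }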
Provisioning and Development

Sandbox and Production Environments

To develop and deploy the provider side of a client/server application, you must get SSL certificates from the appropriate Dev Center. Each certificate is limited to a single application, identified by its bundle ID. Each certificate is also limited to one of two development environments, each with its own assigned IP address:

● Sandbox: The sandbox environment is used for initial development and testing of the provider application. It provides the same set of services as the production environment, although with a smaller number of server units. The sandbox environment also acts as a virtual device, enabling simulated end-to-end testing. You access the sandbox environment at gateway.sandbox.push.apple.com, outbound TCP port 2195.

● Production: Use the production environment when building the production version of the provider application. Applications using the production environment must meet Apple's reliability requirements. You access the production environment at gateway.push.apple.com, outbound TCP port 2195.

You must get separate certificates for the sandbox (development) environment and the production environment. The certificates are associated with an identifier of the application that is the recipient of push notifications; this identifier includes the application's bundle ID. When you create a provisioning profile for one of the environments, the requisite entitlements are automatically added to the profile, including the entitlement specific to push notifications, aps-environment. The two provisioning profiles are called Development and Distribution. The Distribution provisioning profile is a requirement for submitting your application to the App Store.

OS X Note: The entitlement for the OS X provisioning profile is com.apple.developer.aps-environment, which scopes it to the platform.

You can determine in Xcode which environment you are in by the selection of a code-signing identity. If you see an "iPhone Developer: Firstname Lastname" certificate/provisioning profile pair, you are in the sandbox environment. If you see an "iPhone Distribution: Companyname" certificate/provisioning profile pair, you are in the production environment. It is a good idea to create a Distribution release configuration in Xcode to help you further differentiate the environments.

Although an SSL certificate is not put into a provisioning profile, the aps-environment entitlement is added to the profile because of the association of the certificate and a particular application ID. As a result this entitlement is built into the application, which enables it to receive push notifications.

Provisioning Procedures

In the iOS Developer Program, each member of a development team has one of three roles: team agent, team admin, and team member. The roles differ in relation to iPhone development certificates and provisioning profiles. The team agent is the only person on the team who can create Development (Sandbox) SSL certificates and Distribution (Production) SSL certificates. The team admin and the team agent can both create both Development and Distribution provisioning profiles. Team members may only download and install certificates and provisioning profiles. The procedures in the following sections make reference to these roles.

Note: The iOS Provisioning Portal makes available to all iOS Developer Program members a user guide and a series of videos that explain all aspects of certificate creation and provisioning. The following sections focus on APNs-specific aspects of the process and summarize other aspects. To access the portal, iOS Developer Program members should go to the iOS Dev Center (http://developer.apple.com/devcenter/ios), log in, and then go to the iOS Provisioning Portal page (there's a link in the upper right).

Creating the SSL Certificate and Keys

In the provisioning portal of the iOS Dev Center, the team agent selects the application IDs for APNs. He also completes the following steps to create the SSL certificate:

1. Click App IDs in the sidebar on the left side of the window. The next page displays your valid application IDs. An application ID consists of an application's bundle ID prefixed with a ten-character code generated by Apple. The team admin must enter the bundle ID. For a certificate, it must incorporate a specific bundle ID; you cannot use a "wildcard" application ID.
2. Locate the application ID for the sandbox SSL certificate (one that is associated with the Development provisioning profile) and click Configure. You must see "Available" under the Apple Push Notification Service column to configure a certificate for this application ID.

3. In the Configure App ID page, check the Enable Push Notification Services box and click the Configure button. Clicking this button launches an APNs Assistant, which guides you through the next series of steps.

4. The first step requires that you launch the Keychain Access application and generate a Certificate Signing Request (CSR). Follow the instructions presented in the assistant. When you are finished generating a CSR, click Continue in Keychain Access to return to the APNs Assistant. When you create a CSR, Keychain Access generates a private and a public cryptographic key pair. The private key is put into your login keychain by default. The public key is included in the CSR sent to the provisioning authority. When the provisioning authority sends the certificate back to you, one of the items in that certificate is the public key.

5. In the Submit Certificate Signing Request pane, click Choose File. Navigate to the CSR file you created in the previous step and select it.

6. Click the Generate button. While displaying the Generate Your Certificate pane, the Assistant configures and generates your Client SSL Certificate. If it succeeds, it displays the message "Your APNs Certificate has been generated." Click Continue to proceed to the next step.

7. In the next pane, click the Download Now button to download the certificate file to your download location. Navigate to that location and double-click the certificate file (which has an extension of cer) to install it in your keychain. When you are finished, click Done in the APNs Assistant. Double-clicking the file launches Keychain Access. Make sure you install the certificate in your login keychain on the computer you are using for provider development. In Keychain Access, ensure that your certificate user ID matches your application's bundle ID. The APNs SSL certificate should be installed on your notification server.

When you finish these steps you are returned to the Configure App ID page of the iOS Dev Center portal. The certificate should be badged with a green circle and the label "Enabled". To create a certificate for the production environment, repeat the same procedure but choose the application ID for the production certificate.

Creating and Installing the Provisioning Profile

The team admin or team agent must next create the provisioning profile (Development or Distribution) used in the server side of remote-notification development. The provisioning profile is a collection of assets that associates developers of an application and their devices with an authorized development team and enables those devices to be used for testing. The profile contains certificates, device identifiers, the application's bundle ID, and all entitlements, including aps-environment. All team members must install the provisioning profile on the devices on which they will run and test application code.

Note: Refer to the program user guide for the details of creating a provisioning profile.
To download and install the provisioning profile, team members should complete the following steps:

1. Go to the provisioning portal in the iOS Dev Center.

2. Create a new provisioning profile that contains the App ID you registered for APNs.

3. Modify any existing profile before you download the new one. You have to modify the profile in some minor way (for example, toggle an option) for the portal to generate a new provisioning profile. If the profile isn't so "dirtied," you're given the original profile without the push entitlements.

4. From the download location, drag the profile file (which has an extension of mobileprovision) onto the Xcode or iTunes application icon. Alternatively, you can move the profile file to ~/Library/MobileDevice/Provisioning Profiles. Create the directory if it does not exist.

5. Verify that the entitlements in the provisioning-profile file are correct. To do this, open the .mobileprovision file in a text editor. The contents of the file are structured in XML. In the Entitlements dictionary, locate the aps-environment key. For a development provisioning profile, the string value of this key should be development; for a distribution provisioning profile, the string value should be production.

6. In the Xcode Organizer window, go to the Provisioning Profiles section and install the profile on your device.

When you build the project, the binary is now signed by the certificate using the private key.

Installing the SSL Certificate and Key on the Server

You should install the SSL distribution certificate and private cryptographic key you obtained earlier on the server computer on which the provider code runs and from which it connects with the sandbox or production versions of APNs. To do so, complete the following steps:

1. Open the Keychain Access utility and click the My Certificates category in the left pane.

2. Find the certificate you want to install and disclose its contents. You'll see both a certificate and a private key.

3. Select both the certificate and key, choose File > Export Items, and export them as a Personal Information Exchange (.p12) file.

4. Servers implemented in languages such as Ruby and Perl are often better able to deal with certificates in the Personal Information Exchange format. To convert the certificate to this format, complete the following steps:

a. In Keychain Access, select the certificate and choose File > Export Items. Select the Personal Information Exchange (.p12) option, select a save location, and click Save.

b. Launch the Terminal application and enter the following command after the prompt:

    openssl pkcs12 -in CertificateName.p12 -out CertificateName.pem -nodes

5. Copy the .pem certificate to the new computer and install it in the appropriate place.
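Before running provider code, you may want to confirm that the exported certificate and key actually complete a TLS handshake with the gateway. One way to do this (an optional sanity check, not a step the guide prescribes) is OpenSSL's s_client command, using the CertificateName.pem file produced in step 4, which contains both the certificate and the unencrypted key:

    openssl s_client -connect gateway.sandbox.push.apple.com:2195 -cert CertificateName.pem -key CertificateName.pem

If the handshake succeeds, the connection remains open (press Control-C to close it); a handshake failure usually points to a certificate or provisioning problem.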
Provider Communication with Apple Push Notification Service

This chapter describes the interfaces that providers use for communication with Apple Push Notification service (APNs) and discusses some of the functions that providers are expected to fulfill.

General Provider Requirements

As a provider you communicate with Apple Push Notification service over a binary interface. This interface is a high-speed, high-capacity interface for providers; it uses a streaming TCP socket design in conjunction with binary content. The binary interface is asynchronous.

The binary interface of the production environment is available through gateway.push.apple.com, port 2195; the binary interface of the sandbox (development) environment is available through gateway.sandbox.push.apple.com, port 2195. You may establish multiple, parallel connections to the same gateway or to multiple gateway instances.

For each interface you should use TLS (or SSL) to establish a secured communications channel. The SSL certificate required for these connections is provisioned through the iOS Provisioning Portal. (See "Provisioning and Development" for details.) To establish a trusted provider identity, you should present this certificate to APNs at connection time using peer-to-peer authentication.

Note: To establish a TLS session with APNs, an Entrust Secure CA root certificate must be installed on the provider's server. If the server is running OS X, this root certificate is already in the keychain. On other systems, the certificate might not be available. You can download this certificate from the Entrust SSL Certificates website.

You should also retain connections with APNs across multiple notifications. APNs may consider connections that are rapidly and repeatedly established and torn down as a denial-of-service attack. Upon error, APNs closes the connection on which the error occurred.

As a provider, you are responsible for the following aspects of push notifications:

● You must compose the notification payload (see "The Notification Payload").

● You are responsible for supplying the badge number to be displayed on the application icon.

● You should regularly connect with the feedback web server and fetch the current list of those devices that have repeatedly reported failed-delivery attempts. Then you should cease sending notifications to the devices associated with those applications. See "The Feedback Service" for more information.

If you intend to support notification messages in multiple languages, but do not use the loc-key and loc-args properties of the aps payload dictionary for client-side fetching of localized alert strings, you need to localize the text of alert messages on the server side. To do this, you need to find out the current language preference from the client application. "Scheduling, Registering, and Handling Notifications" suggests an approach for obtaining this information. See "The Notification Payload" for information about the loc-key and loc-args properties.
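The listings later in this chapter assume that a secured connection to the gateway already exists. The guide itself does not show how to establish one; the following is a rough sketch using OpenSSL, under the assumption that the certificate and private key were exported to PEM files as described in "Installing the SSL Certificate and Key on the Server". The helper name apns_connect and the file names are illustrative only, and error handling is reduced to early returns.

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <openssl/ssl.h>

    /* Hypothetical helper: open a mutually authenticated TLS connection
       to an APNs gateway, e.g. apns_connect("gateway.sandbox.push.apple.com",
       "2195", "cert.pem", "key.pem"). */
    static SSL *apns_connect(const char *host, const char *port,
                             const char *certPath, const char *keyPath)
    {
        SSL_library_init();
        SSL_load_error_strings();

        SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
        if (!ctx) return NULL;
        if (SSL_CTX_use_certificate_file(ctx, certPath, SSL_FILETYPE_PEM) != 1 ||
            SSL_CTX_use_PrivateKey_file(ctx, keyPath, SSL_FILETYPE_PEM) != 1)
            return NULL;

        /* Resolve the gateway and open a plain TCP socket. */
        struct addrinfo hints, *res = NULL;
        memset(&hints, 0, sizeof(hints));
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0) return NULL;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return NULL;
        freeaddrinfo(res);

        /* Run the TLS handshake, presenting the provider certificate. */
        SSL *ssl = SSL_new(ctx);
        SSL_set_fd(ssl, fd);
        if (SSL_connect(ssl) != 1) return NULL;
        return ssl;
    }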
The Binary Interface and Notification Formats

The binary interface employs a plain TCP socket for binary content that is streaming in nature. For optimum performance, you should batch multiple notifications in a single transmission over the interface, either explicitly or using a TCP/IP Nagle algorithm. The interface supports two formats for notification packets: a simple format, and an enhanced format that addresses some of the issues with the simple format:

● Notification expiry. APNs has a store-and-forward feature that keeps the most recent notification sent to an application on a device. If the device is offline at the time of delivery, APNs delivers the notification when the device next comes online. With the simple format, the notification is delivered regardless of the pertinence of the notification. In other words, the notification can become "stale" over time. The enhanced format includes an expiry value that indicates the period of validity for a notification. APNs discards a notification in store-and-forward when this period expires.

● Error response. With the simple format, if you send a notification packet that is malformed in some way—for example, the payload exceeds the stipulated limit—APNs responds by severing the connection. It gives no indication why it rejected the notification. The enhanced format lets a provider tag a notification with an arbitrary identifier. If there is an error, APNs returns a packet that associates an error code with the identifier. This response enables the provider to locate and correct the malformed notification.

The enhanced format is recommended for most providers.

Let's examine the simple notification format first because much of this format is shared with the enhanced format. Figure 5-1 illustrates this format.

Figure 5-1 Simple notification format:

    Command        (1 byte, value 0)
    Token length   (2 bytes, big endian, value 32)
    deviceToken    (32 bytes, binary)
    Payload length (2 bytes, big endian; 34 in the example)
    Payload        (e.g. {"aps":{"alert":"You have mail!"}})

The first byte in the simple format is a command value of 0 (zero). The lengths of the device token and the payload must be in network order (that is, big endian). In addition, you should encode the device token in binary format. The payload must not exceed 256 bytes and must not be null-terminated.

Listing 5-1 gives an example of a function that sends a push notification to APNs over the binary interface using the simple notification format. The example assumes a prior SSL connection to gateway.push.apple.com (or gateway.sandbox.push.apple.com) and peer-exchange authentication.

Listing 5-1 Sending a notification in the simple format via the binary interface

    static bool sendPayload(SSL *sslPtr, char *deviceTokenBinary,
                            char *payloadBuff, size_t payloadLength)
    {
        bool rtn = false;
        if (sslPtr && deviceTokenBinary && payloadBuff && payloadLength)
        {
            uint8_t command = 0; /* command number */
            char binaryMessageBuff[sizeof(uint8_t) + sizeof(uint16_t) +
                                   DEVICE_BINARY_SIZE + sizeof(uint16_t) +
                                   MAXPAYLOAD_SIZE];
            /* message format is, |COMMAND|TOKENLEN|TOKEN|PAYLOADLEN|PAYLOAD| */
            char *binaryMessagePt = binaryMessageBuff;
            uint16_t networkOrderTokenLength = htons(DEVICE_BINARY_SIZE);
            uint16_t networkOrderPayloadLength = htons(payloadLength);

            /* command */
            *binaryMessagePt++ = command;

            /* token length network order */
            memcpy(binaryMessagePt, &networkOrderTokenLength, sizeof(uint16_t));
            binaryMessagePt += sizeof(uint16_t);

            /* device token */
            memcpy(binaryMessagePt, deviceTokenBinary, DEVICE_BINARY_SIZE);
            binaryMessagePt += DEVICE_BINARY_SIZE;

            /* payload length network order */
            memcpy(binaryMessagePt, &networkOrderPayloadLength, sizeof(uint16_t));
            binaryMessagePt += sizeof(uint16_t);

            /* payload */
            memcpy(binaryMessagePt, payloadBuff, payloadLength);
            binaryMessagePt += payloadLength;

            if (SSL_write(sslPtr, binaryMessageBuff,
                          (binaryMessagePt - binaryMessageBuff)) > 0)
                rtn = true;
        }
        return rtn;
    }
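For illustration, a hypothetical call site for this function might look like the following; it assumes the apns_connect() sketch shown earlier, and none of these names are defined by the guide itself.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Hypothetical usage of sendPayload(); apns_connect() is from the
           earlier sketch. */
        SSL *ssl = apns_connect("gateway.sandbox.push.apple.com", "2195",
                                "cert.pem", "key.pem");
        char deviceTokenBinary[32] = {0}; /* fill with the real 32-byte token */
        const char *payload = "{\"aps\":{\"alert\":\"You have mail!\"}}";
        if (ssl && sendPayload(ssl, deviceTokenBinary,
                               (char *)payload, strlen(payload)))
            printf("Notification handed off to APNs\n");
        return 0;
    }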
Figure 5-2 depicts the enhanced format for notification packets. With this format, if APNs encounters an unintelligible command, it returns an error response before disconnecting.

Figure 5-2 Enhanced notification format:

    Command        (1 byte, value 1)
    Identifier     (4 bytes)
    Expiry         (4 bytes)
    Token length   (2 bytes, big endian, value 32)
    deviceToken    (32 bytes, binary)
    Payload length (2 bytes, big endian; 34 in the example)
    Payload        (e.g. {"aps":{"alert":"You have mail!"}})

The first byte in the enhanced notification format is a command value of 1. The two new fields in this format are for an identifier and an expiry value. (Everything else is the same as the simple notification format.)

● Identifier—An arbitrary value that identifies this notification. This same identifier is returned in an error-response packet if APNs cannot interpret a notification.

● Expiry—A fixed UNIX epoch date expressed in seconds (UTC) that identifies when the notification is no longer valid and can be discarded. The expiry value should be in network order (big endian). If the expiry value is positive, APNs tries to deliver the notification at least once. You can specify zero or a value less than zero to request that APNs not store the notification at all.

If you send a notification and APNs finds the notification malformed or otherwise unintelligible, it returns an error-response packet prior to disconnecting. (If there is no error, APNs doesn't return anything.) Figure 5-3 depicts the format of the error-response packet.

Figure 5-3 Format of error-response packet:

    Command    (1 byte, value 8)
    Status     (1 byte)
    Identifier (4 bytes)

The packet has a command value of 8 followed by a one-byte status code and the same notification identifier specified by the provider when it composed the notification. Table 5-1 lists the possible status codes and their meanings.

Table 5-1 Codes in error-response packet

    0   No errors encountered
    1   Processing error
    2   Missing device token
    3   Missing topic
    4   Missing payload
    5   Invalid token size
    6   Invalid topic size
    7   Invalid payload size
    8   Invalid token
    255 None (unknown)
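The guide does not include code for consuming this packet; the following is a minimal sketch of reading and unpacking it on the same connection. Whether the identifier needs byte swapping depends on the byte order you chose when sending it; here it is assumed to have been written in network order.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <openssl/ssl.h>

    /* Sketch: read the 6-byte error-response packet, if any. */
    static void readErrorResponse(SSL *sslPtr)
    {
        unsigned char resp[6];
        int n = SSL_read(sslPtr, resp, sizeof(resp));
        if (n == (int)sizeof(resp) && resp[0] == 8) {   /* command value 8 */
            uint8_t status = resp[1];
            uint32_t identifier;
            memcpy(&identifier, &resp[2], sizeof(identifier));
            identifier = ntohl(identifier); /* assumes it was sent in network order */
            fprintf(stderr, "APNs rejected notification %u (status %u)\n",
                    (unsigned)identifier, (unsigned)status);
        }
    }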
Listing 5-2 modifies the code in Listing 5-1 to compose a push notification in the enhanced format before sending it to APNs. As with the earlier example, it assumes a prior SSL connection to gateway.push.apple.com (or gateway.sandbox.push.apple.com) and peer-exchange authentication.

Listing 5-2 Sending a notification in the enhanced format via the binary interface

    static bool sendPayload(SSL *sslPtr, char *deviceTokenBinary,
                            char *payloadBuff, size_t payloadLength)
    {
        bool rtn = false;
        if (sslPtr && deviceTokenBinary && payloadBuff && payloadLength)
        {
            uint8_t command = 1; /* command number */
            char binaryMessageBuff[sizeof(uint8_t) + sizeof(uint32_t) +
                                   sizeof(uint32_t) + sizeof(uint16_t) +
                                   DEVICE_BINARY_SIZE + sizeof(uint16_t) +
                                   MAXPAYLOAD_SIZE];
            /* message format is, |COMMAND|ID|EXPIRY|TOKENLEN|TOKEN|PAYLOADLEN|PAYLOAD| */
            char *binaryMessagePt = binaryMessageBuff;
            uint32_t whicheverOrderIWantToGetBackInAErrorResponse_ID = 1234;
            uint32_t networkOrderExpiryEpochUTC = htonl(time(NULL) + 86400); // expire message if not delivered in 1 day
            uint16_t networkOrderTokenLength = htons(DEVICE_BINARY_SIZE);
            uint16_t networkOrderPayloadLength = htons(payloadLength);

            /* command */
            *binaryMessagePt++ = command;

            /* provider preference ordered ID */
            memcpy(binaryMessagePt, &whicheverOrderIWantToGetBackInAErrorResponse_ID,
                   sizeof(uint32_t));
            binaryMessagePt += sizeof(uint32_t);

            /* expiry date network order */
            memcpy(binaryMessagePt, &networkOrderExpiryEpochUTC, sizeof(uint32_t));
            binaryMessagePt += sizeof(uint32_t);

            /* token length network order */
            memcpy(binaryMessagePt, &networkOrderTokenLength, sizeof(uint16_t));
            binaryMessagePt += sizeof(uint16_t);

            /* device token */
            memcpy(binaryMessagePt, deviceTokenBinary, DEVICE_BINARY_SIZE);
            binaryMessagePt += DEVICE_BINARY_SIZE;

            /* payload length network order */
            memcpy(binaryMessagePt, &networkOrderPayloadLength, sizeof(uint16_t));
            binaryMessagePt += sizeof(uint16_t);

            /* payload */
            memcpy(binaryMessagePt, payloadBuff, payloadLength);
            binaryMessagePt += payloadLength;

            if (SSL_write(sslPtr, binaryMessageBuff,
                          (binaryMessagePt - binaryMessageBuff)) > 0)
                rtn = true;
        }
        return rtn;
    }

Take note that the device token in the production environment and the device token in the development (sandbox) environment are not the same value.

The Feedback Service

If a provider attempts to deliver a push notification to an application, but the application no longer exists on the device, the device reports that fact to Apple Push Notification Service. This situation often happens when the user has uninstalled the application. If a device reports failed-delivery attempts for an application, APNs needs some way to inform the provider so that it can refrain from sending notifications to that device. Doing this reduces unnecessary message overhead and improves overall system performance.

For this purpose Apple Push Notification Service includes a feedback service that APNs continually updates with a per-application list of devices for which there were failed-delivery attempts. The devices are identified by device tokens encoded in binary format. Providers should periodically query the feedback service to get the list of device tokens for their applications, each of which is identified by its topic. Then, after verifying that the application hasn't recently been re-registered on the identified devices, a provider should stop sending notifications to these devices.
Access to the feedback service takes place through a binary interface similar to that used for sending push notifications. You access the production feedback service via feedback.push.apple.com, port 2196; you access the sandbox feedback service via feedback.sandbox.push.apple.com, port 2196. As with the binary interface for push notifications, you must use TLS (or SSL) to establish a secured communications channel. The SSL certificate required for these connections is the same one that is provisioned for sending notifications. To establish a trusted provider identity, you should present this certificate to APNs at connection time using peer-to-peer authentication.

Once you are connected, transmission begins immediately; you do not need to send any command to APNs. Begin reading the stream written by the feedback service until there is no more data to read. The received data is in tuples having the following format:

Figure 5-4 Binary format of a feedback tuple:

    time_t       (4 bytes, big endian)
    Token length (2 bytes, big endian, value 32)
    deviceToken  (32 bytes, binary)

Timestamp: A timestamp (as a four-byte time_t value) indicating when APNs determined that the application no longer exists on the device. This value, which is in network order, represents the seconds since 1970, anchored to UTC. You should use the timestamp to determine whether the application on the device re-registered with your service since the moment the device token was recorded on the feedback service. If it hasn't, you should cease sending push notifications to the device.

Token length: The length of the device token as a two-byte integer value in network order.

Device token: The device token in binary format.

Note: APNs monitors providers for their diligence in checking the feedback service and refraining from sending push notifications to nonexistent applications on devices.
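As a sketch of consuming this stream, the following hypothetical function walks a buffer filled by SSL_read() and extracts one tuple per iteration; the name parseFeedbackTuples and the per-token handling comment are illustrative assumptions, not part of the service's interface.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* Sketch: unpack feedback tuples (4-byte time_t, 2-byte token length,
       then the binary token) from a buffer read off the feedback connection. */
    static void parseFeedbackTuples(const unsigned char *buf, size_t len)
    {
        size_t offset = 0;
        while (offset + 6 <= len) {
            uint32_t timestamp;
            uint16_t tokenLength;
            memcpy(&timestamp, buf + offset, 4);
            memcpy(&tokenLength, buf + offset + 4, 2);
            timestamp = ntohl(timestamp);
            tokenLength = ntohs(tokenLength);
            if (offset + 6 + tokenLength > len)
                break; /* partial tuple; read more data first */
            const unsigned char *deviceToken = buf + offset + 6;
            /* Stop sending to deviceToken unless the application
               re-registered with your service after `timestamp`. */
            (void)deviceToken;
            offset += 6 + tokenLength;
        }
    }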
Document Revision History

This table describes the changes to Local and Push Notification Programming Guide.

2011-08-09: Added information about implementing push notifications on an OS X desktop client. Unified the guide for iOS and OS X.
2010-08-03: Describes how to determine if an application is launched because the user tapped the notification alert's action button.
2010-07-08: Changed occurrences of "iPhone OS" to "iOS."
2010-05-27: Updated and reorganized to describe local notifications, a feature introduced in iOS 4.0. Also describes a new format for push notifications sent to APNs.
2010-01-28: Made many small corrections.
2009-08-14: Made minor corrections and linked to short inline articles on Cocoa concepts.
2009-05-22: Added notes about Wi-Fi and frequency of registration, and gateway address for sandbox. Updated with various clarifications and enhancements.
2009-03-15: First version of a document that explains how providers can send push notifications to client applications using Apple Push Notification Service.
Mac App Programming Guide

Contents

About OS X App Design
At a Glance
Cocoa Helps You Create Great Apps for OS X
Common Behaviors Make Apps Complete
Get It Right: Meet System and App Store Requirements
Finish Your App with Performance Tuning
How to Use This Document
See Also
The Mac Application Environment
An Environment Designed for Ease of Use
A Sophisticated Graphics Environment
Low-Level Details of the Runtime Environment
Based on UNIX
Concurrency and Threading
The File System
Security
The Core App Design
Fundamental Design Patterns
The App Style Determines the Core Architecture
The Core Objects for All Cocoa Apps
Additional Core Objects for Multiwindow Apps
Integrating iCloud Support Into Your App
Shoebox-Style Apps Should Not Use NSDocument
Document-Based Apps Are Based on an NSDocument Subclass
Documents in OS X
The Document Architecture Provides Many Capabilities for Free
The App Life Cycle
The main Function is the App Entry Point
The App's Main Event Loop Drives Interactions
Automatic and Sudden Termination of Apps Improve the User Experience
Support the Key Runtime Behaviors in Your Apps
Automatic Termination
Sudden Termination
User Interface Preservation
Apps Are Built Using Many Different Pieces
The User Interface
Event Handling
Graphics, Drawing, and Printing
Text Handling
Implementing the Application Menu Bar
Xcode Templates Provide the Menu Bar
Connect Menu Items to Your Code or Your First Responder
Implementing the Full-Screen Experience
Full-Screen API in NSApplication
Full-Screen API in NSWindow
Full-Screen API in NSWindowDelegate Protocol
Supporting Common App Behaviors
You Can Prevent the Automatic Relaunch of Your App
Making Your App Accessible Enables Many Users
Provide User Preferences for Customization
Integrate Your App With Spotlight Search
Use Services to Increase Your App's Usefulness
Optimize for High Resolution
Think About Points, Not Pixels
Provide High-Resolution Versions of Graphics
Use High-Resolution-Savvy Image-Loading Methods
Use APIs That Support High Resolution
Prepare for Fast User Switching
Take Advantage of the Dock
Build-Time Configuration Details
Configuring Your Xcode Project
The Information Property List File
The OS X Application Bundle
Internationalizing Your App
Tuning for Performance and Responsiveness
Speed Up Your App's Launch Time
Delay Initialization Code
Simplify Your Main Nib File
Minimize Global Variables
Minimize File Access at Launch Time
Don't Block the Main Thread
Decrease Your App's Code Size
Compiler-Level Optimizations
Use Core Data for Large Data Sets
Eliminate Memory Leaks
Dead Strip Your Code
Strip Symbol Information
Document Revision History
Figures, Tables, and Listings

The Mac Application Environment
Table 1-1 Key directories for Mac apps
Table 1-2 Attributes for the OS X file system
Listing 1-1 Getting the path to the Application Support directory
The Core App Design
Figure 2-1 The Calculator single-window utility app
Figure 2-2 The iPhoto single-window app
Figure 2-3 TextEdit document window
Figure 2-4 Key objects in a single-window app
Figure 2-5 Key objects in a multiwindow document app
Figure 2-6 Document file, object, and data model
Figure 2-7 The main event loop
Figure 2-8 Responder objects targeted by Cocoa for preservation
Figure 2-9 Windows and menus in an app
Figure 2-10 Processing events in the main run loop
Table 2-1 Fundamental design patterns used by Mac apps
Table 2-2 The core objects used by all Cocoa apps
Table 2-3 Additional objects used by multiwindow document apps
Listing 2-1 The main function of a Mac app
Listing 2-2 Returning the main window for a single-window app
Implementing the Full-Screen Experience
Table 3-1 Window delegate methods supporting full-screen mode
Supporting Common App Behaviors
Figure 4-1 Universal Access system preference dialog
Figure 4-2 Spotlight extracting metadata
Figure 4-3 Content appears the same size at standard resolution and high resolution
Build-Time Configuration Details
Figure 5-1 The information property list editor
Figure 5-2 The Language preference view
Table 5-1 A typical application bundle
Tuning for Performance and Responsiveness
Table 6-1 Compiler optimization options

About OS X App Design

This document is the starting point for learning how to create Mac apps. It contains fundamental information about the OS X environment and how your apps interact with that environment. It also contains important information about the architecture of Mac apps and tips for designing key parts of your app.

At a Glance

Cocoa is the application environment that unlocks the full power of OS X. Cocoa provides APIs, libraries, and runtimes that help you create fast, exciting apps that automatically inherit the beautiful look and feel of OS X, as well as standard behaviors users expect.

Cocoa Helps You Create Great Apps for OS X

You write apps for OS X using Cocoa, which provides a significant amount of infrastructure for your program. Fundamental design patterns are used throughout Cocoa to enable your app to interface seamlessly with subsystem frameworks, and core application objects provide key behaviors to support simplicity and extensibility in app architecture. Key parts of the Cocoa environment are designed particularly to support ease of use, one of the most important aspects of successful Mac apps. Many apps should adopt iCloud to provide a more coherent user experience by eliminating the need to synchronize data explicitly between devices.

Relevant Chapters: "The Mac Application Environment", "The Core App Design", and "Integrating iCloud Support Into Your App"

Common Behaviors Make Apps Complete

During the design phase of creating your app, you need to think about how to implement certain features that users expect in well-formed Mac apps.
Integrating these features into your app architecture can have an impact on the user experience: accessibility, preferences, Spotlight, services, resolution independence, fast user switching, and the Dock. Enabling your app to assume full-screen mode, taking over the entire screen, provides users with a more immersive, cinematic experience and enables them to concentrate fully on their content without distractions.

Relevant Chapters: "Supporting Common App Behaviors" and "Implementing the Full-Screen Experience"

Get It Right: Meet System and App Store Requirements

Configuring your app properly is an important part of the development process. Mac apps use a structured directory called a bundle to manage their code and resource files. And although most of the files are custom and exist to support your app, some are required by the system or the App Store and must be configured properly. The application bundle also contains the resources you need to provide to internationalize your app to support multiple languages.

Relevant Chapter: "Build-Time Configuration Details"

Finish Your App with Performance Tuning

As you develop your app and your project code stabilizes, you can begin performance tuning. Of course, you want your app to launch and respond to the user's commands as quickly as possible. A responsive app fits easily into the user's workflow and gives an impression of being well crafted. You can improve the performance of your app by speeding up launch time and decreasing your app's code footprint.

Relevant Chapter: "Tuning for Performance and Responsiveness"

How to Use This Document

This guide introduces you to the most important technologies that go into writing an app. In this guide you will see the whole landscape of what's needed to write one. That is, this guide shows you all the "pieces" you need and how they fit together. There are important aspects of app design that this guide does not cover, such as user interface design. However, this guide includes many links to other documents that provide details about the technologies it introduces, as well as links to tutorials that provide a hands-on approach. In addition, this guide emphasizes certain technologies introduced in OS X v10.7, which provide essential capabilities that set your app apart from older ones and give it remarkable ease of use, bringing some of the best features from iOS to OS X.

See Also

The following documents provide additional information about designing Mac apps, as well as more details about topics covered in this document:

● To work through a tutorial showing you how to create a Cocoa app, see Start Developing Mac Apps Today.

● For information about user interface design enabling you to create effective apps using OS X, see OS X Human Interface Guidelines.

● To understand how to create an explicit app ID, create provisioning profiles, and enable the correct entitlements for your application, so you can sell your application through the Mac App Store or use iCloud storage, see Tools Workflow Guide for Mac.

● For information about the design patterns used in Cocoa, see Cocoa Fundamentals Guide.

● For a general survey of OS X technologies, see Mac Technology Overview.

● To understand how to implement a document-based app, see Document-Based App Programming Guide for Mac.
The Mac Application Environment

OS X incorporates the latest technologies for creating powerful and fun-to-use apps. But the technologies by themselves are not enough to make every app great. What sets an app apart from its peers is how it helps the user achieve some tangible goal. After all, users are not going to care what technologies an app uses, as long as it helps them do what they need to do. An app that gets in the user's way is going to be forgotten, but one that makes work (or play) easier and more fun is going to be remembered.

You use Cocoa to write apps for OS X. Cocoa gives you access to all of the features of OS X and allows you to integrate your app cleanly with the rest of the system. This chapter covers the key parts of OS X that help you create great apps. In particular, this chapter describes some of the important ease-of-use technologies introduced in OS X v10.7. For a more thorough list of technologies available in OS X, see Mac Technology Overview.

An Environment Designed for Ease of Use

OS X strives to provide an environment that is transparent to users and as easy to use as possible. By making hard tasks simple and getting out of the way, the system makes it easier for the user to be creative and spend less time worrying about the steps needed to make the computer work. Of course, simplifying tasks means your app has to do more of the work, but OS X provides help in that respect too. As you design your app, you should think about the tasks that users normally perform and find ways to make them easier. OS X supports powerful ease-of-use features and design principles. For example:

● Users should not have to save their work manually. The document model in Cocoa provides support for saving the user's file-based documents without user interaction; see "The Document Architecture Provides Many Capabilities for Free".

● Apps should restore the user's work environment at login time. Cocoa provides support for archiving the current state of the app's interface (including the state of unsaved documents) and restoring that state at launch time; see "User Interface Preservation".

● Apps should support automatic termination so that the user never has to quit them. Automatic termination means that when the user closes an app's windows, the app appears to quit but actually just moves to the background quietly. The advantage is that subsequent launches are nearly instant as the app simply moves back to the foreground; see "Automatic and Sudden Termination of Apps Improve the User Experience".

● You should consider providing your users with an immersive, full-screen experience by implementing a full-screen version of your user interface. The full-screen experience eliminates outside distractions and allows the user to focus on their content; see "Implementing the Full-Screen Experience".

● Support trackpad gestures for appropriate actions in your app. Gestures provide simple shortcuts for common tasks and can be used to supplement existing controls and menu commands. OS X provides automatic support for reporting gestures to your app through the normal event-handling mechanism; see Cocoa Event Handling Guide.

● Consider minimizing or eliminating the user's interactions with the raw file system.
● For apps that support custom document types, provide a Quick Look plug-in so that users can view your documents from outside of your app; see Quick Look Programming Guide.
● Apps should support the fundamental features of the OS X user experience that make apps elegant and intuitive, such as direct manipulation and drag-and-drop. Users should remain in control, receive consistent feedback, and be able to explore because the app is forgiving with reversible actions; see OS X Human Interface Guidelines.

All of the preceding features are supported by Cocoa and can be incorporated with relatively little effort.

A Sophisticated Graphics Environment

High-quality graphics and animation make your app look great and can convey a lot of information to the user. Animations in particular are a great way to provide feedback about changes to your user interface. So as you design your app, keep the following ideas in mind:

● Use animations to provide feedback and convey changes. Cocoa provides mechanisms for creating sophisticated animations quickly in both the AppKit and Core Animation frameworks (a simple AppKit animation is sketched after this section). For information about creating view-based animations, see Cocoa Drawing Guide. For information about using Core Animation to create your animations, see Core Animation Programming Guide.
● Include high-resolution versions of your art and graphics. OS X automatically loads high-resolution image resources when an app runs on a screen whose scaling factor is greater than 1.0. Including such image resources makes your app's graphics look even sharper and crisper on those higher-resolution screens.

For information about the graphics technologies available in OS X, see "Media Layer" in Mac Technology Overview.
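As a brief, hedged illustration of the first point, the following sketch (which would live in one of your controller classes) animates a view to a new frame using the animator proxy that AppKit attaches to every view. The method name, the view parameter, and the frame values are invented for the example:

    #import <Cocoa/Cocoa.h>

    // Animate a view to a new position and size using AppKit's animator proxy.
    // "aView" stands in for any NSView in your window.
    - (void)moveViewWithFeedback:(NSView *)aView {
        NSRect newFrame = NSMakeRect(20.0, 20.0, 200.0, 100.0); // example geometry
        [NSAnimationContext runAnimationGroup:^(NSAnimationContext *context) {
            context.duration = 0.25;               // a quarter-second slide
            [[aView animator] setFrame:newFrame];  // implicit animation via the proxy
        } completionHandler:^{
            // The view has reached its final frame; update dependent state here.
        }];
    }

Because the animator proxy animates any animatable property you set through it, the same pattern applies to properties such as alphaValue as well.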
Low-Level Details of the Runtime Environment

When you are ready to begin writing actual code, there are many technologies available to make your life easier. OS X supports all of the basic features, such as memory management, file management, networking, and concurrency, that you need to write your code. In some cases, though, OS X also provides more sophisticated services (or specific coding conventions) that, when followed, can make writing your code even easier.

Based on UNIX

OS X is powered by a 64-bit Mach kernel, which manages processor resources, memory, and other low-level behaviors. On top of the kernel sits a modified version of the Berkeley Software Distribution (BSD) operating system, which provides interfaces that apps can use to interact with the lower-level system. This combination of Mach and BSD provides the following system-level support for your apps:

● Preemptive multitasking. All processes share the CPU efficiently. The kernel schedules processes in a way that ensures they all receive the time they need to run. Even background apps continue to receive CPU time to execute ongoing tasks.
● Protected memory. Each process runs in its own protected memory space, which prevents processes from accidentally interfering with each other. (Apps can share part of their memory space to implement fast interprocess communication but take responsibility for synchronizing and locking that memory appropriately.)
● Virtual memory. 64-bit apps have a virtual address space of approximately 18 exabytes (18 billion billion bytes). (If you create a 32-bit app, the amount of virtual memory is only 4 GB.) When an app's memory usage exceeds the amount of free physical memory, the system transparently writes pages to disk to make more room. Written-out pages remain on disk until they are needed in memory again or the app exits.
● Networking and Bonjour. OS X provides support for the standard networking protocols and services in use today. BSD sockets provide the low-level communication mechanism for apps, but higher-level interfaces also exist. Bonjour simplifies the user networking experience by providing a dynamic way to advertise and connect to network services over TCP/IP.

For detailed information about the underlying environment of OS X, see "Kernel and Device Drivers Layer" in Mac Technology Overview.

Concurrency and Threading

Each process starts off with a single thread of execution and can create more threads as needed. Although you can create threads directly using POSIX and other higher-level interfaces, for most types of work it is better to create them indirectly using block objects with Grand Central Dispatch (GCD) or using operation objects, a Cocoa concurrency technology implemented by the NSOperation class.

GCD and operation objects are an alternative to raw threads that simplifies or eliminates many of the problems normally associated with threaded programming, such as synchronization and locking. Specifically, they define an asynchronous programming model in which you specify only the work to be performed and the order in which you want it performed. The system then handles the tedious work required to schedule the necessary threads and execute your tasks as efficiently as possible on the current hardware. You should not use GCD or operations for work requiring time-sensitive data processing (such as audio or video playback), but you can use them for most other types of tasks; a minimal GCD sketch follows.

For more information on using GCD and operation objects to implement concurrency in your apps, see Concurrency Programming Guide.
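For illustration only, here is one common GCD pattern inside a controller class: perform work on a background queue and then hop back to the main queue to update the user interface. The loadData and updateViewsWithData: methods are hypothetical stand-ins for your own code:

    #import <Foundation/Foundation.h>

    // Perform slow work off the main thread, then deliver the result back
    // to the main queue, where user interface objects may safely be updated.
    - (void)reloadInBackground {
        dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_async(queue, ^{
            NSData *data = [self loadData];   // hypothetical long-running work
            dispatch_async(dispatch_get_main_queue(), ^{
                [self updateViewsWithData:data];   // hypothetical UI update
            });
        });
    }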
The File System

The file system in OS X is structured to provide a better experience for users. Rather than exposing the entire file system to the user, the Finder hides any files and directories that an average user should not need to use, such as the contents of low-level UNIX directories. This is done to provide a simpler interface for the end user (and only in places like the Finder and the open and save panels). Apps can still access any files and directories for which they have valid permissions, regardless of whether they are hidden by the Finder. When creating apps, you should understand and follow the conventions associated with the OS X file system. Knowing where to put files and how to get information out of the file system ensures a better user experience.

A Few Important App Directories

The OS X file system is organized in a way that groups related files and data together in specific places. Every file in the file system has its place, and apps need to know where to put the files they create. This is especially important if you are distributing your app through the App Store, which expects you to put your app's data files in specific directories.

Table 1-1 lists the directories with which apps commonly interact. Some of these directories are inside the home directory, which is either the user's home directory or, if the app adopts App Sandbox, the app's container directory as described in "App Sandbox and XPC" (page 18). Because the actual paths can differ based on these conditions, use the URLsForDirectory:inDomains: method of the NSFileManager class to retrieve the actual directory path. You can then add any custom directory and filename information to the returned URL object to complete the path.

Table 1-1 Key directories for Mac apps

● Applications directory: This is the installation directory for your app bundle. The path for the global Applications directory is /Applications, but each user directory may have a local applications directory containing user-specific apps. Regardless, you should not need to use this path directly. To access resources inside your application bundle, use an NSBundle object instead. For more information about the structure of your application bundle and how you locate resources, see "The OS X Application Bundle" (page 66).
● Home directory: The configuration of your app determines the location of the home directory seen by your app. For apps running in a sandbox in OS X v10.7 and later, the home directory is the app's container directory; for more information about the container directory, see "App Sandbox and XPC" (page 18). For apps running outside of a sandbox (including those running in versions of OS X before 10.7), the home directory is the user-specific subdirectory of /Users that contains the user's files. To retrieve the path to the home directory, use the NSHomeDirectory function.
● Library directory: The Library directory is the top-level directory for storing private app-related data and preferences. There are several Library directories scattered throughout the system, but you should always use the one located inside the current home directory. Do not store files directly at the top level of the Library directory; instead, store them in one of the specific subdirectories described in this table. In OS X v10.7 and later, the Finder hides the Library directory in the user's home folder by default, so you should never store files in this directory that you want the user to access. To get the path to this directory, use the NSLibraryDirectory search path key with the NSUserDomainMask domain.
● Application Support directory: The Application Support directory is where your app stores any type of file that supports the app but is not required for the app to run, such as document templates or configuration files. The files should be app-specific and should never contain user data. This directory is located inside the Library directory. Never store files at the top level of this directory; always put them in a subdirectory named for your app or company. If the resources apply to all users on the system, such as document templates, place them in /Library/Application Support; to get the path to this directory, use the NSApplicationSupportDirectory search path key with the NSLocalDomainMask domain. If the resources are user-specific, such as workspace configuration files, place them in the current user's ~/Library/Application Support directory; to get the path to this directory, use the NSApplicationSupportDirectory search path key with the NSUserDomainMask domain.
● Caches directory: The Caches directory is where you store cache files and other temporary data that your app can re-create as needed. This directory is located inside the Library directory. Never store files at the top level of this directory; always put them in a subdirectory named for your app or company. Your app is responsible for cleaning out cache data files when they are no longer needed; the system does not delete files from this directory. To get the path to this directory, use the NSCachesDirectory search path key with the NSUserDomainMask domain.
● Movies directory: The Movies directory contains the user's video files. To get the path to this directory, use the NSMoviesDirectory search path key with the NSUserDomainMask domain.
● Music directory: The Music directory contains the user's music and audio files. To get the path to this directory, use the NSMusicDirectory search path key with the NSUserDomainMask domain.
● Pictures directory: The Pictures directory contains the user's images and photos. To get the path to this directory, use the NSPicturesDirectory search path key with the NSUserDomainMask domain.
● Temporary directory: The Temporary directory is where you store files that do not need to persist between launches of your app. You normally use this directory for scratch files or other types of short-lived data files that are not related to your app's persistent data. This directory is typically hidden from the user. Your app should remove files from this directory as soon as it is done with them. The system may also purge lingering files from this directory at system startup. To get the path to this directory, use the NSTemporaryDirectory function.

Listing 1-1 shows an example of how to retrieve the base path to the Application Support directory and then append a custom app directory to it.

Listing 1-1 Getting the path to the Application Support directory

    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSURL *appSupportDir = nil;
    NSArray *urls = [fileManager URLsForDirectory:NSApplicationSupportDirectory
                                        inDomains:NSUserDomainMask];
    if ([urls count] > 0) {
        appSupportDir = [[urls objectAtIndex:0]
                            URLByAppendingPathComponent:@"com.example.MyApp"];
    }

A sketch showing one way to create this directory on first use follows. For more information about how to access files in well-known system directories, see File System Programming Guide.
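The URL produced by Listing 1-1 is not guaranteed to exist on disk yet. Continuing with the fileManager and appSupportDir variables from the listing (and the same placeholder subdirectory name), a hedged sketch of creating the directory might look like this:

    // Create the app-specific Application Support subdirectory if it is missing.
    NSError *error = nil;
    BOOL ok = [fileManager createDirectoryAtURL:appSupportDir
                    withIntermediateDirectories:YES
                                     attributes:nil
                                          error:&error];
    if (!ok) {
        NSLog(@"Could not create Application Support directory: %@", error);
    }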
Coordinating File Access with Other Processes

In OS X, other processes may have access to the same files that your app does. Therefore, when working with files, you should use the file coordination interfaces introduced in OS X v10.7 to be notified when other processes (including the Finder) attempt to read or modify files your app is currently using. For example, coordinating file access is critical when your app adopts iCloud storage.

The file coordination APIs allow you to assert ownership over files and directories that your app cares about. Any time another process attempts to touch one of those items, your app is given a chance to respond. For example, when an app attempts to read the contents of a document your app is editing, you can write unsaved changes to disk before the other process is allowed to do its reading.

When you use iCloud document storage, for example, you must incorporate file coordination because multiple apps can access your document files in iCloud. The simplest way to incorporate file coordination into your app is to use the NSDocument class, which handles all of the file-related management for you; see Document-Based App Programming Guide for Mac. On the other hand, if you are writing a library-style (or "shoebox") app, you must use the file coordination interfaces directly, as described in File System Programming Guide; a minimal coordinated-read sketch follows.
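For a rough idea of what direct adoption looks like, this sketch performs a coordinated read of a file URL from a controller class. The method name and the use of NSData are placeholders; a real app would typically also adopt the NSFilePresenter protocol so it is told when other processes touch its files:

    #import <Foundation/Foundation.h>

    // Read a file under the protection of file coordination so that other
    // processes (including the Finder) see a consistent view of it.
    - (NSData *)coordinatedReadOfItemAtURL:(NSURL *)fileURL {
        NSFileCoordinator *coordinator =
            [[NSFileCoordinator alloc] initWithFilePresenter:nil];
        NSError *coordinationError = nil;
        __block NSData *contents = nil;
        [coordinator coordinateReadingItemAtURL:fileURL
                                        options:0
                                          error:&coordinationError
                                     byAccessor:^(NSURL *readURL) {
            // This block runs once it is safe to read; pending writers finish first.
            contents = [NSData dataWithContentsOfURL:readURL];
        }];
        if (!contents) {
            NSLog(@"Coordinated read failed: %@", coordinationError);
        }
        return contents;
    }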
Interacting with the File System

Disks in Macintosh computers are formatted using the HFS+ file system by default. However, Macintosh computers can interact with disks that use other formats, so you should never code specifically to any one file system. Table 1-2 lists some of the basic file system attributes you may need to consider in your app and how you should handle them.

Table 1-2 Attributes for the OS X file system

● Case sensitivity: The HFS+ file system is case-insensitive but also case-preserving. Therefore, when specifying filenames and directories in your code, it is best to assume case-sensitivity.
● Path construction: Construct paths using the methods of the NSURL and NSString classes. The NSURL class is preferred for path construction because of its ability to specify not only paths in the local file system but also paths to network resources.
● File attributes: Many file-related attributes can be retrieved using the getResourceValue:forKey:error: method of the NSURL class. You can also use an NSFileManager object to retrieve many file-related attributes.
● File permissions: File permissions are managed using access control lists (ACLs) and BSD permissions. The system uses ACLs whenever possible to specify precise permissions for files and directories, but it falls back to using BSD permissions when no ACLs are specified. By default, any files your app creates are owned by the current user and given appropriate permissions. Thus, your app should always be able to read and write files it creates explicitly. In addition, the app's sandbox may allow it to access other files in specific situations. For more information about the sandbox, see "App Sandbox and XPC" (page 18).
● Tracking file changes: Apps that cannot use the file coordination interfaces (see "Coordinating File Access with Other Processes" (page 16)) to track changes to files and directories can use the FSEvents API instead. This API provides a lower-level interface for tracking file system interactions and is available in OS X v10.5 and later. For information on how to use the FSEvents API, see File System Events Programming Guide.

Security

The security technologies in OS X help you safeguard sensitive data created or managed by your app, and help minimize damage caused by successful attacks from hostile code. These technologies affect how your app interacts with system resources and the file system.

App Sandbox and XPC

You secure your app against attack from malware by following the practices recommended in Secure Coding Guide. But an attacker needs to find only a single hole in your defenses, or in any of the frameworks and libraries that you link against, to gain control of your app along with all of its privileges. Starting in OS X v10.7, App Sandbox provides a last line of defense against stolen, corrupted, or deleted user data if malicious code exploits your app. App Sandbox also minimizes the damage from coding errors. Its strategy is twofold:

1. App Sandbox enables you to describe how your app interacts with the system. The system then grants your app the access it needs to get its job done, and no more. For your app to provide the highest level of damage containment, the best practice is to adopt the tightest sandbox possible.
2. App Sandbox allows the user to transparently grant your app additional access by way of Open and Save dialogs, drag and drop, and other familiar user interactions.

You describe your app's interaction with the system by setting entitlements in Xcode. An entitlement is a key-value pair, defined in a property list file, that confers a specific capability or security permission on a target. For example, there are entitlement keys to indicate that your app needs access to the camera, the network, and user data such as the Address Book. For details on all the entitlements available in OS X, see Entitlement Key Reference; an illustrative entitlements file appears below.
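As an illustrative sketch only (the exact set of keys depends on your app; consult Entitlement Key Reference for the authoritative list), an entitlements property list that enables App Sandbox and read/write access to user-selected files might look like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
        "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <!-- Opt the app in to App Sandbox. -->
        <key>com.apple.security.app-sandbox</key>
        <true/>
        <!-- Allow read/write access to files the user selects in Open/Save panels. -->
        <key>com.apple.security.files.user-selected.read-write</key>
        <true/>
    </dict>
    </plist>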
When you adopt App Sandbox, the system provides a special directory, called a container, for use by your app and only by your app. Your app has unfettered read/write access to the container. All OS X path-finding APIs, above the POSIX layer, are relative to the container instead of to the user's home directory. Other sandboxed apps have no access to your app's container, as described further in "Code Signing" (page 19).

iOS Note: Because it is not for user documents, an OS X container differs from an iOS container, which, in iOS, is the one and only location for user documents. As the sole local location for user documents, an iOS container is usually known simply as an app's Documents directory. In addition, an iOS container contains the app itself. This is not so in OS X.

iCloud Note: Apple's iCloud technology, as described in "iCloud Storage", uses the name "container" as well. There is no functional connection between an iCloud container and an App Sandbox container.

Your sandboxed app can access paths outside of its container in the following three ways:

● At the specific direction of the user
● By configuring your app with entitlements for specific file-system locations, such as the Movies folder
● When a path is in certain directories that are world readable

The OS X security technology that interacts with the user to expand your sandbox is called Powerbox. Powerbox has no API. Your app uses Powerbox transparently when, for example, you use the NSOpenPanel and NSSavePanel classes, or when the user employs drag and drop with your app.

Some app operations are more likely to be targets of malicious exploitation. Examples are the parsing of data received over a network and the decoding of video frames. By using XPC, you can improve the effectiveness of the damage containment offered by App Sandbox by separating such potentially dangerous activities into their own address spaces. XPC is an OS X interprocess communication technology that complements App Sandbox by enabling privilege separation. Privilege separation, in turn, is a development strategy in which you divide an app into pieces according to the system resource access that each piece needs. The component pieces that you create are called XPC services. For details on adopting XPC, see Daemons and Services Programming Guide. For a complete explanation of App Sandbox and how to use it, read App Sandbox Design Guide.

Code Signing

OS X employs the security technology known as code signing to allow you to certify that your app was indeed created by you. After an app is code signed, the system can detect any change to the app, whether the change is introduced accidentally or by malicious code. Various security technologies, including App Sandbox and parental controls, depend on code signing.

In most cases, you can rely on Xcode's automatic code signing, which requires only that you specify a code signing identity in the build settings for your project. The steps to take are described in "Code Signing Your App" in Tools Workflow Guide for Mac. If you need to incorporate code signing into an automated build system, or if you link your app against third-party frameworks, refer to the procedures described in Code Signing Guide.

When you adopt App Sandbox, you must code sign your app. This is because entitlements (including the special entitlement that enables App Sandbox) are built into an app's code signature.

OS X enforces a tie between an app's container and the app's code signature. This important security feature ensures that no other sandboxed app can access your container. The mechanism works as follows: after the system creates a container for an app, each time an app with the same bundle ID launches, the system checks that the app's code signature matches the code signature expected by the container. If the system detects a mismatch, it prevents the app from launching. For a complete explanation of code signing in the context of App Sandbox, read "App Sandbox in Depth" in App Sandbox Design Guide.

The Keychain

A keychain is a secure, encrypted container for storing a user's passwords and other secrets. It is designed to help a user manage their multiple logins, each with its own ID and password. You should always use the keychain to store sensitive credentials for your app; a brief sketch follows. For more on the keychain, see "Keychain Services Concepts" in Keychain Services Programming Guide.
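To give a flavor of the Keychain Services API (a hedged sketch only; the service and account strings are invented for the example), storing a generic password item might look like this:

    #import <Foundation/Foundation.h>
    #import <Security/Security.h>

    // Store a password as a generic keychain item. The service and account
    // names here are illustrative placeholders.
    NSDictionary *item = @{
        (__bridge id)kSecClass:       (__bridge id)kSecClassGenericPassword,
        (__bridge id)kSecAttrService: @"com.example.MyApp",
        (__bridge id)kSecAttrAccount: @"jappleseed",
        (__bridge id)kSecValueData:   [@"secret" dataUsingEncoding:NSUTF8StringEncoding]
    };
    OSStatus status = SecItemAdd((__bridge CFDictionaryRef)item, NULL);
    if (status != errSecSuccess) {
        NSLog(@"Keychain write failed: %d", (int)status);
    }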
The Core App Design

To unleash the power of OS X, you develop apps using the Cocoa application environment. Cocoa presents the app's user interface and integrates it tightly with the other components of the operating system. Cocoa provides an integrated suite of object-oriented software components packaged in two core class libraries, the AppKit and Foundation frameworks, and a number of underlying frameworks providing supporting technologies. Cocoa classes are reusable and extensible: you can use them as is or extend them for your particular requirements.

Cocoa makes it easy to create apps that adopt all of the conventions and expose all of the power of OS X. In fact, you can create a new Cocoa application project in Xcode and, without adding any code, have a functional app. Such an app is able to display its window (or create new documents) and implements many standard system behaviors. And although the Xcode templates provide some code to make this all happen, the amount of code they provide is minimal. Most of the behavior is provided by Cocoa itself.

To make a great app, you should build on the foundations Cocoa lays down for you, working with the conventions and infrastructure provided for you. To do so effectively, it is important to understand how a Cocoa app fits together.

Fundamental Design Patterns

Cocoa incorporates many design patterns in its implementation. Table 2-1 lists the key design patterns with which you should be familiar.

Table 2-1 Fundamental design patterns used by Mac apps

● Model-View-Controller: Use of the Model-View-Controller (MVC) design pattern ensures that the objects you create now can be reused or updated easily in future versions of your app. Cocoa provides most of the classes used to build your app's controller and view layers. It is your job to customize the classes you need and provide the necessary data model objects to go with them. MVC is central to a good design for a Cocoa application because many Cocoa technologies and architectures are based on MVC and require that your custom objects assume one of the MVC roles.
● Delegation: The delegation design pattern allows you to change the runtime behavior of an object without subclassing. Delegate objects conform to a specific protocol that defines the interaction points between the delegate and the object it modifies. At specific points, the master object calls the methods of its delegate to provide it with information or ask what to do. The delegate can then take whatever actions are appropriate.
● Responder chain: The responder chain defines the relationships between event-handling objects in your app. As events arrive, the app dispatches them to the first responder object for handling. If that object does not want the event, it passes it to the next responder, which can either handle the event or send it to its next responder, and so on up the chain. Windows and views are the most common types of responder objects and are always the first responders for mouse events. Other types of objects, such as your app's controller objects, may also be responders.
● Target-action: Controls use the target-action design pattern to notify your app of user interactions. When the user interacts with a control in a predefined way (such as by clicking a button), the control sends a message (the action) to an object you specify (the target). Upon receiving the action message, the target object can then respond in an appropriate manner. A short sketch of this pattern appears after the table.
● Block objects: Block objects are a convenient way to encapsulate code and local stack variables in a form that can be executed later. Blocks are used in lieu of callback functions by many frameworks and are also used in conjunction with Grand Central Dispatch to perform tasks asynchronously. For more information about using blocks, see Blocks Programming Topics.
● Notifications: Notifications are used throughout Cocoa to deliver news of changes to your app. Many objects send notifications at key moments in the object's life cycle. Intercepting these notifications gives you a chance to respond and add custom behavior.
● Key-value observing (KVO): KVO tracks changes to a specific property of an object. When that property changes, the change generates automatic notifications for any objects that registered an interest in that property. Those observers then have a chance to respond to the change.
● Bindings: Cocoa bindings provide a convenient bridge between the model, view, and controller portions of your app. You bind a view to some underlying data object (which can be static or dynamic) through one of your controllers. Changes to the view are then automatically reflected in the data object, and vice versa. The use of bindings is not required for apps but does minimize the amount of code you have to write. You can set up bindings programmatically or using Interface Builder.

For a more detailed discussion of Cocoa and the design patterns you use to implement Mac apps, see Cocoa Fundamentals Guide.
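As a small illustration of the target-action pattern (the button parameter and the save: method are hypothetical names for this sketch), wiring a control to a controller in code looks like this:

    #import <Cocoa/Cocoa.h>

    // Wire a button to send the save: action to this controller when clicked.
    - (void)configureSaveButton:(NSButton *)saveButton {
        [saveButton setTarget:self];
        [saveButton setAction:@selector(save:)];
    }

    // The action method invoked by the control; sender is the button itself.
    - (IBAction)save:(id)sender {
        NSLog(@"Save requested by %@", sender);
    }

The same wiring is more commonly done in Interface Builder by connecting the control to an action method.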
The App Style Determines the Core Architecture

The style of your app defines which core objects you must use in its implementation. Cocoa supports the creation of both single-window and multiwindow apps. For multiwindow designs, it also provides a document architecture to help manage the files associated with each app window. Thus, apps can have the following forms:

● Single-window utility app
● Single-window library-style app
● Multiwindow document-based app

You should choose a basic app style early in your design process because that choice affects everything you do later. The single-window styles are preferred in many cases, especially for developers bringing apps from iOS. The single-window style typically yields a more streamlined user experience, and it also makes it easier for your app to support a full-screen mode. However, if your app works extensively with complex documents, the multiwindow style may be preferable because it provides more document-related infrastructure to help you implement your app.

The Calculator app provided with OS X, shown in Figure 2-1, is an example of a single-window utility app. Utility apps typically handle ephemeral data or manage system processes. Calculator does not create or deal with any documents or persistent user data but simply processes numerical data entered by the user into the text field in its single window, displaying the results of its calculations in the same field. When the user quits the app, the data it processed is simply discarded.

Figure 2-1 The Calculator single-window utility app

Single-window, library-style (or "shoebox") apps do handle persistent user data. One of the most prominent examples of a library-style app is iPhoto, shown in Figure 2-2. The user data handled by iPhoto are photos (and associated metadata), which the app edits, displays, and stores. All user interaction with iPhoto happens in a single window. Although iPhoto stores its data in files, it doesn't present the files to the user. The app presents a simplified interface so that users don't need to manage files in order to use the app. Instead, they work directly with their photos. Moreover, iPhoto hides its files from regular manipulation in the Finder by placing them within a single package. In addition, the app saves the user's editing changes to disk at appropriate times. So, users are relieved of the need to manually save, open, or close documents. This simplicity for users is one of the key advantages of the library-style app design.

Figure 2-2 The iPhoto single-window app
A good example of a multiwindow document-based app is TextEdit, which creates, displays, and edits documents containing plain or styled text and images. TextEdit does not organize or manage its documents; users do that with the Finder. Each TextEdit document opens in its own window, multiple documents can be open at one time, and the user interacts with the frontmost document using controls in the window's toolbar and the app's menu bar. Figure 2-3 shows a document created by TextEdit. For more information about the document-based app design, see "Document-Based Apps Are Based on an NSDocument Subclass" (page 31).

Figure 2-3 TextEdit document window

Both single-window and multiwindow apps can present an effective full-screen mode, which provides an immersive experience that enables users to focus on their tasks without distractions. For information about full-screen mode, see "Implementing the Full-Screen Experience" (page 49).

The Core Objects for All Cocoa Apps

Regardless of whether you are using a single-window or multiwindow app style, all apps use the same core set of objects. Cocoa provides the default behavior for most of these objects. You are expected to provide a certain amount of customization of these objects to implement your app's custom behavior.

Figure 2-4 shows the relationships among the core objects for the single-window app styles. The objects in this figure are separated according to whether they are part of the model, view, or controller portions of the app. As you can see from the figure, the Cocoa-provided objects supply much of the controller and view layers for your app.

Figure 2-4 Key objects in a single-window app

Table 2-2 (page 27) describes the roles played by the objects in the diagram.

Table 2-2 The core objects used by all Cocoa apps

● NSApplication object: (Required) Runs the event loop and manages interactions between your app and the system. You typically use the NSApplication class as is, putting any custom app-object-related code in your application delegate object.
● Application delegate object: (Expected) A custom object that you provide, which works closely with the NSApplication object to run the app and manage the transitions between different application states. Your application delegate object must conform to the NSApplicationDelegate protocol.
● Data model objects: Store content specific to your app. A banking app might store a database containing financial transactions, whereas a painting app might store an image object or the sequence of drawing commands that led to the creation of that image.
● Window controllers: Responsible for loading and managing a single window each and coordinating with the system to handle standard window behaviors. You subclass NSWindowController to manage both the window and its contents. Each window controller is responsible for everything that happens in its window. If the contents of your window are simple, the window controller may do all of the management itself. If your window is more complex, the window controller might use one or more view controllers to manage portions of the window.
● Window objects: Represent your onscreen windows, configured in different styles depending on your app's needs. For example, most windows have title bars and borders, but you can also configure windows without those visual adornments. A window object is almost always managed by a window controller. An app can also have secondary windows, also known as dialogs and panels. These windows are subordinate to the current document window or, in the case of single-window apps, to the main window. They support the document or main window, for example, by allowing the selection of fonts and colors, allowing the selection of tools from a palette, or displaying a warning. A secondary window is often modal.
● View controllers: Coordinate the loading of a single view hierarchy into your app. Use view controllers to divide up the work of managing more sophisticated window layouts. Your view controllers work together (with the window controller) to present the window contents. If you have developed iOS apps, be aware that AppKit view controllers play a less prominent role than UIKit view controllers. In OS X, AppKit view controllers are assistants to the window controller, which is ultimately responsible for everything that goes in the window. The main job of an AppKit view controller is to load its view hierarchy. Everything else is custom code that you write.
● View objects: Define a rectangular region in a window, draw the contents of that region, and handle events in that region. Views can be layered on top of each other to create view hierarchies, whereby one view obscures a portion of the underlying view.
● Control objects: Represent standard system controls. These view subclasses provide standard visual items, such as buttons, text fields, and tables, that you can use to build your user interface. Although a few controls are used as is to present visual adornments, most work with your code to manage user interactions with your app's content.

Of these objects, the application delegate is the one you customize in virtually every app; a skeletal example appears below.
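For orientation only, here is a minimal application delegate skeleton of the kind the Xcode templates generate; the class name AppDelegate is conventional rather than required:

    #import <Cocoa/Cocoa.h>

    // A minimal application delegate. The NSApplication object calls these
    // methods at key points in the app's life cycle.
    @interface AppDelegate : NSObject <NSApplicationDelegate>
    @property (strong) IBOutlet NSWindow *window;   // connected in the main nib
    @end

    @implementation AppDelegate

    - (void)applicationDidFinishLaunching:(NSNotification *)notification {
        // Perform one-time setup after the app has finished launching.
    }

    - (void)applicationWillTerminate:(NSNotification *)notification {
        // Save state or release resources before the app exits.
    }

    @end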
Additional Core Objects for Multiwindow Apps

As opposed to a single-window app, a multiwindow app uses several windows to present its primary content. The Cocoa support for multiwindow apps is built around a document-based model implemented by a subsystem called the document architecture. In this model, each document object manages its content, coordinates the reading and writing of that content from disk, and presents the content in a window for editing. All document objects work with the Cocoa infrastructure to coordinate event delivery and the like, but each document object is otherwise independent of its fellow document objects.

Figure 2-5 shows the relationships among the core objects of a multiwindow document-based app. Many of the objects in this figure are identical to those used by a single-window app. The main difference is the insertion of the NSDocumentController and NSDocument objects between the application objects and the objects for managing the user interface.

Figure 2-5 Key objects in a multiwindow document app

Table 2-3 (page 30) describes the role of the inserted NSDocumentController and NSDocument objects. (For information about the roles of the other objects in the diagram, see Table 2-2 (page 27).)

Table 2-3 Additional objects used by multiwindow document apps

● Document controller object: The NSDocumentController class defines a high-level controller for creating and managing all document objects. In addition to managing documents, the document controller also manages many document-related menu items, such as the Open Recent menu and the open and save panels.
● Document object: The NSDocument class is the base class for implementing documents in a multiwindow app. This class acts as the controller for the data objects associated with the document. You define your own custom subclasses to manage the interactions with your app's data objects and to work with one or more NSWindowController objects to display the document contents on the screen.

Integrating iCloud Support Into Your App

No matter how you store your app's data, iCloud is a convenient way to make that data available to all of a user's devices. To integrate iCloud into your app, you change where you store user files. Instead of storing them in the user's Home folder or in your App Sandbox container, you store them in special file system locations known as ubiquity containers. A ubiquity container serves as the local representation of the corresponding iCloud storage. It is outside of your App Sandbox container, and so requires specific entitlements for your app to interact with it.

In addition to a change in file system locations, your app design needs to acknowledge that your data model is accessible to multiple processes. The following considerations apply:

● Document-based apps get iCloud support through the NSDocument class, which handles most of the interactions required to manage the on-disk file packages that represent documents.
● If you implement a custom data model and manage files yourself, you must explicitly use file coordination to ensure that the changes you make are done safely and in concert with the changes made on the user's other devices. For details, see "The Role of File Coordinators and Presenters" in File System Programming Guide.
● For storing small amounts of data in iCloud, you use key-value storage. Use key-value storage for such things as stocks or weather information, locations, bookmarks, a recent-documents list, settings and preferences, and simple game state. Every iCloud app should take advantage of key-value storage. To interact with key-value storage, you use the shared NSUbiquitousKeyValueStore object (a brief sketch appears at the end of this section).

To learn how to adopt iCloud in your app, read iCloud Design Guide.
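As a rough sketch of key-value storage (the key name lastOpenedDocument and the value are invented for the example, and the app must be configured with an iCloud key-value store entitlement for this to work):

    #import <Foundation/Foundation.h>

    // Write a small preference-like value to iCloud key-value storage and
    // read it back. Values propagate to the user's other devices.
    NSUbiquitousKeyValueStore *store = [NSUbiquitousKeyValueStore defaultStore];
    [store setString:@"Report.rtf" forKey:@"lastOpenedDocument"];
    [store synchronize];   // request an update to and from iCloud

    NSString *lastDocument = [store stringForKey:@"lastOpenedDocument"];
    NSLog(@"Last opened document: %@", lastDocument);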
Shoebox-Style Apps Should Not Use NSDocument

When implementing a single-window, shoebox-style (sometimes referred to as "library-style") app, it is sometimes better not to use NSDocument objects to manage your content. The NSDocument class was designed specifically for use in multiwindow document apps. Instead, use custom controller objects to manage your data. Those custom controllers would then work with a view controller or your app's main window controller to coordinate the presentation of the data.

Although you normally use an NSDocumentController object only in multiwindow apps, you can subclass it and use it in a single-window app to coordinate the Open Recent and similar behaviors. When subclassing, though, you must override any methods related to the creation of NSDocument objects.

Document-Based Apps Are Based on an NSDocument Subclass

Documents are containers for user data that can be stored in files and in iCloud. In a document-based design, the app enables users to create and manage documents containing their data. One app typically handles multiple documents, each in its own window, and often displays more than one document at a time. For example, a word processor provides commands to create new documents, presents an editing environment in which the user enters text and embeds graphics into the document, saves the document data to disk (and, optionally, iCloud), and provides other document-related commands, such as printing and version management. In Cocoa, the document-based app design is enabled by the document architecture, which is part of the AppKit framework.

Documents in OS X

There are several ways to think of a document. Conceptually, a document is a container for a body of information that can be named and stored in a disk file and in iCloud. In this sense, the document is not the same as the file but is an object in memory that owns and manages the document data. To users, the document is their information, such as text and graphics formatted on a page. Programmatically, a document is an instance of a custom NSDocument subclass that knows how to represent internally the persistent data that it displays in windows. This document object knows how to read document data from a file and create an object graph in memory for the document data model. It also knows how to handle the user's editing commands to modify the data model and to write the document data back out to disk. So, the document object mediates between the different representations of document data, as shown in Figure 2-6.

Figure 2-6 Document file, object, and data model

Using iCloud, documents can be shared automatically among a user's computers and iOS devices. Changes to the document data are synchronized without user intervention. For information about iCloud, see "Integrating iCloud Support Into Your App" (page 30).

The Document Architecture Provides Many Capabilities for Free

The document-based style of app is a design choice that you should consider when you design your app. If it makes sense for your users to create multiple discrete sets of data, each of which they can edit in a graphical environment and store in files or iCloud, then you certainly should plan to develop a document-based app. The Cocoa document architecture provides a framework for document-based apps to do the following things:

● Create new documents. The first time the user chooses to save a new document, the app presents a dialog enabling the user to name and save the document in a disk file in a user-chosen location.
● Open existing documents stored in files. A document-based app specifies the types of documents it can read and write, as well as read-only and write-only types. It can represent the data of different types internally and display the data appropriately. It can also close documents.
● Automatically save documents. Document-based apps can adopt autosaving in place, and their documents are automatically saved at appropriate times so that the data the user sees on screen is effectively the same as that saved on disk. Saving is done safely, so that an interrupted save operation does not leave data inconsistent.
● Asynchronously read and write document data. Reading and writing are done asynchronously on a background thread so that lengthy operations do not make the app's user interface unresponsive. In addition, reads and writes are coordinated using the NSFilePresenter protocol and the NSFileCoordinator class to reduce version conflicts. Coordinated reads and writes reduce version conflicts both among different apps sharing document data in local storage and among different instances of an app on different devices sharing document data via iCloud.
● Manage multiple versions of documents. Autosave creates versions at regular intervals, and users can manually save a version whenever they wish. Users can browse versions and revert the document's contents to a chosen version using a Time Machine-like interface. The version browser is also used to resolve version conflicts from simultaneous iCloud updates.
● Print documents. The print dialog and page setup dialog enable the user to choose various page layouts.
● Monitor and set the document's edited status and validate menu items. To avoid automatic saving of inadvertent changes, old files are locked from editing until explicitly unlocked by the user.
● Track changes. The document manages its edited status and implements multilevel undo and redo.
● Handle app and window delegation. Notifications are sent and delegate methods called at significant life-cycle events, such as when the app terminates.

See Document-Based App Programming Guide for Mac for more detailed information about how to implement a document-based app.

The App Life Cycle

The app life cycle is the progress of an app from its launch through its termination. Apps can be launched by the user or the system. The user launches apps by double-clicking the app icon, using Launchpad, or opening a file whose type is currently associated with the app. In OS X v10.7 and later, the system launches apps at user login time when it needs to restore the user's desktop to its previous state.

When an app is launched, the system creates a process and all of the normal system-related data structures for it. Inside the process, it creates a main thread and uses it to begin executing your app's code. At that point, your app's code takes over and your app is running.

The main Function is the App Entry Point

Like any C-based app, the main entry point for a Mac app at launch time is the main function. In a Mac app, the main function is used only minimally. Its main job is to give control to the AppKit framework. Any new project you create in Xcode comes with a default main function like the one shown in Listing 2-1. You should normally not need to change the implementation of this function.

Listing 2-1 The main function of a Mac app

    #import <Cocoa/Cocoa.h>

    int main(int argc, char *argv[])
    {
        return NSApplicationMain(argc, (const char **)argv);
    }

The NSApplicationMain function initializes your app and prepares it to run. As part of the initialization process, this function does several things:

● Creates an instance of the NSApplication class. You can access this object from anywhere in your app using the sharedApplication class method.
● Loads the nib file specified by the NSMainNibFile key in the Info.plist file and instantiates all of the objects in that file. This is your app's main nib file; it should contain your application delegate and any other critical objects that must be loaded early in the launch cycle. Any objects that are not needed at launch time should be placed in separate nib files and loaded later.
● Calls the run method of your application object to finish the launch cycle and begin processing events.

By the time the run method is called, the main objects of your app are loaded into memory, but the app is still not fully launched. The run method notifies the application delegate that the app is about to launch, shows the application menu bar, opens any files that were passed to the app, does some framework housekeeping, and starts the event-processing loop. All of this work occurs on the app's main thread, with one exception: files may be opened in secondary threads if the canConcurrentlyReadDocumentsOfType: class method of the corresponding NSDocument subclass returns YES.

If your app preserves its user interface between launch cycles, Cocoa loads any preserved data at launch time and uses it to re-create the windows that were open the last time your app was running. For more information about how to preserve your app's user interface, see "User Interface Preservation" (page 39).

The App's Main Event Loop Drives Interactions

As the user interacts with your app, the app's main event loop processes incoming events and dispatches them to the appropriate objects for handling. When the NSApplication object is first created, it establishes a connection with the system window server, which receives events from the underlying hardware and transfers them to the app. The app also sets up a FIFO event queue to store the events sent to it by the window server. The main event loop is then responsible for dequeueing and processing events waiting in the queue, as shown in Figure 2-7.

Figure 2-7 The main event loop

The run method of the NSApplication object is the workhorse of the main event loop. In a closed loop, this method executes the following steps until the app terminates:

1. Services window-update notifications, which results in the redrawing of any windows that are marked as "dirty."
2. Dequeues an event from its internal event queue using the nextEventMatchingMask:untilDate:inMode:dequeue: method and converts the event data into an NSEvent object.
3. Dispatches the event to the appropriate target object using the sendEvent: method of NSApplication.

When the app dispatches an event, the sendEvent: method uses the type of the event to determine the appropriate target. There are two major types of input events: key events and mouse events. Key events are sent to the key window, the window that is currently accepting key presses. Mouse events are dispatched to the window in which the event occurred. For mouse events, the window looks for the view in which the event occurred and dispatches the event to that object first. Views are responder objects and are capable of responding to any type of event. If the view is a control, it typically uses the event to generate an action message for its associated target. The overall process for handling events is described in detail in Cocoa Event Handling Guide; a small responder sketch follows.
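To make the dispatch concrete, a custom view receives mouse events simply by overriding the relevant responder methods. This sketch (the class name is invented) logs the click location in the view's own coordinate system:

    #import <Cocoa/Cocoa.h>

    // A custom view that participates in event dispatch as a responder.
    @interface ClickableView : NSView
    @end

    @implementation ClickableView

    // Called by the window when a mouse-down event occurs inside this view.
    - (void)mouseDown:(NSEvent *)event {
        NSPoint location = [self convertPoint:[event locationInWindow] fromView:nil];
        NSLog(@"Mouse down at %@", NSStringFromPoint(location));
    }

    @end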
Automatic and Sudden Termination of Apps Improve the User Experience

In OS X v10.7 and later, the use of the Quit command to terminate an app is diminished in favor of more user-centric techniques. Specifically, Cocoa supports two techniques that make the termination of an app transparent and fast:

● Automatic termination eliminates the need for users to quit an app. Instead, the system manages app termination transparently behind the scenes, terminating apps that are not in use to reclaim needed resources such as memory.
● Sudden termination allows the system to kill an app's process immediately, without waiting for it to perform any final actions. The system uses this technique to improve the speed of operations such as logging out of, restarting, or shutting down the computer.

Automatic termination and sudden termination are independent techniques, although both are designed to improve the user experience of app termination. Although Apple recommends that apps support both, an app can support one technique and not the other. Apps that support both techniques can be terminated by the system without the app being involved at all. On the other hand, if an app supports sudden termination but not automatic termination, then it must be sent a Quit event, which it needs to process without displaying any user interface dialogs.

Automatic termination transfers the job of managing processes from the user to the system, which is better equipped to handle the job. Users do not need to manage processes manually anyway. All they really need is to run apps and have those apps available when they need them. Automatic termination makes that possible while ensuring that system performance is not adversely affected.

Apps must opt in to both automatic termination and sudden termination and implement appropriate support for them. In both cases, the app must ensure that any user data is saved well before termination can happen. And because the user does not quit an autoterminable app, such an app should also save the state of its user interface using the built-in Cocoa support. Saving and restoring the interface state provides the user with a sense of continuity between app launches.

For information on how to support automatic termination in your app, see "Automatic Termination" (page 37). For information on how to support sudden termination, see "Sudden Termination" (page 38).

Support the Key Runtime Behaviors in Your Apps

No matter what style of app you are creating, there are specific behaviors that all apps should support. These behaviors are intended to help users focus on the content they are creating rather than on app management and other busy work that is not part of creating their content.

Automatic Termination

Automatic termination is a feature that you must explicitly code for in your app. Declaring support for automatic termination is easy, but apps also need to work with the system to save the current state of their user interface so that it can be restored later as needed. The system can kill the underlying process for an auto-terminable app at any time, so saving this information maintains continuity for the app. Usually, the system kills an app's underlying process some time after the user has closed all of the app's windows.
However, the system may also kill an app with open windows if the app is not currently on screen, perhaps because the user hid it or switched spaces.

To support automatic termination, you should do the following:

● Declare your app's support for automatic termination, either programmatically or using an Info.plist key.
● Support saving and restoring your window configurations.
● Save the user's data at appropriate times. Single-window, library-style apps should implement strategies for saving data at appropriate checkpoints. Multiwindow, document-based apps can use the autosaving and saveless-documents capabilities in NSDocument.
● Whenever possible, support sudden termination for your app as well.

Enabling Automatic Termination in Your App

Declaring support for automatic termination lets the system know that it should manage the actual termination of your app at appropriate times. An app has two ways to declare its support for automatic termination:

● Include the NSSupportsAutomaticTermination key (with the value YES) in the app's Info.plist file. This sets the app's default support status.
● Use the NSProcessInfo class to declare support for automatic termination dynamically. Use this technique to change the default support of an app that includes the NSSupportsAutomaticTermination key in its Info.plist file.

Automatic Data-Saving Strategies Relieve the User

You should always avoid forcing the user to save changes to their data manually. Instead, implement automatic data saving. For a multiwindow app based on NSDocument, automatic saving is as simple as overriding the autosavesInPlace class method to return YES. For more information, see Document-Based App Programming Guide for Mac.

For a single-window, library-style app, identify appropriate points in your code where any user-related changes should be saved, and write those changes to disk automatically. This benefits the user by eliminating the need to think about manually saving changes, and when done regularly, it ensures that the user does not lose much data if there is a problem. Some appropriate times when you can save user data automatically include the following:

● When the user closes the app window or quits the app (applicationWillTerminate:)
● When the app is deactivated (applicationWillResignActive:)
● When the user hides your app (applicationWillHide:)
● Whenever the user makes a valid change to data in your app

The last item means that you have the freedom to save the user's data at any time it makes sense to do so. For example, if the user is editing fields of a data record, you can save each field value as it is changed, or you can wait and save all fields when the user displays a new record. Making these types of incremental changes ensures that the data is always up to date but also requires more fine-grained management of your data model. In such an instance, Core Data can help you make the changes more easily. For information about Core Data, see Core Data Starting Point.

Sudden Termination

Sudden termination lets the system know that your app's process can be killed directly, without any additional involvement from your app. The benefit of supporting sudden termination is that it lets the system close apps more quickly, which is important when the user is shutting down a computer or logging out.
Automatic Data-Saving Strategies Relieve the User

You should always avoid forcing the user to save changes to their data manually. Instead, implement automatic data saving. For a multiwindow app based on NSDocument, automatic saving is as simple as overriding the autosavesInPlace class method to return YES. For more information, see Document-Based App Programming Guide for Mac.

For a single-window, library-style app, identify appropriate points in your code where any user-related changes should be saved, and write those changes to disk automatically. This benefits the user by eliminating the need to think about manually saving changes, and when done regularly, it ensures that the user does not lose much data if there is a problem. Some appropriate times when you can save user data automatically include the following:

● When the user closes the app window or quits the app (applicationWillTerminate:)
● When the app is deactivated (applicationWillResignActive:)
● When the user hides your app (applicationWillHide:)
● Whenever the user makes a valid change to data in your app

The last item means that you have the freedom to save the user’s data any time it makes sense to do so. For example, if the user is editing fields of a data record, you can save each field value as it is changed, or you can wait and save all fields when the user displays a new record. Making these types of incremental changes ensures that the data is always up to date but also requires more fine-grained management of your data model. In such an instance, Core Data can help you make the changes more easily. For information about Core Data, see Core Data Starting Point.

Sudden Termination

Sudden termination lets the system know that your app’s process can be killed directly without any additional involvement from your app. The benefit of supporting sudden termination is that it lets the system close apps more quickly, which is important when the user is shutting down a computer or logging out.

An app has two ways to declare its support for sudden termination:

● Include the NSSupportsSuddenTermination key (with the value YES) in the app’s Info.plist file.

● Use the NSProcessInfo class to declare support for sudden termination dynamically. You can also use this class to change the default support of an app that includes the NSSupportsSuddenTermination key in its Info.plist file.

One common approach is to declare support for the feature globally and then override that behavior manually at appropriate times. Because sudden termination means the system can kill your app at any time after launch, you should disable it while performing actions that might lead to data corruption if interrupted, and reenable it when the action is complete.

You disable and enable sudden termination programmatically using the disableSuddenTermination and enableSuddenTermination methods of the NSProcessInfo class. These methods increment and decrement, respectively, a counter maintained by the process. When the value of this counter is 0, the process is eligible for sudden termination. When the value is greater than 0, sudden termination is disabled.
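A sketch of bracketing a critical save operation follows; documentData and saveURL are hypothetical stand-ins for your app’s own objects:

NSProcessInfo *processInfo = [NSProcessInfo processInfo];

// Increment the counter so the process cannot be killed mid-write.
[processInfo disableSuddenTermination];

// documentData and saveURL are hypothetical stand-ins.
[documentData writeToURL:saveURL atomically:YES];

// Decrement the counter; when it returns to 0, the process is again
// eligible for sudden termination.
[processInfo enableSuddenTermination];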
Enabling and disabling sudden termination dynamically also means that your app should save data progressively and not rely solely on user actions to save important information. The best way to ensure that your app’s information is saved at appropriate times is to support the interfaces introduced in OS X v10.7 for saving your document and window state. Those interfaces facilitate the automatic saving of relevant user and app data. For more information about saving your user interface state, see “User Interface Preservation.” For more information about saving your documents, see “Document-Based Apps Are Based on an NSDocument Subclass.” For additional information about enabling and disabling sudden termination, see NSProcessInfo Class Reference.

User Interface Preservation

The Resume feature, in OS X v10.7 and later, saves the state of your app’s windows and restores them during subsequent launches of your app. Saving the state of your windows enables you to return your app to the state it was in when the user last used it. Use the Resume feature especially if your app supports automatic termination, which can cause your app to be terminated while it is running but hidden from the user. If your app supports automatic termination but does not preserve its interface, the app launches into its default state. Users who only switched away from your app might think that the app crashed while it was not being used.

Writing Out the State of Your Windows and Custom Objects

You must do the following to preserve the state of your user interface:

● For each window, set whether the window should be preserved using the setRestorable: method.
● For each preserved window, specify an object whose job is to re-create that window at launch time.
● Have any objects involved in your user interface write out the data they require to restore their state later.
● At launch time, use the provided data to restore your objects to their previous state.

The actual process of writing out your application state to disk and restoring it later is handled by Cocoa, but you must tell Cocoa what to save. Your app’s windows are the starting point for all save operations. Cocoa iterates over all of your app’s windows and saves data for the ones whose isRestorable method returns YES. Most windows are preserved by default, but you can change the preservation state of a window using the setRestorable: method.

In addition to preserving your windows, Cocoa saves data for most of the responder objects associated with each window. Specifically, it saves the views and window controller objects associated with the window. (For a multiwindow document-based app, the window controller also saves data from its associated document object.) Figure 2-8 shows the path that Cocoa takes when determining which objects to save: window objects are always the starting point, but other related objects are saved too.

Figure 2-8 Responder objects targeted by Cocoa for preservation

All Cocoa window and view objects save basic information about their size and location, plus information about other attributes that might affect the way they are currently displayed. For example, a tab view saves the index of the selected tab, and a text view saves the location and range of the current text selection. However, these responder objects do not have any inherent knowledge about your app’s data structures. Therefore, it is your responsibility to save your app’s data and any additional information needed to restore the window to its current state. There are several places where you can write out your custom state information:

● If you subclass NSWindow or NSView, implement the encodeRestorableStateWithCoder: method in your subclass and use it to write out any relevant data. Alternatively, your custom responder objects can override the restorableStateKeyPaths method and use it to specify key paths for any attributes to be preserved. Cocoa uses the key paths to locate and save the data for the corresponding attributes. The attributes must be compliant with key-value coding and key-value observing.

● If your window has a delegate object, implement the window:willEncodeRestorableState: method for the delegate and use it to store any relevant data.

● In your window controller, use the encodeRestorableStateWithCoder: method to save any relevant data or configuration information.

Be judicious when deciding what data to preserve, and strive to write out the smallest amount of information that is required to reconfigure your window and associated objects. You are expected to save the actual data that the window displays and enough information to reattach the window to the same data objects later.

Important: Never use the user interface preservation mechanism as a way to save your app’s actual data. The archive created for interface preservation can change frequently and may be ignored altogether if there is a problem during the restoration process. Your app data should always be saved independently, in data files that are managed by your app.

For information on how to use coder objects to archive state information, see NSCoder Class Reference. For additional information on what you need to do to save state in a multiwindow document-based app, see Document-Based App Programming Guide for Mac.
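As an illustration of the first of these options, the following sketch shows a hypothetical NSView subclass (CanvasView, with an invented selectedToolName attribute) encoding and restoring one piece of interface state:

@interface CanvasView : NSView
// Hypothetical interface attribute worth preserving.
@property (nonatomic, copy) NSString *selectedToolName;
@end

@implementation CanvasView

// Cocoa calls this when writing the window's preservation archive.
- (void)encodeRestorableStateWithCoder:(NSCoder *)coder {
    [super encodeRestorableStateWithCoder:coder];
    [coder encodeObject:self.selectedToolName forKey:@"CanvasSelectedTool"];
}

// Cocoa calls this at launch time with the preserved archive.
- (void)restoreStateWithCoder:(NSCoder *)coder {
    [super restoreStateWithCoder:coder];
    self.selectedToolName = [coder decodeObjectForKey:@"CanvasSelectedTool"];
}

@end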
Notifying Cocoa About Changes to Your Interface State

Whenever the preserved state of one of your responder objects changes, mark the object as dirty by calling its invalidateRestorableState method. At some point afterward, an encodeRestorableStateWithCoder: message is sent to your responder object. Marking your responder objects as dirty lets Cocoa know that it needs to write their preservation state to disk at an appropriate time. Invalidating your objects is a lightweight operation in itself because the data is not written to disk right away. Instead, changes are coalesced and written at key times, such as when the user switches to another app or logs out.

You should mark a responder object as dirty only for changes that are truly interface related. For example, a tab view marks itself as dirty when the user selects a different tab. However, you do not need to invalidate your window or its views for many content-related changes, unless the content changes themselves caused the window to be associated with a completely different set of data-providing objects.

If you used the restorableStateKeyPaths method to declare the attributes you want to preserve, Cocoa preserves and restores the values of those attributes of your responder object. Therefore, any key paths you provide should be key-value observing compliant and generate the appropriate notifications. For more information on how to support key-value observing in your objects, see Key-Value Observing Programming Guide.
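Continuing the hypothetical CanvasView example, the two strategies might look like this sketch:

// Strategy 1: invalidate explicitly whenever a preserved attribute
// changes; Cocoa later sends encodeRestorableStateWithCoder:.
- (void)setSelectedToolName:(NSString *)selectedToolName {
    _selectedToolName = [selectedToolName copy];
    [self invalidateRestorableState];
}

// Strategy 2: declare a KVO-compliant key path and let Cocoa save and
// restore the attribute automatically.
+ (NSArray *)restorableStateKeyPaths {
    return [[super restorableStateKeyPaths] arrayByAddingObject:@"selectedToolName"];
}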
Restoring Your Windows and Custom Objects at Launch Time

As part of your app’s normal launch cycle, Cocoa checks to see whether there is any preserved interface data. If there is, Cocoa uses that data to try to re-create your app’s windows. Every window must identify a restoration class that knows about the window and can act on its behalf at launch time to create the window when asked to do so by Cocoa.

The restoration class is responsible for creating both the window and all of the critical objects required by that window. For most app styles, the restoration class usually creates one or more controller objects as well. For example, in a single-window app, the restoration class would likely create the window controller used to manage the window and then retrieve the window from that object. Because it creates these controller objects too, you typically use high-level application classes for your restoration classes. An app might use the application delegate, a document controller, or even a window controller as a restoration class.

During the launch cycle, Cocoa restores each preserved window as follows:

1. Cocoa retrieves the window’s restoration class from the preserved data and calls its restoreWindowWithIdentifier:state:completionHandler: class method.

2. The restoreWindowWithIdentifier:state:completionHandler: class method must call the provided completion handler with the desired window object. To do this, it does one of the following:

● It creates any relevant controller objects (including the window controller) that might normally be created to display the window.

● If the controller objects already exist (perhaps because they were already loaded from a nib file), it gets the window from those existing objects.

If the window could not be created (perhaps because the associated document was deleted by the user), the restoreWindowWithIdentifier:state:completionHandler: method should pass an error object to the completion handler.

3. Cocoa uses the returned window to restore it and any preserved responder objects to their previous state:

● Standard Cocoa window and view objects are restored to their previous state without additional help. If you subclass NSWindow or NSView, implement the restoreStateWithCoder: method to restore any custom state. If you implemented the restorableStateKeyPaths method in your custom responder objects, Cocoa automatically sets the value of the associated attributes to their preserved values; you do not have to implement restoreStateWithCoder: to restore them.

● For the window delegate object, Cocoa calls the window:didDecodeRestorableState: method to restore the state of that object.

● For your window controller, Cocoa calls the restoreStateWithCoder: method to restore its state.

When re-creating each window, Cocoa passes the window’s unique identifier string to the restoration class. You are responsible for assigning user interface identifier strings to your windows prior to preserving the window state. You can assign an identifier in your window’s nib file or by setting your window object’s identifier property (defined in the NSUserInterfaceItemIdentification protocol). For example, you might give your preferences window an identifier of preferences and then check for that identifier in your implementation. Your restoration class can use this identifier to determine which window and associated objects it needs to re-create. The contents of an identifier string can be anything you want, but should be something that helps you identify the window later.

For a single-window app whose main window controller and window are loaded from the main nib file, the job of your restoration class is fairly straightforward. Here, you could use the application delegate’s class as the restoration class and implement the restoreWindowWithIdentifier:state:completionHandler: method similar to the implementation shown in Listing 2-2. Because the app has only one window, it returns the main window directly. If you used the application delegate’s class as the restoration class for other windows, your implementation could use the identifier parameter to determine which window to create.

Listing 2-2 Returning the main window for a single-window app

+ (void)restoreWindowWithIdentifier:(NSString *)identifier
                              state:(NSCoder *)state
                  completionHandler:(void (^)(NSWindow *, NSError *))completionHandler
{
    // Get the window from the window controller,
    // which is stored as an outlet by the delegate.
    // Both the app delegate and window controller are
    // created when the main nib file is loaded.
    MyAppDelegate *appDelegate = (MyAppDelegate *)[[NSApplication sharedApplication] delegate];
    NSWindow *mainWindow = [appDelegate.windowController window];

    // Pass the window to the provided completion handler.
    completionHandler(mainWindow, nil);
}

Apps Are Built Using Many Different Pieces

The objects of the core architecture are important, but they are not the only objects you need to consider in your design. The core objects manage the high-level behavior of your app, but the objects in your app’s view layer do most of the work to display your custom content and respond to events. Other objects also play important roles in creating interesting and engaging apps.
The User Interface

An app’s user interface is made up of a menu bar, one or more windows, and one or more views. The menu bar is a repository for commands that the user can perform in the app. Commands may apply to the app as a whole, to the currently active window, or to the currently selected object. You are responsible for defining the commands that your app supports and for providing the event-handling code to respond to them.

You use windows and views to present your app’s visual content on the screen and to manage the immediate interactions with that content. A window is an instance of the NSWindow class. A panel is an instance of the NSPanel class (a descendant of NSWindow) that you use to present secondary content. Single-window apps have one main window and may have one or more secondary windows or panels. Multiwindow apps have multiple windows for displaying their primary content and may have one or more secondary windows or panels too. The style of a window determines its appearance on the screen. Figure 2-9 shows the menu bar, along with some standard windows and panels.

Figure 2-9 Windows and menus in an app

A view, an instance of the NSView class, defines the content for a rectangular region of a window. Views are the primary mechanism for presenting content and interacting with the user, and they have several responsibilities. For example:

● Drawing and animation support. Views draw content in their rectangular area. Views that support Core Animation layers can use those layers to animate their contents.

● Layout and subview management. Each view manages a list of subviews, allowing you to create arbitrary view hierarchies. Each view defines layout and resizing behaviors to accommodate changes in the window size.

● Event handling. Views receive events, and they forward events to other objects when appropriate.

For information about creating and configuring windows, see Window Programming Guide. For information about using and creating view hierarchies, see View Programming Guide.

Event Handling

The system window server is responsible for tracking mouse, keyboard, and other events and delivering them to your app. When the system launches an app, it creates both a process and a single thread for the app. This initial thread becomes the app’s main thread. In it, the NSApplication object sets up the main run loop and configures its event-handling code, as shown in Figure 2-10. As the window server delivers events, the app queues those events and then processes them sequentially in the app’s main run loop. Processing an event involves dispatching the event to the object best suited to handle it. For example, mouse events are usually dispatched to the view in which the event occurred.

Figure 2-10 Processing events in the main run loop

Note: A run loop monitors sources of input on a specific thread of execution. The app’s event queue represents one of these input sources. While the event queue is empty, the main thread sleeps. When an event arrives, the run loop wakes up the thread and dispatches control to the NSApplication object to handle the event. After the event has been handled, control passes back to the run loop, which can then process another event, process other input sources, or put the thread back to sleep if there is nothing more to do. For more information about how run loops and input sources work, see Threading Programming Guide.
Distributing and handling events is the job of responder objects, which are instances of the NSResponder class. The NSApplication, NSWindow, NSDrawer, NSView, NSWindowController, and NSViewController classes are all descendants of NSResponder. After pulling an event from the event queue, the app dispatches that event to the window object where it occurred. The window object, in turn, forwards the event to its first responder. In the case of mouse events, the first responder is typically the view object (NSView) in which the event took place. For example, a mouse event occurring in a button is delivered to the corresponding button object.

If the first responder is unable to handle an event, it forwards the event to its next responder, which is typically a parent view, view controller, or window. If that object is unable to handle the event, it forwards it to its next responder, and so on, until the event is handled. This series of linked responder objects is known as the responder chain. Messages continue traveling up the responder chain, toward higher-level responder objects such as a window controller or the application object, until the event is handled. If the event isn’t handled, it is discarded.

The responder object that handles an event often sets in motion a series of programmatic actions by the app. For example, a control object (that is, a subclass of NSControl) handles an event by sending an action message to another object, typically the controller that manages the current set of active views. While processing the action message, the controller might change the user interface or adjust the position of views in ways that require some of those views to redraw themselves. When this happens, the view and graphics infrastructure takes over and processes the required redraw events in the most efficient manner possible.

For more information about responders, the responder chain, and handling events, see Cocoa Event Handling Guide.
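As a brief sketch, a custom view participates in this dispatch simply by overriding the relevant NSResponder methods; SwatchView is an invented class name:

@interface SwatchView : NSView
@end

@implementation SwatchView

// Allow the view to become first responder so it receives key events.
- (BOOL)acceptsFirstResponder {
    return YES;
}

// Mouse events arrive at the view under the pointer. An event method
// you do not override travels on up the responder chain instead.
- (void)mouseDown:(NSEvent *)event {
    NSPoint location = [self convertPoint:[event locationInWindow] fromView:nil];
    NSLog(@"Mouse down at %@", NSStringFromPoint(location));
}

@end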
Graphics, Drawing, and Printing

There are two basic ways in which a Mac app can draw its content:

● Native drawing technologies (such as Core Graphics and AppKit)
● OpenGL

The native OS X drawing technologies typically use the infrastructure provided by Cocoa views and windows to render and present custom content. When a view is first shown, the system asks it to draw its content. System views draw their contents automatically, but custom views must implement a drawRect: method. Inside this method, you use the native drawing technologies to draw shapes, text, images, gradients, or any other visual content you want. When you want to update your view’s visual content, you mark all or part of the view invalid by calling its setNeedsDisplay: or setNeedsDisplayInRect: method. The system then calls your view’s drawRect: method (at an appropriate time) to accommodate the update. This cycle then repeats and continues throughout the lifetime of your app.

If you are using OpenGL to draw your app’s content, you still create a window and view to manage your content, but those objects simply provide the rendering surface for an OpenGL drawing context. Once you have that drawing context, your app is responsible for initiating drawing updates at appropriate intervals.

For information about how to draw custom content in your views, see Cocoa Drawing Guide.
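A minimal custom view illustrating this draw-invalidate cycle might look like the following sketch; BadgeView is an invented name:

@interface BadgeView : NSView
@end

@implementation BadgeView

// The system calls drawRect: when the view first appears and after
// any portion of it has been marked invalid.
- (void)drawRect:(NSRect)dirtyRect {
    [[NSColor whiteColor] setFill];
    NSRectFill(dirtyRect);

    [[NSColor redColor] setFill];
    [[NSBezierPath bezierPathWithOvalInRect:NSInsetRect(self.bounds, 4.0, 4.0)] fill];
}

@end

// Elsewhere, when the underlying data changes, mark the view invalid;
// the system schedules another drawRect: pass at an appropriate time:
//     [badgeView setNeedsDisplay:YES];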
Text Handling

The Cocoa text system, the primary text-handling system in OS X, is responsible for the processing and display of all visible text in Cocoa. It provides a complete set of high-quality typographical services through the text-related AppKit classes, which enable apps to create, edit, display, and store text with all the characteristics of fine typesetting.

The Cocoa text system provides all these basic and advanced text-handling features, and it also satisfies additional requirements of the ever-more-interconnected computing world: support for the character sets of all of the world’s living languages, powerful layout capabilities to handle various text directionality and nonrectangular text containers, and sophisticated typesetting capabilities such as control of kerning, ligatures, line breaking, and justification. Cocoa’s object-oriented text system is designed to provide all these capabilities without requiring you to learn about or interact with more of the system than is necessary to meet the needs of your app.

Underlying the Cocoa text system is Core Text, which provides low-level, basic text layout and font-handling capabilities to higher-level engines such as Cocoa and WebKit. Core Text provides the implementation for many Cocoa text technologies. App developers typically have no need to use Core Text directly. However, the Core Text API is accessible to developers who must use it directly, such as those writing apps with their own layout engine and those porting older ATSUI- or QuickDraw-based codebases to the modern world.

For more information about the Cocoa text system, see Cocoa Text Architecture Guide.

Implementing the Application Menu Bar

The classes NSMenu and NSMenuItem are the basis for all types of menus. An instance of NSMenu manages a collection of menu items and draws them one beneath another. An instance of NSMenuItem represents a menu item; it encapsulates all the information its NSMenu object needs to draw and manage it, but it does no drawing or event handling itself. You typically use Interface Builder to create and modify any type of menu, so often there is no need to write any code.

The application menu bar stretches across the top of the screen, replacing the menu bar of any other app when the app is frontmost. All of an app’s menus in the menu bar are owned by one NSMenu instance that’s created by the app when it starts up.

Xcode Templates Provide the Menu Bar

Xcode’s Cocoa application templates provide that NSMenu instance in a nib file called MainMenu.xib. This nib file contains an application menu (named with the app’s name), a File menu (with all of its associated commands), an Edit menu (with text-editing commands and Undo and Redo menu items), and Format, View, Window, and Help menus (with their own menu items representing commands). These menu items, as well as all of the menu items of the File menu, are connected to the appropriate first-responder action methods. For example, the About menu item is connected to the orderFrontStandardAboutPanel: action method in the File’s Owner, which displays a standard About window.

The template has similar ready-made connections for the Edit, Format, View, Window, and Help menus. If your app does not support any of the supplied actions (for example, printing), you should remove the associated menu items (or menu) from the nib. Alternatively, you may want to repurpose and rename menu commands and action methods to suit your own app, taking advantage of the menu mechanism in the template to ensure that everything is in the right place.

Connect Menu Items to Your Code or Your First Responder

For your app’s custom menu items that are not already connected to action methods in objects or placeholder objects in the nib file, there are two common techniques for handling menu commands in a Mac app:

● Connect the corresponding menu item to a first responder method.

● Connect the menu item to a method of your custom application object or your application delegate object.

Of these two techniques, the first is more common, given that many menu commands act on the current document or its contents, which are part of the responder chain. The second technique is used primarily to handle commands that are global to the app, such as displaying preferences or creating a new document. It is possible for a custom application object or its delegate to dispatch events to documents, but doing so is generally more cumbersome and error prone. In addition to implementing action methods to respond to your menu commands, you must also implement the methods of the NSMenuValidation protocol to enable the menu items for those commands, as shown in the sketch that follows.
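In outline, the pattern might look like this; reloadFeed: and currentDocument are invented names, and the menu item is assumed to be connected to First Responder in the nib:

// Action method reached through the responder chain from a
// hypothetical "Reload Feed" menu item.
- (IBAction)reloadFeed:(id)sender {
    [self.currentDocument reload];   // currentDocument is hypothetical
}

// NSMenuValidation: enable the menu item only when it can do something.
- (BOOL)validateMenuItem:(NSMenuItem *)menuItem {
    if ([menuItem action] == @selector(reloadFeed:)) {
        return (self.currentDocument != nil);
    }
    return YES;
}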
Step-by-step instructions for connecting menu items to action methods in your code are given in “Designing User Interfaces in Xcode.” For more information about menu validation and other menu topics, see Application Menu and Pop-up List Programming Topics.

Implementing the Full-Screen Experience

Enabling a window of your app to assume full-screen mode, taking over the entire screen, provides users with a more immersive, cinematic experience. Full-screen appearance can be striking and can make your app stand out. From a practical standpoint, full-screen mode also presents a better view of users’ data, enabling them to concentrate fully on their content without the distractions of other apps or the desktop.

In full-screen mode, by default, the menu bar and Dock are autohidden; that is, they are normally hidden but reappear when the user moves the pointer to the top or bottom of the screen. A full-screen window does not draw its title bar and may have special handling for its toolbar.

The full-screen experience makes sense for many apps but not for all. For example, the Finder, Address Book, and Calculator would not provide any benefit to users by assuming full-screen mode. The same is probably true for most utility apps. Media-rich apps, on the other hand, can often benefit from full-screen presentation.

Beginning with OS X v10.7, Cocoa includes support for full-screen mode through APIs in NSApplication, NSWindow, and the NSWindowDelegate protocol. When the user chooses to enter full-screen mode, Cocoa dynamically creates a space and puts the window into that space. This behavior enables the user to have one window of an app running in full-screen mode in one space while using other windows of that app, as well as other apps, on the desktop in other spaces. While in full-screen mode, the user can switch between windows in the current app or switch apps. Apps that have implemented full-screen user interfaces in previous versions of OS X should consider standardizing on the full-screen APIs in OS X v10.7.

Full-Screen API in NSApplication

Full-screen support in NSApplication is provided by the presentation option NSApplicationPresentationFullScreen. You can find the current presentation mode via the NSApplication method currentSystemPresentationOptions, which is also key-value observable. You can set the presentation options using the NSApplication method setPresentationOptions:. (Be sure to observe the restrictions on presentation-option combinations documented with NSApplicationPresentationOptions, and set the presentation options in a try-catch block to ensure that your program does not crash from an invalid combination of options.)

A window delegate may also specify that the window toolbar be removed from the window in full-screen mode and be shown automatically with the menu bar, by including NSApplicationPresentationAutoHideToolbar in the presentation options returned from the window:willUseFullScreenPresentationOptions: method of NSWindowDelegate.

Full-Screen API in NSWindow

The app must specify whether a given window can enter full-screen mode. Apps can set the collection behavior of a window using the setCollectionBehavior: method, passing in various constants; the current options may be accessed via the collectionBehavior method. You can choose between two constants to override the window collection behavior:

● NSWindowCollectionBehaviorFullScreenPrimary
The frontmost window with this collection behavior becomes the full-screen window. A window with this collection behavior has a full-screen button in the upper right of its title bar.

● NSWindowCollectionBehaviorFullScreenAuxiliary
Windows with this collection behavior can be shown in the same space as the full-screen window.

When a window goes into full-screen mode, its styleMask changes to NSFullScreenWindowMask to reflect the state of the window. The setting of the styleMask goes through the setStyleMask: method. As a result, a window can override this method if it has customization to do when entering or exiting full-screen mode.

A window can be taken into or out of full-screen mode using the toggleFullScreen: method. If an app supports full-screen mode, it should add a menu item to the View menu with toggleFullScreen: as the action and nil as the target.
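In code, that configuration might look like the following sketch; windowController and viewMenu are assumed to exist in your app:

// Mark the window as able to become the full-screen window. OR the
// constant into the existing mask so other behaviors are preserved.
NSWindow *window = [self.windowController window];  // hypothetical outlet
[window setCollectionBehavior:
    [window collectionBehavior] | NSWindowCollectionBehaviorFullScreenPrimary];

// Add an "Enter Full Screen" item to the View menu. The target is left
// nil so the responder chain routes toggleFullScreen: to the window.
NSMenuItem *fullScreenItem =
    [[NSMenuItem alloc] initWithTitle:@"Enter Full Screen"
                               action:@selector(toggleFullScreen:)
                        keyEquivalent:@"f"];
[viewMenu addItem:fullScreenItem];  // viewMenu is hypothetical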
Full-Screen API in the NSWindowDelegate Protocol

The following notifications are sent before and after the window enters and exits full-screen mode:

NSWindowWillEnterFullScreenNotification
NSWindowDidEnterFullScreenNotification
NSWindowWillExitFullScreenNotification
NSWindowDidExitFullScreenNotification

The window delegate has the corresponding window delegate notification methods:

windowWillEnterFullScreen:
windowDidEnterFullScreen:
windowWillExitFullScreen:
windowDidExitFullScreen:

The NSWindowDelegate protocol methods supporting full-screen mode are listed in Table 3-1.

Table 3-1 Window delegate methods supporting full-screen mode

● window:willUseFullScreenContentSize:
Invoked to allow the delegate to modify the full-screen content size.

● window:willUseFullScreenPresentationOptions:
Returns the presentation options the window will use when transitioning to full-screen mode.

● customWindowsToEnterFullScreenForWindow:
Invoked when the window is about to enter full-screen mode. The window delegate can implement this method to customize the animation by returning a custom window or array of windows containing layers or other effects.

● window:startCustomAnimationToEnterFullScreenWithDuration:
Invoked when the system has started its animation into full-screen mode, including transitioning into a new space. You can implement this method to perform custom animation with the given duration, in sync with the system animation.

● windowDidFailToEnterFullScreen:
Invoked if the window failed to enter full-screen mode.

● customWindowsToExitFullScreenForWindow:
Invoked when the window is about to exit full-screen mode. The window delegate can implement this method to customize the animation by returning a custom window or array of windows containing layers or other effects.

● window:startCustomAnimationToExitFullScreenWithDuration:
Invoked when the system has started its animation out of full-screen mode, including transitioning back to the desktop space. You can implement this method to perform custom animation with the given duration, in sync with the system animation.

● windowDidFailToExitFullScreen:
Invoked if the window failed to exit full-screen mode.

For more information about full-screen mode, see NSWindowDelegate Protocol Reference and the OS X Human Interface Guidelines.

Supporting Common App Behaviors

During the design phase of creating your app, you need to think about how to implement certain features that users expect in well-formed Mac apps. Integrating these features into your app architecture can have an impact on the data model or may require cooperation between significantly different portions of the app.

You Can Prevent the Automatic Relaunch of Your App

By default, as part of the Resume feature of OS X v10.7, apps that were open at logout are relaunched by the system when the user logs in again. You can prevent the automatic relaunching of your app at login by sending it a disableRelaunchOnLogin message. This NSApplication method increments a counter that controls whether the app is relaunched; if the counter is 0 at the time the user logs out, then the app is relaunched when the user logs back in. The counter is initially zero, providing the default relaunch behavior.

You can reinstate automatic relaunching of your app by sending it an enableRelaunchOnLogin message. This message decrements the relaunch counter, so an equal number of disableRelaunchOnLogin and enableRelaunchOnLogin messages enables relaunching. Both methods are thread safe.

If your app should not be relaunched because it launches via some other mechanism, such as the launchd system process, then the recommended practice is to send the app a disableRelaunchOnLogin message once and never pair it with an enableRelaunchOnLogin message.

If your app should not be relaunched because it triggers a restart (for example, if it is an installer), then the recommended practice is to send it a disableRelaunchOnLogin message immediately before you attempt to trigger a restart and an enableRelaunchOnLogin message immediately after. This procedure handles the case where the user cancels restarting; if the user later restarts for another reason, then your app should be relaunched.
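Sketches of the two recommended practices follow; requestRestart is an invented helper standing in for however your app triggers the restart:

// A helper app launched by launchd opts out once, early in its
// lifetime, and never reenables relaunching:
[NSApp disableRelaunchOnLogin];

// An installer disables relaunching only around the restart attempt,
// so a canceled restart restores the default behavior:
[NSApp disableRelaunchOnLogin];
[self requestRestart];  // hypothetical helper that triggers the restart
[NSApp enableRelaunchOnLogin];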
Making Your App Accessible Enables Many Users

Millions of people have a disability or special need. These include visual and hearing impairments, physical disabilities, and cognitive and learning challenges. Access to computers is vitally important for this population, because computers can provide a level of independence that is difficult to attain any other way. As populations around the world age, an increasing number of people will experience age-related disabilities, such as vision or hearing loss. Current and future generations of the elderly will expect to be able to continue using their computers and accessing their data, regardless of the state of their vision and hearing. Apps that support customizable text displays, access by a screen reader, or the replacement of visual cues by audible ones can serve this population well.

OS X is designed to accommodate assistive technologies and has many built-in features to help people with disabilities. Users access most of this functionality through the Universal Access pane of System Preferences. Some of these built-in technologies take advantage of the same accessibility architecture that allows external assistive technologies to access your app. Designing your app with accessibility in mind not only allows you to reach a larger group of users, it results in a better experience for all your users.

As a first step in designing your app, be sure to read OS X Human Interface Guidelines. That book provides detailed specifications and best practices for designing and implementing an intuitive, consistent, and aesthetically pleasing user interface that delivers the superlative experience Macintosh users have come to expect. During the design process, you should also be aware of the accessibility perspective on many basic design considerations. Consider the following basic accessibility-related design principles:

● Support full keyboard navigation. For many users, a mouse is difficult, if not impossible, to use. Consequently, a user should be able to perform all your app’s functions using the keyboard alone.

● Don’t override built-in keyboard shortcuts. This applies both to the keyboard shortcuts OS X reserves (listed in “Keyboard Shortcuts”) and to the accessibility-related keyboard shortcuts (listed in “Accessibility Keyboard Shortcuts”). As a general rule, you should never override reserved keyboard shortcuts. In particular, take care not to override accessibility-related keyboard shortcuts, or your app will not be accessible to users who enable full keyboard access. A corollary to this principle is to avoid creating too many new keyboard shortcuts that are specific to your app. Users should not have to memorize a new set of keyboard commands for each app they use.

● Provide alternatives for drag-and-drop operations. If your app relies on drag-and-drop operations in its workflow, you should provide alternate ways to accomplish the same tasks. This may not be easy; in fact, it may require the design of an alternate interface for apps that are heavily dependent on drag and drop.
● Make sure there’s always a way out of your app’s workflow. This is important for all users, of course, but it’s essential for users of assistive technologies. A user relying on an assistive app to use your app may have a somewhat narrower view of your app’s user interface. For this reason, it’s especially important to make canceling operations and retracing steps easy.

In addition to the basic design principles, you should consider the requirements of specific disabilities and the resulting design solutions and adaptations you can implement. The main theme of these suggestions is to provide as many alternate modes of content display as possible, so that users can find the way that suits their needs best. Consider the following categories of disabilities:

● Visual disabilities. Visual disabilities include blindness, color blindness, and low vision. Make your app accessible to assistive apps, such as screen readers. Ensure that color is not the only source of particular information in your user interface. Provide an audio option for all visual cues and feedback, as well as succinct descriptions of images and animated content.

● Hearing disabilities. When you design the audio output of your app, remember that some users may not be able to hear your app’s sound effects well or at all. And, of course, there are situations in which any user could not use audio output effectively, such as in a library. Providing redundant audio and visual output can aid comprehension for users with other types of disabilities as well, such as cognitive and learning disabilities.

● Motor and cognitive disabilities. People with motor disabilities may need to use alternatives to the standard mouse and keyboard input devices. Other users may have difficulty with the fine motor control required to double-click a mouse or to press key combinations on the keyboard. Users with cognitive or learning disabilities may need extra time to complete tasks or respond to alerts.

OS X provides support for many types of disabilities at the system level through solutions offered in the Universal Access system preferences, illustrated in Figure 4-1. In addition, most standard Cocoa objects implement accessibility through the NSAccessibility protocol, providing reasonable default behavior in most cases, so Cocoa apps built with standard objects are automatically accessible. In general, you need to explicitly implement the NSAccessibility protocol methods only if you subclass one of those objects, adding new behavior.

Figure 4-1 Universal Access system preference dialog

The NSAccessibility informal protocol defines methods that Cocoa classes must implement to make themselves available to an external assistive app. An assistive app interacts with your app to allow persons with disabilities to use your app. For example, a person with a visual impairment could use an app to convert menu items and button labels into speech and then perform actions by verbal command.
An accessible object is described by a set of attributes that define characteristics such as the object type, its value, its size and position on the screen, and its place in the accessibility hierarchy. For some objects, the set of attributes can include parameterized attributes. Parameterized attributes behave similarly to a function, allowing you to pass a parameter when requesting an attribute value. For more information, see Accessibility Overview for OS X.

Provide User Preferences for Customization

Preferences are settings that enable users to customize the appearance or behavior of your software. The OS X user defaults system lets you access and manage user preferences. You can use the defaults system to provide reasonable initial values for app settings, as well as to save and retrieve the user’s own preference selections across sessions. The Cocoa NSUserDefaults class provides programmatic access to the user defaults system, including convenience methods for accessing common types such as floats, doubles, integers, Booleans, and URLs. A default object must be a property list, that is, an instance of (or, for collections, a combination of instances of) NSData, NSString, NSNumber, NSDate, NSArray, or NSDictionary. If you want to store any other type of object, you should typically archive it to create an instance of NSData.

The user defaults system groups defaults into domains. Two of the domains are persistently saved in the user defaults database: the app domain stores app-specific defaults, and the global domain stores defaults applicable to all apps. In addition, the user defaults system provides three volatile domains whose values last only while the user defaults object exists: the argument domain for defaults set from command-line arguments, the languages domain containing defaults for a locale if one is specified, and the registration domain for “factory defaults” registered by the app.

Your app uses the NSUserDefaults class to register default preferences, and typically it also provides a user interface (a preferences panel) that enables the user to change those preferences. Preferences are stored in the ~/Library/Preferences/ folder in a standard XML-format property list file as a set of key-value pairs. The app-specific preferences property list is identified by the bundle identifier of the app. For more details, see Preferences and Settings Programming Guide.
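A sketch of typical defaults usage follows; the key names are invented for illustration:

// Register "factory defaults" in the volatile registration domain,
// typically early in the launch sequence.
NSDictionary *factoryDefaults = @{ @"MyFontSize"   : @12,
                                   @"MyShowsRuler" : @YES };
[[NSUserDefaults standardUserDefaults] registerDefaults:factoryDefaults];

// Later, read and write preferences anywhere in the app; writes are
// persisted to the app domain.
NSInteger fontSize = [[NSUserDefaults standardUserDefaults] integerForKey:@"MyFontSize"];
[[NSUserDefaults standardUserDefaults] setBool:NO forKey:@"MyShowsRuler"];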
Integrate Your App With Spotlight Search

Spotlight is a fast desktop search technology that allows users to organize and search for files based on metadata. Metadata is data about a file, rather than the actual content stored in the file. Metadata can include familiar information, such as an asset’s author and modification date, but it can also be keywords or other information that is custom to a particular asset. For example, an image file might have metadata describing the image’s dimensions and color model.

Developers of apps that save documents to disk should consider providing Spotlight support by implementing a metadata importer. A Spotlight metadata importer is a small plug-in bundle that you create to extract information from files created by your app. Spotlight importers are used by the Spotlight server to gather information about new and existing files, as illustrated in Figure 4-2.

Figure 4-2 Spotlight extracting metadata

Apple provides importers for many standard file types that the system uses, including RTF, JPEG, Mail, PDF, and MP3. However, if you define a custom document format, you must create a metadata importer for your own content. Xcode provides a template for writing Spotlight importer plug-ins. For information about writing a Spotlight importer, see Spotlight Importer Programming Guide.

In addition, you may wish to provide metadata search capability in your app. Cocoa provides the NSMetadataQuery class, which enables you to construct queries using a subset of the NSPredicate classes and execute the queries asynchronously. For information about providing Spotlight search capability in your app, see File Metadata Search Programming Guide. For more information about Spotlight, see Spotlight Overview.
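A sketch of an asynchronous metadata query follows; the predicate value and the queryDidFinishGathering: handler are illustrative assumptions:

// Search the Spotlight index for files authored by a given person.
NSMetadataQuery *query = [[NSMetadataQuery alloc] init];
[query setPredicate:[NSPredicate predicateWithFormat:@"kMDItemAuthors CONTAINS[c] %@",
                                                     @"Anne"]];

// The query runs asynchronously and posts a notification when its
// initial gathering phase finishes.
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(queryDidFinishGathering:)
                                             name:NSMetadataQueryDidFinishGatheringNotification
                                           object:query];
[query startQuery];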
Use Services to Increase Your App’s Usefulness

Services allow a user to access the functionality of one app from within another app. An app that provides a service advertises the operations it can perform on a particular type of data, for example, encryption of text, optical character recognition of a bitmapped image, or generating text such as a message of the day. When the user is manipulating that particular type of data in some app, the user can choose the appropriate item in the Services menu to operate on the current data selection (or merely insert new data into the document). See Services Implementation Guide for more information.

Optimize for High Resolution

High-resolution displays provide a rich visual experience, allowing users to see sharper text and more details in photos than on standard-resolution displays. The high-resolution model for OS X enables your app to draw into an abstract coordinate space called user space, without regard for the characteristics of the final drawing destination (printer, display screen, bitmap, or PDF) and without regard to the resolution of the display.

OS X provides much of the support for high resolution automatically. For example, standard AppKit views and controls automatically render correctly at any resolution, vector-based content automatically uses additional pixels to render sharper lines and shapes, Cocoa text displays sharper in high resolution, and AppKit automatically loads high-resolution variants of your images. You must do the following things to optimize your app for high resolution:

● Provide properly named high-resolution versions of your bitmapped images.
● Use high-resolution-savvy image-loading methods.
● Use the most recent APIs that support high resolution.

These techniques are described in the following sections.

Think About Points, Not Pixels

OS X refers to screen size in points, not pixels. A point is one unit in user space, prior to any transformations on the space. Because, on a high-resolution display, there are four onscreen pixels for each point, points can be expressed as floating-point values. Values that are integers in standard resolution, such as mouse coordinates, are floating-point values on a high-resolution display, allowing for greater precision for such things as graphics alignment.

Your app draws to a view using points in user space. The window server composites drawing operations to an offscreen buffer called the backing store. When it comes time to display the contents of the backing store onscreen, the window server scales the content appropriately, mapping points to onscreen pixels. The result is that if you draw the same content on two similar devices, and only one of them has a high-resolution screen, the content appears to be about the same size on both devices, as shown in Figure 4-3.

Figure 4-3 Content appears the same size at standard resolution and high resolution

In some situations you may need to know how points are mapped to pixels, in which case you can obtain the backing scale factor, which is always either 1.0 or 2.0. The backing scale factor is a property of a layer, view, or window, and it depends on the resolution of the underlying display device.

Provide High-Resolution Versions of Graphics

OS X automatically magnifies standard-resolution bitmapped images so they appear to the user at the correct size, but they appear fuzzy. To avoid this problem, you must provide high-resolution versions of your graphics, along with the standard-resolution versions, in the app bundle. In addition to any images your app displays, you must do this for your app’s icons and any custom controls, cursors, and other artwork.

High-resolution graphics must be scaled with twice as many pixels in each dimension to display at the same size in user space. For example, if you supply a standard-resolution image sized at 50x50 pixels, the high-resolution version must be sized at 100x100 pixels.

For AppKit to recognize and load high-resolution versions of your graphics at appropriate times, you must adopt the naming convention of appending @2x to the image name. For example, a standard-resolution image named circle.png would have a high-resolution counterpart named circle@2x.png. Ideally, you can package both image versions into a single TIFF file. This is most easily done by setting the Xcode option Combine High Resolution Artwork to Yes in Target Build Settings under Deployment.

You should create a set of icons for your app consisting of standard- and high-resolution versions of each icon size (16x16, 32x32, 128x128, 256x256, and 512x512), appending @2x to the icon image name, which, by convention, specifies the icon size in user space points. For example, an icon named icon_16x16.png would have a high-resolution counterpart named icon_16x16@2x.png, the icon_32x32.png size would have a version named icon_32x32@2x.png, and so on. After you’ve created your set of app icons, place them in a folder named icon.iconset. Then you can use the iconutil command-line tool to convert your .iconset folder into a single, deployment-ready, high-resolution .icns file.

Use High-Resolution-Savvy Image-Loading Methods

If you follow the @2x naming convention, there are two methods available that load standard- and high-resolution versions of an image into an NSImage object, whether or not you provide a multirepresentation image file:

● The NSImage method imageNamed: finds resources located in the main application bundle, but not in frameworks or plug-ins.

● The NSBundle method imageForResource: looks for resources outside as well as inside the main bundle. Framework authors should use this method.

To create an NSImage object from a CGImageRef data type, use the initWithCGImage:size: method, specifying the image size in points, not pixels.
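Sketches of the three loading paths follow; the image names and the cgImage variable are assumptions:

// AppKit chooses circle.png or circle@2x.png automatically, based on
// the backing scale factor of the destination.
NSImage *circle = [NSImage imageNamed:@"circle"];

// Framework or plug-in code should go through NSBundle instead.
NSImage *gear = [[NSBundle bundleForClass:[self class]] imageForResource:@"gear"];

// Wrapping a CGImageRef: the size is specified in points, not pixels,
// so 50x50 points here covers 100x100 pixels on a high-resolution display.
NSImage *wrapped = [[NSImage alloc] initWithCGImage:cgImage
                                               size:NSMakeSize(50.0, 50.0)];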
For custom cursors, you can pass a multirepresentation TIFF to the NSCursor class method initWithImage:hotSpot:.

Use APIs That Support High Resolution

Cocoa apps must replace deprecated APIs with their newer counterparts, and apps that use older technologies need to replace those technologies with newer ones. The NSImage, NSView, NSWindow, and NSScreen classes have methods that support high resolution, including methods for converting geometry, detecting scaling, and aligning pixels. These APIs, and the deprecated technologies that you must avoid, are described in High Resolution Guidelines for OS X.

You should also consider whether your app requires further adjustments for special scenarios, such as using pixel-based technologies (OpenGL, Quartz bitmaps) or custom Core Animation layers. These advanced optimization techniques are described in High Resolution Guidelines for OS X, which also provides much more detailed information about high resolution.

Prepare for Fast User Switching

Fast user switching lets users share a single machine without having to log out every time they want to access their user account. Users share physical access to the machine, including the same keyboard, mouse, and monitor. However, instead of logging out, a new user simply logs in and switches out the previous user.

Processes in a switched-out login session continue running as before. They can continue processing data, communicating with the system, and drawing to the screen buffer as before. However, because they are switched out, they do not receive input from the keyboard and mouse. Similarly, if they were to check, the monitor would appear to be in sleep mode. As a result, it may benefit some apps to adjust their behavior while in a switched-out state to avoid wasting system resources.

Although fast user switching is a convenient feature for users, it presents several challenges for app developers. Apps that rely on exclusive access to certain resources may need to modify their behavior to live in a fast user switching environment. For example, an app that stores temporary data in /tmp may run into problems when a second instance running under a different user tries to modify the same files in that directory.

To support fast user switching, there are certain guidelines you should follow in developing your apps, most of which describe safe ways to identify and share system resources. A summary of these guidelines is as follows:

● Incorporate session ID information into the name of any entity that appears in a global namespace, including file names, shared memory regions, semaphores, and named pipes. Tagging such app-specific resources with a session ID is necessary to distinguish them from similar resources created by apps in a different session. The Security layer of the system assigns a unique ID to each login session. Incorporating this ID into cache file or temporary directory names can prevent namespace collisions when creating these files. See “Supporting Fast User Switching” for information on how to get the session ID.

● Don’t assume you have exclusive access to any resource, especially TCP/IP ports and hardware devices.

● Don’t assume there is only one instance of a per-user service running.

● Use file-level or range-level locks when accessing files.

● Accept and handle user switch notifications, as in the sketch after this list. See “User Switch Notifications” for more information.
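A sketch of handling the switch notifications, which NSWorkspace posts on its own notification center; the handler names are illustrative:

- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    NSNotificationCenter *center = [[NSWorkspace sharedWorkspace] notificationCenter];
    [center addObserver:self
               selector:@selector(sessionDidResignActive:)
                   name:NSWorkspaceSessionDidResignActiveNotification
                 object:nil];
    [center addObserver:self
               selector:@selector(sessionDidBecomeActive:)
                   name:NSWorkspaceSessionDidBecomeActiveNotification
                 object:nil];
}

- (void)sessionDidResignActive:(NSNotification *)notification {
    // The session was switched out: pause timers, animations, and other
    // work that would waste resources while no one is watching.
}

- (void)sessionDidBecomeActive:(NSNotification *)notification {
    // The session regained the console: resume normal operation.
}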
For more information on user switching, see Multiple User Environment Programming Topics.

Take Advantage of the Dock

The Dock is a desktop app designed to reduce desktop clutter, provide users with feedback about an app, and allow users to switch easily between tasks. You can customize your app’s Dock tile by modifying the Dock icon and adding items to the menu displayed for your app, and you can customize the Dock icon of a minimized window.

An app’s Dock icon is, by default, the app’s icon. While your app is running, you can modify or replace the default icon with another image that indicates the current state of your app. For example, the icon for Mail changes when messages are waiting to be read: a badge (a red circle containing a number) is overlaid onto Mail’s app icon to indicate the number of unread messages, and the number changes each time Mail retrieves more messages.

When the user holds the Control key down and clicks a Dock tile, a menu appears. If your app does nothing to customize the menu, the Dock tile’s menu contains a list of the app’s open documents (if any), followed by the standard menu items Keep in Dock, Open at Login, Show in Finder, Hide, and Quit. You can add other menu items to the Dock menu, either statically, by providing a custom menu nib file, or dynamically, by overriding the application delegate’s applicationDockMenu: method.

You can also customize a Dock tile when your app is not currently running by creating a Dock tile plug-in that can update the Dock tile icon and menu. For example, you may want to update the badge text to indicate that new content will be available the next time the app is launched, and you may want to customize the app’s Dock menu to deal with the new content.

For information explaining how to customize your app’s Dock icon and menu, see Dock Tile Programming Guide.
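Two sketches of Dock tile customization; the unread count and the checkForNewContent: action are invented:

// Overlay a badge on the app's Dock icon, as Mail does for unread mail.
- (void)updateDockBadgeWithUnreadCount:(NSUInteger)unreadCount {
    NSString *badge = (unreadCount > 0) ?
        [NSString stringWithFormat:@"%lu", (unsigned long)unreadCount] : nil;
    [[NSApp dockTile] setBadgeLabel:badge];  // nil clears the badge
}

// Supply Dock menu items dynamically from the application delegate.
- (NSMenu *)applicationDockMenu:(NSApplication *)sender {
    NSMenu *dockMenu = [[NSMenu alloc] initWithTitle:@""];
    [dockMenu addItemWithTitle:@"Check for New Content"
                        action:@selector(checkForNewContent:)  // hypothetical
                 keyEquivalent:@""];
    return dockMenu;
}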
Build-Time Configuration Details

Configuring your app properly is an important part of the development process. Mac apps use a structured directory configuration to manage their code and resource files. And although most of the files are custom and are there to support your app, some are required by the system or the App Store and must be configured properly.

If you intend to sell your application through the Mac App Store or use iCloud storage, you also need to create an explicit app ID, create provisioning profiles, and enable the correct entitlements for your application. These procedures are explained in Tools Workflow Guide for Mac. All of the requirements for preparing your app and distributing it on the App Store are described in the App Store Resource Center.

Configuring Your Xcode Project

To develop a Mac app, you create an Xcode project. An Xcode project is a repository for all the files, resources, and information required to build your app (or one of a number of other software products). A project contains all the elements used to build your app and maintains the relationships between those elements. It contains one or more targets, which specify how to build the software. A project defines default build settings for all the targets in the project (each target can also specify its own build settings, which override the project build settings).

You create a new project using the Xcode File > New > New Project menu command, which invokes the New Project dialog. This dialog enables you to choose a template for your project, such as a Cocoa app, to name it, and to locate it in your file system. The New Project dialog also provides options so you can specify whether your app uses the Cocoa document architecture or the Core Data framework. When you save your project, Xcode lets you create a local Git repository to enable source code control for your project.

If you have two or more closely related projects, you should create an Xcode workspace and add your projects to it. A workspace groups projects and other documents so you can work on them together. One project can use the products and shared libraries of another project while building, and Xcode does indexing across the entire workspace, extending the scope of content-aware features such as code completion.

Once you have created your project, you write and edit your code using the Xcode source editor. You also use Xcode to build and debug your code, setting breakpoints, viewing the values of variables, stepping through running code, and reviewing issues found during builds or code analysis.

When you create a new project, it includes one or more targets, where each target specifies one build product and the instructions for how the product is to be built. Most developers never need to change the vast majority of default build settings, but there are a few basic settings that you should check for each target, such as the deployment target (platform, OS version, and architecture), main user interface, and linked frameworks and libraries. You also need to set up one or more schemes to specify the targets, build configuration, and executable configuration to use when the product specified by the target is launched.

You use the project editor and the scheme editing dialog to edit build settings and control what happens when you build and run your code. "Building and Running Your Code" explains how to work with Xcode build settings and schemes. For more information about using Xcode to create, configure, build, and debug your project, as well as archiving your program to package it for distribution or submission to the Mac App Store, see Xcode 4 User Guide.

The Information Property List File

An information property list (Info.plist) file contains essential runtime-configuration information for the app. Xcode provides a version of this file with every Mac application project and configures it with several standard keys. Although the default keys cover several important aspects of the app's configuration, most apps require additional keys to specify their configuration fully.

To view the contents of your Info.plist file, select it in the Groups & Files pane. Xcode displays a property list editor similar to the one in Figure 5-1, which you can use to edit the property values and add new key-value pairs. By default, Xcode displays a more user-friendly version of each key name. To see the key names that Xcode adds to the Info.plist file, Control-click an item in the editor and choose Show Raw Keys/Values from the contextual menu that appears.

Figure 5-1 The information property list editor (screenshot, taken from Xcode 3.2.5, of the property list editor showing key-value pairs)

Xcode automatically adds some important keys to the Info.plist file of all new projects and sets their initial values.
However, there are several keys that Mac apps commonly use to configure their launch environment and runtime behavior. Here are some keys that you might want to add to your app's Info.plist file specifically for OS X (a sample fragment follows this list):

● LSApplicationCategoryType (required for apps distributed using the App Store)
● CFBundleDisplayName
● LSHasLocalizedDisplayName
● NSHumanReadableCopyright
● LSMinimumSystemVersion
● UTExportedTypeDeclarations
● UTImportedTypeDeclarations

For detailed information about these and other keys that you can include in your app's Info.plist file, see Information Property List Key Reference.
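For illustration, a fragment of an Info.plist file that sets three of these keys might look as follows; the category, version, and copyright values are only examples.

<key>LSApplicationCategoryType</key>
<string>public.app-category.productivity</string>
<key>LSMinimumSystemVersion</key>
<string>10.7</string>
<key>NSHumanReadableCopyright</key>
<string>Copyright © 2012 My Company. All rights reserved.</string>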
The OS X Application Bundle

When you build your Mac app, Xcode packages it as a bundle. A bundle is a directory in the file system that groups related resources together in one place. A Mac app bundle contains a single Contents directory, inside of which are additional files and directories with the app's code, resources, and localized content.

Table 5-1 lists the contents of a typical Mac app bundle, which for demonstration purposes is called MyApp. This example is for illustrative purposes only; some of the files listed in this table may not appear in your own application bundles.

Table 5-1 A typical application bundle

Contents/MacOS/MyApp
The executable file containing your app's code. The name of this file is the same as your app name minus the .app extension. This file is required.

Contents/Info.plist
Also known as the information property list file, a file containing configuration data for the app. The system uses this data to determine how to interact with the app at specific times. This file is required. For more information, see "The Information Property List File."

Contents/Resources/English.lproj/MainMenu.nib
The English version of the app's main nib file. This file contains the default interface objects to load at app launch time. Typically, this nib file contains the app's menu bar and application delegate object. It may also contain other controller objects that should always be available at launch time. (The name of the main nib file can be changed by assigning a different value to the NSMainNibFile key in the Info.plist file.) This file is optional but recommended. For more information, see "The Information Property List File."

Contents/Resources/sun.png (or other resource files)
Nonlocalized resources are placed at the top level of the Resources directory (sun.png represents a nonlocalized image file in the example). The app uses nonlocalized resources when no localized version of the same resource is provided. Thus, you can use these files in situations where the resource is the same for all localizations.

Contents/Resources/en_GB.lproj, Contents/Resources/es.lproj, Contents/Resources/de.lproj, and other language-specific project directories
Localized resources are placed in subdirectories with an ISO 639-1 language abbreviation for a name plus an .lproj suffix. Although more human-readable names (such as English.lproj, French.lproj, and German.lproj) can be used for directory names, the ISO 639-1 names are preferred because they allow you to include an ISO 3166-1 regional designation. (For example, the en_GB.lproj directory contains resources localized for English as spoken in Great Britain, the es.lproj directory contains resources localized for Spanish, and the de.lproj directory contains resources localized for German.)

A Mac app should be internationalized and have a language.lproj directory for each language it supports. Even if you provide localized versions of your resources, though, include a default version of these files at the top level of your Resources directory. The default version is used when a specific localization is not available.

At runtime, you can access your app's resource files from your code using the following steps:

1. Obtain a reference to your app's main bundle object (typically using the NSBundle class).
2. Use the methods of the bundle object to obtain a file-system path to the desired resource file.
3. Open (or access) the file and use it.

You obtain a reference to your app's main bundle using the mainBundle class method of NSBundle. The pathForResource:ofType: method is one of several NSBundle methods that you can use to retrieve the location of resources. The following example shows how to locate a file called sun.png and create an image object from it. The first line gets the bundle object and the path to the file. The second line creates an NSImage object that you could use to display the image in your app.

NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"sun" ofType:@"png"];
NSImage *sunImage = [[NSImage alloc] initWithContentsOfFile:imagePath];

Note: If you prefer to use Core Foundation to access bundles, you can obtain a CFBundleRef opaque type using the CFBundleGetMainBundle function. You can then use that opaque type plus the Core Foundation functions to locate any bundle resources, as in the sketch below.

For information on how to access and use resources in your app, see Resource Programming Guide. For more information about the structure of a Mac app bundle, see Bundle Programming Guide.
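A minimal sketch of the Core Foundation route mentioned in the note above, locating the same sun.png resource:

#include <CoreFoundation/CoreFoundation.h>

CFBundleRef mainBundle = CFBundleGetMainBundle();
CFURLRef imageURL = CFBundleCopyResourceURL(mainBundle, CFSTR("sun"), CFSTR("png"), NULL);
if (imageURL != NULL) {
    // Use the URL to open the file, then release it; Core Foundation
    // objects returned by Copy functions are owned by the caller.
    CFRelease(imageURL);
}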
Internationalizing Your App

The process of preparing a project to handle content in different languages is called internationalization. The process of converting text, images, and other content into other languages is called localization. Project resources that are candidates for localization include:

● Code-generated text, including locale-specific aspects of date, time, and number formatting
● Static text—for example, strings you specify programmatically and display in parts of your user interface, or an HTML file containing app help
● Icons (including your app icon) and other images when those images either contain text or have some culture-specific meaning
● Sound files containing spoken language
● Nib files

Users select their preferred language from the Language and Text system preference (Figure 5-2 The Language preference view).

Your application bundle can include multiple language-specific resource directories. The names of these directories consist of three components: an ISO 639-1 language code, an optional ISO 3166-1 region code, and a .lproj suffix. For example, to designate resources localized to English, the bundle would be named en.lproj. By convention, these directories are called lproj directories.

Note: You may use ISO 639-2 language codes instead of those defined by ISO 639-1. For more information about language and region codes, see "Language and Locale Designations" in Internationalization Programming Topics.

Each lproj directory contains a localized copy of the app's resource files. When you request the path to a resource file using the NSBundle class or the CFBundleRef opaque type, the path you get back automatically reflects the resource for the user's preferred language. For more information about internationalization and how you support localized content in your Mac apps, see Internationalization Programming Topics.

Tuning for Performance and Responsiveness

As you develop your app and your project code stabilizes, you can begin performance tuning. Of course, you want your app to launch and respond to the user's commands as quickly as possible. A responsive app fits easily into the user's workflow and feels well crafted.

Speed Up Your App's Launch Time

You can improve your app's performance at launch time by minimizing or deferring work until after the launch sequence has completed. The launch of an app provides users with their first impression of it, and it's something they see on a regular basis. Your overriding goal during launch should be to display the app's menu bar and main window and then start responding to user commands as quickly as possible. Making your app responsive to commands quickly provides a better experience for the user. The following sections provide some general tips on how to make your app launch faster.

Delay Initialization Code

Many apps spend a lot of time initializing code that isn't used until much later. Delaying the initialization of subsystems that are not immediately needed can speed up your launch time considerably. Remember that the goal is to display your app's interface quickly, so try to initialize only the subsystems related to that goal initially. Once you have posted your interface, your app can continue to initialize additional subsystems as needed. However, remember that just because your app is able to process commands does not mean you need all of that code right away.

The preferred way of initializing subsystems is on an as-needed basis. Wait until the user executes a command that requires a particular subsystem and then initialize it. That way, if the user never executes the command, you will not have wasted any time running the code to prepare for it. A minimal sketch of this pattern follows this section.

Avoid putting a lot of extraneous initialization code in your awakeFromNib methods. The system calls the awakeFromNib method of your main nib file before your app enters its main event loop. Use that method to initialize the objects in that nib and to prepare your app's interface. For all other initialization, use the applicationDidFinishLaunching: method of your NSApplicationDelegate object instead. For more information on nib files and how they are loaded, see Resource Programming Guide.
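Here is a minimal sketch of the as-needed pattern; SpellingEngine is a hypothetical, expensive-to-create subsystem, checkDocument is a hypothetical method on it, and _spellingEngine is assumed to be an instance variable of the controller.

- (SpellingEngine *)spellingEngine {
    if (_spellingEngine == nil) {
        // Created the first time a command needs it, not at launch.
        _spellingEngine = [[SpellingEngine alloc] init];
    }
    return _spellingEngine;
}

- (IBAction)checkSpelling:(id)sender {
    // The subsystem is initialized here, on first use.
    [[self spellingEngine] checkDocument];
}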
Simplify Your Main Nib File

Loading a nib file is an expensive process that can slow down your app's launch time if you are not careful. When a nib file is loaded, all of the objects in that file are instantiated and made ready for use. The more objects you include in your app's main nib, the more time it takes to load that file and launch your app.

The instantiation process for objects in a nib file requires that any frameworks used by those objects reside in memory. Thus, loading a nib for a Cocoa app would likely require the loading of both the AppKit and Foundation frameworks, if they were not already resident in memory. Similarly, if you declare a custom class in your main nib file and that class relies on other frameworks, the system must load those frameworks as well.

When designing your app's main nib file, you should include only those objects needed to display your app's initial user interface. Usually, this would involve just your app's menu bar and initial window. For any custom classes you include in the nib, make sure their initialization code is as minimal as possible. Defer any time-consuming operations or memory allocations until after the class is instantiated.

Minimize Global Variables

For both apps and frameworks, be careful not to declare global variables that require significant amounts of initialization. The system initializes global variables before calling your app's main routine. If you use a global variable to declare an object, the system must call the constructor or initialization method for that object during launch time. In general, it's best to avoid declaring objects as global variables altogether when you can use a pointer instead.

If you are implementing a framework or any type of reusable code module, you should also minimize the number of global variables you declare. Each app that links to a framework acquires a copy of that framework's global variables. These variables might require several pages of virtual memory, which then increases the memory footprint of the app. An increased memory footprint can lead to paging in the app, which has a tremendous impact on performance.

One way to minimize the global variables in a framework is to store the variables in a malloc-allocated block of memory instead. In this technique, you access the variables through a pointer to the memory, which you store as a global variable; a minimal sketch follows. Another advantage of this technique is that it allows you to defer the creation of any global variables until the first time they are actually used. See "Tips for Allocating Memory" in Memory Usage Performance Guidelines for more information.
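Here is a minimal sketch of that technique; the structure and field names are illustrative. (This simple version is not thread-safe; a real framework would guard the first allocation, for example with dispatch_once.)

#include <stdlib.h>

typedef struct FrameworkGlobals {
    int    retryCount;
    double scaleFactor;
} FrameworkGlobals;

// The only true global is a single pointer.
static FrameworkGlobals *gGlobals = NULL;

static FrameworkGlobals *GetGlobals(void) {
    if (gGlobals == NULL) {
        // The block is allocated, and the values initialized, on first use
        // rather than at launch.
        gGlobals = calloc(1, sizeof(FrameworkGlobals));
        gGlobals->retryCount = 3;
        gGlobals->scaleFactor = 1.0;
    }
    return gGlobals;
}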
Minimize File Access at Launch Time

Accessing a file is one of the slowest operations performed on a computer, so it is important that you do it as little as possible, especially at launch time. There is always some file access that must occur at launch time, such as loading your executable code and reading in your main nib file, but reducing your initial dependence on startup files can provide significant speed improvements. If you can delay the reading of a file until after launch time, do so. The following list includes some files whose contents you may not need until after launch:

● Frameworks not used directly by your app—Avoid calling code that uses nonessential frameworks until after launch.
● Nib files whose contents are not displayed immediately—Make sure your nib files and awakeFromNib: code are not doing too much at launch time. See "Simplify Your Main Nib File" for more information.
● User preference files—User preferences may not be local, so read them later if you can.
● Font files—Consider delaying font initialization until after the app has launched.
● Network files—Avoid reading files located on the network if at all possible.

If you must read a file at launch time, do so only once. If you need multiple pieces of data from the same file, such as from a preferences file, consider reading all of the data once rather than accessing the file multiple times.

Don't Block the Main Thread

The main thread is where your app handles user events and other input, so you should keep it as free as possible to remain responsive to the user. In particular, never use the main thread to perform long-running or potentially unbounded tasks, such as tasks that require network access. Instead, always move those tasks onto background threads. The preferred way to do so is to use Grand Central Dispatch (GCD) or operation objects to perform tasks asynchronously, as in the sketch below. For more information about doing work on background threads, see Concurrency Programming Guide.
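A sketch of the GCD approach might look like this; fetchDataFromNetwork and updateUIWithData: stand in for methods you would define.

- (void)refresh:(id)sender {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Long-running work happens off the main thread.
        NSData *data = [self fetchDataFromNetwork];
        dispatch_async(dispatch_get_main_queue(), ^{
            // UI updates happen back on the main thread.
            [self updateUIWithData:data];
        });
    });
}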
Decrease Your App's Code Size

In the context of performance, the more memory your app occupies, the more inefficient it is. More memory means more memory allocations, more code, and a greater potential for paging. Reducing your code footprint is not just a matter of turning on code optimizations in your compiler, although that does help. You can also reduce your code footprint by organizing your code so that only the minimum set of required functions is in memory at any given time. You implement this optimization by profiling your code. See "Memory Instruments" in Instruments User Guide for information about profiling your app's memory allocations.

Compiler-Level Optimizations

The Xcode compiler supports optimization options that let you choose whether you prefer a smaller binary size, faster code, or faster build times. For new projects, Xcode automatically disables optimizations for the debug build configuration and selects the Fastest, Smallest option for the release build configuration. Code optimizations of any kind result in slower build times because of the extra work involved in the optimization process. If your code is changing, as it does during the development cycle, you do not want optimizations enabled. As you near the end of your development cycle, though, the release build configuration can give you an indication of the size of your finished product, so the Fastest, Smallest option is appropriate.

Table 6-1 lists the optimization levels available in Xcode. When you select one of these options, Xcode passes the appropriate flags to the compiler for the given group or files. These options are available at the target level or as part of a build configuration. See the Xcode Build System Guide for information on working with build settings for your project.

Table 6-1 Compiler optimization options

None: The compiler does not attempt to optimize code. Use this option during development when you are focused on solving logic errors and need a fast compile time. Do not use this option for shipping your executable.

Fast: The compiler performs simple optimizations to boost code performance while minimizing the impact on compile time. This option also uses more memory during compilation.

Faster: The compiler performs nearly all supported optimizations that do not require a space-time tradeoff. The compiler does not perform loop unrolling or function inlining with this option. This option increases both compilation time and the performance of generated code.

Fastest: The compiler performs all optimizations in an attempt to improve the speed of the generated code. This option can increase the size of generated code as the compiler performs aggressive inlining of functions. This option is generally not recommended.

Fastest, Smallest: The compiler performs all optimizations that do not typically increase code size. This is the preferred option for shipping code because it gives your executable a smaller memory footprint.

As with any performance enhancement, do not make assumptions about which option will give you the best results. You should always measure the results of each optimization you try. For example, the Fastest option might generate extremely fast code for a particular module, but it usually does so at the expense of executable size. Any speed advantages you gain from the code generation are easily lost if the code needs to be paged in from disk at runtime.

Use Core Data for Large Data Sets

If your app manipulates large amounts of structured data, store it in a Core Data persistent store or in a SQLite database instead of in a flat file. Both Core Data and SQLite provide efficient ways to manage large data sets without requiring the entire set to be in memory all at once. Use SQLite if you deal with low-level data structures or an existing SQLite database. Core Data provides a high-level abstraction for efficient object-graph management with an Objective-C interface; it is, however, an advanced framework, and you shouldn't use it until you have gained adequate experience. For more information about Core Data, see Core Data Programming Guide and Optimizing Core Data with Instruments.

Eliminate Memory Leaks

Your app should not have any memory leaks. You can use the Instruments app to track down leaks in your code. See "Memory Instruments" in Instruments User Guide for information about finding memory leaks.

Dead Strip Your Code

For statically linked executables, dead-code stripping is the process of removing unreferenced code from the executable file. If the code is unreferenced, it must not be used and therefore is not needed in the executable file. Removing dead code reduces the size of your executable and can help reduce paging. To enable dead-code stripping in Xcode, in the Linking group of Build Settings, set the Dead Code Stripping option to Yes.

Strip Symbol Information

Debugging symbols and dynamic-binding information can take up a lot of space and comprise a large percentage of your executable's size. Before shipping your code, you should strip out all unneeded symbols. To strip debugging symbols from your executable, change the Xcode compiler code generation option Generate Debug Symbols to No. You can also generate debugging symbols on a target-by-target basis if you prefer. See the Xcode Help for more information on build configurations and target settings.

Document Revision History

This table describes the changes to Mac App Programming Guide.
2012-07-23: Added a short section on adopting iCloud in a Mac app, "Integrating iCloud Support Into Your App" (page 30). Removed the chapter on iCloud, which is superseded by iCloud Design Guide. Rewrote the section about supporting high resolution, "Optimize for High Resolution" (page 58).

2012-01-09: Summarized the chapter on document-based apps, added an iCloud chapter, and revised the sandbox information. Made minor technical and editorial revisions throughout. Changed the title from OS X Application Programming Guide.

2011-06-27: New document describing the development process for Mac apps.

Apple Inc. © 2012 Apple Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, mechanical, electronic, photocopying, recording, or otherwise, without prior written permission of Apple Inc., with the following exceptions: Any person is hereby authorized to store documentation on a single computer for personal use only and to print copies of documentation for personal use provided that the documentation contains Apple's copyright notice. No licenses, express or implied, are granted with respect to any of the technology described in this document. Apple retains all intellectual property rights associated with the technology described in this document. This document is intended to assist application developers to develop applications only for Apple-labeled computers.

Apple Inc., 1 Infinite Loop, Cupertino, CA 95014, 408-996-1010

Apple, the Apple logo, Bonjour, Cocoa, Finder, Instruments, iPhoto, iTunes, Keychain, Mac, Macintosh, Objective-C, OS X, Quartz, QuickDraw, Sand, Spotlight, Time Machine, and Xcode are trademarks of Apple Inc., registered in the U.S. and other countries. Launchpad is a trademark of Apple Inc. iCloud is a service mark of Apple Inc., registered in the U.S. and other countries. App Store and Mac App Store are service marks of Apple Inc. OpenGL is a registered trademark of Silicon Graphics, Inc. UNIX is a registered trademark of The Open Group. iOS is a trademark or registered trademark of Cisco in the U.S. and other countries and is used under license.

Even though Apple has reviewed this document, APPLE MAKES NO WARRANTY OR REPRESENTATION, EITHER EXPRESS OR IMPLIED, WITH RESPECT TO THIS DOCUMENT, ITS QUALITY, ACCURACY, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. AS A RESULT, THIS DOCUMENT IS PROVIDED "AS IS," AND YOU, THE READER, ARE ASSUMING THE ENTIRE RISK AS TO ITS QUALITY AND ACCURACY. IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES RESULTING FROM ANY DEFECT OR INACCURACY IN THIS DOCUMENT, even if advised of the possibility of such damages. THE WARRANTY AND REMEDIES SET FORTH ABOVE ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL OR WRITTEN, EXPRESS OR IMPLIED. No Apple dealer, agent, or employee is authorized to make any modification, extension, or addition to this warranty. Some states do not allow the exclusion or limitation of implied warranties or liability for incidental or consequential damages, so the above limitation or exclusion may not apply to you. This warranty gives you specific legal rights, and you may also have other rights which vary from state to state.
Objective-C Runtime Programming Guide

Contents

Introduction 5
  Organization of This Document 5
  See Also 5
Runtime Versions and Platforms 7
  Legacy and Modern Versions 7
  Platforms 7
Interacting with the Runtime 8
  Objective-C Source Code 8
  NSObject Methods 8
  Runtime Functions 9
Messaging 10
  The objc_msgSend Function 10
  Using Hidden Arguments 13
  Getting a Method Address 14
Dynamic Method Resolution 16
  Dynamic Method Resolution 16
  Dynamic Loading 17
Message Forwarding 18
  Forwarding 18
  Forwarding and Multiple Inheritance 20
  Surrogate Objects 21
  Forwarding and Inheritance 21
Type Encodings 24
Declared Properties 28
  Property Type and Functions 28
  Property Type String 29
  Property Attribute Description Examples 30
Document Revision History 33

Figures and Tables

Figure 3-1 Messaging Framework 12
Figure 5-1 Forwarding 20
Table 6-1 Objective-C type encodings 24
Table 6-2 Objective-C method encodings 26
Table 7-1 Declared property type encodings 30

Introduction

The Objective-C language defers as many decisions as it can from compile time and link time to runtime. Whenever possible, it does things dynamically. This means that the language requires not just a compiler, but also a runtime system to execute the compiled code. The runtime system acts as a kind of operating system for the Objective-C language; it's what makes the language work.

This document looks at the NSObject class and how Objective-C programs interact with the runtime system. In particular, it examines the paradigms for dynamically loading new classes at runtime and forwarding messages to other objects. It also provides information about how you can find information about objects while your program is running.

You should read this document to gain an understanding of how the Objective-C runtime system works and how you can take advantage of it. Typically, though, there should be little reason for you to need to know and understand this material to write a Cocoa application.

Organization of This Document

This document has the following chapters:
● "Runtime Versions and Platforms" (page 7)
● "Interacting with the Runtime" (page 8)
● "Messaging" (page 10)
● "Dynamic Method Resolution" (page 16)
● "Message Forwarding" (page 18)
● "Type Encodings" (page 24)
● "Declared Properties" (page 28)

See Also

Objective-C Runtime Reference describes the data structures and functions of the Objective-C runtime support library. Your programs can use these interfaces to interact with the Objective-C runtime system. For example, you can add classes or methods, or obtain a list of all class definitions for loaded classes. The Objective-C Programming Language describes the Objective-C language. Objective-C Release Notes describes some of the changes in the Objective-C runtime in the latest release of OS X.

Runtime Versions and Platforms

There are different versions of the Objective-C runtime on different platforms.

Legacy and Modern Versions

There are two versions of the Objective-C runtime—"modern" and "legacy." The modern version was introduced with Objective-C 2.0 and includes a number of new features.
The programming interface for the legacy version of the runtime is described in Objective-C 1 Runtime Reference; the programming interface for the modern version is described in Objective-C Runtime Reference.

The most notable new feature is that instance variables in the modern runtime are "non-fragile":
● In the legacy runtime, if you change the layout of instance variables in a class, you must recompile classes that inherit from it.
● In the modern runtime, if you change the layout of instance variables in a class, you do not have to recompile classes that inherit from it.

In addition, the modern runtime supports instance variable synthesis for declared properties (see Declared Properties in The Objective-C Programming Language).

Platforms

iPhone applications and 64-bit programs on OS X v10.5 and later use the modern version of the runtime. Other programs (32-bit programs on the OS X desktop) use the legacy version of the runtime.

Interacting with the Runtime

Objective-C programs interact with the runtime system at three distinct levels: through Objective-C source code; through methods defined in the NSObject class of the Foundation framework; and through direct calls to runtime functions.

Objective-C Source Code

For the most part, the runtime system works automatically and behind the scenes. You use it just by writing and compiling Objective-C source code. When you compile code containing Objective-C classes and methods, the compiler creates the data structures and function calls that implement the dynamic characteristics of the language. The data structures capture information found in class and category definitions and in protocol declarations; they include the class and protocol objects discussed in Defining a Class and Protocols in The Objective-C Programming Language, as well as method selectors, instance variable templates, and other information distilled from source code. The principal runtime function is the one that sends messages, as described in "Messaging" (page 10). It's invoked by source-code message expressions.

NSObject Methods

Most objects in Cocoa are subclasses of the NSObject class, so most objects inherit the methods it defines. (The notable exception is the NSProxy class; see "Message Forwarding" (page 18) for more information.) Its methods therefore establish behaviors that are inherent to every instance and every class object. However, in a few cases, the NSObject class merely defines a template for how something should be done; it doesn't provide all the necessary code itself.

For example, the NSObject class defines a description instance method that returns a string describing the contents of the class. This is primarily used for debugging—the GDB print-object command prints the string returned from this method. NSObject's implementation of this method doesn't know what the class contains, so it returns a string with the name and address of the object. Subclasses of NSObject can implement this method to return more details, as in the sketch below. For example, the Foundation class NSArray returns a list of descriptions of the objects it contains.
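For illustration, a hypothetical Pair class with first and second properties might override description like this:

- (NSString *)description {
    // Include the class name and address (as NSObject does), plus the
    // values of this object's own properties.
    return [NSString stringWithFormat:@"<%@: %p> first=%@ second=%@",
            [self class], self, self.first, self.second];
}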
Some of the NSObject methods simply query the runtime system for information. These methods allow objects to perform introspection. Examples of such methods are the class method, which asks an object to identify its class; isKindOfClass: and isMemberOfClass:, which test an object's position in the inheritance hierarchy; respondsToSelector:, which indicates whether an object can accept a particular message; conformsToProtocol:, which indicates whether an object claims to implement the methods defined in a specific protocol; and methodForSelector:, which provides the address of a method's implementation. Methods like these give an object the ability to introspect about itself.

Runtime Functions

The runtime system is a dynamic shared library with a public interface consisting of a set of functions and data structures in the header files located within the directory /usr/include/objc. Many of these functions allow you to use plain C to replicate what the compiler does when you write Objective-C code. Others form the basis for functionality exported through the methods of the NSObject class. These functions make it possible to develop other interfaces to the runtime system and produce tools that augment the development environment; they're not needed when programming in Objective-C. However, a few of the runtime functions might on occasion be useful when writing an Objective-C program. All of these functions are documented in Objective-C Runtime Reference.

Messaging

This chapter describes how message expressions are converted into objc_msgSend function calls, and how you can refer to methods by name. It then explains how you can take advantage of objc_msgSend, and how—if you need to—you can circumvent dynamic binding.

The objc_msgSend Function

In Objective-C, messages aren't bound to method implementations until runtime. The compiler converts a message expression,

[receiver message]

into a call on a messaging function, objc_msgSend. This function takes the receiver and the name of the method mentioned in the message—that is, the method selector—as its two principal parameters:

objc_msgSend(receiver, selector)

Any arguments passed in the message are also handed to objc_msgSend:

objc_msgSend(receiver, selector, arg1, arg2, ...)

The messaging function does everything necessary for dynamic binding:
● It first finds the procedure (method implementation) that the selector refers to. Because the same method can be implemented differently by separate classes, the precise procedure that it finds depends on the class of the receiver.
● It then calls the procedure, passing it the receiving object (a pointer to its data), along with any arguments that were specified for the method.
● Finally, it passes on the return value of the procedure as its own return value.

Note: The compiler generates calls to the messaging function. You should never call it directly in the code you write.

The key to messaging lies in the structures that the compiler builds for each class and object. Every class structure includes these two essential elements:
● A pointer to the superclass.
● A class dispatch table. This table has entries that associate method selectors with the class-specific addresses of the methods they identify. The selector for the setOrigin:: method is associated with the address of (the procedure that implements) setOrigin::, the selector for the display method is associated with display's address, and so on.
When a new object is created, memory for it is allocated, and its instance variables are initialized. First among the object's variables is a pointer to its class structure. This pointer, called isa, gives the object access to its class and, through the class, to all the classes it inherits from.

Note: While not strictly a part of the language, the isa pointer is required for an object to work with the Objective-C runtime system. An object needs to be "equivalent" to a struct objc_object (defined in objc/objc.h) in whatever fields the structure defines. However, you rarely, if ever, need to create your own root object, and objects that inherit from NSObject or NSProxy automatically have the isa variable.

These elements of class and object structure are illustrated in Figure 3-1.

Figure 3-1 Messaging framework (diagram: an object's isa pointer and instance variables lead to its class structure, which holds a superclass pointer and a dispatch table of selector-address pairs; following superclass pointers climbs to the root class, NSObject)

When a message is sent to an object, the messaging function follows the object's isa pointer to the class structure, where it looks up the method selector in the dispatch table. If it can't find the selector there, objc_msgSend follows the pointer to the superclass and tries to find the selector in its dispatch table. Successive failures cause objc_msgSend to climb the class hierarchy until it reaches the NSObject class. Once it locates the selector, the function calls the method entered in the table and passes it the receiving object's data structure. This is the way that method implementations are chosen at runtime—or, in the jargon of object-oriented programming, the way that methods are dynamically bound to messages.

To speed the messaging process, the runtime system caches the selectors and addresses of methods as they are used. There's a separate cache for each class, and it can contain selectors for inherited methods as well as for methods defined in the class. Before searching the dispatch tables, the messaging routine first checks the cache of the receiving object's class (on the theory that a method that was used once is likely to be used again). If the method selector is in the cache, messaging is only slightly slower than a function call. Once a program has been running long enough to "warm up" its caches, almost all the messages it sends find a cached method. Caches grow dynamically to accommodate new messages as the program runs.

Using Hidden Arguments

When objc_msgSend finds the procedure that implements a method, it calls the procedure and passes it all the arguments in the message. It also passes the procedure two hidden arguments:
● The receiving object
● The selector for the method

These arguments give every method implementation explicit information about the two halves of the message expression that invoked it. They're said to be "hidden" because they aren't declared in the source code that defines the method. They're inserted into the implementation when the code is compiled. Although these arguments aren't explicitly declared, source code can still refer to them (just as it can refer to the receiving object's instance variables).
A method refers to the receiving object as self, and to its own selector as _cmd. In the example below, _cmd refers to the selector for the strange method and self to the object that receives a strange message.

- strange
{
    id target = getTheReceiver();
    SEL method = getTheMethod();

    if ( target == self || method == _cmd )
        return nil;
    return [target performSelector:method];
}

self is the more useful of the two arguments. It is, in fact, the way the receiving object's instance variables are made available to the method definition.

Getting a Method Address

The only way to circumvent dynamic binding is to get the address of a method and call it directly as if it were a function. This might be appropriate on the rare occasions when a particular method will be performed many times in succession and you want to avoid the overhead of messaging each time the method is performed.

With a method defined in the NSObject class, methodForSelector:, you can ask for a pointer to the procedure that implements a method, then use the pointer to call the procedure. The pointer that methodForSelector: returns must be carefully cast to the proper function type. Both return and argument types should be included in the cast. The example below shows how the procedure that implements the setFilled: method might be called:

void (*setter)(id, SEL, BOOL);
int i;

setter = (void (*)(id, SEL, BOOL))[target methodForSelector:@selector(setFilled:)];
for ( i = 0; i < 1000; i++ )
    setter(targetList[i], @selector(setFilled:), YES);

The first two arguments passed to the procedure are the receiving object (self) and the method selector (_cmd). These arguments are hidden in method syntax but must be made explicit when the method is called as a function. Using methodForSelector: to circumvent dynamic binding saves most of the time required by messaging. However, the savings will be significant only where a particular message is repeated many times, as in the for loop shown above. Note that methodForSelector: is provided by the Cocoa runtime system; it's not a feature of the Objective-C language itself.

Dynamic Method Resolution

This chapter describes how you can provide an implementation of a method dynamically.

There are situations where you might want to provide an implementation of a method dynamically. For example, the Objective-C declared properties feature (see Declared Properties in The Objective-C Programming Language) includes the @dynamic directive:

@dynamic propertyName;

which tells the compiler that the methods associated with the property will be provided dynamically. You can implement the methods resolveInstanceMethod: and resolveClassMethod: to dynamically provide an implementation for a given selector for an instance or class method, respectively.

An Objective-C method is simply a C function that takes at least two arguments—self and _cmd. You can add a function to a class as a method using the function class_addMethod. Therefore, given the following function:

void dynamicMethodIMP(id self, SEL _cmd) {
    // implementation ....
}

you can dynamically add it to a class as a method (called resolveThisMethodDynamically) using resolveInstanceMethod: like this:

@implementation MyClass
+ (BOOL)resolveInstanceMethod:(SEL)aSEL
{
    if (aSEL == @selector(resolveThisMethodDynamically)) {
        class_addMethod([self class], aSEL, (IMP)dynamicMethodIMP, "v@:");
        return YES;
    }
    return [super resolveInstanceMethod:aSEL];
}
@end

Forwarding methods (as described in "Message Forwarding" (page 18)) and dynamic method resolution are, largely, orthogonal. A class has the opportunity to dynamically resolve a method before the forwarding mechanism kicks in. If respondsToSelector: or instancesRespondToSelector: is invoked, the dynamic method resolver is given the opportunity to provide an IMP for the selector first. If you implement resolveInstanceMethod: but want particular selectors to actually be forwarded via the forwarding mechanism, you return NO for those selectors.

Dynamic Loading

An Objective-C program can load and link new classes and categories while it's running. The new code is incorporated into the program and treated identically to classes and categories loaded at the start. Dynamic loading can be used to do a lot of different things. For example, the various modules in the System Preferences application are dynamically loaded.

In the Cocoa environment, dynamic loading is commonly used to allow applications to be customized. Others can write modules that your program loads at runtime—much as Interface Builder loads custom palettes and the OS X System Preferences application loads custom preference modules. The loadable modules extend what your application can do. They contribute to it in ways that you permit but could not have anticipated or defined yourself. You provide the framework, but others provide the code.

Although there is a runtime function that performs dynamic loading of Objective-C modules in Mach-O files (objc_loadModules, defined in objc/objc-load.h), Cocoa's NSBundle class provides a significantly more convenient interface for dynamic loading—one that's object-oriented and integrated with related services; a sketch follows. See the NSBundle class specification in the Foundation framework reference for information on the NSBundle class and its use. See OS X ABI Mach-O File Format Reference for information on Mach-O files.
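A minimal sketch of loading a plug-in with NSBundle; the bundle path and the use made of the principal class are illustrative.

NSString *pluginPath = [@"~/Library/Application Support/MyApp/PlugIns/Sample.bundle"
                            stringByExpandingTildeInPath];
NSBundle *plugin = [NSBundle bundleWithPath:pluginPath];
if ([plugin load]) {
    // The principal class is declared by the bundle's own Info.plist.
    Class principalClass = [plugin principalClass];
    id instance = [[principalClass alloc] init];
    // Message the instance through whatever interface your plug-in API defines.
}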
Message Forwarding

Sending a message to an object that does not handle that message is an error. However, before announcing the error, the runtime system gives the receiving object a second chance to handle the message.

Forwarding

If you send a message to an object that does not handle that message, before announcing an error the runtime sends the object a forwardInvocation: message with an NSInvocation object as its sole argument—the NSInvocation object encapsulates the original message and the arguments that were passed with it. You can implement a forwardInvocation: method to give a default response to the message, or to avoid the error in some other way. As its name implies, forwardInvocation: is commonly used to forward the message to another object.

To see the scope and intent of forwarding, imagine the following scenarios: Suppose, first, that you're designing an object that can respond to a message called negotiate, and you want its response to include the response of another kind of object. You could accomplish this easily by passing a negotiate message to the other object somewhere in the body of the negotiate method you implement.

Take this a step further, and suppose that you want your object's response to a negotiate message to be exactly the response implemented in another class. One way to accomplish this would be to make your class inherit the method from the other class. However, it might not be possible to arrange things this way. There may be good reasons why your class and the class that implements negotiate are in different branches of the inheritance hierarchy.

Even if your class can't inherit the negotiate method, you can still "borrow" it by implementing a version of the method that simply passes the message on to an instance of the other class:

- negotiate
{
    if ( [someOtherObject respondsToSelector:@selector(negotiate)] )
        return [someOtherObject negotiate];
    return self;
}

This way of doing things could get a little cumbersome, especially if there were a number of messages you wanted your object to pass on to the other object. You'd have to implement one method to cover each method you wanted to borrow from the other class. Moreover, it would be impossible to handle cases where you didn't know, at the time you wrote the code, the full set of messages you might want to forward. That set might depend on events at runtime, and it might change as new methods and classes are implemented in the future.

The second chance offered by a forwardInvocation: message provides a less ad hoc solution to this problem, and one that's dynamic rather than static. It works like this: When an object can't respond to a message because it doesn't have a method matching the selector in the message, the runtime system informs the object by sending it a forwardInvocation: message. Every object inherits a forwardInvocation: method from the NSObject class. However, NSObject's version of the method simply invokes doesNotRecognizeSelector:. By overriding NSObject's version and implementing your own, you can take advantage of the opportunity that the forwardInvocation: message provides to forward messages to other objects.

To forward a message, all a forwardInvocation: method needs to do is:
● Determine where the message should go, and
● Send it there with its original arguments.

The message can be sent with the invokeWithTarget: method:

- (void)forwardInvocation:(NSInvocation *)anInvocation
{
    if ([someOtherObject respondsToSelector:[anInvocation selector]])
        [anInvocation invokeWithTarget:someOtherObject];
    else
        [super forwardInvocation:anInvocation];
}

The return value of the message that's forwarded is returned to the original sender. All types of return values can be delivered to the sender, including ids, structures, and double-precision floating-point numbers.

A forwardInvocation: method can act as a distribution center for unrecognized messages, parceling them out to different receivers. Or it can be a transfer station, sending all messages to the same destination. It can translate one message into another, or simply "swallow" some messages so there's no response and no error. A forwardInvocation: method can also consolidate several messages into a single response. What forwardInvocation: does is up to the implementor. However, the opportunity it provides for linking objects in a forwarding chain opens up possibilities for program design.
Note: The forwardInvocation: method gets to handle messages only if they don't invoke an existing method in the nominal receiver. If, for example, you want your object to forward negotiate messages to another object, it can't have a negotiate method of its own. If it does, the message will never reach forwardInvocation:.

For more information on forwarding and invocations, see the NSInvocation class specification in the Foundation framework reference.

Forwarding and Multiple Inheritance

Forwarding mimics inheritance and can be used to lend some of the effects of multiple inheritance to Objective-C programs. As shown in Figure 5-1, an object that responds to a message by forwarding it appears to borrow or "inherit" a method implementation defined in another class.

Figure 5-1 Forwarding (diagram: a Warrior instance, whose class implements forwardInvocation:, forwards a negotiate message to a Diplomat instance, whose class implements negotiate)

In this illustration, an instance of the Warrior class forwards a negotiate message to an instance of the Diplomat class. The Warrior will appear to negotiate like a Diplomat. It will seem to respond to the negotiate message, and for all practical purposes it does respond (although it's really a Diplomat that's doing the work). The object that forwards a message thus "inherits" methods from two branches of the inheritance hierarchy—its own branch and that of the object that responds to the message. In the example above, it appears as if the Warrior class inherits from Diplomat as well as from its own superclass.

Forwarding provides most of the features that you typically want from multiple inheritance. However, there's an important difference between the two: Multiple inheritance combines different capabilities in a single object. It tends toward large, multifaceted objects. Forwarding, on the other hand, assigns separate responsibilities to disparate objects. It decomposes problems into smaller objects, but associates those objects in a way that's transparent to the message sender.

Surrogate Objects

Forwarding not only mimics multiple inheritance, it also makes it possible to develop lightweight objects that represent or "cover" more substantial objects. The surrogate stands in for the other object and funnels messages to it. The proxy discussed in "Remote Messaging" in The Objective-C Programming Language is such a surrogate. A proxy takes care of the administrative details of forwarding messages to a remote receiver, making sure argument values are copied and retrieved across the connection, and so on. But it doesn't attempt to do much else; it doesn't duplicate the functionality of the remote object but simply gives the remote object a local address, a place where it can receive messages in another application.

Other kinds of surrogate objects are also possible. Suppose, for example, that you have an object that manipulates a lot of data—perhaps it creates a complicated image or reads the contents of a file on disk. Setting this object up could be time-consuming, so you prefer to do it lazily—when it's really needed or when system resources are temporarily idle. At the same time, you need at least a placeholder for this object in order for the other objects in the application to function properly. In this circumstance, you could initially create, not the full-fledged object, but a lightweight surrogate for it.
This object could do some things on its own, such as answer questions about the data, but mostly it would just hold a place for the larger object and, when the time came, forward messages to it. When the surrogate's forwardInvocation: method first receives a message destined for the other object, it would ensure that the object existed and would create it if it didn't. All messages for the larger object go through the surrogate, so, as far as the rest of the program is concerned, the surrogate and the larger object are the same.

Forwarding and Inheritance

Although forwarding mimics inheritance, the NSObject class never confuses the two. Methods like respondsToSelector: and isKindOfClass: look only at the inheritance hierarchy, never at the forwarding chain. If, for example, a Warrior object is asked whether it responds to a negotiate message,

if ( [aWarrior respondsToSelector:@selector(negotiate)] )
    ...

the answer is NO, even though it can receive negotiate messages without error and respond to them, in a sense, by forwarding them to a Diplomat. (See Figure 5-1.)

In many cases, NO is the right answer. But it may not be. If you use forwarding to set up a surrogate object or to extend the capabilities of a class, the forwarding mechanism should probably be as transparent as inheritance. If you want your objects to act as if they truly inherited the behavior of the objects they forward messages to, you'll need to reimplement the respondsToSelector: and isKindOfClass: methods to include your forwarding algorithm:

- (BOOL)respondsToSelector:(SEL)aSelector
{
    if ( [super respondsToSelector:aSelector] )
        return YES;
    else {
        /* Here, test whether the aSelector message can
         * be forwarded to another object and whether that
         * object can respond to it. Return YES if it can. */
    }
    return NO;
}

In addition to respondsToSelector: and isKindOfClass:, the instancesRespondToSelector: method should also mirror the forwarding algorithm. If protocols are used, the conformsToProtocol: method should likewise be added to the list. Similarly, if an object forwards any remote messages it receives, it should have a version of methodSignatureForSelector: that can return accurate descriptions of the methods that ultimately respond to the forwarded messages; for example, if an object is able to forward a message to its surrogate, you would implement methodSignatureForSelector: as follows:

- (NSMethodSignature *)methodSignatureForSelector:(SEL)selector
{
    NSMethodSignature *signature = [super methodSignatureForSelector:selector];
    if (!signature) {
        signature = [surrogate methodSignatureForSelector:selector];
    }
    return signature;
}

You might consider putting the forwarding algorithm somewhere in private code and having all these methods, forwardInvocation: included, call it. A sketch that puts these pieces together follows this section.

Note: This is an advanced technique, suitable only for situations where no other solution is possible. It is not intended as a replacement for inheritance. If you must make use of this technique, make sure you fully understand the behavior of the class doing the forwarding and the class you're forwarding to.

The methods mentioned in this section are described in the NSObject class specification in the Foundation framework reference. For information on invokeWithTarget:, see the NSInvocation class specification in the Foundation framework reference.
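Putting the pieces of this chapter together, a minimal sketch of a lazy surrogate might look like the following; LazySurrogate and ExpensiveObject are hypothetical names, and this is an illustration of the pattern rather than production code.

@interface LazySurrogate : NSObject {
    id realObject;  // created on demand
}
@end

@implementation LazySurrogate

- (id)realObject {
    if (realObject == nil)
        realObject = [[ExpensiveObject alloc] init];  // hypothetical heavyweight class
    return realObject;
}

- (NSMethodSignature *)methodSignatureForSelector:(SEL)selector {
    // Describe the methods that will ultimately respond to forwarded messages.
    NSMethodSignature *signature = [super methodSignatureForSelector:selector];
    if (!signature)
        signature = [[self realObject] methodSignatureForSelector:selector];
    return signature;
}

- (void)forwardInvocation:(NSInvocation *)anInvocation {
    // Funnel every unrecognized message to the real object.
    if ([[self realObject] respondsToSelector:[anInvocation selector]])
        [anInvocation invokeWithTarget:[self realObject]];
    else
        [super forwardInvocation:anInvocation];
}

- (BOOL)respondsToSelector:(SEL)aSelector {
    // Make the forwarding as transparent as inheritance.
    return [super respondsToSelector:aSelector]
        || [[self realObject] respondsToSelector:aSelector];
}

@end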
Type Encodings

To assist the runtime system, the compiler encodes the return and argument types for each method in a character string and associates the string with the method selector. The coding scheme it uses is also useful in other contexts and so is made publicly available with the @encode() compiler directive. When given a type specification, @encode() returns a string encoding that type. The type can be a basic type such as an int, a pointer, a tagged structure or union, or a class name—any type, in fact, that can be used as an argument to the C sizeof() operator.

char *buf1 = @encode(int **);
char *buf2 = @encode(struct key);
char *buf3 = @encode(Rectangle);

The table below lists the type codes. Note that many of them overlap with the codes you use when encoding an object for purposes of archiving or distribution. However, there are codes listed here that you can’t use when writing a coder, and there are codes that you may want to use when writing a coder that aren’t generated by @encode(). (See the NSCoder class specification in the Foundation framework reference for more information on encoding objects for archiving or distribution.)

Table 6-1  Objective-C type encodings

Code             Meaning
c                A char
i                An int
s                A short
l                A long (l is treated as a 32-bit quantity on 64-bit programs)
q                A long long
C                An unsigned char
I                An unsigned int
S                An unsigned short
L                An unsigned long
Q                An unsigned long long
f                A float
d                A double
B                A C++ bool or a C99 _Bool
v                A void
*                A character string (char *)
@                An object (whether statically typed or typed id)
#                A class object (Class)
:                A method selector (SEL)
[array type]     An array
{name=type...}   A structure
(name=type...)   A union
bnum             A bit field of num bits
^type            A pointer to type
?                An unknown type (among other things, this code is used for function pointers)

Important: Objective-C does not support the long double type. @encode(long double) returns d, which is the same encoding as for double.

The type code for an array is enclosed within square brackets; the number of elements in the array is specified immediately after the open bracket, before the array type. For example, an array of 12 pointers to floats would be encoded as:

[12^f]

Structures are specified within braces, and unions within parentheses. The structure tag is listed first, followed by an equal sign and the codes for the fields of the structure listed in sequence. For example, the structure

typedef struct example {
    id anObject;
    char *aString;
    int anInt;
} Example;

would be encoded like this:

{example=@*i}

The same encoding results whether the defined type name (Example) or the structure tag (example) is passed to @encode(). The encoding for a structure pointer carries the same amount of information about the structure’s fields:

^{example=@*i}

However, another level of indirection removes the internal type specification:

^^{example}

Objects are treated like structures. For example, passing the NSObject class name to @encode() yields this encoding:

{NSObject=#}

The NSObject class declares just one instance variable, isa, of type Class.
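To see these encodings concretely, you can simply print them. A small sketch, compiled as Objective-C and reusing the example structure above; the expected output in the comments follows from Table 6-1 and the struct examples:

#include <stdio.h>
#import <objc/objc.h>

typedef struct example {
    id anObject;
    char *aString;
    int anInt;
} Example;

int main(void)
{
    printf("%s\n", @encode(int));          // prints: i
    printf("%s\n", @encode(int **));       // prints: ^^i
    printf("%s\n", @encode(Example));      // prints: {example=@*i}
    printf("%s\n", @encode(Example *));    // prints: ^{example=@*i}
    printf("%s\n", @encode(float *[12]));  // prints: [12^f]
    return 0;
}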
Note that although the @encode() directive doesn’t return them, the runtime system uses the additional encodings listed in Table 6-2 for type qualifiers when they’re used to declare methods in a protocol.

Table 6-2  Objective-C method encodings

Code   Meaning
r      const
n      in
N      inout
o      out
O      bycopy
R      byref
V      oneway

Declared Properties

When the compiler encounters property declarations (see Declared Properties in The Objective-C Programming Language), it generates descriptive metadata that is associated with the enclosing class, category, or protocol. You can access this metadata using functions that support looking up a property by name on a class or protocol, obtaining the type of a property as an @encode string, and copying a list of a property’s attributes as an array of C strings. A list of declared properties is available for each class and protocol.

Property Type and Functions

The Property structure defines an opaque handle to a property descriptor.

typedef struct objc_property *Property;

You can use the functions class_copyPropertyList and protocol_copyPropertyList to retrieve an array of the properties associated with a class (including loaded categories) and a protocol, respectively:

objc_property_t *class_copyPropertyList(Class cls, unsigned int *outCount)
objc_property_t *protocol_copyPropertyList(Protocol *proto, unsigned int *outCount)

For example, given the following class declaration:

@interface Lender : NSObject {
    float alone;
}
@property float alone;
@end

you can get the list of properties using:

id LenderClass = objc_getClass("Lender");
unsigned int outCount;
objc_property_t *properties = class_copyPropertyList(LenderClass, &outCount);

You can use the property_getName function to discover the name of a property:

const char *property_getName(objc_property_t property)

You can use the functions class_getProperty and protocol_getProperty to get a reference to a property with a given name in a class and protocol, respectively:

objc_property_t class_getProperty(Class cls, const char *name)
objc_property_t protocol_getProperty(Protocol *proto, const char *name, BOOL isRequiredProperty, BOOL isInstanceProperty)

You can use the property_getAttributes function to discover the name and the @encode type string of a property. For details of the encoding type strings, see “Type Encodings”; for details of this string, see “Property Type String” and “Property Attribute Description Examples”.

const char *property_getAttributes(objc_property_t property)

Putting these together, you can print a list of all the properties associated with a class using the following code (the array returned by class_copyPropertyList is allocated for you, so free it when you are done):

id LenderClass = objc_getClass("Lender");
unsigned int outCount, i;
objc_property_t *properties = class_copyPropertyList(LenderClass, &outCount);
for (i = 0; i < outCount; i++) {
    objc_property_t property = properties[i];
    fprintf(stdout, "%s %s\n", property_getName(property),
            property_getAttributes(property));
}
free(properties);
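If you only need a single property, class_getProperty avoids copying the whole list. A compilable sketch using the Lender class from this section; the printed attribute string should resemble the examples described in the next section:

#include <stdio.h>
#import <Foundation/Foundation.h>
#import <objc/runtime.h>

@interface Lender : NSObject {
    float alone;
}
@property float alone;
@end

@implementation Lender
@synthesize alone;
@end

int main(void)
{
    objc_property_t property = class_getProperty([Lender class], "alone");
    if (property != NULL) {
        // Prints something like: alone Tf,Valone
        // (T + the @encode string for float, V + the backing ivar name)
        printf("%s %s\n", property_getName(property),
               property_getAttributes(property));
    }
    return 0;
}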
Property Type String

You can use the property_getAttributes function to discover the name, the @encode type string of a property, and other attributes of the property.

The string starts with a T followed by the @encode type and a comma, and finishes with a V followed by the name of the backing instance variable. Between these, the attributes are specified by the following descriptors, separated by commas:

Table 7-1  Declared property type encodings

Code   Meaning
R      The property is read-only (readonly).
C      The property is a copy of the value last assigned (copy).
&      The property is a reference to the value last assigned (retain).
N      The property is non-atomic (nonatomic).
G      The property defines a custom getter selector name. The name follows the G (for example, GcustomGetter,).
S      The property defines a custom setter selector name. The name follows the S (for example, ScustomSetter:,).
D      The property is dynamic (@dynamic).
W      The property is a weak reference (__weak).
P      The property is eligible for garbage collection.
t      Specifies the type using old-style encoding.

For examples, see “Property Attribute Description Examples”.

Property Attribute Description Examples

Given these definitions:

enum FooManChu { FOO, MAN, CHU };
struct YorkshireTeaStruct { int pot; char lady; };
typedef struct YorkshireTeaStruct YorkshireTeaStructType;
union MoneyUnion { float alone; double down; };

the following list shows sample property declarations, each followed by the corresponding string returned by property_getAttributes:

@property char charDefault;
    Tc,VcharDefault
@property double doubleDefault;
    Td,VdoubleDefault
@property enum FooManChu enumDefault;
    Ti,VenumDefault
@property float floatDefault;
    Tf,VfloatDefault
@property int intDefault;
    Ti,VintDefault
@property long longDefault;
    Tl,VlongDefault
@property short shortDefault;
    Ts,VshortDefault
@property signed signedDefault;
    Ti,VsignedDefault
@property struct YorkshireTeaStruct structDefault;
    T{YorkshireTeaStruct="pot"i"lady"c},VstructDefault
@property YorkshireTeaStructType typedefDefault;
    T{YorkshireTeaStruct="pot"i"lady"c},VtypedefDefault
@property union MoneyUnion unionDefault;
    T(MoneyUnion="alone"f"down"d),VunionDefault
@property unsigned unsignedDefault;
    TI,VunsignedDefault
@property int (*functionPointerDefault)(char *);
    T^?,VfunctionPointerDefault
@property id idDefault;
    T@,VidDefault
    (Note: the compiler warns that no 'assign', 'retain', or 'copy' attribute is specified, and that 'assign' is assumed.)
@property int *intPointer;
    T^i,VintPointer
@property void *voidPointerDefault;
    T^v,VvoidPointerDefault
@property int intSynthEquals;
    Ti,V_intSynthEquals
    (In the implementation block: @synthesize intSynthEquals=_intSynthEquals;)
@property(getter=intGetFoo, setter=intSetFoo:) int intSetterGetter;
    Ti,GintGetFoo,SintSetFoo:,VintSetterGetter
@property(readonly) int intReadonly;
    Ti,R,VintReadonly
@property(getter=isIntReadOnlyGetter, readonly) int intReadonlyGetter;
    Ti,R,GisIntReadOnlyGetter
@property(readwrite) int intReadwrite;
    Ti,VintReadwrite
@property(assign) int intAssign;
    Ti,VintAssign
@property(retain) id idRetain;
    T@,&,VidRetain
@property(copy) id idCopy;
    T@,C,VidCopy
@property(nonatomic) int intNonatomic;
    Ti,VintNonatomic
@property(nonatomic, readonly, copy) id idReadonlyCopyNonatomic;
    T@,R,C,VidReadonlyCopyNonatomic
@property(nonatomic, readonly, retain) id idReadonlyRetainNonatomic;
    T@,R,&,VidReadonlyRetainNonatomic
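Because the attribute string is just a comma-separated C string, plain C string handling is often enough to pick it apart. A rough sketch, illustrative only; note that structure and union encodings can themselves contain commas, so a robust parser must balance braces and parentheses rather than splitting blindly:

#include <stdio.h>
#include <string.h>

/* Split a simple attribute string such as "Ti,R,GisIntReadOnlyGetter"
 * into its descriptors and print each one. Caution: encodings for
 * structures and unions contain embedded commas, which this naive
 * splitter does not handle. */
static void printAttributes(const char *attributes)
{
    char buffer[256];
    strncpy(buffer, attributes, sizeof(buffer) - 1);
    buffer[sizeof(buffer) - 1] = '\0';

    char *state;
    for (char *token = strtok_r(buffer, ",", &state);
         token != NULL;
         token = strtok_r(NULL, ",", &state)) {
        switch (token[0]) {
            case 'T': printf("type encoding: %s\n", token + 1); break;
            case 'V': printf("backing ivar:  %s\n", token + 1); break;
            case 'R': printf("readonly\n");                     break;
            case 'G': printf("custom getter: %s\n", token + 1); break;
            case 'S': printf("custom setter: %s\n", token + 1); break;
            default:  printf("descriptor:    %s\n", token);     break;
        }
    }
}

int main(void)
{
    printAttributes("Ti,R,GisIntReadOnlyGetter");  /* from the list above */
    return 0;
}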
Document Revision History

This table describes the changes to Objective-C Runtime Programming Guide.

Date          Notes
2009-10-19    Made minor editorial changes.
2009-07-14    Completed list of types described by property_getAttributes.
2009-02-04    Corrected typographical errors.
2008-11-19    New document that describes the Objective-C 2.0 runtime support library.

© 2009 Apple Inc. All rights reserved. Apple, the Apple logo, Cocoa, iPhone, Mac, Objective-C, and OS X are trademarks of Apple Inc., registered in the U.S. and other countries.


Universal Binary Programming Guidelines, Second Edition (Legacy)

Contents

Introduction
    Who Should Read This Document?
    Organization of This Document
    Assumptions
    Conventions
Building a Universal Binary
    Build Assumptions
    Building Your Code
    Debugging
    Troubleshooting Your Built Application
    Determining Whether a Binary Is Universal
    Build Options
        Default Compiler Options
        Architecture-Specific Options
        Autoconf Macros
    See Also
Architectural Differences
    Alignment
    Bit Fields
    Byte Order
    Calling Conventions
    Code on the Stack: Disabling Execution
    Data Type Conversions
    Data Types
    Divide-By-Zero Operations
    Extensible Firmware Interface (EFI)
    Floating-Point Equality Comparisons
    Structures and Unions
    See Also
Swapping Bytes
    Why Byte Ordering Matters
    Guidelines for Swapping Bytes
    Byte-Swapping Routines
    Byte-Swapping Strategies
        Constants
        Custom Apple Event Data
        Custom Resource Data
        Floating-Point Values
        Integers
        Network-Related Data
        OSType-to-String Conversions
        Unicode Text Files
        Values in an Array
    Writing a Callback to Swap Data Bytes
    See Also
Guidelines for Specific Scenarios
    Aliases
    Archived Bit Fields
    Automator Scripts
    Bit Shifting
    Bit Test, Set, and Clear Functions: Carbon and POSIX
    CPU Subtype
    Dashboard Widgets
    Deprecated Functions
    Disk Partitions
    Double-Precision Values: Bit-by-Bit Sensitivity
    Finder Information and Low-Level File System Operations
    FireWire Device Access
    Font-Related Resources
    GWorlds
    Java Applications
    Java I/O API (NIO)
    Machine Location Data Structure
    Mach Processes: The Task for PID Function
    Metrowerks PowerPlant
    Multithreading
    Objective-C: Messages to nil
    Objective-C Runtime: Sending Messages
    Open Firmware
    OpenGL
    OSAtomic Functions
    Pixel Data
    PostScript Printing
    Quartz Bitmap Data
    QuickDraw Routines
    QuickTime Components
    QuickTime Metadata Functions
    Runtime Code Generation
    Spotlight Importers
    System-Specific Predefined Macros
    USB Device Access
    See Also
Preparing Vector-Based Code
    Accelerate Framework
    Rewriting AltiVec Instructions
    See Also
Rosetta
    What Can Be Translated?
    How It Works
    Special Considerations
    Forcing an Application to Run Translated
        Make a Setting in the Info Window
        Use Terminal
        Modify the Property List
        Use the sysctlbyname Function
    Preventing an Application from Opening Using Rosetta
    Programmatically Detecting a Translated Application
    Troubleshooting
Architecture-Independent Vector-Based Code
    Architecture-Specific Code
    Architecture-Independent Matrix Multiplication
32-Bit Application Binary Interface
64-Bit Application Binary Interface
Document Revision History
Figures, Tables, and Listings

Building a Universal Binary
    Figure 1-1   The Build pane
    Figure 1-2   Architectures settings
    Figure 1-3   The Chess application has a Universal binary
    Table 1-1    Default values for compiler flags on an Intel-based Macintosh computer
Architectural Differences
    Listing 2-1  Code that illustrates byte-ordering differences
    Listing 2-2  Architecture-dependent code
    Listing 2-3  A union whose components can be affected by byte order
Swapping Bytes
    Figure 3-1   Big-endian byte ordering compared to little-endian byte ordering
    Table 3-1    Byte order marks
    Listing 3-1  A data structure that contains multibyte and single-byte data
    Listing 3-2  Encoding a 64-bit floating-point value
    Listing 3-3  Decoding a 32-bit floating-point value
    Listing 3-4  Swapping a 16-bit integer from big-endian to host-endian
    Listing 3-5  Swapping integers from little-endian to host-endian
    Listing 3-6  A routine for swapping the bytes of the values in an array
    Listing 3-7  A declaration for a custom resource
    Listing 3-8  A flipper function for RGBColor data
    Listing 3-9  A flipper for the custom 'PREF' resource
Guidelines for Specific Scenarios
    Figure 4-1   A test image that can help locate the source of color problems
    Table 4-1    Quartz constants that specify byte ordering
Rosetta
    Figure A-1   The Info window for the Calculator application
    Figure A-2   Rosetta listens for a port connection
    Figure A-3   Terminal windows with the commands for debugging a PowerPC binary on an Intel-based Macintosh computer
    Listing A-1  A structure whose endian format depends on the architecture
    Listing A-2  A routine that controls the preferred CPU type for sublaunched processes
    Listing A-3  A utility routine for calling the sysctlbyname function
    Listing A-4  A routine that determines whether a process is running natively or translated
Architecture-Independent Vector-Based Code
    Listing B-1  Architecture-specific code needed to support matrix multiplication
    Listing B-2  Architecture-independent code that performs matrix multiplication

Introduction

Universal Binary Programming Guidelines will assist experienced developers to build and modify their Mac OS X applications to run as universal binaries. Universal binaries run natively on Macintosh computers using PowerPC or Intel microprocessors and deliver optimal performance for both architectures in a single package.

Important: This document may not represent best practices for current development. Links to downloads and other resources may no longer be valid.

This document is designed to help developers determine exactly how much work needs to be done and provides useful tips for general as well as specific code modification scenarios. It describes the prerequisites for building code as a universal binary and shows how to do so using Xcode 2.2. It also discusses the differences between the Intel and PowerPC architectures that can affect code behavior and provides guidelines for ensuring that universal binary code builds correctly.

This version of Universal Binary Programming Guidelines represents a significant update since its introduction at the Apple Worldwide Developers Conference in June, 2005. It brings together all the information that developers need to make the transition to Intel-based Macintosh computers.
This version includes pointers to newly revised tools documentation—“Building Universal Binaries” in Xcode Project Management Guide, GCC Porting Guide, Cross-Development Programming Guide, and more—as well as improved guidelines and tips. Anyone who has an older version of Universal Binary Programming Guidelines will want to replace it with this version.

Who Should Read This Document?

Any developer who currently has an application that runs in Mac OS X will want to read this document to learn how to modify their code so that it runs natively on all current Apple hardware. Developers who have not yet written an application for the Macintosh, but are planning to do so, will want to follow the guidelines in the document to ensure that their code can run as a universal binary.

Organization of This Document

This document is organized into the following chapters:

● “Building a Universal Binary” shows how to use Xcode 2.2 to build native and universal binaries, describes build options, and provides troubleshooting information for code that doesn’t run properly on an Intel-based Macintosh computer.
● “Architectural Differences” outlines the major differences between the x86 and PowerPC architectures. Understanding the differences will help you to write portable code.
● “Swapping Bytes” describes byte-ordering differences in detail, provides a list of byte-swapping routines, and discusses strategies for a number of scenarios that require you to swap bytes. This is a must-read chapter for all Mac OS X developers. It will help you understand how to avoid byte-ordering issues when transferring data and data files between architectures.
● “Guidelines for Specific Scenarios” contains tips for a variety of situations that are not common to most applications.
● “Preparing Vector-Based Code” discusses the options available for those developers who have high-performance computing needs.

This document contains the following appendixes:

● “Rosetta” describes the translation process that allows PowerPC binaries to run on an Intel-based Macintosh computer.
● “Architecture-Independent Vector-Based Code” uses matrix multiplication as an example to show how to write vector code with a minimum amount of architecture-specific coding.
● “32-Bit Application Binary Interface” provides information on where to find details.
● “64-Bit Application Binary Interface” provides information on where to find details.

Assumptions

The document assumes the following:

● Your application runs in Mac OS X. Your application can use any of the Mac OS X development environments: Carbon, Cocoa, Java, or BSD UNIX.
  If your application runs in the UNIX operating system but not specifically in Mac OS X, you should first read Porting UNIX/Linux Applications to Mac OS X. If your application runs only in the Windows operating system, you should first read Porting to Mac OS X from Windows Win32 API. If you are new to Mac OS X, you should take a look at Mac OS X Technology Overview.
● You know how to use Xcode.
  Currently Xcode is the only GUI tool available that compiles code to run universally. If you are unfamiliar with Xcode, you might want to take a look at Xcode Workspace Guide.
If you have been using CodeWarrior, you should read Porting CodeWarrior Projects to Xcode.

Conventions

The term x86 is a generic term used in some parts of this book to refer to the class of microprocessors manufactured by Intel. This book uses the term x86 as a synonym for IA-32 (Intel Architecture 32-bit).

Building a Universal Binary

Architectural differences between Macintosh computers that use Intel and PowerPC microprocessors can cause existing PowerPC code to behave differently when built and run natively on a Macintosh computer that uses an Intel microprocessor. The extent to which architectural differences affect your code depends on the level of your source code. Most existing code is high-level source code that is not specific to the processor. If your application falls into this category, you’ll find that creating a universal binary involves adjusting code in a few places. Cocoa developers may need to make fewer adjustments than Carbon developers whose code was ported from Mac OS 9 to Mac OS X.

Most code that uses high-level frameworks and that builds with GCC 4.0 in Mac OS X v10.4 will build with few, if any, changes on an Intel-based Macintosh computer. The best approach for any developer in that situation is to build the existing code as a universal binary, as described in this chapter, and then see how the application runs on an Intel-based Macintosh. Find the places where the code doesn’t behave as expected and consult the sections in this document that cover those issues.

Developers who use AltiVec instructions in their code or who intentionally exploit architectural differences for optimization or other purposes will need to make the most code adjustments. These developers will probably want to consult the rest of this document before building a universal binary. AltiVec programmers should read “Preparing Vector-Based Code”.

This chapter describes how to use Xcode 2.2 to create a universal binary, provides troubleshooting information, and lists relevant build options. You’ll find that the software development workflow on an Intel-based Macintosh computer is exactly the same as the software development workflow on a PowerPC-based Macintosh.

Build Assumptions

Before you build your code as a universal binary, you must ensure that:

● Your application already builds for Mac OS X. Your application can use any of the Mac OS X development environments: Carbon, Cocoa, Java, or BSD UNIX.
● Your application uses the Mach-O executable format. Mach-O binaries are the only type of binary that run natively on an Intel-based Macintosh computer. If you are already using the Xcode compilers and linkers, your application is a Mach-O binary. Carbon applications based on the Code Fragment Manager Preferred Executable Format (PEF) must be changed to Mach-O.
● Your Xcode target is a native Xcode target. If it isn’t, in Xcode you can choose Project > Upgrade All Targets in Project to Native.
● Your code project is ported to GCC 4.0. Xcode uses GCC 4.0 for targeting Intel-based Macintosh computers. You may want to look at the document GCC Porting Guide to assess whether you need to make any changes to your code to allow it to compile using GCC 4.0.
● You installed the Mac OS X v10.4 universal SDK.
The installer places the SDK in this location:

/Developer/SDKs/MacOSX10.4u.sdk

Building Your Code

If you have already been using Xcode to build applications on a PowerPC-based Macintosh, you’ll see that building your code on an Intel-based Macintosh computer is accomplished in the same way. By default, Xcode compiles code to run on the architecture on which you build your Xcode project. Note that your Xcode target must be a native target.

Tip: CodeWarrior users can read Xcode From a CodeWarrior Perspective for a discussion of the similarities and differences between the two. This information can help you to put your CodeWarrior experience to work in Xcode.

When you are in the process of developing your project, you’ll want to use the following settings for the Default and Debug configurations:

● Keep the Architectures settings set to $(NATIVE_ARCH).
● Change the Mac OS X Deployment Target settings to Mac OS X 10.4.
● Make sure the SDKROOT setting is /Developer/SDKs/MacOSX10.4u.sdk.

You can set the SDK root for the project by following these steps:

1. Open your project in Xcode 2.2 or later. Make sure that your Xcode target is a native target. If it isn’t, you can choose Project > Upgrade All Targets in Project to Native.
2. In the Groups & Files list, click the project name.
3. Click the Info button to open the Info window.
4. In the General pane, in the Cross-Develop Using Target SDK pop-up menu, choose Mac OS X 10.4 (Universal).
   If you don’t see Mac OS X 10.4 (Universal) as a choice, look in the following directory to make sure that the universal SDK is installed: /Developer/SDKs/MacOSX10.4u.sdk. If it’s not there, you’ll need to install this SDK before you can continue.
5. Click Change in the sheet that appears.

The Debug build configuration turns on ZeroLink, Fix and Continue, and debug-symbol generation, among other settings, and turns off code optimization.

When you are ready to test your application on both architectures, you’ll want to use the Release configuration. This configuration turns off ZeroLink and Fix and Continue. It also sets the code-optimization level to optimize for size. As with the Default and Debug configurations, you’ll want to set the Mac OS X Deployment Target to Mac OS X 10.4 and the SDK root to MacOSX10.4u.sdk. To build a universal binary, the Architectures setting for the Release configuration must be set to build on Intel and PowerPC.

You can change the Architectures setting by following these steps:

1. Open your project in Xcode 2.2 or later.
2. In the Groups & Files list, click the project name.
3. Click the Info button to open the Info window.
4. In the Build pane (see Figure 1-1), choose Release from the Configuration pop-up menu.

   Figure 1-1  The Build pane

5. Select the Architectures setting and click Edit. In the sheet that appears, select the PowerPC and Intel options, as shown in Figure 1-2.

   Figure 1-2  Architectures settings

6. Close the Info window.
7. Build and run the project.

If your application doesn’t build, see “Debugging”.
If your application builds but does not behave as expected when you run it as a native binary on an Intel-based Macintosh computer, see “Troubleshooting Your Built Application”.

If your application behaves as expected, don’t assume that it also works on the other architecture. You need to test your application on both PowerPC Macintosh computers and Intel-based Macintosh computers. If your application reads data from and writes data to disk, you should make sure that you can save files on one architecture and open them on the other.

Note: Xcode 2.x has per-architecture SDK support. For example, you can target Mac OS X versions 10.3 and 10.4 for PowerPC while also targeting Mac OS X v10.4 and later for Intel-based Macintosh computers.

For information on default compiler settings, architecture-specific options, and Autoconf macros, see “Build Options”. For information on building with version-specific SDKs for PowerPC (Mac OS X v10.3, v10.2, and so forth) while still building a universal binary for both PowerPC and Intel-based Macintosh computers, see the following resources:

● Using Cross Development in Xcode.
● Cross-Development and Universal Binaries in the Cross-Development Programming Guide provides details on how to create executable files that contain object code for both Intel-based and PowerPC-based Macintosh computers.

Debugging

Xcode uses GDB for debugging, so you’ll want to review the Xcode Debugging Guide document. Xcode provides a powerful user interface to GDB that lets you step through your code, set breakpoints, and view variables, stack frames, and threads. Debugging with GDB—an Open Source document that explains how to use GDB—is another useful resource that you’ll want to look at. It provides a lot of valuable information, including how to get a list of breakpoints for debugging.

If you are moving code to GCC 4.0, you can find remedies for most linking issues and compiler warnings by consulting GCC Porting Guide. You can find additional information on the GCC options you can use to request or suppress warnings in Section 3.8 of the GNU C/C++/Objective-C 4.0.1 Compiler User Guide.

Troubleshooting Your Built Application

Here are the most typical behavior problems you’ll observe when your application runs natively on an Intel-based Macintosh computer:

● The application crashes.
● There are unexpected numerical results.
These problems are easily remedied by taking the byte order into account when you read and write data. The strategies available for handling byte ordering, as well as an in-depth discussion of byte-ordering differences, are provided in “Swapping Bytes” (page 26). Keep in mind that Mac OS X ensures that byte-ordering is correct for anything it is responsible for. Apple-defined resources (such as menus) won’t result in problem behavior. Custom resources provided by your application, however, can result in problem behavior. For example, if images in your application seem to have a cyan tint, it’s quite likely that your application is writing alpha channel data to the blue channel. For this specific issue, depending on the APIs that you are using, you’d want to consult the sections “GWorlds” (page 50), “Pixel Data ” (page 58), or other graphics-related sections in “Guidelines for Specific Scenarios” (page 46). Apple engineers prepared a lot of code to run natively on an Intel-based Macintosh computer—including the operating system, most Apple applications, and Apple tools. The guidelines in this book are the result of their work. In addition to the more common issues discussed in “Architectural Differences” (page 21) and “Swapping Bytes” (page 26), the engineers identified a number of narrowly focused issues. These are described in “Guidelines for Specific Scenarios” (page 46). You will want to at least glance at this chapter to see if your code can benefit from any of the information. Building a Universal Binary Troubleshooting Your Built Application Retired Document | 2009-02-04 | © 2005, 2009 Apple Inc. All Rights Reserved. 17Determining Whether a Binary Is Universal You can determine whether an application has a universal binary by looking at the Kind entry in the General section of the Info window for the application (see Figure 1-3). To open the Info window, click the application icon and press Cmd-I. Figure 1-3 The Chess application has a Universal binary On an Intel-based Macintosh computer, when you double-click an application that doesn’t have an executable for the native architecture, it might launch. Whether or not it launches depends on how compatible the application is with Rosetta. For more information, see “Rosetta” (page 65). Build Options This section contains information on the build options that you need to be aware of when using Xcode 2.2 and later on an Intel-based Macintosh computer. It lists the default compiler options, discusses how to set architecture-specific options, and provides information on using GNU Autoconf macros. Building a Universal Binary Determining Whether a Binary Is Universal Retired Document | 2009-02-04 | © 2005, 2009 Apple Inc. All Rights Reserved. 18Default Compiler Options In Xcode 2.2 and later on an Intel-based Macintosh computer, the defaults for compiler flags that differ from standard GCC distributions are listed in Table 1-1. Table 1-1 Default values for compiler flags on an Intel-based Macintosh computer Compiler flag Default value Specifies to -mfpmath sse Use SSE instructions for floating-point math. Enable the MMX, SSE, and SSE2 extensions in the Intel instruction set architecture. -msse2 On by default Architecture-Specific Options Most developers don’t need to use architecture-specific options for their projects. In Xcode, to set one flag for an Intel-based Macintosh and another for PowerPC, you use the PER_ARCH_CFLAGS_i386 and PER_ARCH_CFLAGS_ppc build settings variables to supply the architecture-specific settings. 
For example, to set the architecture-specific flags -faltivec and -msse3, you would add the following build settings:

PER_ARCH_CFLAGS_i386 = -msse3
PER_ARCH_CFLAGS_ppc = -faltivec

Similarly, you can supply architecture-specific linker flags using the OTHER_LDFLAGS_i386 and OTHER_LDFLAGS_ppc build settings variables.

You can pass the -arch flag to gcc, ld, and as. The allowable values are i386 and ppc. You can specify both flags as follows:

-arch ppc -arch i386

For more information on architecture-specific options, see “Building Universal Binaries” in Xcode Project Management Guide.
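For a simple project built outside Xcode, passing both -arch flags to gcc on the command line is enough to produce a Mach-O file containing both architectures; a hypothetical invocation, where the source and output names are placeholders:

gcc -arch ppc -arch i386 -o MyTool main.c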
Autoconf Macros

If you are compiling a project that uses GNU Autoconf and trying to build it for both PowerPC-based and Intel-based Macintosh computers, you need to make sure that when the project configures itself, it doesn’t use Autoconf macros to determine the endian type of the runtime system. For example, if your project uses the Autoconf AC_C_BIGENDIAN macro, the program won’t work correctly when it is run on the opposite architecture from the one you are targeting when you configure the project. To correctly build for both PowerPC-based and Intel-based Macintosh computers, use the compiler-defined __BIG_ENDIAN__ and __LITTLE_ENDIAN__ macros in your code. For more information, see Using GNU Autoconf in Porting UNIX/Linux Applications to Mac OS X.

See Also

These resources provide information related to compiling and building applications, and measuring performance:

● Xcode Project Management Guide contains all the instructions needed to compile and debug any type of Xcode project (C, C++, Objective-C, Java, AppleScript, resource, nib files, and so forth).
● GCC Porting Guide provides advice for how to modify your code in ways that make it more compatible with GCC 4.0.
● GNU C/C++/Objective-C 4.0.1 Compiler User Guide provides details about the GCC implementation. Xcode uses the GNU compiler collection (GCC) to compile code. The assembler (as) used by Xcode supports AT&T System V/386 assembler syntax in order to maintain compatibility with the output from GCC. The AT&T syntax is quite different from Intel syntax. The major differences are discussed in the GNU documentation.
● C++ Runtime Environment Programming Guide provides information on the GCC 4.0 shared C++ runtime that is available in Panther 10.3.9 and later.
● Porting UNIX/Linux Applications to Mac OS X. Developers porting UNIX and Linux applications who want to compile a universal binary will want to read the section Compiling for Multiple Architectures.
● Kernel Extension Programming Topics contains information about debugging KEXTs on Intel-based Macintosh computers.
● Performance tools. Shark, MallocDebug, ObjectAlloc, Sampler, Quartz Debug, Thread Viewer, and other Apple-developed tools (some command-line, others use a GUI) are in the /Developer directory. Command-line performance tools are in the /usr/bin directory.
● Code Size Performance Guidelines and Code Speed Performance Guidelines discuss optimization strategies for a Mach-O executable.

Architectural Differences

The PowerPC and the x86 architectures have some fundamental differences that can prevent code written for one architecture from running properly on the other architecture. The extent to which you need to change your PowerPC code so that it runs natively on an Intel-based Macintosh computer depends on how much of your code is processor specific. This chapter describes the major differences between architectures, organized alphabetically by topic. You can use the information to identify the parts of your code that are likely to be problematic.

Alignment

All PowerPC instructions are 4 bytes in size and must be 4-byte aligned. x86 instructions are variable in size (from 1 to more than 10 bytes), and as a consequence do not need to be aligned.

Bit Fields

The value of a signed, 1-bit bit field is either 0, 1, or –1, depending on the compiler, architecture, optimization level, and so forth. Code that compares the value of a bit field to 1 may not work if the bit field is signed, so you will want to use unsigned 1-bit bit fields. Keep in mind that the order of bit fields in memory can be reversed between architectures. For more information on issues related to endian format, see “Swapping Bytes”. See also “Archived Bit Fields” and “Structures and Unions”.

Byte Order

Microprocessor architectures commonly use two different byte-ordering methods (little-endian and big-endian) to store the individual bytes of multibyte data formats in memory. This difference becomes critically important if you try to read data from files that were created on a computer that uses a different byte ordering than yours. You also need to consider byte ordering when you send and receive data through a network connection and handle networking data. The difference in byte ordering can produce incorrect results if you do not account for this difference. For example, the order of bytes in memory of a scalar type is architecture-dependent, as shown in Listing 2-1.

Listing 2-1  Code that illustrates byte-ordering differences

unsigned char charVal;
unsigned long value = 0x12345678;
unsigned long *ptr = &value;
charVal = *(unsigned char*)ptr;

On a processor that uses little-endian addressing, the variable charVal takes on the value 0x78. On a processor that uses big-endian addressing, the variable charVal takes on the value 0x12. To make this code architecture-independent, change the last line in Listing 2-1 to the following:

charVal = (unsigned char)*ptr;

For a detailed discussion of byte ordering and strategies that you can use to account for byte-ordering differences, see “Swapping Bytes”.
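When you genuinely need compile-time knowledge of the host byte order, the __BIG_ENDIAN__ and __LITTLE_ENDIAN__ macros recommended under “Autoconf Macros” are the portable choice, because each slice of a universal binary is compiled separately and therefore sees the correct macro. A minimal sketch:

#include <stdio.h>

int main(void)
{
#if __BIG_ENDIAN__
    /* This branch is compiled into the PowerPC slice. */
    printf("This slice was compiled for a big-endian architecture.\n");
#elif __LITTLE_ENDIAN__
    /* This branch is compiled into the Intel slice. */
    printf("This slice was compiled for a little-endian architecture.\n");
#endif
    return 0;
}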
Calling Conventions

The x86 C-language calling convention (application binary interface, or ABI) specifies that arguments to functions are passed on the stack. The PowerPC ABI specifies that arguments to functions are passed in registers. Also, x86 has far fewer registers, so many local variables use the stack for their storage. Thus, programming errors or other operations that access past the end of a local variable array or otherwise incorrectly manipulate values on the stack may be more likely to crash applications on x86 systems than on PowerPC. For detailed information about the IA-32 ABI, see Mac OS X ABI Function Call Guide. This document describes the function-calling conventions used in all the architectures supported by Mac OS X. See also “32-Bit Application Binary Interface”.

Code on the Stack: Disabling Execution

Intel processors include a bit that prevents code from being executed on the stack. On Intel-based Macintosh computers, this bit is always set to On.

Data Type Conversions

For some data type conversions, such as casting a string to a long and converting a floating-point type to an integer type, the PowerPC and x86 architectures perform differently. When the microprocessor converts a floating-point type to an integer type, it discards the fractional part of the value. The behavior is undefined if the value of the integral part cannot be represented by the integer type. Listing 2-2 shows an example of the sort of code that is architecture-dependent. You would need to modify this code to make it architecture-independent. On a PowerPC microprocessor, the variable x shown in the listing is equal to 0x7fffffff, or INT_MAX. On an x86 microprocessor, the variable x is equal to 0x80000000, or INT_MIN.

Listing 2-2  Architecture-dependent code

int main (int argc, const char * argv[])
{
    double a;
    int x;
    a = 5000000.0 * 6709000.5; // or any really big value
    x = a;
    printf("x = %08x \n", x);
    return 0;
}

Data Types

A long double is 16 bytes on both architectures, but only 80 bits are significant in long double data types on Intel-based Macintosh computers. A bool data type is a single byte on an x86 system, but four bytes on a PowerPC architecture. This size difference can cause alignment problems. You should use fixed-size data types to avoid alignment problems. (The bool data type is not the Carbon Boolean type, which is a fixed size of 1 byte.)

Existing document formats that include the bool data type as part of a data structure that is written directly to disk can be problematic because the data structure might not be laid out the same on both architectures. If you update the data structure definition to use the UInt32 data type or another fixed-size four-byte data type, the structure should then be portable, although you must swap bytes appropriately.

Divide-By-Zero Operations

An integer divide-by-zero operation is fatal on an x86 system, but the operation continues on a PowerPC system, where it returns zero. (A floating-point divide-by-zero behaves the same on both architectures.) If you get a crash log that mentions EXC_I386_DIV (divide by zero), your program divided by zero. Mod operations perform a divide, so a mod-by-zero operation produces a divide-by-zero exception.

To fix a divide-by-zero exception, find the place in your program corresponding to that operation. Then add code that checks for a denominator of zero before performing the divide operation. For example, change this:

int a = b % c; // Divide by zero can happen here

to this:

int a;
if (c != 0) {
    a = b % c;
} else {
    a = 0;
}

Extensible Firmware Interface (EFI)

Intel-based Macintosh computers use the extensible firmware interface (EFI). EFI provides a flexible and adaptable interface between Mac OS X and the platform firmware. This change should be transparent to most developers, but may affect some, such as those who write boot drivers. For more information on the EFI specification, see http://www.intel.com/technology/efi/

Floating-Point Equality Comparisons

The results of a floating-point equality comparison are architecture-dependent. Whether the comparison works depends on a number of things, including the compiler, the surrounding code, all compiler flags in use (particularly optimization flags), and the current floating-point mode for the thread. If your floating-point comparison is currently working on PowerPC, you may need to inspect it on an Intel-based Macintosh computer. You can use the GCC flag -Wfloat-equal to receive a warning for floating-point equality comparisons. For details on this option, see Section 3.8 of the GNU C/C++/Objective-C 4.0.1 Compiler User Guide.
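One common defensive pattern, shown here only as a sketch rather than a universal fix, is to replace exact equality with a tolerance test; the tolerance value is application-specific, not a constant that suits all code:

#include <math.h>
#include <stdbool.h>

/* Returns true if a and b are within tolerance of each other.
 * Choosing an appropriate tolerance depends on the magnitudes
 * and precision your application actually works with. */
static bool nearlyEqual(double a, double b, double tolerance)
{
    return fabs(a - b) <= tolerance;
}

/* Instead of:  if (x == y) ...
 * write:       if (nearlyEqual(x, y, 1e-9)) ...  */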
Structures and Unions

The fields in a structure can be sensitive to their defined order. Structures must either be properly ordered or accessed by the field name directly. When a union has components that could be affected by byte order, use a form similar to that shown in Listing 2-3. Code that sets wch and then reads hi and lo as the high and low bytes of wch will work correctly. The same is true for the reverse direction. Code that sets hi and lo and then reads wch will get the same value on both architectures. For another example, see the WideChar union that’s defined in the IntlResources.h header file.

Listing 2-3  A union whose components can be affected by byte order

union WChar {
    unsigned short wch;
    struct {
#if __BIG_ENDIAN__
        unsigned char hi;
        unsigned char lo;
#else
        unsigned char lo;
        unsigned char hi;
#endif
    } s;
};

See Also

The ISO standard for the C programming language—ISO/IEC 9899—is a valuable reference that you can use to investigate code portability issues, many of which may not be immediately obvious. You can find this reference in a number of locations on the web, including: http://www.iso.org/

Swapping Bytes

Two primary byte-ordering methods (or endian formats) exist in the world of computing. An endian format specifies how to store the individual bytes of multibyte numerical data in memory. Big-endian byte ordering specifies to store multibyte data with its most significant byte first. Little-endian byte ordering specifies to store multibyte data with its least significant byte first. The PowerPC processor uses big-endian byte ordering. The x86 processor family uses little-endian byte ordering. By convention, multibyte data sent over the network uses big-endian byte ordering.

If your application assumes that data is in one endian format, but the data is actually in another, then it will interpret the data incorrectly. You will want to analyze your code for routines that read multibyte data (16 bits, 32 bits, or 64 bits) from, or write multibyte data to, disk or the network, as these routines are sensitive to byte-ordering format.

There are two general approaches for handling byte-ordering differences: swap bytes when necessary, or use XML or another byte-order-independent data format such as those offered by Core Foundation (CFPreferences, CFPropertyList, CFXMLParser). Whether you should swap bytes or use a byte-order-independent data format depends on how you use the data in your application. If you have an existing file format to support, the binary-compatible solution is to accept the big-endian file format you have been using in your application, and write code that swaps bytes when the file is read or written on an Intel-based Macintosh.
If you don’t have legacy files to support, you could consider redesigning your file format to use XML (extensible markup language), XDR (external data representation), or NSCoding (Objective-C) to represent data.

This chapter describes why byte ordering matters, gives guidelines for swapping bytes, describes the byte-swapping APIs available in Mac OS X, and provides solutions for most of the situations where byte ordering matters.

Why Byte Ordering Matters

The example in this section is designed to show you why byte ordering matters. Take a look at the C data structure defined in Listing 3-1. It contains a four-byte integer, a character string, and a two-byte integer. The listing also initializes the structure.

Listing 3-1  A data structure that contains multibyte and single-byte data
For example, if your application is written to access the second byte of the myOptions variable, then when you read the data from a system that uses the opposite byte ordering scheme, you’ll end up retrieving the first byte of the myOptions variable instead of the second one. Suppose the example data values that are initialized by the code shown in Listing 3-1 are generated on a little-endian system and saved to disk. Assume that the data is written to disk in byte-address order. When read from disk by a big-endian system, the data is again laid out in memory asshown in Figure 3-1. The problem is that the data is still in little-endian byte order even though it is interpreted on a big-endian system. This difference causes the values to be evaluated incorrectly. In this example, the value of the field myOptions should be 0xfeedface, but because of the incorrect byte ordering it is evaluated as 0xcefaedfe. Note: The terms big-endian and little-endian come from Jonathan Swift’s eighteenth-century satire Gulliver’s Travels. The subjects of the empire of Blefuscu were divided into two factions: those who ate eggs starting from the big end and those who ate eggs starting from the little end. Guidelines for Swapping Bytes The following guidelines, along with the strategies provided later in this chapter, will help ensure optimal byte-swapping code in your application. ● Keep data structures in native byte-order while in memory. Only swap bytes when you read data from disk or write it to disk. Swapping Bytes Guidelines for Swapping Bytes Retired Document | 2009-02-04 | © 2005, 2009 Apple Inc. All Rights Reserved. 28● When possible, let the compiler do the work for you. For example, when you use function calls such as the Core Foundation function CFSwapInt16BigToHost, the compiler determines whether the function call does something for the processor you are targeting. If the code does nothing, the compiler won’t call the function. Letting the compiler do the work is more efficient than using #ifdef statements. ● If you must access a large file, consider arranging the data in a way that limits the byte swapping that you must perform. For example, you can arrange the most frequently accessed data contiguously in the file. Then, you need to read and swap bytes only for that chunk of data instead of for the entire data file. ● Use the __BIG_ENDIAN__ and __LITTLE_ENDIAN__ macros only if you must. Do not use macros that check for a specific processor type, such as __i386__ and __ppc__. ● Choose a consistent byte-order approach and stick with it. That is, if you are reading and writing data from disk on a regular basis, choose the endian format you want to use. This eliminates the need for you to check the byte ordering of the data, and then to possibly have to swap the byte order. ● Be aware of which functions return big-endian data, and use them appropriately. These include the BSD Sockets networking functions, the DNSServiceDiscovery functions (for example, TCP and UDP ports are specified in network byte order), and the ColorSync profile functions (for which all data is big-endian). The IconFamilyElement and IconFamilyResource data types (which also include the data types IconFamilyPtr and IconFamilyHandle) are always big-endian. There may be other functions and data types that are not listed here. Consult the appropriate API reference for information on data returned by a function. For more information see “Network-Related Data” (page 34). 
Byte-Swapping Routines

The APIs that provide byte-swapping routines are listed below. For most situations it's best to use the routines that match the framework you're programming in. The Core Foundation and Foundation APIs have functions for swapping floating-point values; the other APIs listed do not.

● POSIX (Portable Operating System Interface) byte ordering functions (ntohl, htonl, ntohs, and htons) are documented in Mac OS X Man Pages.

● Darwin byte ordering functions and macros are defined in the header file libkern/OSByteOrder.h. Even though this header is in the Kernel framework, it is acceptable to use it from high-level applications.

● Core Foundation byte-order functions are defined in the header file CoreFoundation/CFByteOrder.h and described in Byte-Order Utilities Reference. For details on using these functions, see the Byte Swapping article in Memory Management Programming Guide for Core Foundation.

● Foundation byte-order functions are defined in the header file Foundation/NSByteOrder.h and described in Foundation Framework Reference.

● The Core Endian API is defined in the header file CarbonCore/Endian.h and described in Core Endian Reference.

Note: When you use byte-swapping routines, the compiler optimizes your code so that the routines are executed only if they are needed for the architecture on which your code is running.

Byte-Swapping Strategies

The strategy for swapping bytes depends on the format of the data; there is no universal routine that can take care of all byte ordering differences. Any program that needs to swap data must know the data type, the source data endian order, and the host endian order. This section lists byte-swapping strategies, organized alphabetically, for the following data:

● "Constants"
● "Custom Apple Event Data"
● "Custom Resource Data"
● "Floating-Point Values"
● "Integers"
● "Network-Related Data"
● "OSType-to-String Conversions"
● "Unicode Text Files"
● "Values in an Array"

Constants

Constants that are part of a compiled executable are in host byte order. You need to swap bytes for a constant only if it is part of data that is not maintained natively or if the constant travels between hosts. In most cases you can either swap bytes ahead of time or let the preprocessor perform any needed math by using shifts or other simple operators. If you are defining and populating a structure that must use data of a specific endian format in memory, use the OSSwapConst macros and the OSSwap*Const variants defined in the libkern/OSByteOrder.h header file. These macros can be used from high-level applications.
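As a sketch of this technique (illustrative only: the structure, field, and guard are assumptions; OSSwapConstInt32 swaps unconditionally, so it is wrapped in a byte-order check here because a function call such as CFSwapInt32HostToBig cannot appear in a compile-time initializer):

    #include <stdint.h>
    #include <libkern/OSByteOrder.h>

    /* A hypothetical file header whose magic field must be big-endian
       in memory and on disk. */
    typedef struct {
        uint32_t magic;
    } MyFileHeader;

    #if defined(__LITTLE_ENDIAN__)
    #define MY_MAGIC_BIG_ENDIAN OSSwapConstInt32(0xfeedface)
    #else
    #define MY_MAGIC_BIG_ENDIAN 0xfeedface
    #endif

    /* Because the swap happens in the preprocessor, this initializer
       is a compile-time constant on both architectures. */
    static const MyFileHeader kHeader = { MY_MAGIC_BIG_ENDIAN };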
Custom Apple Event Data

An Apple event is a high-level event that conforms to the Apple Event Interprocess Messaging Protocol. The Apple Event Manager sends Apple events between applications on the same computer or between applications on remote computers. You can define your own Apple event data types, and send and receive Apple events using the Apple Event Manager API.

Mac OS X manages system-defined Apple event data types for you, handling them appropriately for the currently executing code. You don't need to perform any special tasks. When the data that your application extracts from an Apple event is system defined, the system swaps the data for you before giving the event to your application to process, so you can treat system-defined data types from Apple events as native endian. Similarly, if you put native-endian data of a system-defined type into an Apple event that you are sending, the receiver is able to interpret the data in its own native endian format.

However, you must account for byte-ordering differences for the custom Apple event data types that you define. You can accomplish this in one of the following ways:

● Write a byte-swapping callback routine (also known as a flipper) and provide it to the system. Whenever the system determines that your Apple event data needs to be byte swapped, it invokes your flipper to ensure that the recipient of the data gets it in the correct endian format. For details, see "Writing a Callback to Swap Data Bytes".

● Choose one endian format to use, regardless of architecture. Then, when you read or write your custom Apple event data, use big-to-host and host-to-big routines, such as the Core Foundation Byte Order Utilities functions CFSwapInt16BigToHost and CFSwapInt16HostToBig. A sketch of this approach follows.
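A minimal sketch of the second approach, using a hypothetical custom payload structure (the type and field names are illustrative, not part of any API). The fields are converted to big-endian just before the structure is handed to the Apple Event Manager, and back to host order just after extraction:

    #include <CoreFoundation/CFByteOrder.h>
    #include <stdint.h>

    /* Hypothetical custom Apple event payload. */
    typedef struct {
        uint32_t itemCount;
        uint16_t version;
    } MyEventPayload;

    static void MyPayloadHostToBig(MyEventPayload *p)
    {
        /* No-ops on a big-endian host; real swaps on a little-endian host. */
        p->itemCount = CFSwapInt32HostToBig(p->itemCount);
        p->version   = CFSwapInt16HostToBig(p->version);
    }

    static void MyPayloadBigToHost(MyEventPayload *p)
    {
        p->itemCount = CFSwapInt32BigToHost(p->itemCount);
        p->version   = CFSwapInt16BigToHost(p->version);
    }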
Custom Resource Data

In Mac OS X, the preferred way to supply resources is to provide files in your application bundle that define resources such as image files, sounds, localized text, and archived user-interface definitions. The resource data types discussed in this section are those defined in Resource Manager-style files supported by Carbon. The Resource Manager was created prior to Mac OS X; if your application uses Resource Manager-style resource files, you should consider moving toward Mac OS X-style resources in your application bundle instead.

Resources typically include data that describes menus, windows, controls, dialogs, sounds, fonts, and icons. Although the system defines a number of standard resource types (such as 'moov', used to specify a QuickTime movie, and 'MENU', used to define menus), you can also create your own private resource types for use in your application. You use the Resource Manager API to define resource data types and to get and set resource data. Mac OS X keeps track of resources in memory and allows your application to read or write resources. Applications and system software interpret the data for a resource according to its resource type. Although you'll typically let the operating system read resources (such as your application icon) for you, you can also call Resource Manager functions directly to read and write resources.

Mac OS X manages the system-defined resources for you, handling them appropriately for the currently executing code. That is, if your application runs on an Intel-based Macintosh, Mac OS X swaps bytes so that your application icon, menus, and other standard resources appear correctly. You don't need to perform any special tasks. But if you define your own private resource data types for use in your application, you need to account for byte-ordering differences between architectures when you read or write resource data from disk.

You can use either of the following strategies to handle custom Resource Manager-style resource data. Notice that these are the same strategies used to handle custom Apple event data:

● Provide a byte-swapping callback routine for the system to invoke whenever the system determines that your resource data must be byte swapped. For details, see "Writing a Callback to Swap Data Bytes".

● Always write your data using the same endian format, regardless of architecture. Then, when you read or write your custom resource data, use big-to-host and host-to-big routines, such as the Core Foundation Byte Order Utilities functions CFSwapInt16BigToHost and CFSwapInt16HostToBig.

Note: If you are revising old code that marks resources with a preload bit, you should remove the preload bit from any resources that must be byte swapped. In Mac OS X, the preload bit is almost always unnecessary. If you cannot remove the preload bit, you should swap the resource data after you read the resource. You will not be able to use a flipper callback to swap bytes automatically, because in Mac OS X a preload bit causes the resource to be read before any of the application code runs.

Floating-Point Values

Core Foundation defines a set of functions and two special data types to help you work with floating-point values. These functions allow you to encode 32- and 64-bit floating-point values in such a way that they can later be decoded and byte swapped if necessary. Listing 3-2 shows how to encode a 64-bit floating-point number; Listing 3-3 shows how to decode one.

Listing 3-2  Encoding a 64-bit floating-point value

    double d = 3.0;
    CFSwappedFloat64 swappedDouble;

    // Encode the floating-point value.
    swappedDouble = CFConvertFloat64HostToSwapped(d);

    // Call the appropriate routine to write swappedDouble to disk,
    // send it to another process, etc.
    write(myFile, &swappedDouble, sizeof(swappedDouble));

The data types CFSwappedFloat32 and CFSwappedFloat64 contain floating-point values in a canonical representation. A CFSwappedFloat data type is not itself a floating-point value and should not be used directly as one. You can, however, send one to another process, save it to disk, or send it over a network. Because the conversion functions convert to and from the canonical format, there is no need for explicit swapping; bytes are swapped for you during the format conversion if necessary.

Listing 3-3  Decoding a 32-bit floating-point value

    float f;
    CFSwappedFloat32 swappedFloat;

    // Call the appropriate routine to read swappedFloat from disk,
    // receive it from another process, etc.
    read(myFile, &swappedFloat, sizeof(swappedFloat));

    f = CFConvertFloat32SwappedToHost(swappedFloat);

The NSByteOrder.h header file defines functions that are comparable to the Core Foundation functions discussed here.
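Putting Listings 3-2 and 3-3 together, the following self-contained sketch shows the round trip with the file I/O omitted; the point is that converting to and from the canonical representation is lossless on either architecture:

    #include <CoreFoundation/CFByteOrder.h>
    #include <stdio.h>

    int main(void)
    {
        double original = 3.0;

        /* Encode into the canonical (endian-safe) representation, as
           you would before writing to disk or the network. */
        CFSwappedFloat64 onTheWire = CFConvertFloat64HostToSwapped(original);

        /* Decode back into a host-order double, as you would after reading. */
        double restored = CFConvertFloat64SwappedToHost(onTheWire);

        printf("%f\n", restored);  /* prints 3.000000 on both architectures */
        return 0;
    }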
Integers

The system library byte-access functions, such as OSReadLittleInt16 and OSWriteLittleInt16, provide generic byte swapping. These functions swap bytes if the native endian format is different from the endian format of the destination. They are defined in the libkern/OSByteOrder.h header file.

Note: The OSReadXXX and OSWriteXXX functions provide higher performance than the OSSwapXXX functions or any other functions in the higher-level frameworks.

Core Foundation provides three optimized primitive functions for swapping bytes: CFSwapInt16, CFSwapInt32, and CFSwapInt64. All of the other swapping functions use these primitives to accomplish their work. In general you don't need to use these primitives directly.

Although the primitive swapping functions swap unconditionally, the higher-level swapping functions are defined in such a way that they do nothing when swapping bytes is not required—in other words, when the source and host byte orders are the same. For the integer types, these functions take the forms CFSwapXXXBigToHost, CFSwapXXXLittleToHost, CFSwapXXXHostToBig, and CFSwapXXXHostToLittle, where XXX is a data type such as Int32. For example, on a little-endian machine you use the function CFSwapInt16BigToHost to read a 16-bit integer value from a network whose data is in network byte order (big-endian). Listing 3-4 demonstrates this process.

Listing 3-4  Swapping a 16-bit integer from big-endian to host-endian

    SInt16 bigEndian16;
    SInt16 swapped16;

    // Swap a 16-bit value read from the network.
    swapped16 = CFSwapInt16BigToHost(bigEndian16);

Suppose the integers are in the fields of a data structure. Listing 3-5 demonstrates how to swap their bytes.

Listing 3-5  Swapping integers from little-endian to host-endian

    // Swap the bytes of the values if necessary.
    aStruct.int1 = CFSwapInt32LittleToHost(aStruct.int1);
    aStruct.int2 = CFSwapInt32LittleToHost(aStruct.int2);

The code swaps bytes only if necessary. If the host is a big-endian architecture, the functions used in the code sample swap the bytes in each field. When run on a little-endian machine the code does nothing, and the compiler optimizes it away.
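The Darwin byte-access functions mentioned at the start of this section combine the memory access and the swap in a single call, which is convenient when parsing a buffer of fixed-endian data. A minimal sketch (the buffer layout and the offset are hypothetical):

    #include <stdint.h>
    #include <libkern/OSByteOrder.h>

    /* Parse a 16-bit little-endian length field at byte offset 4 of a
       buffer read from disk, and write an updated value back. */
    uint16_t GetLength(const void *buffer)
    {
        return OSReadLittleInt16(buffer, 4);   /* swaps on big-endian hosts */
    }

    void SetLength(void *buffer, uint16_t length)
    {
        OSWriteLittleInt16(buffer, 4, length); /* swaps on big-endian hosts */
    }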
Network-Related Data

Network-related data typically uses big-endian format (also known as network byte order), so you may need to swap bytes when communicating between the network and an Intel-based Macintosh computer. You probably never had to adjust your PowerPC code when you transmitted data to, or received data from, the network. On an Intel-based Macintosh computer you must look closely at your networking code and ensure that you always send network-related data in the appropriate byte order, and that you handle data received from the network appropriately, swapping the bytes of values to the endian format of the host microprocessor.

You can use the following POSIX functions to convert between network byte order and host byte order. (Other byte-swapping functions, such as those defined in the OSByteOrder.h and CFByteOrder.h header files, can also be useful for handling network data.)

● Network to host:

    uint32_t ntohl(uint32_t netlong);
    uint16_t ntohs(uint16_t netshort);

● Host to network:

    uint32_t htonl(uint32_t hostlong);
    uint16_t htons(uint16_t hostshort);

These functions are documented in Mac OS X Man Pages.

The sin_addr.s_addr and sin_port fields of a sockaddr_in structure should always be in network byte order. You can find out the appropriate endian format of any argument to a BSD networking function by reading its man page.

When advertising a service on the network, you use getsockname to get the local TCP or UDP port that your socket is bound to, and then pass my_sockaddr.sin_port unchanged, without any byte swapping, to the DNSServiceRegister function. In Core Foundation code you can use the same approach: use the CFSocketCopyAddress function as shown below, and then pass my_sockaddr.sin_port unchanged, without any byte swapping, to the DNSServiceRegister function.

    CFDataRef addr = CFSocketCopyAddress(myCFSocketRef);
    struct sockaddr_in my_sockaddr;
    memmove(&my_sockaddr, CFDataGetBytePtr(addr), sizeof(my_sockaddr));
    DNSServiceRegister( ... , my_sockaddr.sin_port, ...);

When browsing and resolving, the process is similar. The DNSServiceResolve function and BSD Sockets calls such as gethostbyname and getaddrinfo all return IP addresses and ports already in the correct byte order, so you can assign them directly to your struct sockaddr_in and call connect to open a TCP connection. If you byte-swap the address or port, your program will not work. The important point is that when you use the DNSServiceDiscovery API with the BSD Sockets networking APIs, you do not need to swap anything. Your code will work correctly on both PowerPC and Intel-based Macintosh computers, as well as on Linux, Solaris, and Windows.

OSType-to-String Conversions

You can use the functions UTCreateStringForOSType and UTGetOSTypeFromString to convert an OSType data type to or from a CFString object (CFStringRef data type). These functions are discussed in Uniform Type Identifiers Overview and defined in the UTType.h header file, which is part of the Launch Services framework.

When you use four-character literals, keep in mind that "abcd" != 'abcd'. Rather, 'abcd' == 0x61626364. You must treat 'abcd' as an integer and not as string data, because 'abcd' is a shortcut for a 32-bit integer. (A FourCharCode data type is a UInt32 data type.) The compiler does not swap this for you. You can use the shift operator if you need to deal with individual characters. For example, if you currently print an OSType or FourCharCode value using standard C printf-style semantics, use

    printf("%c%c%c%c", (char) (val >> 24), (char) (val >> 16),
           (char) (val >> 8), (char) val);

instead of the following:

    printf("%4.4s", (const char*) &val);
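Here is a minimal sketch of the UTCreateStringForOSType and UTGetOSTypeFromString conversions mentioned at the start of this section (assuming the header is reachable through CoreServices/CoreServices.h; error handling is omitted):

    #include <CoreServices/CoreServices.h>

    void DemonstrateOSTypeConversion(void)
    {
        /* OSType to CFString: the result reads "PREF" regardless of the
           host byte order, because the function treats the code as a
           32-bit integer, not as raw bytes. */
        CFStringRef s = UTCreateStringForOSType('PREF');

        /* CFString back to OSType. */
        OSType t = UTGetOSTypeFromString(s);
        (void) t;

        CFRelease(s);
    }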
Unicode Text Files

Mac OS X often uses UTF-16 to encode Unicode; a UniChar data type is a double-byte value. As with any multibyte data, Unicode characters are sensitive to the byte ordering method of the microprocessor. A byte order mark (BOM) written at the beginning of a file informs the program reading the data which byte ordering method was used to write the data; the program can then act accordingly to make sure the byte ordering of the Unicode text is compatible with the host. The Unicode standard states that in the absence of a BOM, the data in a Unicode data file is to be taken as big-endian. Although a BOM is not mandatory, you should make use of it to ensure that a file written on one architecture can be read on the other architecture.

Table 3-1 lists the standard byte order marks for UTF-8, UTF-16, and UTF-32. (Note that the UTF-8 BOM is not used for endian issues, but only as a tag to indicate that the file is UTF-8.)

Table 3-1  Byte order marks

    Byte order mark    Encoding form
    EF BB BF           UTF-8
    FF FE              UTF-16/UCS-2, little endian
    FE FF              UTF-16/UCS-2, big endian
    FF FE 00 00        UTF-32/UCS-4, little endian
    00 00 FE FF        UTF-32/UCS-4, big endian

In practice, when your application reads a file, it does not need to look for a byte order mark, nor does it need to swap bytes, as long as you follow these steps to read the file:

1. Map the file using mmap to get a pointer to the contents of the file (or string). Reading the entire file into memory ensures the best performance and is a prerequisite for the next step.

2. Generate a CFString object by calling the function CFStringCreateWithBytes with the isExternalRepresentation parameter set to true, or call the function CFStringCreateWithExternalRepresentation to generate a CFString, passing in an encoding of kCFStringEncodingUnicode (for UTF-16) or kCFStringEncodingUTF8 (for UTF-8). Either function interprets a BOM and swaps bytes as necessary. Note that a BOM should not be used in memory; its use is solely for data transmission (files, pasteboard, and so forth).

In summary, with respect to Unicode files, your application performs best when you follow these guidelines:

● Accept the BOM when taking UTF-16 or UTF-8 encoded files from outside the application.

● Use native-endian UniChar data types internally.

● Generate a BOM when writing UTF-16 to a file. Ideally, you need to generate a BOM only for an architecture that uses little-endian format, but it is also acceptable to generate a BOM for an architecture that uses big-endian format.

● When you put data on the Clipboard, make sure that 'utxt' data does not have a BOM. Only 'ut16' data should have a BOM. If you use Cocoa to put an NSString object on the pasteboard, you don't need to concern yourself with a BOM.

For more information, see "UTF & BOM," available from the Unicode website: http://www.unicode.org/faq/utf_bom.html

The Apple Event Manager provides text constants that you can use to specify the type of your data. As of Mac OS X v10.4, only two text constants are recommended:

● typeUTF16ExternalRepresentation, which specifies Unicode text in 16-bit external representation with an optional byte order mark (BOM). The presence of this constant guarantees that either there is a BOM or the data is in UTF-16 big-endian format.

● typeUTF8Text, which specifies 8-bit Unicode (UTF-8 encoding).

The constant typeUnicodeText indicates 'utxt' text data, in native byte ordering, with an optional BOM. This constant does not specify an explicit Unicode encoding or byte order definition.

The Scrap Manager provides the flavor type constant kScrapFlavorTypeUTF16External, which specifies Unicode text in 16-bit external representation with an optional byte order mark.
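A minimal sketch of the two steps above (hedged: error handling is reduced to early returns, and the helper name is illustrative):

    #include <CoreFoundation/CoreFoundation.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    CFStringRef CopyStringFromUTF16File(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return NULL;

        struct stat sb;
        if (fstat(fd, &sb) != 0) { close(fd); return NULL; }

        /* Step 1: map the file. */
        void *bytes = mmap(NULL, (size_t) sb.st_size, PROT_READ,
                           MAP_FILE | MAP_PRIVATE, fd, 0);
        close(fd);
        if (bytes == MAP_FAILED) return NULL;

        /* Step 2: let Core Foundation honor the BOM and swap if needed. */
        CFDataRef data = CFDataCreateWithBytesNoCopy(kCFAllocatorDefault,
                             bytes, (CFIndex) sb.st_size, kCFAllocatorNull);
        CFStringRef result = CFStringCreateWithExternalRepresentation(
                             kCFAllocatorDefault, data, kCFStringEncodingUnicode);
        CFRelease(data);
        munmap(bytes, (size_t) sb.st_size);
        return result;  /* caller releases; NULL on failure */
    }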
Values in an Array

The routine in Listing 3-6 shows an approach you can use to swap the bytes of the values in an array. On a big-endian system, the compiler optimizes away the entire function; you don't need to use #ifdef statements to swap these sorts of arrays.

Listing 3-6  A routine for swapping the bytes of the values in an array

    static inline void SwapUInt32ArrayBigToHost(UInt32 *array, UInt32 count)
    {
        UInt32 i;
        for (i = 0; i < count; i++) {
            array[i] = CFSwapInt32BigToHost(array[i]);
        }
    }

Writing a Callback to Swap Data Bytes

You can provide a byte-swapping callback routine, also referred to as a flipper, to the system for custom resource data, custom pasteboard data, and custom Apple event data. When you install a byte-swapping callback, you specify the domain to which the data type belongs. There are two data domains: Apple event and resource. The resource data domain covers custom pasteboard data as well as custom resource data. If the callback can be applied to either domain, you can specify that as well.

The Core Endian API defines a callback that you provide to swap bytes for custom resource and Apple event data. You must provide one callback for each type of data you want to swap. The prototype for the CoreEndianFlipProc callback is:

    typedef CALLBACK_API (OSStatus, CoreEndianFlipProc)
        (OSType dataDomain,
         OSType dataType,
         short id,
         void *dataPtr,
         UInt32 dataSize,
         Boolean currentlyNative,
         void *refcon);

The callback takes the following parameters:

● dataDomain—An OSType value that specifies the domain to which the flipper callback applies. The value kCoreEndianResourceManagerDomain signifies that the domain is resource or pasteboard data. The value kCoreEndianAppleEventManagerDomain signifies that the domain is Apple event data.

● dataType—The type of data whose bytes the callback swaps. This is the four-character code of the resource type, pasteboard type, or Apple event.

● id—The resource ID of the data type. This field is ignored if the dataDomain parameter is not kCoreEndianResourceManagerDomain.

● dataPtr—On input, points to the data to be flipped. On output, points to the byte-swapped data.

● dataSize—The size of the data pointed to by the dataPtr parameter.

● currentlyNative—A Boolean value that indicates the direction to swap bytes. The value true specifies that the data pointed to by the dataPtr parameter uses the byte ordering of the currently executing code. On a PowerPC Macintosh, true specifies that the data is in big-endian format; on an Intel-based Macintosh, true specifies that the data is in little-endian format.

● refcon—A 32-bit value that contains, or refers to, data needed by the callback.

The callback returns a result code that indicates whether the bytes were swapped successfully. Your callback should return noErr if the data is byte swapped without error, and the appropriate result code to indicate an error condition: errCoreEndianDataTooShortForFormat, errCoreEndianDataTooLongForFormat, or errCoreEndianDataDoesNotMatchFormat. The result code you return is propagated through the appropriate manager (the Resource Manager (ResError) or the Apple Event Manager) to the caller.

You do not need to swap bytes for quantities that are not numerical (such as strings, byte streams, and so forth). You need to provide a callback only for data types in which the order of bytes in a word or long word matters. (For the preferred way to handle Unicode strings, see "Unicode Text Files".) Your callback should traverse the data structure that contains the data and swap bytes for:

● All counts and lengths, so that array indexes are associated with the appropriate values

● All integers and longs, so that when you read them into variables of a compatible type, you can operate correctly on the values (such as numerical, offset, and shift operations)
The Core Endian API provides these functions for working with your callback:

● CoreEndianInstallFlipper registers your callback for the specified data type (custom resource or custom Apple event). After you register a byte-swapping callback for an application-defined resource data type, any time you call a Resource Manager function that operates on that resource type, the system invokes your callback if it is appropriate to do so. (If your callback operates on pasteboard data, the system also invokes the callback at the appropriate time.) Similarly, if you specify Apple event as the domain for your callback, then any time you call an Apple Event Manager function that operates on that data type, your callback is invoked when it is appropriate to do so.

● CoreEndianGetFlipper obtains the callback registered for the specified data type. You can call this function to determine whether a flipper is available for a given data type.

● CoreEndianFlipData invokes the callback associated with the specified data type. You shouldn't need to call this function, because the system invokes your callback whenever it's needed.

As an example, look at a callback for the custom resource type 'PREF' defined in Listing 3-7. The MyPreferences structure is used to store preferences data on disk. The structure contains a number of values and includes two instances of the RGBColor data type and an array of RGBColor values.

Listing 3-7  A declaration for a custom resource

    #define kMyPreferencesType 'PREF'

    struct MyPreferences {
        SInt32 fPrefsVersion;
        Boolean fHighlightLinks;
        Boolean fUnderlineLinks;
        RGBColor fHighlightColor;
        RGBColor fUnderlineColor;
        SInt16 fZoomValue;
        char fCString[32];
        SInt16 fCount;
        RGBColor fPalette[];
    };

You can handle the RGBColor data type by writing a function that swaps the bytes in an RGBColor data structure, such as the function MyRGBSwap shown in Listing 3-8. This function calls the Core Endian macro Endian16_Swap to swap the bytes of each of the values in the RGBColor data structure. The function doesn't need to check the currently executing system, because it is never called unless the values in the RGBColor data type need their bytes swapped. The MyRGBSwap function is called by the byte-swapping callback routine (shown in Listing 3-9) that's provided to handle the custom 'PREF' resource defined in Listing 3-7.

Listing 3-8  A flipper function for RGBColor data

    static void MyRGBSwap (RGBColor *p)
    {
        p->red = Endian16_Swap(p->red);
        p->blue = Endian16_Swap(p->blue);
        p->green = Endian16_Swap(p->green);
    }

Listing 3-9 shows a byte-swapping callback for the custom 'PREF' resource. An explanation for each numbered line of code appears following the listing. Note that the flipper checks for data that is malformed or of an unexpected length. If the data passed to the flipper routine is shorter than the flipped type normally is, or (for example) contains garbage data instead of an array count, the flipper must be careful not to read or write data beyond the end of the passed-in data. Instead, the routine returns an error.

Listing 3-9  A flipper for the custom 'PREF' resource

    #define kCurrentVersion 0x00010400

    static OSStatus MyFlipPreferences (OSType dataDomain,        // 1
                                       OSType dataType,          // 2
                                       short id,                 // 3
                                       void * dataPtr,           // 4
                                       UInt32 dataSize,          // 5
                                       Boolean currentlyNative,  // 6
                                       void* refcon)             // 7
    {
        UInt32 versionNumber;
        OSStatus status = noErr;
        MyPreferences* toFlip = (MyPreferences*) dataPtr;        // 8
        int count, i;
        if (dataSize < sizeof(MyPreferences))
            return errCoreEndianDataTooShortForFormat;           // 9

        if (currentlyNative)                                     // 10
        {
            count = toFlip->fCount;
            versionNumber = toFlip->fPrefsVersion;
            toFlip->fPrefsVersion = Endian32_Swap (toFlip->fPrefsVersion);
            toFlip->fCount = Endian16_Swap (toFlip->fCount);
            toFlip->fZoomValue = Endian16_Swap (toFlip->fZoomValue);
        }
        else                                                     // 11
        {
            toFlip->fPrefsVersion = Endian32_Swap (toFlip->fPrefsVersion);
            versionNumber = toFlip->fPrefsVersion;
            toFlip->fCount = Endian16_Swap (toFlip->fCount);
            toFlip->fZoomValue = Endian16_Swap (toFlip->fZoomValue);
            count = toFlip->fCount;
        }

        if (versionNumber != kCurrentVersion)                    // 12
            return errCoreEndianDataDoesNotMatchFormat;

        MyRGBSwap (&toFlip->fHighlightColor);                    // 13
        MyRGBSwap (&toFlip->fUnderlineColor);                    // 14

        if (dataSize < sizeof(MyPreferences) + count * sizeof(RGBColor))
            return errCoreEndianDataTooShortForFormat;           // 15

        for (i = 0; i < count; i++) {
            MyRGBSwap (&toFlip->fPalette[i]);                    // 16
        }

        return status;                                           // 17
    }

Here's what the code does:

1. The system passes to your callback the domain to which the callback applies. You define the domain when you register the callback using the function CoreEndianInstallFlipper.

2. The system passes to your callback the resource type you defined for the data. In this example, the resource type is 'PREF'.

3. The system passes to your callback the resource ID of the data type. If the data is not a resource, this value is 0.

4. The system passes to your callback a pointer to the resource data that needs its bytes swapped. In this case, the pointer refers to a MyPreferences data structure.

5. The system passes to your callback the size of the data pointed to by the pointer described in the previous step.

6. The system passes to your callback true if the data in the buffer is in the byte ordering of the currently executing code. On a PowerPC Macintosh, when currentlyNative is true, the data is in big-endian order. On a Macintosh that uses an Intel microprocessor, when currentlyNative is true, the data is in little-endian order. Your callback needs this value because, if your callback uses a value in the data buffer to decide how to process other data in the buffer (for example, the count variable shown in the code), you must know whether that value needs to be flipped before it can be used.

7. The system passes to your callback a pointer that refers to application-specific data. In this example, the callback doesn't require any application-specific data.

8. Defines a variable of the MyPreferences type and assigns the contents of the data pointer to the newly defined toFlip variable.

9. Checks the static-length portion of the structure. If the size is less than it should be, the routine returns the error errCoreEndianDataTooShortForFormat.

10. If currentlyNative is true, saves the count value to a local variable and then swaps the bytes of the other values in the MyPreferences data structure. You must save the count value before you swap because you need it for an iteration later in the function. The fact that currentlyNative is true indicates that the value does not need to be byte swapped to be used in the currently executing code; however, it does need to be swapped to be stored to disk. The values are swapped using the appropriate Core Endian macros.

11.
If currentlyNative is false, flips the values in the MyPreferences data structure before saving the count value to a local variable. The fact that currentlyNative is false indicates that the count value needs its bytes swapped before it can be used in the callback.

12. Checks to make sure the version of the data structure is supported by the application. If the version is not supported, the callback does not swap bytes for the data and returns the result errCoreEndianDataDoesNotMatchFormat.

13. Calls the MyRGBSwap function (shown in Listing 3-8) to swap the bytes of the fHighlightColor field of the data structure.

14. Calls the MyRGBSwap function to swap the bytes of the fUnderlineColor field of the data structure.

15. Checks that the data is large enough to hold the count palette entries. If it is not, the routine returns the error errCoreEndianDataTooShortForFormat.

16. Iterates through the elements of the fPalette array, calling the MyRGBSwap function to swap the bytes of the data in the array.

17. Returns noErr to indicate that the data was flipped without error.

Although the sample performs some error checking, it does not include all the error-handling code it could. When you write a flipper, you may want to include such code.

Note: The callback does not flip any of the Boolean values in the MyPreferences data structure, because these are single-byte values. The callback also ignores the C string.

You register a byte-swapping callback routine by calling the function CoreEndianInstallFlipper. You should register the callback when your application calls its initialization routine or when you open your resources. For example, you would register the flipper callback shown in Listing 3-9 using the following code:

    OSStatus status = noErr;
    status = CoreEndianInstallFlipper (kCoreEndianResourceManagerDomain,
                                       kMyPreferencesType,
                                       MyFlipPreferences,
                                       NULL);

The system invokes the callback for the specified resource type and data domain with currentlyNative set to false at the time a resource is loaded, and set to true at the time the resource is about to be written. For example, the sample byte-swapping callback gets invoked any time the following line of code is executed in your application:

    MyPreferences** hPrefs = (MyPreferences**) GetResource ('PREF', 128);

After the bytes of the data are swapped, you can modify it as much as you'd like.

When the Resource Manager reads a resource from disk, it looks up the resource type (for example, 'PREF') in a table of byte-swapping routines. If a callback is installed for that resource type, the Resource Manager invokes the callback if it is appropriate to do so. The Resource Manager takes similar actions when it writes a resource to disk: it finds the appropriate routine and invokes the callback to swap the bytes of the resource if it is appropriate to do so.

When you copy or drag custom data from an application that has a callback installed for pasteboard data, the system invokes your callback at the appropriate time. If you copy or drag custom data to a native application, the callback is not invoked; if you copy or drag custom data to a nonnative application, the system invokes your callback to swap the bytes of the custom data.
If you paste or drop custom data into your application from a nonnative application, and a callback exists for that custom data, the system invokes the callback at the time of the paste or drop. If the custom data is copied or dragged from another native application, the callback is not invoked.

Note that different pasteboard APIs use different type specifiers. The Scrap Manager and the Drag Manager use OSType data types. The Pasteboard Manager uses Uniform Type Identifiers (UTIs), and the NSPasteboard class uses its own type mechanism. In each case, the type is converted by the system to an OSType data type to discover whether there is a byte-swapping callback for that type.

Apple event data types are typically swapped to network byte order when sent over a network. The callback you install is called only if a custom data type that you define is sent to another machine, or if another machine sends Apple event data to your application. The byte ordering of Apple events on the network is big-endian.

For cases in which the system would not normally invoke your byte-swapping callback, you can call the function CoreEndianFlipData to invoke the callback installed for the specified data type and domain.

See Also

The following resources are available in the ADC Reference Library:

● Byte-Order Utilities Reference describes the Core Foundation byte order utilities API.

● Byte Swapping, in Memory Management Programming Guide for Core Foundation, shows how to swap integers and floating-point values using the Core Foundation byte-order utilities.

● File-System Performance Guidelines provides information useful for mapping Unicode files to memory.

Guidelines for Specific Scenarios

This chapter lists an assortment of scenarios that relate to a specific technology or API. Although many of these scenarios are uncommon, you should at least glance at the topics to determine whether anything applies to your application. The topics are organized alphabetically.

Aliases

Aliases are big-endian on all systems. Applications that add extra information to the end of an AliasHandle must ensure that the extra data is always endian-neutral or of a defined endian type, preferably big-endian.

The AliasRecord data structure is opaque when you build your application with the Mac OS X v10.4 (Universal) SDK. Code that formerly accessed the userType field of an AliasRecord must use the Alias Manager functions GetAliasUserType, GetAliasUserTypeFromPtr, SetAliasUserType, or SetAliasUserTypeFromPtr. Code that formerly accessed the aliasSize field of an AliasRecord must use the functions GetAliasSize or GetAliasSizeFromPtr. These Alias Manager functions are available in Mac OS X v10.4 and later. For more information, see Alias Manager Reference.

Archived Bit Fields

For cross-platform portability, avoid using bit fields. It's best not to use the NSArchiver class to archive any structures that contain bit fields as integers, because the individual values are stored in the archive in an architecture- and compiler-dependent manner. In cases where archives already contain such structures, you can read a structure correctly by changing its declaration so that the bit fields are swapped appropriately. A portable alternative for new code is sketched below.
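For new code, a portable alternative is an explicit flags word plus masks, which has the same layout under every compiler and architecture (a sketch; the type and flag names are illustrative):

    #include <stdint.h>
    #include <stdbool.h>

    /* Compiler-dependent layout; avoid archiving this: */
    typedef struct {
        unsigned int highlightLinks : 1;
        unsigned int underlineLinks : 1;
    } LinkFlagsBitfield;

    /* Portable alternative: an integer plus masks. Only the integer's
       byte order needs handling, and the usual swap routines cover that. */
    enum {
        kHighlightLinks = 1 << 0,
        kUnderlineLinks = 1 << 1
    };

    typedef uint32_t LinkFlags;

    static bool HighlightsLinks(LinkFlags flags)
    {
        return (flags & kHighlightLinks) != 0;
    }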
Automator Scripts

AppleScript actions are platform independent and do not need any changes to run on Intel-based Macintosh computers. However, any action that contains Cocoa code, whether it is a solely Cocoa action or an action that uses both AppleScript and Cocoa code, must be built as a universal binary to run correctly on both architectures. For more information, see Automator Programming Guide.

Bit Shifting

When you shift a value by the width of its type or more, the fill bits are undefined, regardless of the architecture. In fact, two different compilers on the same architecture could differ on the value of y after these two statements:

    uint32_t x = 0xDEADBEEF;
    uint32_t y = x >> 32;

Bit Test, Set, and Clear Functions: Carbon and POSIX

Don't mix the C bitwise operators with the Carbon functions BitTst, BitSet, and BitClr and the POSIX macros setbit, clrbit, isset, and isclr. If you consistently use the Carbon and POSIX functions and avoid the C bitwise operators, your code will function properly. Keep in mind, however, that you must use the Carbon and POSIX functions on the correct kind of data. The Carbon and POSIX functions perform a byte-by-byte traversal, which causes problems on an Intel-based Macintosh when they operate on data types that are larger than 1 byte. You can use these functions only on a pointer to a string of endian-neutral bytes. When you need to perform bit manipulation on integer values, use expressions such as (int32 & (1 << 26)) instead of BitTst(&int32, 5L).

You'll encounter problems when you use the function BitTst to test for 24-bit mode. For example, the following bit test returns false, which indicates that the process is running in 24-bit mode, or at least that the code is not running in 32-bit mode. The POSIX equivalents perform similarly:

    Gestalt(gestaltAddressingModeAttr, &gestaltResult);
    if (!(BitTst(&gestaltResult, 31L))) /* If 24-bit */

You can use any of the bit testing, setting, and clearing functions if you pass a pointer to data whose byte order is fixed. Used in this way, these functions behave the same on both architectures. For more information, see the ToolUtils.h header file in the Core Services framework and Mathematical and Logical Utilities Reference.

CPU Subtype

Don't try to build a binary for a specific CPU subtype. Because the CPU subtype for Intel-based Macintosh computers is generic, you can't use it to check for specific functionality. If your application requires information about specific CPU functionality, use the sysctlbyname function, providing an appropriate selector. See Mac OS X Man Pages for information on using sysctlbyname.
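A minimal sketch of querying hardware information this way (the hw.ncpu selector, which reports the number of processors, is used purely for illustration; consult the sysctl man pages for the selector that matches the feature you need):

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/sysctl.h>

    int main(void)
    {
        int ncpu = 0;
        size_t len = sizeof(ncpu);

        if (sysctlbyname("hw.ncpu", &ncpu, &len, NULL, 0) == 0)
            printf("processors: %d\n", ncpu);
        return 0;
    }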
Dashboard Widgets

Dashboard widgets typically contain platform-independent elements such as HTML, JavaScript, CSS, and image files. If you create a widget that contains only these elements, it should work on both PowerPC and Intel-based Macintosh computers without any modification on your part. However, if your widget contains a plug-in, you must build the plug-in as a universal binary for it to run natively on an Intel-based Macintosh computer. For more information, see Dashboard Programming Topics.

Deprecated Functions

Many deprecated functions, such as those that use PICT + PS data, have byte-swapping issues. You may want to replace deprecated functions at the same time you prepare your code to run as a universal binary. You'll not only solve byte-swapping issues, but your code will use functions that ultimately benefit future development.

A function that is deprecated has an availability statement in its header file that states the version of Mac OS X in which the function was deprecated. Many API reference documents provide a list of deprecated functions. In addition, compiler warnings for deprecated functions are on by default in Xcode 2.2 and later.

Disk Partitions

The standard disk partition format on an Intel-based Macintosh computer differs from the disk partition format of a PowerPC-based Macintosh computer. If your application depends on the partitioning details of the disk, it may not behave as expected. Partitioning details can affect tools that examine the hard disk at a low level.

By default, internal hard drives on Intel-based Macintosh computers use the GUID Partition Table (GPT) scheme and external drives use the Apple Partition Map (APM) scheme. To create an external USB or FireWire disk that can boot an Intel-based Macintosh computer, select the GPT disk partition scheme option using Apple Disk Utility. Starting up an Intel-based Macintosh from an APM disk is not supported.

Double-Precision Values: Bit-by-Bit Sensitivity

Although both architectures are IEEE 754 compliant, there are differences in the rounding procedure each uses when operating on double-precision numbers. If your application is sensitive to bit-by-bit values in double-precision numbers, be aware that the same computation performed on each architecture may produce a different numerical result. For more information, see Volume 1 of the Intel developer software manuals, available from the following website: http://developer.intel.com/design/Pentium4/documentation.htm

Finder Information and Low-Level File System Operations

If your code operates on the file system at a low level and handles Finder information, keep in mind that the file system does not swap bytes for the following information:

● The finderInfo field in the HFS Plus data structures HFSCatalogFolder, HFSPlusCatalogFolder, HFSCatalogFile, HFSPlusCatalogFile, and HFSPlusVolumeHeader.

● The FSPermissionInfo data structure, which is used when the constant kFSCatInfoPermissions is passed to the HFS Plus functions FSGetCatalogInfo and FSGetCatalogInfoBulk.

The value of multibyte fields on disk always uses big-endian format. When running on a little-endian system, you must swap the bytes of any multibyte fields.

The getattrlist function retrieves the metadata associated with a file. The getxattr function, added in Mac OS X v10.4, retrieves extended attributes—those that are an extension of the basic set of attributes. When using the getxattr function to access the legacy attribute "com.apple.FinderInfo", note that, as with getattrlist, the information returned by this call is not byte swapped. (For more information on the getxattr and getattrlist functions, see Mac OS X Man Pages.)

Note: This issue pertains only to code that operates below CarbonCore. Calls to Carbon functions such as FSGetCatalogInfo are not affected.
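As an illustration (hedged: this assumes the classic Finder info layout in which the first four bytes of "com.apple.FinderInfo" are the file's type code; the attribute bytes arrive exactly as stored on disk, so multibyte fields must be read as big-endian):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <sys/xattr.h>
    #include <libkern/OSByteOrder.h>

    void PrintFileType(const char *path)
    {
        uint8_t info[32];
        ssize_t size = getxattr(path, "com.apple.FinderInfo",
                                info, sizeof(info), 0, 0);
        if (size < 4) return;

        /* The bytes are big-endian on disk and are not swapped for you. */
        uint32_t typeBigEndian;
        memcpy(&typeBigEndian, info, sizeof(typeBigEndian));
        uint32_t type = OSSwapBigToHostInt32(typeBigEndian);

        printf("%c%c%c%c\n", (char)(type >> 24), (char)(type >> 16),
               (char)(type >> 8), (char)type);
    }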
FireWire Device Access

The FireWire bus uses big-endian format. If you are developing a universal binary version of an application that accesses a FireWire device, see "FireWire Device Access on an Intel-Based Macintosh" in FireWire Device Interface Guide for a discussion of the issues you can encounter.

Font-Related Resources

Font-related resource types (FOND, NFNT, sfnt, and so forth) are in big-endian format on both PowerPC and Intel-based Macintosh computers. If your application accesses font-related resource types directly, you must swap the fields of those resources yourself.

The following functions from the ATS for Fonts API obtain font resources that are returned in big-endian format:

● ATSFontGetTableDirectory
● ATSFontGetTable
● ATSFontGetFontFamilyResource

The following functions from the Font Manager API also obtain font resources that are returned in big-endian format. Note that the Font Manager API is based on QuickDraw technology, which was deprecated in Mac OS X v10.4.

● FMGetFontTableDirectory
● FMGetFontTable
● FMGetFontFamilyResource

GWorlds

When the QuickDraw function NewGWorld allocates storage for the pixel buffer, and the depth parameter is 16 or 32 bits, the byte ordering within each pixel matters. The pixelFormat field of the PixMap data structure can have the values k16BE555PixelFormat or k16LE555PixelFormat for 2-byte pixels, and k32ARGBPixelFormat or k32BGRAPixelFormat for 4-byte pixels. (These constants are defined in the Quickdraw.h header file.) By default, NewGWorld always creates big-endian pixel formats (k16BE555PixelFormat or k32ARGBPixelFormat), regardless of the endian format of the system.

For best performance, it is generally preferable to use a pixel format that corresponds to the native byte ordering of the system. When you pass kNativeEndianPixMap in the flags parameter to NewGWorld, the byte ordering of the pixel format is big-endian on big-endian systems and little-endian on little-endian systems.

Note: QuickDraw does not support little-endian pixel formats on big-endian systems.

You can use the GWorld pixel storage as input to the Quartz function CGBitmapContextCreate or as a data provider for the Quartz function CGImageCreate. The byte ordering of the source pixel format needs to be communicated to Quartz through additional flags in the bitmapInfo parameter. These flags are defined in the CGImage.h header file. Assuming that your bitmapInfo parameter is already set up, you combine it (by using a bitwise OR) with kCGBitmapByteOrder16Host or kCGBitmapByteOrder32Host if you created the GWorld with the kNativeEndianPixMap flag. Similarly, use kCGBitmapByteOrder16Big or kCGBitmapByteOrder32Big when you know that your pixel byte order is big-endian.
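A minimal sketch of handing such pixels to Quartz (baseAddress, width, height, and bytesPerRow are hypothetical values obtained from your GWorld; the example assumes a 32-bit ARGB buffer created with kNativeEndianPixMap):

    #include <ApplicationServices/ApplicationServices.h>

    CGContextRef MakeContextForNativeEndianPixels(void *baseAddress,
                                                  size_t width,
                                                  size_t height,
                                                  size_t bytesPerRow)
    {
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

        /* kCGBitmapByteOrder32Host tells Quartz that the pixels use the
           byte order of the current architecture. */
        CGContextRef context = CGBitmapContextCreate(
            baseAddress, width, height,
            8,                /* bits per component */
            bytesPerRow, colorSpace,
            kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);

        CGColorSpaceRelease(colorSpace);
        return context;
    }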
Java Applications

Pure Java applications do not require any code changes to run on Intel-based Macintosh computers. However, Java applications that interface with PowerPC-based native code will not run successfully using Rosetta on Intel-based Macintosh computers. Specifically, the following must be built as universal binaries:

● JNI libraries built for PowerPC-based Macintosh computers are not loaded using Rosetta, because the Java virtual machine has already launched without using Rosetta. Java applications fail on Intel-based Macintosh computers when trying to load PowerPC-only binaries.

● Native applications that use the VM Invocation Interface to start a Java virtual machine must be built as universal binaries to run on Intel-based Macintosh computers. The Java VM must run natively; attempts by an application running under Rosetta to instantiate a JVM fail.

For more information, see Technical Q&A QA1295: Java on Intel-based Macintosh Computers in the ADC Reference Library.

Java I/O API (NIO)

The I/O API (NIO) introduced in JDK 1.4 allows the use of native memory buffers. If you are a Java programmer who uses this API, you may need to revise your code. NIO byte buffers have a byte ordering that is big-endian by default. If you have Java code originally written for Mac OS X on PowerPC, then when you create java.nio.ByteBuffer objects you should call ByteBuffer.order(ByteOrder.nativeOrder()) to set the byte order of the buffers to the native byte order of the current architecture. If you fail to do this, you will obtain flipped data when you read multibyte data from the buffer using JNI.

Machine Location Data Structure

The Memory Management Utilities data type MachineLocation contains information about the geographical location of a computer. The ReadLocation and WriteLocation functions use the geographic location record to read and store the geographic location and time zone information in extended parameter RAM.

If your code uses the MachineLocation data structure, you need to change it to use the MachineLocation.u.dls.Delta field that was added to the structure in Mac OS X version 10.0. To be endian-safe, change code that uses the old field:

    MachineLocation.u.dlsDelta = 1;

to use the new field:

    MachineLocation.u.dls.Delta = 1;

The gmtDelta field remains the same—the low 24 bits are used. The order of assignment is important. The following is incorrect because it overwrites results:

    MachineLocation.u.dls.Delta = 0xAA;    // u = 0xAAGGGGGG; G = garbage
    MachineLocation.u.gmtDelta = 0xBBBBBB; // u = 0x00BBBBBB;

This is the correct way to assign the values:

    MachineLocation.u.gmtDelta = 0xBBBBBB; // u = 0x00BBBBBB;
    MachineLocation.u.dls.Delta = 0xAA;    // u = 0xAABBBBBB;

For more details, see Memory Management Utilities Reference.

Mach Processes: The Task for PID Function

The task_for_pid function returns the task associated with a process ID (PID). This function can be called only if the process is owned by the procmod group or if the caller is root.
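A minimal sketch of the call (hedged: the helper name is illustrative, the declaration is assumed to be reachable through the Mach headers shown, and as noted above the call fails unless the caller has the required privileges):

    #include <stdio.h>
    #include <sys/types.h>
    #include <mach/mach.h>
    #include <mach/mach_traps.h>

    void PrintTaskForPID(pid_t pid)
    {
        mach_port_t task = MACH_PORT_NULL;
        kern_return_t kr = task_for_pid(mach_task_self(), pid, &task);

        if (kr == KERN_SUCCESS)
            printf("task port: 0x%x\n", task);
        else
            printf("task_for_pid failed: %d\n", kr);
    }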
Metrowerks PowerPlant

You can use PowerPlant on an Intel-based Macintosh computer by downloading the PowerPlant framework available from http://sourceforge.net/projects/open-powerplant. This open source version of the PowerPlant framework for Mac OS X includes support for Intel and GCC 4.0.

Multithreading

Multithreading is a technique used to improve performance and enhance the perceived responsiveness of applications. On computers with one processor, this technique can allow a program to execute multiple pieces of code independently. On computers with more than one processor, multithreading can allow a program to execute multiple pieces of code simultaneously. If your application is single-threaded, consider threading it to take advantage of hardware multithreading capabilities. If your application is multithreaded, ensure that the number of threads is not hard coded to a fixed number of processors.

Dual-core technology improves performance by providing two physical cores within a single physical processor package. Multiprocessor and dual-core technology both exploit thread-level parallelism to improve application and system responsiveness and to boost processor throughput.

When you prepare code to run as a universal binary, the multithreading capabilities of the microprocessor are transparent to you, whether your application is threaded or not. However, you can optimize your code to take advantage of the specific way hardware multithreading is implemented for each architecture.

Objective-C: Messages to nil

In Objective-C, it is valid to send a message to a nil object. The Objective-C runtime assumes that the return value of a message sent to a nil object is nil, as long as the message returns an object or any integer scalar of size less than or equal to sizeof(void*).

On Intel-based Macintosh computers, messages to a nil object always return 0.0 for methods whose return type is float, double, long double, or long long. Methods whose return value is a struct that, as defined by the Mac OS X ABI Function Call Guide, is returned in registers will return 0.0 for every field in the data structure; other struct data types will not be filled with zeros. This is also true under Rosetta. On PowerPC Macintosh computers, the behavior is undefined.

Objective-C Runtime: Sending Messages

The information in this section is only for developers who use the Objective-C runtime library, which is used primarily for developing bridge layers between Objective-C and other languages, or for low-level debugging. Most developers do not need to use the Objective-C runtime library directly when programming in Objective-C.

If your application directly calls the Objective-C runtime function objc_msgSend_stret, you need to change your code to have it work correctly on an Intel-based Macintosh. The x86 ABI for struct-return functions differs from the ABI for struct-address-as-first-parameter functions, but the two ABIs are identical on PowerPC. When you call objc_msgSend_stret, you must cast the function to a function pointer type that uses the expected struct return type. The same applies to calls to objc_msgSendSuper_stret. For other details on the ABI, see "32-Bit Application Binary Interface".

If your application directly calls the Objective-C runtime function objc_msgSend, you should always cast to the appropriate return value. For instance, for a method that returns a BOOL data type, the following code executes properly on a PowerPC Macintosh but might not on an Intel-based Macintosh computer:

    BOOL isEqual = objc_msgSend(string, @selector(isEqual:), otherString);

To ensure that the code executes properly on an Intel-based Macintosh computer, change it to the following:

    BOOL isEqual = ((BOOL (*)(id, SEL, id))objc_msgSend)(string,
                       @selector(isEqual:), otherString);

Open Firmware

Macintosh computers that use an Intel microprocessor do not use Open Firmware. Although many parts of the I/O registry are present and work as expected, information that is provided by Open Firmware on a PowerPC Macintosh (such as a complete device tree) is not available in the I/O registry on a Macintosh that uses an Intel microprocessor. You can obtain some of the information from IODeviceTree by using the sysctlbyname or sysctl commands.
OpenGL

When defining an OpenGL image or texture, you need to provide a type that specifies to OpenGL which format the texture is in. Most of these functions (for example, glTexImage2D) take format and type parameters that specify how the texture is laid out on disk or in memory. OpenGL supports a number of different image types; some are endian-neutral but others are not.

Note: The advice in this section is for applications that cannot reorder their pixel data because of the type of image loaders they are using.

For example, a common image format is GL_RGBA with a type of GL_UNSIGNED_BYTE. This means that the image has a byte that specifies the red color data, followed by a byte that specifies the green color data, and so forth. This format is not endian-specific; the bytes are in the same order on all architectures. (A sketch of this endian-neutral usage appears after the list of host-order types below.)

Another common image format is GL_BGRA, often specified with the type GL_UNSIGNED_INT_8_8_8_8_REV. This type means that every 4 bytes of image data are interpreted as an unsigned int, with the most significant 8 bits representing the alpha data, the next most significant 8 bits representing the red color data, and so forth. Because this format is specific to the integer format of the host, it is interpreted differently on little-endian systems than on big-endian systems. When using GL_UNSIGNED_INT_8_8_8_8_REV, the OpenGL implementation expects to find data in byte order ARGB on big-endian systems but BGRA on little-endian systems.

Because there is no explicit way in OpenGL to specify a byte order of ARGB with 32-bit or 16-bit packed pixels (which are common image formats on PowerPC Macintosh computers), many applications specify GL_BGRA with GL_UNSIGNED_INT_8_8_8_8_REV. This practice works on a big-endian system such as PowerPC, but the format is interpreted differently on a little-endian system and causes images to be rendered with incorrect colors.

Applications that have this problem are those that use the OpenGL host-order format types but assume that the data referred to is always big-endian. These types include, but are not limited to, the following:

    GL_SHORT
    GL_UNSIGNED_SHORT
    GL_INT
    GL_UNSIGNED_INT
    GL_FLOAT
    GL_DOUBLE
    GL_UNSIGNED_BYTE_3_3_2
    GL_UNSIGNED_SHORT_4_4_4_4
    GL_UNSIGNED_SHORT_5_5_5_1
    GL_UNSIGNED_INT_8_8_8_8
    GL_UNSIGNED_INT_10_10_10_2
    GL_UNSIGNED_SHORT_5_6_5
    GL_UNSIGNED_BYTE_2_3_3_REV
    GL_UNSIGNED_SHORT_5_6_5_REV
    GL_UNSIGNED_SHORT_4_4_4_4_REV
    GL_UNSIGNED_SHORT_1_5_5_5_REV
    GL_UNSIGNED_INT_8_8_8_8_REV
    GL_UNSIGNED_INT_2_10_10_10_REV

If your application does not use any of these types, it is unlikely to have any problems with OpenGL. Note that an application is not necessarily incorrect to use one of these types. Many applications might already present host-order data tagged with one of these formats, especially in existing cross-platform code, because the Mac OS X implementation behaves the same way as a Windows implementation. If an application incorrectly uses one of these types, its OpenGL textures and images are rendered with incorrect colors; for example, red might appear green, or the image might appear to be tinted purple.
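For contrast with the host-order types listed above, here is a minimal sketch of the endian-neutral GL_RGBA/GL_UNSIGNED_BYTE usage mentioned earlier (width, height, and pixels are hypothetical; pixels points to width × height × 4 bytes in R, G, B, A order):

    #include <OpenGL/gl.h>

    void UploadEndianNeutralTexture(GLsizei width, GLsizei height,
                                    const GLubyte *pixels)
    {
        /* GL_UNSIGNED_BYTE data is read byte by byte, so the layout is
           identical on PowerPC and Intel; no swapping is ever needed. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }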
1. If the images are generated or loaded algorithmically, change the code to generate the textures in a host-order format that matches what OpenGL expects. For example, a JPEG decoder can be modified to store its output in 32-bit integers instead of four 8-bit bytes. The resulting data is identical on big-endian systems, but on a little-endian system the bytes are in a different order. This matches the OpenGL expectation, and the existing OpenGL code continues to work on both architectures. This is the preferred approach, and a sketch of it follows this list. In many cases, however, rewriting the algorithms may require a significant amount of work to implement and debug. If that’s the case, an approach that asks OpenGL to interpret the texture data differently might be a better one for you to take.

2. If the application uses GL_UNSIGNED_INT_8_8_8_8_REV or GL_UNSIGNED_INT_8_8_8_8, it can switch between them based on the architecture. Because these two types are exactly byte-swapped versions of the same format, using GL_UNSIGNED_INT_8_8_8_8_REV on a big-endian system is equivalent to using GL_UNSIGNED_INT_8_8_8_8 on a little-endian system, and vice versa. The code might look as follows:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGRA_EXT,
#if __BIG_ENDIAN__
    GL_UNSIGNED_INT_8_8_8_8_REV,
#else
    GL_UNSIGNED_INT_8_8_8_8,
#endif
    data);

If this is a common idiom, it might be easiest to define it as a macro that can be used multiple times:

#if __BIG_ENDIAN__
    #define ARGB_IMAGE_TYPE GL_UNSIGNED_INT_8_8_8_8_REV
#else
    #define ARGB_IMAGE_TYPE GL_UNSIGNED_INT_8_8_8_8
#endif

/* later on, use it like this */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGRA_EXT,
             ARGB_IMAGE_TYPE, data);

Note that switching between GL_UNSIGNED_INT_8_8_8_8_REV and GL_UNSIGNED_INT_8_8_8_8 works only for this particular 32-bit packed-pixel data type. For 16-bit ARGB data stored using GL_UNSIGNED_SHORT_1_5_5_5_REV, there is no corresponding byte-swapped type. Keep in mind that GL_UNSIGNED_SHORT_5_5_5_1 is not a replacement for GL_UNSIGNED_SHORT_1_5_5_5_REV on an Intel-based Macintosh computer. The format is interpreted as the bit order arrrrrgggggbbbbb on a big-endian system, and as the bit order gggbbbbbarrrrrgg on a little-endian system.

3. If you can’t use the previous approaches, you should either generate or load your data in the native endian format of the system and use the same pixel type on both architectures, or use the GL_UNPACK_SWAP_BYTES pixel store setting to instruct OpenGL to swap the bytes of any texture loaded on a little-endian system. This setting applies to all texture or image calls made with the current OpenGL context, so it needs to be set only once per OpenGL context, for example:

#if __LITTLE_ENDIAN__
    glPixelStorei(GL_UNPACK_SWAP_BYTES, 1);
#endif

This method causes images that use the problematic formats to be loaded as they would be on PowerPC. You should consider this option only if no other option is available. Enabling it causes OpenGL to use a slower rendering path than normal, and performance-sensitive OpenGL applications may be significantly slower with this option enabled than with it off. Although this method can get an OpenGL-based program up and running in as little time as possible, it is highly recommended that you use one of the other two methods.
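As a minimal sketch of the first approach, the following hypothetical decoder helper packs separate alpha, red, green, and blue bytes into host-order 32-bit integers, so that GL_BGRA with GL_UNSIGNED_INT_8_8_8_8_REV reads the data correctly on both architectures. The function and buffer names are invented for illustration and are not part of any actual decoder:

#include <stdint.h>
#include <stddef.h>

/* Pack per-component bytes into host-order ARGB words. Because the
   shifts operate on integer values, the resulting memory layout
   differs between big-endian and little-endian systems exactly as
   GL_UNSIGNED_INT_8_8_8_8_REV expects. */
static void PackARGB(uint32_t *dst, const uint8_t *a, const uint8_t *r,
                     const uint8_t *g, const uint8_t *b, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        dst[i] = ((uint32_t)a[i] << 24) |
                 ((uint32_t)r[i] << 16) |
                 ((uint32_t)g[i] <<  8) |
                  (uint32_t)b[i];
    }
}

A buffer filled this way can then be handed to glTexImage2D with GL_BGRA and GL_UNSIGNED_INT_8_8_8_8_REV on both architectures, with no per-architecture conditionals.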
Note: Using the GL_UNSIGNED_INT_8_8_8_8 format for GL_BGRA data is not necessarily faster than using GL_UNPACK_SWAP_BYTES. In some cases, performance decreases for rendering textures that use either of those two methods compared to using a data type such as GL_UNSIGNED_INT_8_8_8_8_REV. It’s advisable to use Shark or other tools to analyze the performance of your OpenGL code and make sure that you are not encountering particularly bad cases.

OSAtomic Functions

The kernel extension functions OSDequeueAtomic and OSEnqueueAtomic are not available on an Intel-based Macintosh. For more information on these functions, see Kernel Framework Reference.

Pixel Data

Applications that store pixel data in memory using ARGB format must take care in how they read the data. If the code is not written correctly, it’s possible to misread the data; the result is colors or alpha that appear wrong. If you see colors that appear wrong when your application runs on an Intel-based Macintosh computer, the following strategy may help you identify where pixel data is being read incorrectly.

Create a test image whose pixel data is easy to identify. For example, set each pixel so that alpha is ff, red is aa, green is bb, and blue is cc. Then read that image into your application. Figure 4-1 shows such an image.

Figure 4-1  A test image that can help locate the source of color problems

It’s also helpful to go through your code and cast pixel data to the unsigned char data type.

Start with the portion of your code that reads the image. Use the following GDB command to examine the pixel data as hexadecimal bytes:

x/xb

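For instance, assuming the test image has been read into a hypothetical buffer named pixels, you could display the first four pixels with a command such as:

(gdb) x/16xb pixels

If the image was read correctly into ARGB byte order, each group of four bytes appears as 0xff 0xaa 0xbb 0xcc; any other ordering points to the place where the data is being misread. (The buffer name and byte count here are illustrative only.)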
This command prints the specified number of bytes, starting with the first byte of the first pixel. You should easily be able to see whether what’s displayed onscreen matches the values of the pixels in the test image. If the values you see do not match the test image, then you’ve identified the misreading problem. If the values match, then you need to identify other portions of your code that modify or transform pixel data, and inspect the pixel data after each transformation.

PostScript Printing

If you are using the Carbon Printing Manager, note that the PICT with PostScript ('pictwps') printing path is not available on Intel-based Macintosh computers except under Rosetta. If you need only to support EPS data, you can use Quartz drawing together with the function PMCGImageCreateWithEPSDataProvider to allow the inclusion of EPS data as part of your Quartz drawing. If you need to generate the PostScript code for your application drawing, you should use the function PMPrinterPrintWithFile.

Quartz Bitmap Data

The Quartz constants shown in Table 4-1 specify the byte ordering of pixel formats. These constants, which are defined in the CGImage.h header file, are used in the bitmapInfo parameter. To specify byte ordering to Quartz, use a bitwise OR operator to combine the appropriate constant with the bitmapInfo parameter.

Table 4-1  Quartz constants that specify byte ordering

Constant                       Specifies
kCGBitmapByteOrderMask         The byte order mask
kCGBitmapByteOrder16Big        16-bit, big-endian format
kCGBitmapByteOrder32Big        32-bit, big-endian format
kCGBitmapByteOrder16Little     16-bit, little-endian format
kCGBitmapByteOrder32Little     32-bit, little-endian format
kCGBitmapByteOrder16Host       16-bit, host-endian format
kCGBitmapByteOrder32Host       32-bit, host-endian format

QuickDraw Routines

If you have existing code that directly accesses the picFrame field of the QuickDraw Picture data structure, you should use the QuickDraw function QDGetPictureBounds to get the appropriately swapped bounds for a Picture. This function is available in Mac OS X version 10.3 and later. Its prototype is as follows:

Rect * QDGetPictureBounds(PicHandle picH, Rect *outRect);

If you have existing code that uses the QuickDraw DeltaPoint function or the HIToolbox PinRect function (defined in MacWindows.h), make sure that you do not cast the function result to a Point data structure. The horizontal difference is returned in the low 16 bits, and the vertical difference is returned in the high 16 bits. You can obtain the horizontal and vertical values by using code similar to the following:

Point pointDiff;
SInt32 difference = DeltaPoint(p1, p2);
pointDiff.h = LoWord(difference);
pointDiff.v = HiWord(difference);

Tip: The best solution is to convert your QuickDraw code to Quartz 2D. QuickDraw was deprecated starting in Mac OS X v10.4. For help with converting to Quartz 2D, see Quartz Programming Guide for QuickDraw Developers.

QuickTime Components

The Component Manager recognizes which architectures are supported by a component by looking at the 'thng' resource for the component, not the architecture of the file. You must specify the appropriate architectures in the 'thng' resource.
To accomplish this, in the .r file where you define the 'thng' resource, modify your ComponentPlatformInfo array to look similar to the following:

#if defined(__ppc__)
    kMyComponentFlags, kMyCodeType, kMyCodeID, platformPowerPCNativeEntryPoint,
#endif
#if defined(__i386__)
    kMyComponentFlags, kMyCodeType, kMyCodeID, platformIA32NativeEntryPoint,
#endif

Then, rebuild your component. For details, see “Building a Universal Binary” (page 11).

QuickTime Metadata Functions

When you call the function QTMetaDataGetItemProperty and the type of the key whose value you are retrieving is code, the data returned is an OSType, not a buffer of four characters. (You can determine the key type by calling the function QTMetaDataGetItemPropertyInfo.) To ensure that your code runs properly on both PowerPC and Intel-based Macintosh computers, you must use a correctly typed buffer so that the endian format of the data returned to you is correct. If you supply a buffer of the wrong type, for example a buffer of UInt8 instead of a buffer of OSType, the endian format of the data returned in the buffer will be wrong on Intel-based Macintosh computers.

Runtime Code Generation

If your application generates code at runtime, keep in mind that the compiler assumes that the stack must be 16-byte aligned when calling into Mac OS X libraries or frameworks. 16-byte stack alignment is enforced on Intel-based Macintosh computers, which means that you need to ensure that your generated code maintains 16-byte alignment to avoid having your application crash. For more information, see Mac OS X ABI Function Call Guide.

Spotlight Importers

A Spotlight importer is a plug-in bundle that extracts information from files created by an application. The Spotlight engine uses importers to gather information about new and existing files. Spotlight importers are not compatible with Rosetta. To run an importer on an Intel-based Macintosh as well as on a PowerPC-based Macintosh, you must compile it as a universal binary. For more information on Spotlight, see Spotlight Overview and Spotlight Importer Programming Guide.

System-Specific Predefined Macros

The C preprocessor has several predefined macros whose purpose is to indicate the type of system and machine in use. If your code uses system-specific predefined macros, evaluate whether you really need to use them. In most cases applications need to know the capabilities available on a computer, not the specific system or machine on which the application is running. For example, if your application needs to know whether it is running on a little-endian or big-endian microprocessor, you should use the __BIG_ENDIAN__ or __LITTLE_ENDIAN__ macros or the Core Foundation function CFByteOrderGetCurrent (a short sketch appears at the end of this chapter). Do not use the __i386__ and __ppc__ macros for this purpose. See GNU C 4.0 Preprocessor User Guide for additional information.

USB Device Access

USB uses little-endian format. If you are developing a universal binary version of an application that accesses a USB device, see “USB Device Access in an Intel-Based Macintosh” in USB Device Interface Guide for a discussion of the issues you may encounter.
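As promised under “System-Specific Predefined Macros,” here is a minimal sketch of the capability-based check, assuming only the Core Foundation function CFByteOrderGetCurrent:

#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

int main(void)
{
    /* Query the byte order of the machine at runtime instead of
       compiling against architecture-specific macros. */
    CFByteOrder order = CFByteOrderGetCurrent();

    if (order == CFByteOrderLittleEndian)
        printf("Running on a little-endian system.\n");
    else if (order == CFByteOrderBigEndian)
        printf("Running on a big-endian system.\n");
    else
        printf("Byte order unknown.\n");
    return 0;
}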
See Also

In addition to the following resources, check the ADC website periodically for updates and technical notes that might address other specific situations:

● Quartz Programming Guide for QuickDraw Developers, which provides information on moving code from the deprecated QuickDraw API to Quartz

● IA-32 Intel Architecture Optimization Reference Manual, available from: http://developer.intel.com/design/pentium4/manuals/index_new.htm

Preparing Vector-Based Code

This chapter is relevant only for those developers who want to start writing vector-based code or whose applications already directly use the AltiVec extension to the PowerPC instruction set. AltiVec instructions, because they are processor specific, must be replaced on Intel-based Macintosh computers. You can choose from these two options:

● Use the Accelerate framework. This is the recommended option because the framework provides a layer of abstraction that lets you perform vector-based operations without needing to use low-level vector instructions yourself. See “Accelerate Framework” (page 63).

● Port AltiVec code to the Intel instruction set architecture (ISA). This solution is available for developers who have performance needs that can’t be met by using the Accelerate framework. See “Rewriting AltiVec Instructions” (page 64).

Accelerate Framework

The Accelerate framework, introduced in Mac OS X v10.3 and expanded in v10.4, is a set of high-performance vector-accelerated libraries. You don’t need to be concerned with the architecture of the target machine because the routines in this framework abstract the low-level details. The system automatically invokes the appropriate instruction set for the architecture that your code runs on. This framework contains the following libraries:

● vImage is the Apple image processing framework that includes high-level functions for image manipulation—convolutions, geometric transformations, histogram operations, morphological transformations, and alpha compositing—as well as utility functions that convert formats and perform other operations. See vImage Programming Guide.

● vDSP provides mathematical functions that perform digital signal processing (DSP) for applications such as speech, sound, audio, and video processing, diagnostic medical imaging, radar signal processing, seismic analysis, and scientific data processing. The vDSP functions operate on real and complex data types and include data type conversions, fast Fourier transforms (FFTs), and vector-to-vector and vector-to-scalar operations.

● vMathLib contains vector-accelerated versions of all routines in the standard math library. See vecLib Framework Reference.

● LAPACK is a linear algebra package that solves simultaneous sets of linear equations, tackles eigenvalue and singular solution problems, and determines least-squares solutions for linear systems.

● BLAS (Basic Linear Algebra Subroutines) performs basic vector and matrix computations.

● vForce contains routines that take matrices as input and output arguments, rather than single variables.

Rewriting AltiVec Instructions

Most of the tasks required to vectorize for AltiVec—restructuring data structures, designing parallel algorithms, eliminating branches, and so forth—are the same as those you’d need to perform for the Intel architecture.
If you already have AltiVec code, you’ve already completed the fundamental vectorization work needed to rewrite your application for the Intel architecture. In many cases the translation process will be smooth, involving direct or nearly direct substitution of AltiVec intrinsics with Intel equivalents. The MMX, SSE, SSE2, and SSE3 extensions provide analogous functionality to AltiVec. Like the AltiVec unit, these extensions are fixed-sized SIMD (Single Instruction, Multiple Data) vector units, capable of a high degree of parallelism. Just as for AltiVec, code that is written to use the Intel ISA typically performs many times faster than scalar code.

Before you start rewriting AltiVec instructions for the Intel instruction set architecture, read AltiVec/SSE Migration Guide. It outlines the key differences between architectures in terms of vector-based programming, gives an overview of the SIMD extensions on x86, lists what you need to do to build your code, and provides an in-depth discussion of alignment and other relevant issues.

See Also

The following resources are relevant for rewriting AltiVec instructions for the Intel architecture:

● “Architecture-Independent Vector-Based Code” (page 76) shows how to write a fast matrix-multiplication function with a minimum of architecture-specific coding.

● Intel software manuals describe the x86 vector extensions: http://developer.intel.com/design/Pentium4/documentation.htm

● Perf-Optimization-dev is a mailing list for discussions on analyzing and optimizing performance in Mac OS X. You can subscribe at: http://lists.apple.com/mailman/listinfo/perfoptimization-dev

Rosetta

Rosetta is a translation process that runs a PowerPC binary on an Intel-based Macintosh computer—it allows applications to run as nonnative binaries. Many, but not all, applications can run translated. Applications that run translated will never run as fast as they do as native binaries because the translation process itself incurs a processing cost.

How compatible your application is with Rosetta depends on the type of application it is. An application such as a word processor that has a lot of user interaction and low computational needs is quite compatible. An application that requires a moderate amount of user interaction and has some high computational needs, or that uses OpenGL, is most likely also quite compatible. One that has intense computing needs isn’t compatible. This includes applications that need to repeatedly compute fast Fourier transforms (FFTs), that compute complex models for 3-D modeling, or that compute ray tracing.

To the user, Rosetta is transparent. Unlike Classic, when the user launches an application there aren’t any visual cues to indicate that the application is translated. The user may perceive that the application is slow to start up or that the performance is slower than it is on a PowerPC-based Macintosh. The user can discover whether an application has only a PowerPC binary by looking at the Finder information for the application. (See “Determining Whether a Binary Is Universal” (page 18).)
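From the command line, the lipo tool reports the same information; the path below is illustrative only:

lipo -info /Applications/TextEdit.app/Contents/MacOS/TextEdit

A universal binary lists more than one architecture (for example, both ppc and i386), whereas a PowerPC-only binary lists just ppc.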
This appendix discusses the sorts of applications that can run translated, describes how Rosetta works, points out special considerations for translated applications, shows how to force an application to run translated using Rosetta, describes how to programmatically detect whether an application is running nonnatively, and provides troubleshooting information if your application won’t run translated but you think that it should.

What Can Be Translated?

Rosetta is designed to translate currently shipping applications that run on a PowerPC with a G3 or G4 processor and that are built for Mac OS X. That includes CFM as well as Mach-O PowerPC applications. Rosetta does not run the following:

● Applications built for any version of the Mac OS earlier than Mac OS X—that means Mac OS 9, Mac OS 8, Mac OS 7, and so forth

● The Classic environment

● Screen savers written for the PowerPC architecture

● Code that inserts preferences in the System Preferences pane

● Applications that require a G5 processor

● Applications that depend on one or more PowerPC-only kernel extensions

● Kernel extensions

● Java applications with JNI libraries

● Java applets in applications that Rosetta can translate; that means a web browser that Rosetta can run translated will not be able to load Java applets

Rosetta does not support precise exceptions. Any application that relies on register states being accurate in exception handlers or signal handlers will not function properly running with Rosetta.

For more information on the limitations of Java applications using Rosetta, see “Java Applications” (page 51) and Technical Q&A QA1295, Java on Intel-based Macintosh Computers, which is in the ADC Reference Library.

How It Works

When an application launches on an Intel-based Macintosh computer, the kernel detects whether the application has a native binary. If the binary is not native, the kernel launches the binary using Rosetta. If the application is one of those that can be translated, it launches and runs, although not as fast as it would as a native binary. Behind the scenes, Rosetta translates and executes the PowerPC binary code.

Rosetta runs in the same thread of control as the application. When Rosetta starts an application, it translates a block of application code and executes that block. As Rosetta encounters a call to a routine that it has not yet translated, it translates the needed routine and continues the execution. The result is a smooth and continual transitioning between translation and execution. In essence, Rosetta and your application work together in a kind of symbiotic relationship.

Rosetta optimizes translated code to deliver the best possible performance on the nonnative architecture. It uses a large translation buffer, and it caches code for reuse. Code that gets reused repeatedly in your application benefits the most because it needs to be translated only once. The system uses the cached translation, which is faster than translating the code again.

Special Considerations

Rosetta must run the entire process when it translates. This has implications for applications that use third-party plug-ins or any other component that must be loaded at the time your application launches. All parts (application, plug-ins, or other components needed at launch time) must run either nonnatively or natively.
For example, if your application is built as a universal binary but it uses a plug-in that has only a PowerPC binary, then your application needs to run nonnatively on an Intel-based Macintosh computer to use the nonnative plug-in.

Rosetta takes endian issues into account when it translates your application. Multibyte data that moves between your application and any system process is automatically handled for you—you don’t need to concern yourself with the endian format of the data. The following kinds of multibyte data can have endian issues if the data moves between:

● Your translated application and a native process that’s not a system process

● A custom pasteboard provided by your translated application and a custom pasteboard provided by a native application

● Data files or caches provided by your translated application and a native application

You might encounter this scenario while developing a universal binary. For example, if you’ve created a universal binary for a server process that your application relies on, and then test that process by running your application as a PowerPC binary, the endian format of the data passed from the server to your application would be wrong. You encounter the same problem if you create a universal binary for your application but have not yet done so for a server process needed by the application.

Structures that the system defines and that are written using system routines will work correctly. But consider the code in Listing A-1.

Listing A-1  A structure whose endian format depends on the architecture

typedef struct {
    int x;
    int y;
} data_t;

void savefile(data_t data, int filehandle)
{
    write(filehandle, &data, sizeof(data));
}

When run using Rosetta, the application will write a big-endian structure; x and y are both written as big-endian integers. When the application runs natively on an Intel-based Macintosh, it will write out a little-endian structure; x and y are written as little-endian integers. It is up to you to define data formats on disk to be of a canonical endian format. Endian-specific data formats are fine as long as any application that reads or writes the data understands what the endian format of the data is and treats the data appropriately.

Keep in mind that private frameworks and plug-ins can also encounter these sorts of endian issues. If a private framework creates a cache or data file, and the framework is a universal binary, then it will try to access the cache from both native and PPC processes. The framework either needs to account for the endian format of the cache when reading or writing data or needs to have two separate caches.

Forcing an Application to Run Translated

Assuming that the application meets the criteria described in “What Can Be Translated?” (page 65), applications that have only a PowerPC binary automatically run as translated on an Intel-based Macintosh. For testing purposes, there are several ways that you can force applications that have a universal binary to launch as a PowerPC binary on an Intel-based Macintosh:

● For applications, “Make a Setting in the Info Window” (page 69)

● For command-line tools, “Use Terminal” (page 69)

● For an application that you are writing, “Modify the Property List” (page 69)

● Programmatically, “Use the sysctlbyname Function” (page 70)

Each of these methods is described in this section.
Make a Setting in the Info Window

You can manually set which binary to execute on an Intel-based Macintosh computer by selecting the “Open using Rosetta” option in the Info window of the application. To set the option, click the application icon, then press Command-I to open the Info window. Make the setting, as shown in Figure A-1.

Figure A-1  The Info window for the Calculator application

Use Terminal

You can force a command-line tool to run translated by entering the following in Terminal:

ditto -arch ppc /tmp/toolname /tmp/toolname

Modify the Property List

You can set the default setting for the “Open using Rosetta” option by adding the following key to the Info.plist of your application bundle:

LSPrefersPPC

This key informs the system that the application should launch as a PowerPC binary and causes the “Open using Rosetta” checkbox to be selected. You might find this useful if you ship an application that has plug-ins that are not native at the time of shipping.

Use the sysctlbyname Function

The exec_affinity routine in Listing A-2 controls the preferred CPU type for sublaunched processes. You might find this routine useful if you are using fork and exec to launch applications from your application. The routine calls the sysctlbyname function with the "sysctl.proc_exec_affinity" string, passing a constant that specifies the CPU type. Pass CPU_TYPE_POWERPC to launch the PPC executable in a universal binary. (For information on sysctlbyname, see Mac OS X Man Pages.)

Listing A-2  A routine that controls the preferred CPU type for sublaunched processes

cpu_type_t exec_affinity(cpu_type_t new_cputype)
{
    cpu_type_t ret;
    cpu_type_t *newp = NULL;
    size_t sz = sizeof(cpu_type_t);

    if (new_cputype != 0)
        newp = &new_cputype;

    if (sysctlbyname("sysctl.proc_exec_affinity", &ret, &sz, newp,
                     newp ? sizeof(cpu_type_t) : 0) == -1) {
        fprintf(stderr, "exec_affinity: sysctlbyname failed: %s\n",
                strerror(errno));
        return -1;
    }
    return ret;
}

Preventing an Application from Opening Using Rosetta

To prevent an application from opening using Rosetta, add the following key to the Info.plist:

LSRequiresNativeExecution

Programmatically Detecting a Translated Application

Some developers may want to determine programmatically whether an application is running using Rosetta. For example, a developer writing device interface code may need to determine whether the user client is using the same endian format as the kernel. Listing A-3 is a utility routine that can call the sysctlbyname function on a process ID (pid). If you pass a process ID of 0 to the routine, it performs the call on the current process. Otherwise it performs the call on the process specified by the pid value that you pass. (For information on sysctlbyname, see Mac OS X Man Pages.)
Listing A-3  A utility routine for calling the sysctlbyname function

static int sysctlbyname_with_pid(const char *name, pid_t pid,
                                 void *oldp, size_t *oldlenp,
                                 void *newp, size_t newlen)
{
    if (pid == 0) {
        if (sysctlbyname(name, oldp, oldlenp, newp, newlen) == -1) {
            fprintf(stderr, "sysctlbyname_with_pid(0): sysctlbyname failed:"
                    "%s\n", strerror(errno));
            return -1;
        }
    } else {
        int mib[CTL_MAXNAME + 1];
        size_t len = CTL_MAXNAME;
        if (sysctlnametomib(name, mib, &len) == -1) {
            fprintf(stderr, "sysctlbyname_with_pid: sysctlnametomib failed:"
                    "%s\n", strerror(errno));
            return -1;
        }
        mib[len] = pid;
        len++;
        if (sysctl(mib, len, oldp, oldlenp, newp, newlen) == -1) {
            fprintf(stderr, "sysctlbyname_with_pid: sysctl failed:"
                    "%s\n", strerror(errno));
            return -1;
        }
    }
    return 0;
}

The is_pid_native routine shown in Listing A-4 calls the sysctlbyname_with_pid routine, passing the string "sysctl.proc_native". The is_pid_native routine determines whether the specified process is running natively or translated. The routine returns:

● 0 if the process is running translated using Rosetta

● 1 if the process is running natively on a PowerPC- or Intel-based Macintosh

● –1 if an unexpected error occurs

Listing A-4  A routine that determines whether a process is running natively or translated

int is_pid_native(pid_t pid)
{
    int ret = 0;
    size_t sz = sizeof(ret);

    if (sysctlbyname_with_pid("sysctl.proc_native", pid,
                              &ret, &sz, NULL, 0) == -1) {
        if (errno == ENOENT) {
            /* The sysctl.proc_native variable is unavailable on versions
               of Mac OS that predate Rosetta; treat the process as native. */
            return 1;
        }
        fprintf(stderr, "is_pid_native: sysctlbyname_with_pid failed:"
                "%s\n", strerror(errno));
        return -1;
    }
    return ret;
}

Note: On Mac OS X v10.4, the proc_native call fails if the current user doesn’t own the process being checked.

Troubleshooting

If you are convinced that your application falls into the category of those that should be able to run using Rosetta but it doesn’t run or it has unexpected behavior, you can follow the procedure in this section to debug your application. This procedure works only for PowerPC binaries—not for a universal binary—and is the only way you can debug a PowerPC binary on an Intel-based Macintosh. Xcode debugging does not work for translated applications.

To debug a PowerPC binary on an Intel-based Macintosh, follow these steps:

1. Open Terminal.

2. Enter the following two lines, where the bracketed items are placeholders for your application’s path and executable name.

For tcsh:

setenv OAH_GDB YES
/<path>/<application>.app/Contents/MacOS/<executable>

For bash:

export OAH_GDB=YES
/<path>/<application>.app/Contents/MacOS/<executable>

You’ll see the Rosetta process launch and wait for a port connection (Figure A-2: Rosetta listens for a port connection).

3. Open a second Terminal window and start up GDB with the following command:

gdb --oah

Using GDB on an Intel-based Macintosh computer is just like using GDB on a PowerPC Macintosh.

4. Attach your application:

attach <application name>

5. Press Tab. GDB automatically appends the process ID (pid) to your application name.

6. Press Return.

7. Type c to execute your application.

Important: Do not type run. Typing run will not execute your code. It will leave your application in a state that requires you to start over from the first step.

Figure A-3 shows the commands for initiating a debugging session for a PowerPC binary.
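Pulling the steps together, a session might look like the following sketch; the application name Foo is invented for illustration. In the first Terminal window:

export OAH_GDB=YES
/Applications/Foo.app/Contents/MacOS/Foo

In the second Terminal window (press Tab after typing the application name so that GDB appends the pid, then press Return):

gdb --oah
(gdb) attach Foo
(gdb) c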
After you start the session, you can debug in much the same way as you would debug a native process, except that you can’t call functions—either explicitly or implicitly—from within GDB. For example, you can’t inspect CF objects by calling CFShow.

Keep in mind that symbol files aren’t loaded at the start of the debugging session. They are loaded after your application is up and running. This means that any breakpoints you set are “pending breakpoints” until the executable and libraries are loaded.

Figure A-3  Terminal windows with the commands for debugging a PowerPC binary on an Intel-based Macintosh computer

Note: Debugging Rosetta applications from within either CodeWarrior or Xcode is not supported.

Architecture-Independent Vector-Based Code

The intention of this appendix is to show how to factor a mathematical calculation into architecture-independent and architecture-specific parts. Using matrix multiplication as an example, you’ll see how to write a function that works for both the PowerPC and the x86 architectures with a minimum of architecture-specific coding. You can then apply this approach to other, more complex mathematical calculations. The following basic operations are available on both architectures:

● Vector loads and stores

● Multiplication

● Addition

● An instruction to splat a float across a vector

For other types of calculations, you may need to write separate versions of code. Because of the differences in the number of registers and the pipeline depths between the two architectures, it is often advantageous to provide separate versions.

Note: There is a function for 4x4 matrix multiplication in the Accelerate framework (vecLib) that is tuned for both architectures. You can also call sgemm from Basic Linear Algebra Subprograms (BLAS) (also available in the Accelerate framework) to operate on larger matrices.

Architecture-Specific Code

Listing B-1 (page 77) shows the architecture-specific code you need to support matrix multiplication. The code calls the architecture-independent function MyMatrixMultiply, which is shown in Listing B-2 (page 82). The code shown in Listing B-1 works properly for both instruction set architectures only if you build the code as a universal binary. For more information, see “Building a Universal Binary” (page 11).

Note: The sample code makes use of a GCC extension to return a result from a code block ({}). The code may not compile correctly on other compilers. The extension is necessary because you cannot pass immediate values to an inline function, meaning that you must use a macro.

Listing B-1  Architecture-specific code needed to support matrix multiplication

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

// For each vector architecture...
#if defined( __VEC__ ) // AltiVec

// Set up a vector type for a float[4] array for each vector type
typedef vector float vFloat;

// Define some macros to map a virtual SIMD language to
// each actual SIMD language. For matrix multiplication, the tasks
// you need to perform are essentially the same between the two
// instruction set architectures (ISA).
#define vSplat( v, i )   ({ vFloat z = vec_splat( v, i ); /* return */ z; })
#define vMADD            vec_madd
#define vLoad( ptr )     vec_ld( 0, ptr )
#define vStore( v, ptr ) vec_st( v, 0, ptr )
#define vZero()          (vector float) vec_splat_u32(0)

#elif defined( __SSE__ ) // SSE

// The header file xmmintrin.h defines C functions for using
// SSE and SSE2 according to the Intel C programming interface
#include <xmmintrin.h>

// Set up a vector type for a float[4] array for each vector type
typedef __m128 vFloat;

// Also define some macros to map a virtual SIMD language to
// each actual SIMD language.

// Note that because i MUST be an immediate, it is incorrect here
// to alias i to a stack-based copy and replicate that 4 times.
#define vSplat( v, i ) ({ __m128 a = v; a = _mm_shuffle_ps( a, a, \
                          _MM_SHUFFLE(i,i,i,i) ); /* return */ a; })

inline __m128 vMADD( __m128 a, __m128 b, __m128 c )
{
    return _mm_add_ps( c, _mm_mul_ps( a, b ) );
}

#define vLoad( ptr )     _mm_load_ps( (float*) (ptr) )
#define vStore( v, ptr ) _mm_store_ps( (float*) (ptr), v )
#define vZero()          _mm_setzero_ps()

#else // Scalar

#warning To compile vector code, you must specify -faltivec, -msse, or both -faltivec and -msse
#warning Compiling for scalar code.

// Some scalar equivalents to show what the above vector
// versions accomplish

// A vector, declared as a struct with 4 scalars
typedef struct
{
    float a;
    float b;
    float c;
    float d;
} vFloat;

// Splat element i across the whole vector and return it
#define vSplat( v, i ) ({ vFloat z; z.a = z.b = z.c = z.d = \
                          ((float*) &v)[i]; /* return */ z; })

// Perform a fused multiply-add operation on architectures that support it
// result = X * Y + Z
inline vFloat vMADD( vFloat X, vFloat Y, vFloat Z )
{
    vFloat result;
    result.a = X.a * Y.a + Z.a;
    result.b = X.b * Y.b + Z.b;
    result.c = X.c * Y.c + Z.c;
    result.d = X.d * Y.d + Z.d;
    return result;
}

// Return a vector that starts at the given address
#define vLoad( ptr ) ( (vFloat*) ptr )[0]

// Write a vector to the given address
#define vStore( v, ptr ) ( (vFloat*) ptr )[0] = v

// Return a vector full of zeros
#define vZero() ({ vFloat z; z.a = z.b = z.c = z.d = 0.0f; /* return */ z; })

#endif

// Prototype for a vector matrix multiply function
void MyMatrixMultiply( vFloat A[4], vFloat B[4], vFloat C[4] );

int main( void )
{
    // The vFloat type (defined previously) is a vector or scalar array
    // that contains 4 floats.
    // Thus each one of these is a 4x4 matrix, stored in the C storage order.
    vFloat A[4];
    vFloat B[4];
    vFloat C1[4];
    vFloat C2[4];
    int i, j, k;

    // Pointers to the elements in A, B, C1 and C2
    float *a = (float*) &A;
    float *b = (float*) &B;
    float *c1 = (float*) &C1;
    float *c2 = (float*) &C2;

    // Initialize the data
    for( i = 0; i < 16; i++ )
    {
        a[i] = (double) (rand() - RAND_MAX/2) / (double) (RAND_MAX);
        b[i] = (double) (rand() - RAND_MAX/2) / (double) (RAND_MAX);
        c1[i] = c2[i] = 0.0;
    }

    // Perform the brute-force version of matrix multiplication
    // and use this later to check for correctness
    printf( "Doing simple matrix multiply...\n" );
    for( i = 0; i < 4; i++ )
        for( j = 0; j < 4; j++ )
        {
            float result = 0.0f;
            for( k = 0; k < 4; k++ )
                result += a[ i * 4 + k ] * b[ k * 4 + j ];
            c1[ i * 4 + j ] = result;
        }

    // The vector version
    printf( "Doing vector matrix multiply...\n" );
    MyMatrixMultiply( A, B, C2 );

    // Make sure that the results are correct
    // Allow for some rounding error here
    printf( "Verifying results..." );
    for( i = 0; i < 16; i++ )
        if( fabs( c1[i] - c2[i] ) > 1e-6 )
            printf( "failed at %i,%i: %8.17g %8.17g\n", i/4, i&3,
                    c1[i], c2[i] );
    printf( "done.\n" );

    return 0;
}

The 4x4 matrix multiplication algorithm shown in Listing B-2 (page 82) is a simple matrix multiplication algorithm performed with four columns in parallel. The basic calculation is as follows:

C[i][j] = sum( A[i][k] * B[k][j], k = 0 ... width of A )

It can be rewritten in mathematical vector notation for rows of C as the following:

C[i][] = sum( A[i][k] * B[k][], k = 0 ... width of A )

Where:

C[i][] is the ith row of C
A[i][k] is the element of A at row i and column k
B[k][] is the kth row of B

An example calculation for C[0][] is as follows:

C[0][] = A[0][0] * B[0][] + A[0][1] * B[1][] + A[0][2] * B[2][] + A[0][3] * B[3][]

This calculation is simply a multiplication of a scalar times a vector, followed by addition of similar elements between two vectors, repeated four times to get a vector that contains four sums of products. Performing the calculation in this way saves you from transposing B to obtain the B columns, and also saves you from adding across vectors, which is inefficient. All operations occur between similar elements of two different vectors.

Architecture-Independent Matrix Multiplication

Listing B-2 shows architecture-independent vector code that performs matrix multiplication. This code compiles as scalar if you do not set up the appropriate compiler flags for PowerPC (-faltivec) or x86 (-msse), or if AltiVec is unavailable on the PowerPC. The matrices used in the MyMatrixMultiply function assume the C storage order for 2D arrays, not the FORTRAN storage order.
Listing B-2  Architecture-independent code that performs matrix multiplication

void MyMatrixMultiply( vFloat A[4], vFloat B[4], vFloat C[4] )
{
    vFloat A1 = vLoad( A );     // Row 1 of A
    vFloat A2 = vLoad( A + 1 ); // Row 2 of A
    vFloat A3 = vLoad( A + 2 ); // Row 3 of A
    vFloat A4 = vLoad( A + 3 ); // Row 4 of A
    vFloat C1 = vZero();        // Row 1 of C, initialized to zero
    vFloat C2 = vZero();        // Row 2 of C, initialized to zero
    vFloat C3 = vZero();        // Row 3 of C, initialized to zero
    vFloat C4 = vZero();        // Row 4 of C, initialized to zero
    vFloat B1 = vLoad( B );     // Row 1 of B
    vFloat B2 = vLoad( B + 1 ); // Row 2 of B
    vFloat B3 = vLoad( B + 2 ); // Row 3 of B
    vFloat B4 = vLoad( B + 3 ); // Row 4 of B

    // Multiply the first row of B by the first column of A (do not sum across)
    C1 = vMADD( vSplat( A1, 0 ), B1, C1 );
    C2 = vMADD( vSplat( A2, 0 ), B1, C2 );
    C3 = vMADD( vSplat( A3, 0 ), B1, C3 );
    C4 = vMADD( vSplat( A4, 0 ), B1, C4 );

    // Multiply the second row of B by the second column of A and
    // add to the previous result (do not sum across)
    C1 = vMADD( vSplat( A1, 1 ), B2, C1 );
    C2 = vMADD( vSplat( A2, 1 ), B2, C2 );
    C3 = vMADD( vSplat( A3, 1 ), B2, C3 );
    C4 = vMADD( vSplat( A4, 1 ), B2, C4 );

    // Multiply the third row of B by the third column of A and
    // add to the previous result (do not sum across)
    C1 = vMADD( vSplat( A1, 2 ), B3, C1 );
    C2 = vMADD( vSplat( A2, 2 ), B3, C2 );
    C3 = vMADD( vSplat( A3, 2 ), B3, C3 );
    C4 = vMADD( vSplat( A4, 2 ), B3, C4 );

    // Multiply the fourth row of B by the fourth column of A and
    // add to the previous result (do not sum across)
    C1 = vMADD( vSplat( A1, 3 ), B4, C1 );
    C2 = vMADD( vSplat( A2, 3 ), B4, C2 );
    C3 = vMADD( vSplat( A3, 3 ), B4, C3 );
    C4 = vMADD( vSplat( A4, 3 ), B4, C4 );

    // Write out the result to the destination
    vStore( C1, C );
    vStore( C2, C + 1 );
    vStore( C3, C + 2 );
    vStore( C4, C + 3 );
}

32-Bit Application Binary Interface

Mac OS X ABI Function Call Guide describes the function-calling conventions used in all the architectures supported by Mac OS X. For detailed information about the IA-32 ABI, read the section “IA-32 Function Calling Conventions,” which:

● Lists data types, sizes, and natural alignment

● Describes stack structure

● Discusses prologs and epilogs

● Provides details on how arguments are passed and results are returned

● Tells which registers preserve their value after a procedure call and which ones are volatile

64-Bit Application Binary Interface

For information on the Apple x86-64 ABI, see:

● Mac OS X ABI Function Call Guide

● Mac OS X ABI Mach-O File Format Reference

● Mach-O Programming Topics

Document Revision History

This table describes the changes to Universal Binary Programming Guidelines, Second Edition.

2009-02-04
Made minor content additions. Updated “Programmatically Detecting a Translated Application” (page 71) with details about the behavior of the sysctl call when working with the proc_native variable.

2007-02-26
Updated for Mac OS X v10.5.
Removed the appendix “Using PowerPlant” because an Open Source version that supports Intel-based Macintosh computers is available. See “Metrowerks PowerPlant” (page 53). Replaced the content in “64-Bit Application Binary Interface” (page 85) with cross-references to documents that are more thorough at describing the ABI.

2007-01-08
Added information on 64-bit and made technical corrections. Added “64-Bit Application Binary Interface” (page 85). Added a note to “OpenGL” (page 55). Revised the explanation of the return values for the code in Listing A-4 (page 72). Removed the code example in “Archived Bit Fields” (page 46) because it was incorrect.

2006-07-24
Made a few minor technical corrections. Revised “Network-Related Data” (page 34). Clarified how Listing A-4 (page 72) works.

2006-06-28
Fixed link. Added “PostScript Printing” (page 59). Redirected link from Kernel Extensions Reference to Kernel Framework Reference.

2006-05-23
Removed outdated links and made a few other minor changes. Revised code regarding flippers to use an explicit UInt16 pointer and to assign back to dataptr the advanced countPtr. Updated instructions in “Troubleshooting” (page 73). Added information about the CCSResourcesFileMapped flag to “Using PowerPlant”. Removed links to documentation that is no longer relevant. Added a note to “LStream.h” concerning reading and writing bool values.

2006-04-04
Corrected two function names. Revised information in “32-Bit Application Binary Interface” (page 84) so that it now only provides a link to the primary ABI reference.

2006-03-08
Improved wording and added information on Spotlight importers. Added information to “Objective-C Runtime: Sending Messages” and “Objective-C: Messages to nil.”

2006-02-07
Improved the wording in several sections. Revised wording in “Bit Shifting” (page 47), “Bit Test, Set, and Clear Functions: Carbon and POSIX” (page 47), “Troubleshooting” (page 73), and “Guidelines for Swapping Bytes” (page 28). Revised code in Listing A-4 (page 72) by adding a statement to handle versions of Mac OS that predate Rosetta.

2006-01-10
Updated content for Mac OS X v10.4.4. Removed the note about preliminary documentation from “Introduction to Universal Binary Programming Guidelines” (page 8). Changed Xcode 2.1 to Xcode 2.2 in various places throughout the document because this is the recommended version for building a universal binary. Updated screenshots. Updated information in “Disk Partitions” (page 48), “Finder Information and Low-Level File System Operations” (page 49), “Multithreading” (page 53), “Objective-C: Messages to nil” (page 53), “QuickTime Components” (page 60), “Runtime Code Generation” (page 61), and “Values in an Array” (page 38). Added the sections “Code on the Stack: Disabling Execution” (page 22), “Extensible Firmware Interface (EFI)” (page 24), and “Mach Processes: The Task for PID Function” (page 52). In “Rosetta” (page 65), updated the sections “What Can Be Translated?” (page 65) and “Forcing an Application to Run Translated” (page 68). In “Rosetta” (page 65), added the section “Programmatically Detecting a Translated Application” (page 71).

2005-12-06
Made refinements to existing content. Added code that shows how to swap bytes for values in an array. See “Values in an Array” (page 38).
Added “Automator Scripts” (page 46), “Dashboard Widgets” (page 48), and “QuickTime Metadata Functions” (page 61). Updated for Xcode 2.2; includes pointers to newly revised tools documentation as well as improved guidelines and tips.

2005-11-09
Revised “Building Your Code” (page 12). Added “Debugging” (page 16). Added information to “Pixel Data” (page 58) on how to track down color problems. Added the section “Quartz Bitmap Data” (page 59). Added information about IP addresses and other “false” numerical values. In several places throughout the book, added cross-references to newly revised, relevant documentation. Added clarification on the long double data type. See “Data Types” (page 23). Added information about using the PinRect function. See “QuickDraw Routines” (page 60). Added information about the need for Xcode targets to be native. See “Build Assumptions” (page 11) and “Building Your Code” (page 12). Corrected information about how ATS for Fonts handles font resources. See “Font-Related Resources” (page 50). Changed extended markup language to extensible markup language. Improved the grammar in “Objective-C: Messages to nil” (page 53). Fixed a link to information on Hyper-Threading Technology. See the “See Also” (page 62) section in “Guidelines for Specific Scenarios” (page 46). Made numerous editorial changes throughout.

2005-10-04
Made technical improvements and minor editorial changes throughout. Added a few resources to See Also in “Building a Universal Binary” (page 11). Changed the title of the appendix Fast Matrix Multiplication to “Architecture-Independent Vector-Based Code” (page 76). Added new sections to the chapter “Guidelines for Specific Scenarios” (page 46). See “FireWire Device Access” (page 49) and “USB Device Access” (page 62). Added information about a relevant technical note to “QuickTime Components” (page 60). Added an example of a color issue to “Troubleshooting Your Built Application” (page 16). Revised the section “Objective-C: Messages to nil” (page 53). Revised the code for swapping floating-point values. See “Floating-Point Values” (page 32). Added a reference to Cross-Development Programming Guide in the chapter “Building a Universal Binary” (page 11). Made corrections to the section “OpenGL” (page 55).

2005-09-08
Updated a substantial amount of task and conceptual information. Completely replaced information related to PowerPlant. Removed most of the content from “Preparing Vector-Based Code” (page 63) because the document AltiVec/SSE Migration Guide provides a more complete discussion of porting AltiVec code to SSE. Removed most of the content from the appendix titled Application Binary Interface because the document Mac OS X ABI Function Call Guide provides a more complete description of the IA-32 ABI for Intel-based Macintosh computers. Added a section—“Java Applications” (page 51)—that provides information about Java on Intel-based Macintosh computers, including what happens under Rosetta. Added cross-references to a technical note on this topic to “Rosetta” (page 65).

2005-08-11
Numerous minor technical and editorial changes throughout. Removed the appendix titled “x86 Equivalent Instructions for AltiVec Instructions.” Made numerous minor technical refinements and fixed a few typographical errors.
2005-07-07
Fixed typographical and linking errors. Made several improvements to technical content.

2005-06-17
New document that describes the architectural differences between PowerPC and Intel and provides tips for writing code that can run on both.

2005-06-07

Apple Inc.
© 2005, 2009 Apple Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, mechanical, electronic, photocopying, recording, or otherwise, without prior written permission of Apple Inc., with the following exceptions: Any person is hereby authorized to store documentation on a single computer for personal use only and to print copies of documentation for personal use provided that the documentation contains Apple’s copyright notice. No licenses, express or implied, are granted with respect to any of the technology described in this document. Apple retains all intellectual property rights associated with the technology described in this document. This document is intended to assist application developers to develop applications only for Apple-labeled computers.

Apple Inc.
1 Infinite Loop
Cupertino, CA 95014
408-996-1010

Apple, the Apple logo, AppleScript, Carbon, Cocoa, ColorSync, eMac, Finder, FireWire, Logic, Mac, Mac OS, Macintosh, Objective-C, OS X, Pages, Panther, Quartz, QuickDraw, QuickTime, Rosetta, Spotlight, and Xcode are trademarks of Apple Inc., registered in the U.S. and other countries. Intel and Intel Core are registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Java is a registered trademark of Oracle and/or its affiliates. MMX is a trademark of Intel Corporation or its subsidiaries in the United States and other countries. OpenGL is a registered trademark of Silicon Graphics, Inc. PowerPC and the PowerPC logo are trademarks of International Business Machines Corporation, used under license therefrom. UNIX is a registered trademark of The Open Group.

Even though Apple has reviewed this document, APPLE MAKES NO WARRANTY OR REPRESENTATION, EITHER EXPRESS OR IMPLIED, WITH RESPECT TO THIS DOCUMENT, ITS QUALITY, ACCURACY, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. AS A RESULT, THIS DOCUMENT IS PROVIDED “AS IS,” AND YOU, THE READER, ARE ASSUMING THE ENTIRE RISK AS TO ITS QUALITY AND ACCURACY. IN NO EVENT WILL APPLE BE LIABLE FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES RESULTING FROM ANY DEFECT OR INACCURACY IN THIS DOCUMENT, even if advised of the possibility of such damages. THE WARRANTY AND REMEDIES SET FORTH ABOVE ARE EXCLUSIVE AND IN LIEU OF ALL OTHERS, ORAL OR WRITTEN, EXPRESS OR IMPLIED. No Apple dealer, agent, or employee is authorized to make any modification, extension, or addition to this warranty. Some states do not allow the exclusion or limitation of implied warranties or liability for incidental or consequential damages, so the above limitation or exclusion may not apply to you. This warranty gives you specific legal rights, and you may also have other rights which vary from state to state.

Object-Oriented Programming with Objective-C

Contents

Introduction
  Who Should Read This Document
  Organization of This Document
  See Also
Why Objective-C?
Object-Oriented Programming
  Data and Operations
  Interface and Implementation
The Object Model
  The Messaging Metaphor
  Classes
  Modularity
  Reusability
  Mechanisms of Abstraction
  Encapsulation
  Polymorphism
  Inheritance
  Class Hierarchies
  Subclass Definitions
  Uses of Inheritance
  Dynamism
  Dynamic Typing
  Dynamic Binding
  Dynamic Loading
Structuring Programs
  Outlet Connections
  Extrinsic and Intrinsic Connections
  Activating the Object Network
  Aggregation and Decomposition
  Models and Frameworks
Structuring the Programming Task
  Collaboration
  Organizing Object-Oriented Projects
  Designing on a Large Scale
  Separating the Interface from the Implementation
  Dividing the Work into Modules
  Keeping the Interface Simple
  Making Decisions Dynamically
  Inheriting Generic Code
  Reusing Tested Code
Document Revision History

Figures

Object-Oriented Programming
  Figure 2-1  Interface and implementation
The Object Model
  Figure 3-1  An object
  Figure 3-2  Objects in a network
  Figure 3-3  An inheritance hierarchy
Structuring Programs
  Figure 4-1  Outlets

Introduction

An object-oriented approach to application development makes programs more intuitive to design, faster to develop, more amenable to modification, and easier to understand. Most object-oriented development environments consist of at least three parts:

● A library of objects

● A set of development tools

● An object-oriented programming language and support library

The Objective-C language is a programming language designed to enable sophisticated object-oriented programming. Objective-C is defined as a small but powerful set of extensions to the standard ANSI C language. Its additions to C are mostly based on Smalltalk, one of the first object-oriented programming languages. Objective-C is designed to give C full object-oriented programming capabilities and to do so in a simple and straightforward way.

Important: This document does not describe the Objective-C language itself. To learn about the language, see The Objective-C Programming Language.

Every object-oriented programming language and environment has a different perspective on what object-oriented means, how objects behave, and how programs might be structured. This document offers the Objective-C perspective.

Who Should Read This Document

For those who have never used object-oriented programming to create applications, this document is designed to help you become familiar with object-oriented development. It spells out some of the implications of object-oriented design and gives you a flavor of what writing an object-oriented program is really like. If you have developed applications using an object-oriented environment, this document will help you understand the fundamental concepts that are essential to understanding how to use Objective-C effectively and how to structure a program that uses Objective-C.

Because this isn’t a document about C, it assumes some prior acquaintance with that language. However, it doesn’t have to be an extensive acquaintance. Object-oriented programming in Objective-C is sufficiently different from procedural programming in ANSI C that you won’t be hampered if you’re not an experienced C programmer.
Organization of This Document

This document is divided into several chapters:

● “Why Objective-C?” explains why Objective-C was chosen as the development language for the Cocoa frameworks.
● “Object-Oriented Programming” discusses the rationale for object-oriented programming languages and introduces much of the terminology. It develops the ideas behind object-oriented programming techniques. Even if you’re already familiar with object-oriented programming, you are encouraged to read this chapter to gain a sense of the Objective-C perspective on object orientation and its use of terminology.
● “The Object Model” describes how you can think of a program in terms of units that combine state and behavior—objects. It then explains how you characterize these objects as belonging to a particular class, how one class can inherit state and behavior from another class, and how objects can send messages to other objects.
● “Structuring Programs” explains how you think about designing an object-oriented program by creating connections between objects. It introduces the techniques of aggregation and decomposition, which divide responsibility between different sorts of object, and the role of frameworks in defining libraries of objects designed to work together.
● “Structuring the Programming Task” discusses issues of project management related to collaboration among programmers and to code implementation.

See Also

The Objective-C Programming Language describes the Objective-C programming language.

Objective-C Runtime Programming Guide describes how you can interact with the Objective-C runtime.

Objective-C Runtime Reference describes the data structures and functions of the Objective-C runtime support library. Your programs can use these interfaces to interact with the Objective-C runtime system. For example, you can add classes or methods, or obtain a list of all class definitions for loaded classes.

Why Objective-C?

The Objective-C language was chosen for a variety of reasons. First and foremost, it’s an object-oriented language. The kind of functionality that’s packaged in the Cocoa frameworks can only be delivered through object-oriented techniques. Second, because Objective-C is an extension of standard ANSI C, existing C programs can be adapted to use the software frameworks without losing any of the work that went into their original development. Because Objective-C incorporates C, you get all the benefits of C when working within Objective-C. You can choose when to do something in an object-oriented way (define a new class, for example) and when to stick to procedural programming techniques (define a structure and some functions instead of a class).

Moreover, Objective-C is a fundamentally simple language. Its syntax is small, unambiguous, and easy to learn. Object-oriented programming, with its self-conscious terminology and emphasis on abstract design, often presents a steep learning curve to new recruits. A well-organized language like Objective-C can make becoming a proficient object-oriented programmer that much less difficult.

Compared to other object-oriented languages based on C, Objective-C is very dynamic. The compiler preserves a great deal of information about the objects themselves for use at runtime. Decisions that otherwise might be made at compile time can be postponed until the program is running.
Dynamism gives Objective-C programs unusual flexibility and power. For example, it yields two big benefits that are hard to get with other nominally object-oriented languages:

● Objective-C supports an open style of dynamic binding, a style that can accommodate a simple architecture for interactive user interfaces. Messages are not necessarily constrained by either the class of the receiver or even the method name, so a software framework can allow for user choices at runtime and permit developers freedom of expression in their design. (Terminology such as dynamic binding, message, class, and receiver is explained in due course in this document.)
● Dynamism enables the construction of sophisticated development tools. An interface to the runtime system provides access to information about running applications, so it’s possible to develop tools that monitor, intervene, and reveal the underlying structure and activity of Objective-C applications.

Historical note: As a language, Objective-C has a long history. It was created at the Stepstone company in the early 1980s by Brad Cox and Tom Love. It was licensed by NeXT Computer Inc. in the late 1980s to develop the NeXTStep frameworks that preceded Cocoa. NeXT extended the language in several ways, for example, with the addition of protocols.

Object-Oriented Programming

As humans, we’re constantly faced with myriad facts and impressions that we must make sense of. To do so, we must abstract underlying structure away from surface details and discover the fundamental relations at work. Abstractions reveal causes and effects, expose patterns and frameworks, and separate what’s important from what’s not. Object orientation provides an abstraction of the data on which you operate; moreover, it provides a concrete grouping between the data and the operations you can perform with the data—in effect giving the data behavior.

Data and Operations

Programming languages have traditionally divided the world into two parts—data and operations on data. Data is static and immutable, except as the operations may change it. The procedures and functions that operate on data have no lasting state of their own; they’re useful only in their ability to affect data.

This division is, of course, grounded in the way computers work, so it’s not one that you can easily ignore or push aside. Like the equally pervasive distinctions between matter and energy and between nouns and verbs, it forms the background against which we work. At some point, all programmers—even object-oriented programmers—must lay out the data structures that their programs will use and define the functions that will act on the data. With a procedural programming language like C, that’s about all there is to it. The language may offer various kinds of support for organizing data and functions, but it won’t divide the world any differently. Functions and data structures are the basic elements of design.

Object-oriented programming doesn’t so much dispute this view of the world as restructure it at a higher level. It groups operations and data into modular units called objects and lets you combine objects into structured networks to form a complete program. In an object-oriented programming language, objects and object interactions are the basic elements of design.

Every object has both state (data) and behavior (operations on data). In that, they’re not much different from ordinary physical objects.
It’s easy to see how a mechanical device, such as a pocket watch or a piano, embodies both state and behavior. But almost anything that’s designed to do a job does, too. Even simple things with no moving parts such as an ordinary bottle combine state (how full the bottle is, whether or not it’s open, how warm its contents are) with behavior (the ability to dispense its contents at various flow rates, to be opened or closed, to withstand high or low temperatures).

It’s this resemblance to real things that gives objects much of their power and appeal. They can not only model components of real systems, but can equally well fulfill assigned roles as components in software systems.

Interface and Implementation

To invent programs, you need to be able to capture abstractions and express them in the program design. It’s the job of a programming language to help you do this. The language should facilitate the process of invention and design by letting you encode abstractions that reveal the way things work. It should let you make your ideas concrete in the code you write. Surface details shouldn’t obscure the architecture of your program.

All programming languages provide devices that help express abstractions. In essence, these devices are ways of grouping implementation details, hiding them, and giving them, at least to some extent, a common interface—much as a mechanical object separates its interface from its implementation, as illustrated in Figure 2-1.

[Figure 2-1: Interface and implementation]

Looking at such a unit from the inside, as the implementer, you’d be concerned with what it’s composed of and how it works. Looking at it from the outside, as the user, you’re concerned only with what it is and what it does. You can look past the details and think solely in terms of the role that the unit plays at a higher level.

The principal units of abstraction in the C language are structures and functions. Both, in different ways, hide elements of the implementation:

● On the data side of the world, C structures group data elements into larger units that can then be handled as single entities. While some code must delve inside the structure and manipulate the fields separately, much of the program can regard it as a single thing—not as a collection of elements, but as what those elements taken together represent. One structure can include others, so a complex arrangement of information can be built from simpler layers. In modern C, the fields of a structure live in their own namespace—that is, their names won’t conflict with identically named data elements outside the structure. Partitioning the program namespace is essential for keeping implementation details out of the interface. Imagine, for example, the enormous task of assigning a different name to every piece of data in a large program and of making sure new names don’t conflict with old ones.

● On the procedural side of the world, functions encapsulate behaviors that can be used repeatedly without being reimplemented. Data elements local to a function, like the fields within a structure, are protected within their own namespace. Because functions can reference (call) other functions, complex behaviors can be built from smaller pieces. Functions are reusable.
Once defined, they can be called any number of times without again considering the implementation. The most generally useful functions can be collected in libraries and reused in many different applications. All the user needs is the function interface, not the source code. However, unlike data elements, functions aren’t partitioned into separate namespaces. Each function must have a unique name. Although the function may be reusable, its name is not.

C structures and functions are able to express significant abstractions, but they maintain the distinction between data and operations on data. In a procedural programming language, the highest units of abstraction still live on one side or the other of the data-versus-operations divide. The programs you design must always reflect, at the highest level, the way the computer works.

Object-oriented programming languages don’t lose any of the virtues of structures and functions—they go a step further and add a unit capable of abstraction at a higher level, a unit that hides the interaction between a function and its data.

Suppose, for example, that you have a group of functions that act on a particular data structure. You want to make those functions easier to use by, as far as possible, taking the structure out of the interface. So you supply a few additional functions to manage the data. All the work of manipulating the data structure—allocating memory for it, initializing it, getting information from it, modifying values within it, keeping it up to date, and freeing its memory—is done through the functions. All the user does is call the functions and pass the structure to them. With these changes, the structure has become an opaque token that other programmers never need to look inside. They can concentrate on what the functions do, not on how the data is organized. You’ve taken the first step toward creating an object.
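That first step can be sketched in plain C (which Objective-C incorporates). This is a minimal, hypothetical illustration, with the Counter type and its function names invented for the purpose: a structure manipulated only through a small family of managing functions, so that callers treat it as an opaque token.

    #include <stdlib.h>

    /* The structure is defined only here, in the implementation;
       callers see just the functions and an opaque pointer. */
    struct Counter { int value; };
    typedef struct Counter Counter;

    Counter *CounterCreate(void) {              /* allocates and initializes */
        Counter *c = malloc(sizeof *c);
        if (c) c->value = 0;
        return c;
    }
    void CounterIncrement(Counter *c) { c->value += 1; }
    int  CounterValue(const Counter *c) { return c->value; }
    void CounterFree(Counter *c) { free(c); }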
The next step is to give this idea support in the programming language and completely hide the data structure so that it doesn’t even have to be passed between the functions. The data becomes an internal implementation detail; all that’s exported to users is a functional interface. Because objects completely encapsulate their data (hide it), users can think of them solely in terms of their behavior.

With this step, the interface to the functions has become much simpler. Callers don’t need to know how they’re implemented (what data they use). It’s fair now to call this an object.

The hidden data structure unites all the functions that share access to it. So an object is more than a collection of random functions; it’s a bundle of related behaviors that are supported by shared data. To use a function that belongs to an object, you first create the object (thus giving it its internal data structure), and then tell the object which function it should perform. You begin to think in terms of what the object does, rather than in terms of the individual functions.

This progression from thinking about functions and data structures to thinking about object behaviors is the essence of learning object-oriented programming. It may seem unfamiliar at first, but as you gain experience with object-oriented programming, you find it’s a more natural way to think about things. Everyday programming terminology is replete with analogies to real-world objects of various kinds—lists, containers, tables, controllers, even managers. Implementing such things as programming objects merely extends the analogy in a natural way.

A programming language can be judged by the kinds of abstractions that it enables you to encode. You shouldn’t be distracted by extraneous matters or forced to express yourself using a vocabulary that doesn’t match the reality you’re trying to capture. If, for example, you must always tend to the business of keeping the right data matched with the right procedure, you’re forced at all times to be aware of the entire program at a low level of implementation. While you might still invent programs at a high level of abstraction, the path from imagination to implementation can become quite tenuous—and more and more difficult as programs become bigger and more complicated. By providing another, higher level of abstraction, object-oriented programming languages give you a larger vocabulary and a richer model to program in.

The Object Model

The insight of object-oriented programming is to combine state and behavior—data and operations on data—in a high-level unit, an object, and to give it language support. An object is a group of related functions and a data structure that serves those functions. The functions are known as the object’s methods, and the fields of its data structure are its instance variables. The methods wrap around the instance variables and hide them from the rest of the program, as Figure 3-1 illustrates.

[Figure 3-1: An object]

If you’ve ever tackled any kind of difficult programming problem, it’s likely that your design has included groups of functions that work on a particular kind of data—implicit “objects” without the language support. Object-oriented programming makes these function groups explicit and permits you to think in terms of the group, rather than its components. The only way to an object’s data, the only interface, is through its methods.

By combining both state and behavior in a single unit, an object becomes more than either alone; the whole really is greater than the sum of its parts. An object is a kind of self-sufficient “subprogram” with jurisdiction over a specific functional area. It can play a full-fledged modular role within a larger program design.

Terminology: Object-oriented terminology varies from language to language. For example, in C++, methods are called member functions and instance variables are known as data members. This document uses the terminology of Objective-C, which has its basis in Smalltalk.

For example, if you were to write a program that modeled home water usage, you might invent objects to represent the various components of the water-delivery system. One might be a Faucet object that would have methods to start and stop the flow of water, set the rate of flow, return the amount of water consumed in a given period, and so on. To do this work, a Faucet object would need instance variables to keep track of whether the tap is open or shut, how much water is being used, and where the water is coming from.

Clearly, a programmatic Faucet object can be smarter than a real one (it’s analogous to a mechanical faucet with lots of gauges and instruments attached). But even a real faucet, like any system component, exhibits both state and behavior. To effectively model a system, you need programming units, like objects, that also combine state and behavior.
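In Objective-C, a class along these lines might be sketched as follows. This is a hypothetical outline, not code from this document; it simply shows how the methods form the only interface to the instance variables they wrap.

    #import <Foundation/Foundation.h>

    @interface Faucet : NSObject {
        BOOL   open;        // whether the tap is open or shut
        double flowRate;    // current rate of flow
    }
    - (void)startFlow;
    - (void)stopFlow;
    - (void)setFlowRate:(double)rate;
    - (double)waterUsedInPeriod:(double)seconds;
    @end

    @implementation Faucet
    - (void)startFlow { open = YES; }
    - (void)stopFlow  { open = NO; }
    - (void)setFlowRate:(double)rate { flowRate = rate; }
    - (double)waterUsedInPeriod:(double)seconds {
        return open ? flowRate * seconds : 0.0;   // deliberately simplistic
    }
    @end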
A program consists of a network of interconnected objects that call upon each other to solve a part of the puzzle (as illustrated in Figure 3-2). Each object has a specific role to play in the overall design of the program and is able to communicate with other objects. Objects communicate through messages, which are requests to perform methods.

[Figure 3-2: Objects in a network]

The objects in the network won’t all be the same. For example, in addition to Faucet objects, the program that models water usage might also have Pipe objects that can deliver water to the Faucet and Valve objects to regulate the flow among pipes. There could be a Building object to coordinate a set of pipes, valves, and faucets, some Appliance objects—corresponding to dishwashers, toilets, and washing machines—that can turn valves on and off, and maybe some User objects to work the appliances and faucets. When a Building object is asked how much water is being used, it might call upon each Faucet and Valve object to report its current state. When a user starts up an appliance, the appliance will need to turn on a valve to get the water it requires.

The Messaging Metaphor

Every programming paradigm comes with its own terminology and metaphors. The jargon associated with object-oriented programming invites you to think about what goes on in a program from a particular perspective.

There’s a tendency, for example, to think of objects as actors and to endow them with human-like intentions and abilities. It’s tempting sometimes to talk about an object deciding what to do about a situation, asking other objects for information, introspecting about itself to get requested information, delegating responsibility to another object, or managing a process. Rather than think in terms of functions or methods doing the work, as you would in a procedural programming language, this metaphor asks you to think of objects as performing their methods. Objects are not passive containers for state and behavior, but are said to be the agents of the program’s activity.

This metaphor is actually very useful. An object is like an actor in a couple of respects: It has a particular role to play within the overall design of the program, and within that role it can act fairly independently of the other parts of the program. It interacts with other objects as they play their own roles, but it is self-contained and to a certain extent can act on its own. Like an actor onstage, it can’t stray from the script, but the role it plays can be multifaceted and complex.

The idea of objects as actors fits nicely with the principal metaphor of object-oriented programming—the idea that objects communicate through messages. Instead of calling a method as you would a function, you send a message to an object requesting it to perform one of its methods.

Although it can take some getting used to, this metaphor leads to a useful way of looking at methods and objects. It abstracts methods away from the particular data they act on and concentrates on behavior instead. For example, in an object-oriented programming interface, a start method might initiate an operation, an archive method might archive information, and a draw method might produce an image. Exactly which operation is initiated, which information is archived, and which image is drawn isn’t revealed by the method name. Different objects might perform these methods in different ways.

Thus, methods are a vocabulary of abstract behaviors. To invoke one of those behaviors, you have to make it concrete by associating the method with an object. This is done by naming the object as the receiver of a message. The object you choose as receiver determines the exact operation that is initiated, the data that is archived, or the image that is drawn.

Because methods belong to objects, they can be invoked only through a particular receiver (the owner of the method and of the data structure the method will act on). Different receivers can have different implementations of the same method. Consequently, different receivers can do different things in response to the same message. The result of a message can’t be calculated from the message or method name alone; it also depends on the object that receives the message.

By separating the message (the requested behavior) from the receiver (the owner of a method that can respond to the request), the messaging metaphor perfectly captures the idea that behaviors can be abstracted away from their particular implementations.
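In Objective-C syntax, a message appears in square brackets, with the receiver named first. Using the hypothetical Faucet class sketched earlier, a few message sends might look like this:

    Faucet *kitchenFaucet = [[Faucet alloc] init];   // create a receiver
    [kitchenFaucet setFlowRate:2.5];                 // a message with one parameter
    [kitchenFaucet startFlow];                       // a message with no parameters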
Classes

A program can have more than one object of the same kind. The program that models water usage, for example, might have several faucets and pipes and perhaps a handful of appliances and users. Objects of the same kind are said to be members of the same class. All members of a class are able to perform the same methods and have matching sets of instance variables. They also share a common definition; each kind of object is defined just once.

In this, objects are similar to C structures. Declaring a structure defines a type. For example, the declaration

    struct key {
        char *word;
        int count;
    };

defines the struct key type. Once defined, the structure name can be used to produce any number of instances of the type:

    struct key a, b, c, d;
    struct key *p = malloc(sizeof(struct key) * MAXITEMS);

The declaration is a template for a kind of structure, but it doesn’t create a structure that the program can use. It takes another step to allocate memory for an actual structure of that type, a step that can be repeated any number of times.

Similarly, defining an object creates a template for a kind of object. It defines a class of objects. The template can be used to produce any number of similar objects—instances of the class. For example, there would be a single definition of the Faucet class. Using this definition, a program could allocate as many Faucet instances as it needed.

A class definition is like a structure definition in that it lays out an arrangement of data elements (instance variables) that become part of every instance. Each instance has memory allocated for its own set of instance variables, which store values particular to the instance. However, a class definition differs from a structure declaration in that it also defines methods that specify the behavior of class members. Every instance is characterized by its access to the methods defined for the class. Two objects with equivalent data structures but different methods would not belong to the same class.
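The parallel with the structure example can be made concrete. Again assuming the hypothetical Faucet class, each allocation produces an instance with its own set of instance variables, while every instance shares one copy of the methods:

    Faucet *hot  = [[Faucet alloc] init];
    Faucet *cold = [[Faucet alloc] init];
    [hot setFlowRate:3.0];    // stored in hot's own instance variables
    [cold setFlowRate:1.0];   // cold's instance variables are separate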
Modularity

To a C programmer, a module is nothing more than a file containing source code. Breaking a large (or even not-so-large) program into different files is a convenient way of splitting it into manageable pieces. Each piece can be worked on independently and compiled alone, and then integrated with other pieces when the program is linked. Using the static storage class designator to limit the scope of names to just the files where they’re declared enhances the independence of source modules.

This kind of module is a unit defined by the file system. It’s a container for source code, not a logical unit of the language. What goes into the container is up to each programmer. You can use files to group logically related parts of the code, but you don’t have to. Files are like the drawers of a dresser; you can put your socks in one drawer, underwear in another, and so on, or you can use another organizing scheme or simply choose to mix everything up.

Access to methods: It’s convenient to think of methods as being part of an object, just as instance variables are. As in Figure 3-1, methods can be diagrammed as surrounding the object’s instance variables. But methods aren’t grouped with instance variables in memory. Memory is allocated for the instance variables of each new object, but there’s no need to allocate memory for methods. All an instance needs is access to its methods, and all instances of the same class share access to the same set of methods. There’s only one copy of the methods in memory, no matter how many instances of the class are created.

Object-oriented programming languages support the use of file containers for source code, but they also add a logical module to the language—class definitions. As you’d expect, it’s often the case that each class is defined in its own source file—logical modules are matched to container modules. In Objective-C, for example, it would be possible to define the part of the Valve class that interacts with Pipe objects in the same file that defines the Pipe class, thus creating a container module for Pipe-related code and splitting the Valve class into more than one file. The Valve class definition would still act as a modular unit within the construction of the program—it would still be a logical module—no matter how many files the source code was located in. The mechanisms that make class definitions logical units of the language are discussed in some detail under “Mechanisms of Abstraction”.

Reusability

A principal goal of object-oriented programming is to make the code you write as reusable as possible—to have it serve many different situations and applications—so that you can avoid reimplementing, even if only slightly differently, something that’s already been done.

Reusability is influenced by factors such as these:

● How reliable and bug-free the code is
● How clear the documentation is
● How simple and straightforward the programming interface is
● How efficiently the code performs its tasks
● How full the feature set is

These factors don’t apply just to the object model. They can be used to judge the reusability of any code—standard C functions as well as class definitions. Efficient and well-documented functions, for example, would be more reusable than undocumented and unreliable ones.

Nevertheless, a general comparison would show that class definitions lend themselves to reusable code in ways that functions do not. There are various things you can do to make functions more reusable—for example, passing data as parameters rather than assuming specifically named global variables.
Even so, it turns out that only a small subset of functions can be generalized beyond the applications they were originally designed for. Their reusability is inherently limited in at least three ways:

● Function names are global; each function must have a unique name (except for those declared static). This naming requirement makes it difficult to rely heavily on library code when building a complex system. The programming interface would be hard to learn and so extensive that it couldn’t easily capture significant generalizations. Classes, on the other hand, can share programming interfaces. When the same naming conventions are used over and over, a great deal of functionality can be packaged with a relatively small and easy-to-understand interface.

● Functions are selected from a library one at a time. It’s up to programmers to pick and choose the individual functions they need. In contrast, objects come as packages of functionality, not as individual methods and instance variables. They provide integrated services, so users of an object-oriented library won’t get bogged down piecing together their own solutions to a problem.

● Functions are typically tied to particular kinds of data structures devised for a specific program. The interaction between data and function is an unavoidable part of the interface. A function is useful only to those who agree to use the same kind of data structures it accepts as parameters. Because it hides its data, an object doesn’t have this problem. This is one of the principal reasons that classes can be reused more easily than functions.

An object’s data is protected and won’t be touched by any other part of the program. Methods can therefore trust its integrity. They can be sure that external access hasn’t put data into an illogical or untenable state. As a result, an object data structure is more reliable than one passed to a function, and methods can depend on it more. Reusable methods are consequently easier to write.

Moreover, because an object’s data is hidden, a class can be reimplemented to use a different data structure without affecting its interface. All programs that use the class can pick up the new version without changing any source code; no reprogramming is required.

Mechanisms of Abstraction

To this point, objects have been introduced as units that embody higher-level abstractions and as coherent role-players within an application. However, they couldn’t be used this way without the support of various language mechanisms. Two of the most important mechanisms are encapsulation and polymorphism. Encapsulation keeps the implementation of an object out of its interface, and polymorphism results from giving each class its own namespace. The following sections discuss each of these mechanisms in turn.

Encapsulation

To design effectively at any level of abstraction, you need to be able to leave details of implementation behind and think in terms of units that group those details under a common interface. For a programming unit to be truly effective, the barrier between interface and implementation must be absolute. The interface must encapsulate the implementation—that is, hide it from other parts of the program. Encapsulation protects an implementation from unintended actions and from inadvertent access.
In C, a function is clearly encapsulated; its implementation is inaccessible to other parts of the program and protected from whatever actions might be taken outside the body of the function. In Objective-C, method implementations are similarly encapsulated, but more importantly so are an object’s instance variables. They’re hidden inside the object and invisible outside it. The encapsulation of instance variables is sometimes also called information hiding.

It might seem, at first, that hiding the information in instance variables might constrain your freedom as a programmer. Actually, it gives you more room to act and frees you from constraints that might otherwise be imposed. If any part of an object’s implementation could leak out and become accessible or a concern to other parts of the program, it would tie the hands both of the object’s implementer and of those who would use the object. Neither could make modifications without first checking with the other.

Suppose, for example, that you’re interested in the Faucet object being developed for the program that models water use and you want to incorporate it in another program you’re writing. Once the interface to the object is decided, you don’t have to be concerned when others work on it, fix bugs, and find better ways to implement it. You get the benefit of these improvements, but none of them affects what you do in your program. Because you depend solely on the interface, nothing they do can break your code. Your program is insulated from the object’s implementation.

Moreover, although those implementing the Faucet object might be interested in how you use the class and might try to make sure it meets your needs, they don’t have to be concerned with the way you write your code. Nothing you do can touch the implementation of the object or limit their freedom to make implementation changes in future releases. The implementation is insulated from anything that you or other users of the object might do.
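A small sketch of this insulation, again using the hypothetical Faucet class: callers reach the object only through its methods, and an attempt to touch an instance variable directly is rejected by the compiler (instance variables are protected by default in Objective-C).

    Faucet *f = [[Faucet alloc] init];
    [f setFlowRate:2.0];    // fine: goes through the public interface
    // f->flowRate = 2.0;   // compile-time error: the instance variable is
    //                      // hidden, so the implementation stays free to change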
Polymorphism

The ability of different objects to respond, each in its own way, to identical messages is called polymorphism.

Polymorphism results from the fact that every class lives in its own namespace. The names assigned within a class definition don’t conflict with names assigned anywhere outside it. This is true both of the instance variables in an object’s data structure and of the object’s methods:

● Just as the fields of a C structure are in a protected namespace, so are an object’s instance variables.
● Method names are also protected. Unlike the names of C functions, method names aren’t global symbols. The name of a method in one class can’t conflict with method names in other classes; two very different classes can implement identically named methods.

Method names are part of an object’s interface. When a message is sent requesting that an object do something, the message names the method the object should perform. Because different objects can have methods with the same name, the meaning of a message must be understood relative to the particular object that receives the message. The same message sent to two different objects can invoke two distinct methods.

The main benefit of polymorphism is that it simplifies the programming interface. It permits conventions to be established that can be reused in class after class. Instead of inventing a new name for each new function you add to a program, the same names can be reused. The programming interface can be described as a set of abstract behaviors, quite apart from the classes that implement them.

Overloading: The terms polymorphism and parameter overloading refer basically to the same thing, but from slightly different points of view. Polymorphism takes a pluralistic point of view and notes that several classes can each have a method with the same name. Parameter overloading takes the point of view of the method name and notes that it can have different effects depending on the parameters passed to it. Operator overloading is similar. It refers to the ability to turn operators of the language (such as == and + in C) into methods that can be assigned particular meanings for particular kinds of objects. Objective-C implements polymorphism of method names, but not parameter or operator overloading.

For example, suppose you want to report the amount of water used by an Appliance object over a given period of time. Instead of defining an amountConsumed method for the Appliance class, an amountDispensedAtFaucet method for a Faucet class, and a cumulativeUsage method for a Building class, you can simply define a waterUsed method for each class. This consolidation reduces the number of methods used for what is conceptually the same operation.

Polymorphism also permits code to be isolated in the methods of different objects rather than be gathered in a single function that enumerates all the possible cases. This makes the code you write more extensible and reusable. When a new case comes along, you don’t have to reimplement existing code; you need only to add a new class with a new method, leaving the code that’s already written alone. For example, suppose you have code that sends a draw message to an object. Depending on the receiver, the message might produce one of two possible images. When you want to add a third case, you don’t have to change the message or alter existing code; you merely allow another object to be assigned as the message receiver.
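A hypothetical sketch of the waterUsed convention: two unrelated classes each implement a method with the same name in their own way, and the same message does the right thing for each receiver.

    #import <Foundation/Foundation.h>

    @interface Pipe : NSObject
    - (double)waterUsed;
    @end
    @implementation Pipe
    - (double)waterUsed { return 12.0; }   // placeholder figure
    @end

    @interface Appliance : NSObject
    - (double)waterUsed;
    @end
    @implementation Appliance
    - (double)waterUsed { return 30.0; }   // a different implementation
    @end

    // The same message, two distinct methods:
    Pipe *pipe = [[Pipe alloc] init];
    Appliance *washer = [[Appliance alloc] init];
    double total = [pipe waterUsed] + [washer waterUsed];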
Inheritance

The easiest way to explain something new is to start with something understood. If you want to describe what a schooner is, it helps if your listeners already know what a sailboat is. If you want to explain how a harpsichord works, it’s best if you can assume your audience has already looked inside a piano, or has seen a guitar played, or at least is familiar with the idea of a musical instrument. The same is true if you want to define a new kind of object; the description is simpler if it can start from the definition of an existing object.

With this in mind, object-oriented programming languages permit you to base a new class definition on a class already defined. The base class is called a superclass; the new class is its subclass. The subclass definition specifies only how it differs from the superclass; everything else is taken to be the same.

Nothing is copied from superclass to subclass. Instead, the two classes are connected so that the subclass inherits all the methods and instance variables of its superclass, much as you want your listener’s understanding of schooner to inherit what they already know about sailboats. If the subclass definition were empty (if it didn’t define any instance variables or methods of its own), the two classes would be identical (except for their names) and would share the same definition. It would be like explaining what a fiddle is by saying that it’s exactly the same as a violin. However, the reason for declaring a subclass isn’t to generate synonyms; it’s to create something at least a little different from its superclass. For example, you might extend the behavior of the fiddle to allow it to play bluegrass in addition to classical music.

Class Hierarchies

Any class can be used as a superclass for a new class definition. A class can simultaneously be a subclass of another class, and a superclass for its own subclasses. Any number of classes can thus be linked in a hierarchy of inheritance, such as the one depicted in Figure 3-3.

[Figure 3-3: An inheritance hierarchy]

Every inheritance hierarchy begins with a root class that has no superclass. From the root class, the hierarchy branches downward. Each class inherits from its superclass and, through its superclass, from all the classes above it in the hierarchy. Every class inherits from the root class. Each class is the accumulation of all the class definitions in its inheritance chain. In the example above, class D inherits both from C—its superclass—and the root class. Members of the D class have methods and instance variables defined in all three classes—D, C, and root.

Typically, every class has just one superclass and can have an unlimited number of subclasses. However, in some object-oriented programming languages (though not in Objective-C), a class can have more than one superclass; it can inherit through multiple sources. Instead of having a single hierarchy that branches downward as shown in Figure 3-3, multiple inheritance lets some branches of the hierarchy (or of different hierarchies) merge.

Subclass Definitions

A subclass can make three kinds of changes to the definition it inherits through its superclass:

● It can expand the class definition it inherits by adding new methods and instance variables. This is the most common reason for defining a subclass. Subclasses always add new methods and add new instance variables if the methods require it.

● It can modify the behavior it inherits by replacing an existing method with a new version. This is done by simply implementing a new method with the same name as one that’s inherited. The new version overrides the inherited version. (The inherited method doesn’t disappear; it’s still valid for the class that defined it and other classes that inherit it.)

● It can refine or extend the behavior it inherits by replacing an existing method with a new version, but it still retains the old version by incorporating it in the new method. A subclass sends a message to perform the old version in the body of the new method. Each class in an inheritance chain can contribute part of a method’s behavior. In Figure 3-3, for example, class D might override a method defined in class C and incorporate C’s version, while C’s version incorporates a version defined in the root class.

Subclasses thus tend to fill out a superclass definition, making it more specific and specialized. They add, and sometimes replace, code rather than subtract it. Note that methods generally can’t be disinherited and instance variables can’t be removed or overridden.
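The second and third kinds of change can be sketched in a few lines. In this hypothetical example, Faucet inherits from Valve (a design suggested later in this chapter) and refines the inherited open method, incorporating the old version with a message to super:

    #import <Foundation/Foundation.h>

    @interface Valve : NSObject
    - (void)open;
    @end
    @implementation Valve
    - (void)open { NSLog(@"water flowing"); }
    @end

    @interface Faucet : Valve     // Faucet inherits everything Valve defines
    @end
    @implementation Faucet
    - (void)open {
        [super open];                     // perform the inherited version first
        NSLog(@"faucet handle turned");   // then extend the behavior
    }
    @end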
Uses of Inheritance

The classic examples of an inheritance hierarchy are borrowed from animal and plant taxonomies. For example, there could be a class corresponding to the Pinaceae (pine) family of trees. Its subclasses could be Fir, Spruce, Pine, Hemlock, Tamarack, DouglasFir, and TrueCedar, corresponding to the various genera that make up the family. The Pine class might have SoftPine and HardPine subclasses, with WhitePine, SugarPine, and BristleconePine as subclasses of SoftPine, and PonderosaPine, JackPine, MontereyPine, and RedPine as subclasses of HardPine.

There’s rarely a reason to program a taxonomy like this, but the analogy is a good one. Subclasses tend to specialize a superclass or adapt it to a special purpose, much as a species specializes a genus. Here are some typical uses of inheritance:

● Reusing code. If two or more classes have some things in common but also differ in some ways, the common elements can be put in a single class definition that the other classes inherit. The common code is shared and need only be implemented once. For example, Faucet, Valve, and Pipe objects, defined for the program that models water use, all need a connection to a water source and should be able to record the rate of flow. These commonalities can be encoded once, in a class that the Faucet, Valve, and Pipe classes inherit from. A Faucet object can be said to be a kind of Valve object, so perhaps the Faucet class would inherit most of what it is from Valve, and add little of its own.

● Setting up a protocol. A class can declare a number of methods that its subclasses are expected to implement. The class might have empty versions of the methods, or it might implement partial versions that are to be incorporated into the subclass methods. In either case, its declarations establish a protocol that all its subclasses must follow. When different classes implement similarly named methods, a program is better able to make use of polymorphism in its design. Setting up a protocol that subclasses must implement helps enforce these conventions.

● Delivering generic functionality. One implementer can define a class that contains a lot of basic, general code to solve a problem but that doesn’t fill in all the details. Other implementers can then create subclasses to adapt the generic class to their specific needs. For example, the Appliance class in the program that models water use might define a generic water-using device that subclasses would turn into specific kinds of appliances. Inheritance is thus both a way to make someone else’s programming task easier and a way to separate levels of implementation.

● Making slight modifications. When inheritance is used to deliver generic functionality, set up a protocol, or reuse code, a class is devised that other classes are expected to inherit from. But you can also use inheritance to modify classes that aren’t intended as superclasses. Suppose, for example, that there’s an object that would work well in your program, but you’d like to change one or two things that it does. You can make the changes in a subclass.

● Previewing possibilities. Subclasses can also be used to factor out alternatives for testing purposes. For example, if a class is to be encoded with a particular user interface, alternative interfaces can be factored into subclasses during the design phase of the project. Each alternative can then be demonstrated to potential users to see which they prefer. When the choice is made, the selected subclass can be reintegrated into its superclass.
Dynamism

At one time in programming history, the question of how much memory a program might use was typically settled when the source code was compiled and linked. All the memory the program would ever need was set aside for it as it was launched. This memory was fixed; it could neither grow nor shrink. In hindsight, it’s evident what a serious constraint this was. It limited not only how programs were constructed, but what you could imagine a program doing. It constrained design, not just programming technique. The introduction of functions that dynamically allocate memory as a program runs (such as malloc) opened possibilities that didn’t exist before.

Compile-time and link-time constraints are limiting because they force issues to be decided from information found in the programmer’s source code, rather than from information obtained from the user as the program runs. Although dynamic allocation removes one such constraint, many others, equally as limiting as static memory allocation, remain. For example, the elements that make up an application must be matched to data types at compile time. And the boundaries of an application are typically set at link time. Every part of the application must be united in a single executable file. New modules and new types can’t be introduced as the program runs.

Objective-C seeks to overcome these limitations and to make programs as dynamic and fluid as possible. It shifts much of the burden of decision making from compile time and link time to runtime. The goal is to let program users decide what will happen, rather than constrain their actions artificially by the demands of the language and the needs of the compiler and linker. Three kinds of dynamism are especially important for object-oriented design:

● Dynamic typing, waiting until runtime to determine the class of an object
● Dynamic binding, determining at runtime what method to invoke
● Dynamic loading, adding new components to a program as it runs

Dynamic Typing

The compiler typically complains if the code you write assigns a value to a type that can’t accommodate it. You might see warnings like these:

    incompatible types in assignment
    assignment of integer from pointer lacks a cast

Type checking is useful, but there are times when it can interfere with the benefits you get from polymorphism, especially if the type of every object must be known to the compiler.

Suppose, for example, that you want to send an object a message to perform the start method. Like other data elements, the object is represented by a variable. If the variable’s type (its class) must be known at compile time, it would be impossible to let runtime factors influence the decision about what kind of object should be assigned to the variable. If the class of the variable is fixed in source code, so is the version of start that the message invokes.

If, on the other hand, it’s possible to wait until runtime to discover the class of the variable, any kind of object could be assigned to it. Depending on the class of the receiver, the start message might invoke different versions of the method and produce very different results.

Dynamic typing thus gives substance to dynamic binding (discussed next). But it does more than that. It permits associations between objects to be determined at runtime, rather than forcing them to be encoded in a static design. For example, a message could pass an object as a parameter without declaring exactly what kind of object it is—that is, without declaring its class. The message receiver might then send its own messages to the object, again without ever caring about what kind of object it is. Because the receiver uses the passed object to do some of its work, it is in a sense customized by an object of indeterminate type (indeterminate in source code, that is, not at runtime).
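Objective-C expresses this with the generic object type id. In the following hypothetical sketch (the Timer and Engine classes and the runtime condition are invented for illustration), the variable’s class is not fixed in source code, so the receiver of start can be chosen while the program runs:

    #import <Foundation/Foundation.h>

    @interface Timer : NSObject
    - (void)start;
    @end
    @implementation Timer
    - (void)start { NSLog(@"timer running"); }
    @end

    @interface Engine : NSObject
    - (void)start;
    @end
    @implementation Engine
    - (void)start { NSLog(@"engine running"); }
    @end

    // Imagine useTimer is decided at runtime, by the user or the environment:
    BOOL useTimer = YES;
    id performer;
    if (useTimer)
        performer = [[Timer alloc] init];
    else
        performer = [[Engine alloc] init];
    [performer start];   // which version runs depends on the receiver's class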
Dynamic Binding

In standard C, you can declare a set of alternative functions, such as the standard string-comparison functions,

    int strcmp(const char *, const char *);       /* case sensitive */
    int strcasecmp(const char *, const char *);   /* case insensitive */

and declare a pointer to a function that has the same return and parameter types:

    int (* compare)(const char *, const char *);

You can then wait until runtime to determine which function to assign to the pointer,

    if ( **argv == 'i' )
        compare = strcasecmp;
    else
        compare = strcmp;

and call the function through the pointer:

    if ( compare(s1, s2) )
        ...

This is akin to what in object-oriented programming is called dynamic binding, delaying the decision of exactly which method to perform until the program is running.

Although not all object-oriented languages support it, dynamic binding can be routinely and transparently accomplished through messaging. You don’t have to go through the indirection of declaring a pointer and assigning values to it as shown in the example above. You also don’t have to assign each alternative procedure a different name. Messages invoke methods indirectly. Every message expression must find a method implementation to “call.” To find that method, the messaging machinery must check the class of the receiver and locate its implementation of the method named in the message. When this is done at runtime, the method is dynamically bound to the message. When it’s done by the compiler, the method is statically bound.

Late binding: Some object-oriented programming languages (notably C++) require a message receiver to be statically typed in source code, but don’t require the type to be exact. An object can be typed to its own class or to any class that it inherits from. The compiler therefore can’t tell whether the message receiver is an instance of the class specified in the type declaration, an instance of a subclass, or an instance of some more distantly derived class. Because it doesn’t know the exact class of the receiver, it can’t know which version of the method named in the message to invoke. In this circumstance, the choice is between treating the receiver as if it were an instance of the specified class and simply binding the method defined for that class to the message, or waiting until some later time to resolve the situation. In C++, the decision is postponed to link time for methods (member functions) that are declared virtual. This is sometimes referred to as late binding rather than dynamic binding. While dynamic in the sense that it happens at runtime, it carries with it strict compile-time type constraints. As discussed here (and implemented in Objective-C), dynamic binding is unconstrained.

Dynamic binding is possible even in the absence of dynamic typing, but it’s not very interesting. There’s little benefit in waiting until runtime to match a method to a message when the class of the receiver is fixed and known to the compiler.
The compiler could just as well find the method itself; the runtime result won’t be any different. However, if the class of the receiver is dynamically typed, there’s no way for the compiler to determine which method to invoke. The method can be found only after the class of the receiver is resolved at runtime. Dynamic typing thus entails dynamic binding.

Dynamic typing also makes dynamic binding interesting, for it opens the possibility that a message might have very different results depending on the class of the receiver. Runtime factors can influence the choice of receiver and the outcome of the message.

Dynamic typing and binding also open the possibility that the code you write can send messages to objects not yet invented. If object types don’t have to be decided until runtime, you can give others the freedom to design their own classes and name their own data types and still have your code send messages to their objects. All you need to agree on are the messages, not the data types.

Note: Dynamic binding is routine in Objective-C. You don’t need to arrange for it specially, so your design never needs to bother with what’s being done when.
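Dynamic binding can even extend to the method name itself. In this sketch, which uses Foundation’s selector functions (performer stands for any object, as in the earlier example), the method to perform is chosen from a string at runtime:

    SEL action = NSSelectorFromString(@"start");   // name determined at runtime
    if ([performer respondsToSelector:action])
        [performer performSelector:action];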
Dynamic Loading

Historically, in many common environments, before a program could run, all its parts had to be linked together in one file. When it was launched, the entire program was loaded into memory at once.

Some object-oriented programming environments overcome this constraint and allow different parts of an executable program to be kept in different files. The program can be launched in bits and pieces as they’re needed. Each piece is dynamically loaded and linked with the rest of the program as it’s launched. User actions can determine which parts of the program are in memory and which aren’t.

Only the core of a large program needs to be loaded at the start. Other modules can be added as the user requests their services. Modules the user doesn’t request make no memory demands on the system.

Dynamic loading raises interesting possibilities. For example, an entire program does not have to be developed at once. You can deliver your software in pieces and update one part of it at a time. You can devise a program that groups several tools under a single interface, and load just the tools the user wants. The program can even offer sets of alternative tools to do the same job—the user then selects one tool from the set and only that tool would be loaded.

Perhaps the most important current benefit of dynamic loading is that it makes applications extensible. You can allow others to add to and customize a program you’ve designed. All your program needs to do is provide a framework that others can fill in, and, at runtime, find the pieces that they’ve implemented and load them dynamically.

The main challenge that dynamic loading faces is getting a newly loaded part of a program to work with parts already running, especially when the different parts were written by different people. However, much of this problem disappears in an object-oriented environment because code is organized into logical modules with a clear division between implementation and interface. When classes are dynamically loaded, nothing in the newly loaded code can clash with the code already in place. Each class encapsulates its implementation and has an independent namespace.

In addition, dynamic typing and dynamic binding let classes designed by others fit effortlessly into the program you’ve designed. Once a class is dynamically loaded, it’s treated no differently than any other class. Your code can send messages to their objects and theirs to yours. Neither of you has to know what classes the other has implemented. You need only agree on a communications protocol.

Loading and linking: Although it’s the term commonly used, dynamic loading could just as well be called dynamic linking. Programs are linked when their various parts are joined so that they can work together; they’re loaded when they’re read into volatile memory at launch time. Linking usually precedes loading. Dynamic loading refers to the process of separately loading new or additional parts of a program and linking them dynamically to the parts already running.
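In Cocoa, this capability is available through the NSBundle class. A minimal sketch, assuming a plug-in bundle exists at the hypothetical path shown: asking for the bundle’s principal class loads its code into the running program.

    NSBundle *plugin = [NSBundle bundleWithPath:@"/Library/MyApp/PlugIns/Tool.bundle"];
    Class toolClass = [plugin principalClass];   // loads the bundle's code if necessary
    id tool = [[toolClass alloc] init];          // an instance of the newly loaded class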
However, the simplest way is for each object to have instance variables that keep track of the other objects it must communicate with. These instance variables, termed outlets because they record the outlets for messages, define the principal connections between objects in the program network.

Although the names of outlet instance variables are arbitrary, they generally reflect the roles that outlet objects play. Figure 4-1 illustrates an object with four outlets: an agent, a friend, a neighbor, and a boss. The objects that play these parts may change every now and then, but the roles remain the same.

Figure 4-1 Outlets (agent, friend, neighbor, boss)

Some outlets are set when the object is first initialized and may never change. Others might be set automatically as the consequence of other actions. Still others can be set freely, using methods provided just for that purpose. However they're set, outlet instance variables reveal the structure of the application. They link objects into a communicating network, much as the components of a water system are linked by their physical connections or as individuals are linked by their patterns of social relations.

Extrinsic and Intrinsic Connections

Outlet connections can capture many different kinds of relationships between objects. Sometimes the connection is between objects that communicate more or less as equal partners in an application, each with its own role to play and neither dominating the other. For example, an Appliance object might have an outlet instance variable to keep track of the valve it's connected to.

Sometimes one object should be seen as being part of another. For example, a Faucet object might use a Meter object to measure the amount of water being released. The Meter object would serve no other object and would act only under orders from the Faucet object. It would be an intrinsic part of the Faucet object, in contrast to an Appliance object's extrinsic connection to a Valve object.

Similarly, an object that oversees other objects might keep a list of its charges. A Building object, for example, might have a list of all the Pipe objects in the program. The Pipe objects would be considered an intrinsic part of the Building object and belong to it. Pipe objects, on the other hand, would maintain extrinsic connections to each other.

Intrinsic outlets behave differently from extrinsic ones. When an object is freed or archived in a file on disk, the objects that its intrinsic outlets point to must be freed or archived with it. For example, when a faucet is freed, its meter is rendered useless and therefore should be freed as well. A faucet archived without its meter would be of little use when it's unarchived (unless it could create a new Meter object for itself).

Extrinsic outlets, on the other hand, capture the organization of the program at a higher level. They record connections between relatively independent program subcomponents. When an Appliance object is freed, the Valve object it was connected to is still of use and remains in place. When an Appliance object is unarchived, it can be connected to another valve and resume playing the same sort of role it played before.

Activating the Object Network

The object network is set into motion by an external stimulus. If you're writing an interactive application with a user interface, it will respond to user actions on the keyboard and mouse.
A program that tries to factor very large numbers might start when you pass it a target number on the command line. Other programs might respond to data received over a phone line, information obtained from a database, or information about the state of a mechanical process the program monitors.

Programs often are activated by a flow of events, which are reports of external activity of some sort. Applications that display a user interface are driven by events from the keyboard and mouse. Every press of a key or click of the mouse generates events that the application receives and responds to. An object-oriented program structure (a network of objects that's prepared to respond to an external stimulus) is ideally suited for this kind of user-driven application.

Aggregation and Decomposition

Another part of the design task is deciding the arrangement of classes: when to add functionality to an existing class by defining a subclass and when to define an independent class. The problem can be clarified by imagining what would happen in the extreme case:

● It's possible to conceive of a program consisting of just one object. Because it's the only object, it can send messages only to itself. It therefore can't take advantage of polymorphism, or the modularity of a variety of classes, or a program design conceived as a network of interconnected objects. The true structure of the program would be hidden inside the class definition. Despite being written in an object-oriented language, there would be very little that was object-oriented about it.

● On the other hand, it's also possible to imagine a program that consists of hundreds of different kinds of objects, each with very few methods and limited functionality. Here, too, the structure of the program would be lost, this time in a maze of object connections.

Obviously, it's best to avoid either of these extremes: to keep objects large enough to take on a substantial role in the program but small enough to keep that role well-defined. The structure of the program should be easy to grasp in the pattern of object connections.

Nevertheless, the question often arises of whether to add more functionality to a class or to factor out the additional functionality and put it in a separate class definition. For example, a Faucet object needs to keep track of how much water is being used over time. To do that, you could either implement the necessary methods in the Faucet class, or you could devise a generic Meter object to do the job, as suggested earlier. Each Faucet object would have an outlet connecting it to a Meter object, and the meter would not interact with any object but the faucet.

The choice often depends on your design goals. If the Meter object could be used in more than one situation, perhaps in another project entirely, it would increase the reusability of your code to factor the metering task into a separate class. If you have reason to make Faucet objects as self-contained as possible, the metering functionality could be added to the Faucet class.

It's generally better to try for reusable code and avoid having large classes that do so many things that they can't be adapted to other situations. When objects are designed as components, they become that much more reusable. What works in one system or configuration might well work in another.
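As a concrete sketch of the second choice, factoring the metering task into its own class might look like the following. The interfaces are illustrative assumptions built around the document's water-use example, not real classes:

#import <Foundation/Foundation.h>

// A generic, reusable metering component. A real meter would
// accumulate volume while running; this sketch only shows the shape.
@interface Meter : NSObject {
    double volume;
    BOOL running;
}
- (void)startMetering;
- (void)stopMetering;
- (double)volumeUsed;
@end

@implementation Meter
- (void)startMetering { running = YES; }
- (void)stopMetering  { running = NO; }
- (double)volumeUsed  { return volume; }
@end

// Each Faucet owns its meter privately; users of Faucet never see it.
@interface Faucet : NSObject {
    Meter *meter;   // intrinsic outlet to the faucet's private meter
}
- (void)turnOn;
- (void)turnOff;
- (double)waterUsed;
@end

@implementation Faucet
- (id)init {
    if ((self = [super init])) {
        meter = [[Meter alloc] init];
    }
    return self;
}
- (void)turnOn  { [meter startMetering]; }
- (void)turnOff { [meter stopMetering]; }
- (double)waterUsed { return [meter volumeUsed]; }
@end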
Dividing functionality between different classes doesn't necessarily complicate the programming interface. If the Faucet class keeps the Meter object private, the Meter interface wouldn't have to be published for users of the Faucet class; the object would be as hidden as any other Faucet instance variable.

Models and Frameworks

Objects combine state and behavior, and so resemble things in the real world. Because they resemble real things, designing an object-oriented program is very much like thinking about real things: what they do, how they work, and how one thing is connected to another.

When you design an object-oriented program, you are, in effect, putting together a computer simulation of how something works. Object networks look and behave like models of real systems. An object-oriented program can be thought of as a model, even if there's no actual counterpart to it in the real world.

Each component of the model (each kind of object) is described in terms of its behavior, responsibilities, and interactions with other components. Because an object's interface lies in its methods, not its data, you can begin the design process by thinking about what a system component must do, not how it's represented in data. Once the behavior of an object is decided, the appropriate data structure can be chosen, but this is a matter of implementation, not the initial design.

For example, in the water-use program, you wouldn't begin by deciding what the Faucet data structure looked like, but what you wanted a Faucet object to do: make a connection to a pipe, be turned on and off, adjust the rate of flow, and so on. The design is therefore not bound from the outset by data choices. You can decide on the behavior first and implement the data afterward. Your choice of data structures can change over time without affecting the design.

Designing an object-oriented program doesn't necessarily entail writing great amounts of code. The reusability of class definitions means that the opportunity is great for building a program largely out of classes devised by others. It might even be possible to construct interesting programs entirely out of classes someone else defined. As the suite of class definitions grows, you have more and more reusable parts to choose from.

Reusable classes come from many sources. Development projects often yield reusable class definitions, and some enterprising developers market them. Object-oriented programming environments typically come with class libraries. There are well over two hundred classes in the Cocoa libraries. Some of these classes offer basic services (hashing, data storage, remote messaging). Others are more specific (user interface devices, video displays, sound).

Typically, a group of library classes work together to define a partial program structure. These classes constitute a software framework (or kit) that can be used to build a variety of different kinds of applications. When you use a framework, you accept the program model it provides and adapt your design to it. You use the framework by:

● Initializing and arranging instances of framework classes
● Defining subclasses of framework classes
● Defining new classes of your own to work with classes defined in the framework

In each of these ways, you not only adapt your program to the framework, but you also adapt the generic framework structure to the specialized purposes of your application.
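The second of these techniques, subclassing a framework class, typically looks like the minimal sketch below. DialView is a hypothetical custom view; NSView and drawRect: are standard AppKit API, and the override invokes super so the framework's generic behavior is preserved before the application adds its specialization.

#import <Cocoa/Cocoa.h>

// A hypothetical framework subclass: specialize NSView's drawing
// while keeping the behavior inherited from the framework.
@interface DialView : NSView
@end

@implementation DialView
- (void)drawRect:(NSRect)dirtyRect {
    [super drawRect:dirtyRect];   // the framework's generic part
    [[NSColor blueColor] set];
    NSRectFill(dirtyRect);        // the application's specialization
}
@end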
The framework, in essence, sets up part of an object network for your program and provides part of its class hierarchy. Your own code completes the program model started by the framework.

Structuring the Programming Task

Object-oriented programming not only structures programs in a better way, it also helps structure the programming task. As software tries to do more and more, and programs become bigger and more complicated, the problem of managing the task also grows. There are more pieces to fit together and more people working together to build them. The object-oriented approach offers ways of dealing with this complexity, not just in design, but also in the organization of the work.

Collaboration

Complex software requires an extraordinary collaborative effort among people who must be individually creative, yet still make what they do fit exactly with what others are doing. The sheer size of the effort and the number of people working on the same project at the same time in the same place can get in the way of the group's ability to work cooperatively toward a common goal. In addition, collaboration is often impeded by barriers of time, space, and organization:

● Code must be maintained, improved, and used long after it's written. Programmers who collaborate on a project may not be working on it at the same time, so they may not be in a position to talk things over and keep each other informed about details of the implementation.

● Even if programmers work on the same project at the same time, they may not be located in the same place. This also inhibits how closely they can work together.

● Programmers working in different groups with different priorities and different schedules often must collaborate on projects. Communication across organizational barriers isn't always easy to achieve.

The answer to these difficulties must grow out of the way programs are designed and written. It can't be imposed from the outside in the form of hierarchical management structures and strict levels of authority. These often get in the way of people's creativity and become burdens in and of themselves. Rather, collaboration must be built into the work itself.

That's where object-oriented programming techniques can help. For example, the reusability of object-oriented code means that programmers can collaborate effectively, even when they work on different projects at different times or are in different organizations, just by sharing their code in libraries. This kind of collaboration holds a great deal of promise, for it can conceivably lighten difficult tasks and make seemingly impossible projects achievable.

Organizing Object-Oriented Projects

Object-oriented programming helps restructure the programming task in ways that benefit collaboration. It helps eliminate the need to collaborate on low-level implementation details, while providing structures that facilitate collaboration at a higher level. Almost every feature of the object model, from the possibility of large-scale design to the increased reusability of code, has consequences for the way people work together.

Designing on a Large Scale

When programs are designed at a high level of abstraction, the division of labor is more easily conceived. It can match the division of the program on logical lines; the way a project is organized can grow out of its design.
With an object-oriented design, it's easier to keep common goals in sight, instead of losing them in the implementation, and easier for everyone to see how the piece they're working on fits into the whole. Their collaborative efforts are therefore more likely to be on target.

Separating the Interface from the Implementation

The connections between the various components of an object-oriented program are worked out early in the design process. They can be well defined, at least for the initial phase of development, before implementation begins. During implementation, only this interface needs to be coordinated, and most of that falls naturally out of the design. Because each class encapsulates its implementation and has its own namespace, there's no need to coordinate implementation details. Collaboration is simpler when there are fewer coordination requirements.

Dividing the Work into Modules

The modularity of object-oriented programming means that the logical components of a large program can each be implemented separately. Different people can work on different classes. Each implementation task is isolated from the others.

Modularity has benefits not just for organizing the implementation, but for fixing problems later. Because implementations are contained within class boundaries, problems that come up are also likely to be isolated. It's easier to track down bugs when they're located in a well-defined part of the program.

Separating responsibilities by class also means that each part can be worked on by specialists. Classes can be updated periodically to optimize their performance and make the best use of new technologies. These updates don't have to be coordinated with other parts of the program. As long as the interface to an object doesn't change, improvements to its implementation can be scheduled at any time.

Keeping the Interface Simple

The polymorphism of object-oriented programs yields simpler programming interfaces, since the same names and conventions can be reused in any number of classes. The result is less to learn, a greater shared understanding of how the whole system works, and a simpler path to cooperation and collaboration.

Making Decisions Dynamically

Because object-oriented programs make decisions dynamically at runtime, less information needs to be supplied at compile time (in source code) to make two pieces of code work together. Consequently, there's less to coordinate and less to go wrong.

Inheriting Generic Code

Inheritance is a way of reusing code. If you can define your classes as specializations of more generic classes, your programming task is simplified. The design is simplified as well, since the inheritance hierarchy lays out the relationships between the different levels of implementation and makes them easier to understand.

Inheritance also increases the reusability and reliability of code. The code placed in a superclass is tested by its subclasses. The generic class you find in a library will have been tested by other subclasses written by other developers for other applications.

Reusing Tested Code

The more software you can borrow from others and incorporate in your own programs, the less you have to do yourself. There's more software to borrow in an object-oriented programming environment because the code is more reusable.
Collaboration between programmers working in different places for different organizations is enhanced, while the burden of each project is eased. Classes and frameworks from an object-oriented library can make substantial contributions to your program. When you program with the software frameworks provided by Apple, for example, you're effectively collaborating with the programmers at Apple; you're contracting a part of your program, often a substantial part, to them. You can concentrate on what you do best and leave other tasks to the library developer. Your projects can be prototyped faster and completed faster, with less of a collaborative challenge at your own site.

The increased reusability of object-oriented code also increases its reliability. A class taken from a library is likely to have found its way into a variety of applications and situations. The more the code has been used, the more likely it is that problems will have been encountered and fixed. Bugs that would have seemed strange and hard to find in your program might already have been tracked down and eliminated.

Document Revision History

This table describes the changes to Object-Oriented Programming with Objective-C.

Date          Notes
2010-11-15    Edited for content and clarity.
2008-11-19    Corrected typographical errors.
2007-12-11    Corrected a typographical error.
2007-07-09    Originally published as part of "The Objective-C Programming Language".
Table View Programming Guide for iOS

Contents

About Table Views in iOS Apps
    At a Glance
        Table Views Draw Their Rows Using Cells
        Responding to Selections of Rows
        In Editing Mode You Can Add, Delete, and Reorder Rows
        To Create a Table View, Use a Storyboard
    Prerequisites
    See Also
Table View Styles and Accessory Views
    Table View Styles
        Plain Table Views
        Grouped Table Views
    Standard Styles for Table View Cells
    Accessory Views
Overview of the Table View API
    Table View
    Table View Controller
    Data Source and Delegate
    Extension to the NSIndexPath Class
    Table View Cells
Navigating a Data Hierarchy with Table Views
    Hierarchical Data Models and Table Views
        The Data Model as a Hierarchy of Model Objects
        Table Views and the Data Model
    View Controllers and Navigation-Based Apps
        Navigation Controllers
        Navigation Bars
        Table View Controllers
    Managing Table Views in a Navigation-Based App
    Design Pattern for Navigation-Based Apps
Creating and Configuring a Table View
    Basics of Table View Creation
    Recommendations for Creating and Configuring Table Views
    Creating a Table View Using a Storyboard
        Choose the Table View's Display Style
        Choose the Table View's Content Type
        Design the Table View's Rows
        Create Additional Table Views
        Learn More by Creating a Sample App
    Creating a Table View Programmatically
        Adopt the Data Source and Delegate Protocols
        Create and Configure a Table View
    Populating a Dynamic Table View with Data
    Populating a Static Table View With Data
    Populating an Indexed List
    Optional Table View Configurations
        Add a Custom Title
        Provide a Section Title
        Indent a Row
        Vary a Row's Height
        Customize Cells
A Closer Look at Table View Cells
    Characteristics of Cell Objects
    Using Cell Objects in Predefined Styles
    Customizing Cells
        Loading Table View Cells from a Storyboard
        Programmatically Adding Subviews to a Cell's Content View
    Cells and Table View Performance
Managing Selections
    Selections in Table Views
    Responding to Selections
    Programmatically Selecting and Scrolling
Inserting and Deleting Rows and Sections
    Inserting and Deleting Rows in Editing Mode
        When a Table View is Edited
        An Example of Deleting a Table-View Row
        An Example of Adding a Table-View Row
    Batch Insertion, Deletion, and Reloading of Rows and Sections
        An Example of Batched Insertion and Deletion Operations
        Ordering of Operations and Index Paths
Managing the Reordering of Rows
    What Happens When a Row is Relocated
    Examples of Moving a Row
Document Revision History
Figures and Listings

About Table Views in iOS Apps
    Figure I-1 Table views of various kinds
Table View Styles and Accessory Views
    Figure 1-1 A table view in the plain style
    Figure 1-2 A table view configured as an indexed list
    Figure 1-3 A table view configured as a selection list
    Figure 1-4 A table view in the grouped style
    Figure 1-5 Header and footer of a section
    Figure 1-6 Default table row style
    Figure 1-7 Table row style with a subtitle under the title
    Figure 1-8 Table row style with a right-aligned subtitle
    Figure 1-9 Table row style in Contacts format
Navigating a Data Hierarchy with Table Views
    Figure 3-1 Mapping levels of the data model to table views
    Figure 3-2 Navigation controller and view controllers in a navigation-based app
    Figure 3-3 Navigation bars and common control items
    Figure 3-4 A storyboard with two table view controllers
    Listing 3-1 Passing data to a destination view controller
    Listing 3-2 Passing data to a source view controller
Creating and Configuring a Table View
    Figure 4-1 Calling sequence for creating and configuring a table view
    Figure 4-2 The master view controller in the Master-Detail Application storyboard
    Figure 4-3 A dynamic table view
    Figure 4-4 A static table view
    Listing 4-1 Adopting the data source and delegate protocols
    Listing 4-2 Creating a table view
    Listing 4-3 Populating a dynamic table view with data
    Listing 4-4 Populating a static table view with data
    Listing 4-5 Defining the model-object interface
    Listing 4-6 Loading the table-view data and initializing the model objects
    Listing 4-7 Preparing the data for the indexed list
    Listing 4-8 Providing section-index data to the table view
    Listing 4-9 Populating the rows of an indexed list
    Listing 4-10 Adding a title to the table view
    Listing 4-11 Returning a title for a section
    Listing 4-12 Custom indentation of a row
    Listing 4-13 Varying row height
A Closer Look at Table View Cells
    Figure 5-1 Parts of a table view cell
    Figure 5-2 Parts of a table-view cell in editing mode
    Figure 5-3 Default cell content in a UITableViewCell object
    Figure 5-4 A table view with rows showing both images and text
    Figure 5-5 Table view cells in a storyboard
    Figure 5-6 Table view rows drawn with a custom prototype cell
    Figure 5-7 Table view rows drawn with multiple cells
    Figure 5-8 Making connections to your static cell content
    Figure 5-9 Cells with custom content as subviews
    Listing 5-1 Configuring a UITableViewCell object with both image and text
    Listing 5-2 Alternating the background color of cells
    Listing 5-3 Adding data to a cell using tags
    Listing 5-4 Adding data to a cell using outlets
    Listing 5-5 Defining outlet properties for static cell objects
    Listing 5-6 Setting the data in the user interface
    Listing 5-7 Adding subviews to a cell's content view
Managing Selections
    Listing 6-1 Responding to a row selection
    Listing 6-2 Setting a switch object as an accessory view and responding to its action message
    Listing 6-3 Managing a selection list (exclusive list)
    Listing 6-4 Managing a selection list (inclusive list)
    Listing 6-5 Programmatically selecting a row
Inserting and Deleting Rows and Sections
    Figure 7-1 Calling sequence for inserting or deleting rows in a table view
    Figure 7-2 Deletion of section and row and insertion of row
    Listing 7-1 View controller responding to setEditing:animated:
    Listing 7-2 Customizing the editing style of rows
    Listing 7-3 Updating the data-model array and deleting the row
    Listing 7-4 Adding an Add button to the navigation bar
    Listing 7-5 Responding to a tap on the Add button
    Listing 7-6 Adding the new item to the data-model array
    Listing 7-7 Batch insertion and deletion methods
    Listing 7-8 Inserting and deleting a block of rows in a table view
Managing the Reordering of Rows
    Figure 8-1 Reordering a row
    Figure 8-2 Calling sequence for reordering a row in a table view
    Listing 8-1 Excluding a row from relocation
    Listing 8-2 Updating the data-model array for the relocated row
    Listing 8-3 Retargeting the destination row of a move operation

About Table Views in iOS Apps

Table views are versatile user interface objects frequently found in iOS apps. A table view presents data in a scrollable list of multiple rows that may be divided into sections. Table views have many purposes:

● To let users navigate through hierarchically structured data
● To present an indexed list of items
● To display detail information and controls in visually distinct groupings
● To present a selectable list of options

Figure I-1 Table views of various kinds

A table view has only one column and allows vertical scrolling only. It consists of rows in sections. Each section can have a header and a footer that displays text or an image. However, many table views have only one section with no visible header or footer. Programmatically, the UIKit framework identifies rows and sections through their index numbers: sections are numbered 0 through n – 1 from the top of a table view to the bottom; rows are numbered 0 through n – 1 within a section. A table view can have its own header and footer, distinct from any section; the table header appears before the first row of the first section, and the table footer appears after the last row of the last section.

At a Glance

A table view is an instance of the UITableView class in one of two basic styles, plain or grouped. A plain table view is an unbroken list; a grouped table view has visually distinct sections. A table view has a data source and might have a delegate. The data source object provides the data for populating the sections and rows of the table view. The delegate object customizes its appearance and behavior.

Related chapters: "Table View Styles and Accessory Views"

Table Views Draw Their Rows Using Cells

A table view draws its visible rows using cells, that is, UITableViewCell objects. Cells are views that can display text, images, or other kinds of content. They can have background views for both normal and selected states. Cells can also have accessory views, which function as controls for selecting or setting an option.

The UIKit framework defines four standard cell styles, each with its own layout of the three default content elements: main label, detail label, and image. You may also create your own custom cells to acquire a distinctive style for your app's table views.

When you configure the attributes of a table view in the storyboard editor, you choose between two types of cell content: static cells or dynamic prototypes.

● Static cells. Use static cells to design a table with a fixed number of rows, each with its own layout.
Use static cells when you know what the table looks like at design time, regardless of the specific information it displays.

● Dynamic prototypes. Use dynamic prototypes to design one cell and then use it as the template for other cells in the table. Use a dynamic prototype when multiple cells in a table should use the same layout to display information. Dynamic prototype content is managed by the data source at runtime, with an arbitrary number of cells.

Related chapters: "Table View Styles and Accessory Views", "A Closer Look at Table View Cells"

Responding to Selections of Rows

When users select a row (by tapping it), the delegate of the table view is informed via a message. The delegate is passed the indexes of the row and the section that the row is in. It uses this information to locate the corresponding item in the app's data model. This item might be at an intermediate level in the hierarchy of data, or it might be a "leaf node" in the hierarchy. If the item is at an intermediate level, the app displays a new table view. If the item is a leaf node, the app displays details about the selected item in a grouped-style table view or some other kind of view.

In table views that list a series of options, tapping a row simply selects its associated option. No subsequent view of data is displayed.

Related chapters: "Navigating a Data Hierarchy with Table Views", "Managing Selections"

In Editing Mode You Can Add, Delete, and Reorder Rows

Table views can enter an editing mode in which users can insert or delete rows, or relocate them within the table. In editing mode, rows that are marked for insertion or deletion display a green plus sign (insertion) or a red minus sign (deletion) near the left edge of the row. If users touch a deletion control or, in some table views, swipe across a row, a red Delete button appears, prompting users to delete that row. Rows that can be relocated display (near their right edge) an image consisting of several horizontal lines. When the table view leaves editing mode, the insertion, deletion, and reordering controls disappear.

When users attempt to insert, delete, or reorder rows, the table view sends a sequence of messages to its data source and delegate so that they can manage these operations.

Related chapters: "Inserting and Deleting Rows and Sections", "Managing the Reordering of Rows"

To Create a Table View, Use a Storyboard

The easiest and recommended way to create and manage a table view is to use a custom UITableViewController object in a storyboard. If your app is based largely on table views, create your Xcode project using the Master-Detail Application template. This template includes an initial custom UITableViewController class and a storyboard for the scenes in the user interface, including the custom view controller and its table view. In the storyboard editor, choose one of the two styles for this table view and design its content.

At runtime, UITableViewController creates the table view and assigns itself as delegate and data source. Immediately after it's created, the table view asks its data source for the number of sections, the number of rows in each section, and the table view cell to use to draw each row. The data source manages the application data used for populating the sections and rows of the table view.
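For a table view backed by a simple array, the two required data source methods, plus the optional section count, might look like the minimal sketch below. The names array and the cell identifier are assumptions for illustration; the UITableViewDataSource methods themselves are standard.

// A minimal data source, assuming self.names is an NSArray of strings.
- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    return 1;
}

- (NSInteger)tableView:(UITableView *)tableView
 numberOfRowsInSection:(NSInteger)section {
    return [self.names count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"NameCell";
    UITableViewCell *cell =
        [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[UITableViewCell alloc]
                   initWithStyle:UITableViewCellStyleDefault
                 reuseIdentifier:CellIdentifier];
    }
    cell.textLabel.text = [self.names objectAtIndex:indexPath.row];
    return cell;
}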
Related chapters: "Navigating a Data Hierarchy with Table Views", "Creating and Configuring a Table View"

Prerequisites

Before reading this document, you should read Start Developing iOS Apps Today to understand the basic process for developing iOS apps. Then read View Controller Programming Guide for iOS for a comprehensive look at view controllers and storyboards. Finally, to gain valuable hands-on experience using table views in a storyboard, read the tutorial Your Second iOS App: Storyboards.

The information presented in this introduction and in "Table View Styles and Accessory Views" summarizes prescriptive information on table views presented in iOS Human Interface Guidelines. You can find a complete description of the styles and characteristics of table views, as well as their recommended uses, in the chapter "iOS UI Element Usage Guidelines".

See Also

You will find the following sample code projects to be instructive models for your own table view implementations:

● SimpleDrillDown project
● Table View Animations and Gestures project

For guidance on how to use the standard container view controllers provided by UIKit, see View Controller Catalog for iOS. This document describes split view controllers and navigation controllers, which can both contain table view controllers as children.

Table View Styles and Accessory Views

Table views come in distinctive styles that are suitable for specific purposes. In addition, the UIKit framework provides standard styles for the cells used to draw the rows of table views. It also gives you standard accessory views (that is, controls) that you can include in cells.

Table View Styles

There are two major styles of table views: plain and grouped. The two styles are distinguished mainly by appearance.

Plain Table Views

A table view in the plain (or regular) style displays rows that stretch across the screen and have a creamy white background (see Figure 1-1). A plain table view can have one or more sections, sections can have one or more rows, and each section can have its own header or footer title. (A header or footer may also have a custom view, for instance one containing an image.) When the user scrolls through a section with many rows, the header of the section floats to the top of the table view and the footer of the section floats to the bottom.

Figure 1-1 A table view in the plain style

A variation of plain table views associates an index with sections for quick navigation; Figure 1-2 shows an example of this kind of table view, which is called an indexed list. The index runs down the right edge of the table view. Entries in the index correspond to section header titles. Touching an item in the index scrolls the table view to the associated section. For example, the section headings could be two-letter state abbreviations, and the rows for a section could be the cities in that state; touching at a certain spot in the index displays the cities for the selected state. The rows in indexed lists should not have disclosure indicators or detail disclosure buttons, because these interfere with the index.

Figure 1-2 A table view configured as an indexed list
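A data source opts into the index by implementing two optional UITableViewDataSource methods. The sketch below is a minimal illustration, assuming a hypothetical sortedStateCodes array with one two-letter abbreviation per section; apps with localized data would typically use UILocalizedIndexedCollation instead.

// Minimal section-index support, assuming self.sortedStateCodes is an
// NSArray of two-letter abbreviations, one per section.
- (NSArray *)sectionIndexTitlesForTableView:(UITableView *)tableView {
    return self.sortedStateCodes;   // titles shown down the right edge
}

- (NSInteger)tableView:(UITableView *)tableView
sectionForSectionIndexTitle:(NSString *)title
               atIndex:(NSInteger)index {
    // Index entries map one-to-one onto sections in this sketch.
    return index;
}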
The simplest kind of table view is a selection list (see Figure 1-3). A selection list is a plain table view that presents a menu of options that users can select. It can limit the selection to one row or allow multiple selections. A selection list marks a selected row with a checkmark.

Figure 1-3 A table view configured as a selection list

Grouped Table Views

A grouped table view also displays a list of information, but it groups related rows in visually distinct sections. As shown in Figure 1-4, each section has rounded corners and by default appears against a bluish-gray background. Each section may have text or an image for its header or footer to provide some context or summary for the section. A grouped table view works especially well for displaying the most detailed information in a data hierarchy. It allows you to separate details into conceptual groups and provide contextual information to help users understand it quickly.

Figure 1-4 A table view in the grouped style

The headers and footers of sections in a grouped table view have relative locations and sizes as indicated in Figure 1-5.

Figure 1-5 Header and footer of a section (padding, header, table cells, footer, padding)

On iPad devices, a grouped table view automatically gets wider margins when the table view itself is wide.

Standard Styles for Table View Cells

In addition to defining two styles of table views, the UIKit framework defines four styles for the cells that a table view uses to draw its rows. You may create custom table view cells with different appearances if you want, but these four predefined cell styles are suitable for most purposes. The techniques for creating table view cells in a predefined style and for creating custom cells are described in "A Closer Look at Table View Cells".

The default style for table view rows uses a simple cell style that has a single title and an optional image (Figure 1-6). This style is associated with the UITableViewCellStyleDefault constant.

Figure 1-6 Default table row style

The cell style for the rows in Figure 1-7 left-aligns the main title and puts a gray subtitle under it. It also permits an image in the default image location. This style is associated with the UITableViewCellStyleSubtitle constant.

Figure 1-7 Table row style with a subtitle under the title

The cell style for the rows in Figure 1-8 left-aligns the main title. It puts the subtitle in blue text and right-aligns it on the right side of the row. Images are not permitted. This style is used in the Settings app, where the subtitle indicates the current setting for a preference. It is associated with the UITableViewCellStyleValue1 constant.

Figure 1-8 Table row style with a right-aligned subtitle
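Creating a cell in one of the predefined styles takes a single initializer. This fragment is a minimal sketch; the reuse identifier, text, and image name are assumptions for illustration.

// Create a cell in the subtitle style; the other three styles are
// created the same way with a different style constant.
UITableViewCell *cell =
    [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle
                           reuseIdentifier:@"TrailCell"];
cell.textLabel.text = @"Sylvan Trail Loop";            // main label
cell.detailTextLabel.text = @"Edgewood City Park";     // gray subtitle
cell.imageView.image = [UIImage imageNamed:@"trail"];  // optional image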
The cell style for the rows in Figure 1-9 puts the main title in blue and right-aligns it at a point that's indented from the left side of the row. The subtitle is left-aligned at a short distance to the right of this point. This style does not allow images. It is used in the Contacts part of the Phone app and is associated with the UITableViewCellStyleValue2 constant.

Figure 1-9 Table row style in Contacts format

Accessory Views

There are three standard kinds of accessory views, shown here with their accessory-type constants:

● Disclosure indicator (UITableViewCellAccessoryDisclosureIndicator). You use the disclosure indicator when selecting a cell results in the display of another table view reflecting the next level in the data-model hierarchy.

● Detail disclosure button (UITableViewCellAccessoryDetailDisclosureButton). You use the detail disclosure button when selecting a cell results in a detail view of that item (which may or may not be a table view).

● Checkmark (UITableViewCellAccessoryCheckmark). You use a checkmark when a touch on a row results in the selection of that item. This kind of table view is known as a selection list, and it is analogous to a pop-up list. Selection lists can limit selections to one row, or they can allow multiple rows with checkmarks.

Instead of the standard accessory views, you may specify a control (for example, a switch) or a custom view as the accessory view.

Overview of the Table View API

The table view programming interface includes several UIKit classes, two formal protocols, and a category added to a Foundation framework class.

Table View

A table view itself is an instance of the UITableView class. You use its methods to configure the appearance of the table view, for example, specifying the default height of rows or providing a subview used as the header for the table. Other methods give you access to the currently selected row as well as specific rows or cells. You can call other methods of UITableView to manage selections, scroll the table view, and insert or delete rows and sections.

UITableView inherits from the UIScrollView class, which defines scrolling behavior for views with content larger than the size of the window. UITableView redefines the scrolling behavior to allow vertical scrolling only.

Table View Controller

The UITableViewController class manages a table view and adds support for many standard table-related behaviors such as selection management, row editing, table configuration, and others. This additional support is there to minimize the amount of code you have to write to create and initialize your table-based interface. You don't use this class directly; instead, you subclass UITableViewController to add custom behaviors.

Data Source and Delegate

A UITableView object must have a delegate and a data source. Following the Model-View-Controller design pattern, the data source mediates between the app's data model (that is, its model objects) and the table view. The delegate, on the other hand, manages the appearance and behavior of the table view.
The data source and the delegate are often (but not necessarily) the same object, and that object is usually a custom subclass of UITableViewController. (See "Navigating a Data Hierarchy with Table Views" for further information.)

The data source adopts the UITableViewDataSource protocol. UITableViewDataSource has two required methods. The tableView:numberOfRowsInSection: method tells the table view how many rows to display in each section, and the tableView:cellForRowAtIndexPath: method provides the cell to display for each row in the table. Optional methods allow the data source to configure multiple sections, provide headers and footers, and support adding, removing, and reordering rows in the table.

The delegate adopts the UITableViewDelegate protocol. This protocol has no required methods. It declares methods that allow the delegate to modify visible aspects of the table view, manage selections, support an accessory view, and support editing of individual rows in a table.

An app can make use of the convenience class UILocalizedIndexedCollation to help the data source organize the data for indexed lists and display the proper section when users tap an item in the index. The UILocalizedIndexedCollation class also localizes section titles.

Extension to the NSIndexPath Class

Many table view methods use index paths as parameters or return values. An index path identifies a path to a specific node in a tree of nested arrays, and in the Foundation framework it is represented by an NSIndexPath object. UIKit declares a category on NSIndexPath with methods that return key paths, locate rows in sections, and construct NSIndexPath objects from row and section indexes. For more information, see NSIndexPath UIKit Additions.

Table View Cells

As noted in "Data Source and Delegate", the data source must return a cell object for each visible row that a table view displays. These cell objects must inherit from the UITableViewCell class. This class includes methods for managing cell selection and editing, managing accessory views, and configuring the cell. You can instantiate cells directly in the standard styles defined by the UITableViewCell class and give these cells content consisting of one or two strings of text and, in some styles, both image and text. Instead of using a cell in a standard style, you can put your own custom subviews in the content view of an "off-the-shelf" cell object. You may also subclass UITableViewCell to customize the appearance and behavior of table view cells. These approaches are all discussed in "A Closer Look at Table View Cells".

Navigating a Data Hierarchy with Table Views

A common use of table views, and one to which they're ideally suited, is to navigate hierarchies of data. A table view at a top level of the hierarchy lists categories of data at the most general level. Users select a row to "drill down" to the next level in the hierarchy. At the bottom of the hierarchy is a view (often a table view) that presents details about a specific item (for example, an address book record) and may allow users to edit the item. This chapter explains how you can map the levels of the data-model hierarchy to a succession of table views and describes how you can use the facilities of the UIKit framework to help you implement such navigation-based apps.
Hierarchical Data Models and Table Views

For a navigation-based app, you typically design your app data as a graph of model objects that is sometimes referred to as the app's data model. You can then implement the model layer of your app using various mechanisms or technologies, including Core Data, property lists, or archives of custom objects. Regardless of the approach, the traversal of your app's data model follows patterns that are common to all navigation-based apps. The data model has hierarchical depth, and objects at various levels of this hierarchy should be the source for populating the rows of a table view.

Note: To learn about the Core Data technology and framework, see Core Data Starting Point.

The Data Model as a Hierarchy of Model Objects

A well-designed app factors its classes and objects in a way that conforms to the Model-View-Controller (MVC) design pattern. The app's data model consists of the model objects in this pattern. You can describe model objects (using the terminology provided by the object-modeling pattern) in terms of their properties. These properties are of two general kinds: attributes and relationships.

Note: The notion of "property" here is abstractly related to, but not identical with, the declared property feature of Objective-C. A class definition typically represents properties programmatically through instance variables and declared properties. For more on declared properties, see The Objective-C Programming Language. To find out more about MVC and object modeling, read "Cocoa Design Patterns" in Cocoa Fundamentals Guide.

Attributes represent elements of model-object data. Attributes can range from an instance of a primitive class (for example, an NSString, NSDate, or UIColor object) to a C structure or a simple scalar value. Attributes are generally what you use to populate a table view that represents a "leaf node" of the data hierarchy and that presents a detail view of that item.

A model object may also have relationships with other model objects. It is through these relationships that a data model acquires hierarchical depth by composing an object graph. Relationships are of two general kinds in terms of cardinality: to-one and to-many. To-one relationships define an object's relationship with another object (for example, a parent relationship). A to-many relationship, on the other hand, defines an object's relationship with multiple objects of the same kind. The to-many relationship is characterized by containment and can be programmatically represented by collections such as NSArray objects (or, simply, arrays). An array might contain other arrays, or it could contain multiple dictionaries, which are collections that identify their contained values through keys. Dictionaries, in turn, can contain one or more other collections, including arrays, sets, and even other dictionaries. As collections nest in other collections, your data model can acquire hierarchical depth.

Table Views and the Data Model

The rows of a plain table view are typically backed by collection objects of the app's data model; these objects are usually arrays. Arrays contain strings or other elements that a table view can use when displaying row content.
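As a concrete illustration of such nesting, the hiking-trails data shown later in Figure 3-1 could be built from nothing more than arrays and dictionaries. The keys and values in this minimal sketch are assumptions drawn from that figure.

// Build the data model of Figure 3-1: a regions array of trails
// arrays, where each trail is a dictionary of attributes.
NSDictionary *trail = [NSDictionary dictionaryWithObjectsAndKeys:
    @"Sylvan Trail Loop", @"Name",
    @"Edgewood City Park (Redwood City)", @"Location",
    [NSNumber numberWithInt:2], @"Distance",
    @"Moderate", @"Difficulty",
    nil];
NSArray *peninsulaTrails = [NSArray arrayWithObject:trail];
NSArray *eastBayTrails = [NSArray array];    // populated similarly
NSArray *northBayTrails = [NSArray array];
NSArray *southBayTrails = [NSArray array];
NSArray *regions = [NSArray arrayWithObjects:
    eastBayTrails, northBayTrails, peninsulaTrails, southBayTrails, nil];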
When you create a table view (described in "Creating and Configuring a Table View"), it immediately queries its data source for its dimensions, that is, it requests the number of sections and the number of rows per section, and then it asks for the content of each row. The data source fetches this content from an array in the appropriate level of the data-model hierarchy.

In many of the methods defined for a table view's data source and delegate, the table view passes in an index path to identify the section and row that is the focus of the current operation, for example, fetching content for a row or indicating the row the user tapped. An index path is an instance of the Foundation framework's NSIndexPath class that you can use to identify an item in a tree of nested arrays. The UIKit framework extends NSIndexPath to add a section and a row property to the class. The data source should use these properties to map a section and row of the table view to a value at the corresponding index of the array being used as the table view's source of data.

Note: The UIKit framework extension of the NSIndexPath class is described in NSIndexPath UIKit Additions.

In the sequence of table views in Figure 3-1, the top level of the data hierarchy is an array of four arrays, with each inner array containing objects representing the trails for a particular region. When the user selects one of these regions, the next table view lists names identifying the trails within the selected array. When the user selects a particular trail, the next table view describes that trail using a grouped table view.

Figure 3-1 Mapping levels of the data model to table views. A regions array (East Bay, North Bay, Peninsula, South Bay) contains trails arrays (Alambique-Skyline, Sweeny Ridge, Sawyer Camp Trail, Purisima Creek, Dean-Crystal Springs, Sylvan Trail Loop), and each trail is a dictionary of key-value pairs: "Name" = "Sylvan Trail Loop", "Location" = "Edgewood City Park (Redwood City)", "Distance" = 2, "Difficulty" = "Moderate", "Restrictions" = "No bicycles, pets, or horses", "Map" = pen_map6.png, and other key-value pairs.

Note: You could easily redesign the app in Figure 3-1 to have only two table views. The first table view would be an indexed list of trails by region. The second table view would display the detail for a selected trail.

View Controllers and Navigation-Based Apps

The UIKit framework provides a number of view controller classes for managing common user interface patterns in iOS. View controllers are controller objects that inherit from the UIViewController class. They are an essential tool for view management, especially when an app uses those views to present successive levels of its data hierarchy. This section describes how two subclasses of UIViewController, navigation controllers and table view controllers, present and manage a succession of table views.

Note: This section gives an overview of view controllers to provide some background for the coding tasks discussed later in this document. To learn about view controllers in depth, see View Controller Programming Guide for iOS.

Navigation Controllers

The UINavigationController class inherits from UIViewController, a base class that defines the common programmatic interface and behavior for controller objects that manage views in iOS.
Through inheritance from this base class, a view controller acquires an interface for general view management. After it implements parts of this interface, a view controller can autorotate its view, respond to low-memory notifications, overlay "modal" views, respond to taps on the Edit button, and otherwise manage the view.

A navigation controller maintains a stack of view controllers, one for each of the table views displayed (see Figure 3-2). It begins with what's known as the root view controller. When the user taps a row of the table view (often on a detail disclosure button), the root view controller pushes the next view controller onto the stack. The new view controller's table view visually slides into place from the right, and the navigation bar items are updated appropriately. When users tap the back button in the navigation bar, the current view controller is popped off the stack. As a consequence, the navigation controller displays the table view managed by the view controller that is now at the top of the stack.

Figure 3-2 Navigation controller and view controllers in a navigation-based app (a UINavigationController and its UINavigationBar manage a stack of UIViewController objects, each with its own UITableView)

Navigation Bars

Navigation bars are a user-interface device that enables users to navigate a hierarchy of data. Users start with general, top-level items and "drill down" the hierarchy to detailed views showing specific properties of leaf-node items. The view below the navigation bar presents the current level of data. A navigation bar includes a title for the current view and, if that view is lower in the hierarchy than the top level, a back button on the left side of the bar; the back button is a navigation control that the user taps to return to the previous level. (The back button by default displays the title for the previous view.) A navigation bar may also have an Edit button (used to enter editing mode for the current view) or custom buttons for functions that manage content (see Figure 3-3).

Figure 3-3 Navigation bars and common control items (a navigational back control on the left, controls to manage content on the right)

A UINavigationController manages the navigation bar, including the items that are displayed in the bar for the view below it. A UIViewController object manages a view displayed below the navigation bar. For this view controller, you create a subclass of UIViewController or a subclass of a view controller class that the UIKit framework provides for managing a particular type of view. For table views, this view controller class is UITableViewController. For a navigation controller that displays a sequence of table views reflecting levels within a data hierarchy, you need to create a separate custom table view controller for each table view.

The UIViewController class includes methods that let view controllers access and set the navigation items displayed in the navigation bar for the currently displayed table view. This class also declares a title property through which you can set the title of the navigation bar for the current table view.
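Pushing the next level of the hierarchy onto the stack takes only a few lines. In this hedged sketch, TrailListViewController and its trails property, along with the regions and regionNames arrays, are hypothetical; pushViewController:animated: and the title property are standard UIKit API.

// In the regions table view controller's selection handler:
- (void)tableView:(UITableView *)tableView
didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    TrailListViewController *nextController =
        [[TrailListViewController alloc] initWithStyle:UITableViewStylePlain];
    nextController.trails = [self.regions objectAtIndex:indexPath.row];
    nextController.title = [self.regionNames objectAtIndex:indexPath.row];
    [self.navigationController pushViewController:nextController
                                         animated:YES];
}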
Table View Controllers

Although you could manage a table view using a direct subclass of UIViewController, you save yourself a lot of work if instead you subclass UITableViewController. The UITableViewController class takes care of many of the details you would have to implement if you created a direct subclass of UIViewController to manage a table view.

The recommended way to create a table view controller is to specify it in a storyboard. The associated table view is loaded from the storyboard, along with the table view’s attributes, size, and autoresizing characteristics. The table view controller sets itself as the data source and the delegate of the table view.

Note: You can create a table view controller programmatically by allocating memory for it and initializing it with the initWithStyle: method, passing in either UITableViewStylePlain or UITableViewStyleGrouped for the required table view style.

When the table view is about to appear for the first time, the table view controller sends reloadData to the table view, which prompts it to request data from its data source. The data source tells the table view how many sections and rows per section it wants, and then gives the table view the data to display in each row. This process is described in “Creating and Configuring a Table View” (page 36).

The UITableViewController class also performs other common tasks. It clears selections when the table view is about to be displayed and flashes the scroll indicators when the table finishes displaying. In addition, it responds properly when users tap the Edit button by putting the table view into editing mode (or taking it out of editing mode if users tap Done). The class exposes one property, tableView, which gives you access to the managed table view.

Note: A table view controller supports inline editing of table view rows; if, for example, rows have embedded text fields in editing mode, it scrolls the row being edited above the virtual keyboard that is displayed. It also supports the NSFetchedResultsController class for managing the results returned from a Core Data fetch request.

The UITableViewController class implements the foregoing behavior by overriding loadView, viewWillAppear:, and other methods inherited from UIViewController. In your subclass of UITableViewController, you may also override these methods to acquire specialized behavior. If you do override these methods, be sure to invoke the superclass implementation of the method, usually as the first method call, to get the default behavior.

Note: You should use a UIViewController subclass rather than a subclass of UITableViewController to manage a table view if the view to be managed is composed of multiple subviews, only one of which is a table view. The default behavior of the UITableViewController class is to make the table view fill the screen between the navigation bar and the tab bar (if either are present).

If you decide to use a UIViewController subclass rather than a subclass of UITableViewController to manage a table view, you should perform a couple of the tasks mentioned above to conform to the human interface guidelines. To clear any selection in the table view before it’s displayed, implement the viewWillAppear: method to clear the selected row (if any) by calling deselectRowAtIndexPath:animated:. After the table view has been displayed, you should flash the scroll view’s scroll indicators by sending a flashScrollIndicators message to the table view; you can do this in an override of the viewDidAppear: method of UIViewController.
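A minimal sketch of these two overrides, assuming the controller keeps the table view in a tableView property:

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // Clear the current selection, if any, before the table view appears.
    NSIndexPath *selection = [self.tableView indexPathForSelectedRow];
    if (selection) {
        [self.tableView deselectRowAtIndexPath:selection animated:NO];
    }
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // Briefly show the scroll indicators once the table view is onscreen.
    [self.tableView flashScrollIndicators];
}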
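In an app that performs this transition in code rather than through a storyboard segue, the delegate method might look like the following sketch; NextViewController, its data property, and the dataArray property are placeholder names:

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    // Create the next view controller in the sequence and hand it its data.
    NextViewController *nextViewController = [[NextViewController alloc] initWithStyle:UITableViewStyleGrouped];
    nextViewController.data = [self.dataArray objectAtIndex:indexPath.row];
    // Push it onto the navigation stack; its table view slides in from the right.
    [self.navigationController pushViewController:nextViewController animated:YES];
}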
A storyboard provides the specification that allows UIKit to perform most of this work for you. Storyboards represent the screens in an app and the transitions between them. The storyboard in a basic app may contain just a few screens, but a more complex app might have multiple storyboards, each of which represents a different subset of its screens. The storyboard example in Figure 3-4 presents a graphical representation of each scene, its contents, and its connections.

Figure 3-4 A storyboard with two table view controllers

A scene represents an onscreen content area that is managed by a view controller. (In the context of a storyboard, scene and view controller are synonymous terms.) The leftmost scene in the default storyboard represents a navigation controller. A navigation controller is a container view controller because, in addition to its views, it also manages a set of other view controllers. For example, the navigation controller in Figure 3-4 (page 33) manages the master and detail view controllers, in addition to the navigation bar and the back button that you see when you run the app.

A relationship is a type of connection between scenes. In Figure 3-4, there is a relationship between the navigation controller and the master scene. In this case, the relationship represents the containment of the master and detail scenes by the navigation controller. When the app runs, the navigation controller automatically loads the master scene and displays the navigation bar at the top of the screen.

A segue represents a transition from one scene (called the source) to the next scene (called the destination). For example, in Figure 3-4, the master scene is the source and the detail scene is the destination. When you select the Detail item in the master list, you trigger a segue from the source to the destination. In this case, the segue is a push segue, which means that the destination scene slides over the source scene from right to left. As the detail screen is revealed, a back button appears at the left end of the navigation bar, titled with the previous screen’s title (in this case, “Master”). The back button is provided automatically by the navigation controller that manages the master-detail hierarchy.

Storyboards make it easy to pass data from one scene to another via the prepareForSegue:sender: method of the UIViewController class. This method is called when the first scene (the source) is about to transition to the next scene (the destination). The source view controller can implement prepareForSegue:sender: to perform setup tasks, such as passing information to the destination view controller about what it should display in its table view. Listing 3-1 shows one implementation of this method.

Listing 3-1 Passing data to a destination view controller

- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {
    if ([[segue identifier] isEqualToString:@"ShowDetails"]) {
        MyDetailViewController *detailViewController = [segue destinationViewController];
        NSIndexPath *indexPath = [self.tableView indexPathForSelectedRow];
        detailViewController.data = [self.dataController objectInListAtIndex:indexPath.row];
    }
}

A segue represents a one-way transition from a source scene to a destination scene. One of the consequences of this design is that you can use a segue to pass data to a destination, but you can’t use a segue to send data from a destination to its source. To solve this problem, you create a delegate protocol that declares methods that the destination view controller calls when it needs to pass back some data. Listing 3-2 shows one implementation of a protocol for passing data back to a source view controller.

Listing 3-2 Passing data to a source view controller

@protocol MyAddViewControllerDelegate <NSObject>
- (void)addViewControllerDidCancel:(MyAddViewController *)controller;
- (void)addViewControllerDidFinish:(MyAddViewController *)controller data:(NSString *)item;
@end

- (void)addViewControllerDidCancel:(MyAddViewController *)controller {
    [self dismissViewControllerAnimated:YES completion:NULL];
}

- (void)addViewControllerDidFinish:(MyAddViewController *)controller data:(NSString *)item {
    if ([item length]) {
        [self.dataController addData:item];
        [[self tableView] reloadData];
    }
    [self dismissViewControllerAnimated:YES completion:NULL];
}
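For this pattern to work, the destination view controller needs a reference back to its source. A common arrangement, sketched here with the names from Listing 3-2, is a weak delegate property on the destination that the source sets in its prepareForSegue:sender: method; the textField outlet is a hypothetical control in the Add screen:

@interface MyAddViewController : UITableViewController
// Set by the source view controller in its prepareForSegue:sender: method.
@property (nonatomic, weak) id <MyAddViewControllerDelegate> delegate;
@property (nonatomic, weak) IBOutlet UITextField *textField; // hypothetical outlet
@end

// In MyAddViewController, the Cancel and Done buttons call back to the source:
- (IBAction)cancel:(id)sender {
    [self.delegate addViewControllerDidCancel:self];
}

- (IBAction)done:(id)sender {
    [self.delegate addViewControllerDidFinish:self data:self.textField.text];
}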
Note: The full details of creating storyboards are described in Xcode 4 User Guide. To learn more about using view controllers in storyboards, see View Controller Programming Guide for iOS.

Design Pattern for Navigation-Based Apps

A navigation-based app with table views should follow these design best practices:

● A view controller (typically a subclass of UITableViewController), acting in the role of data source, populates its table view with data from an object representing a level of the data hierarchy. When the table view displays a list of items, the object is typically an array. When the table view displays item detail (that is, a leaf node of the data hierarchy), the object can be a custom model object, a Core Data managed object, a dictionary, or something similar.

● The view controller stores the data it needs for populating its table view. The view controller can use this data directly for populating the table view, or it can use it to fetch or otherwise obtain the necessary data. When you design your view controller subclass, you should define a property to hold this data. View controllers should not obtain the data for their table view through a global variable or a singleton object such as the app delegate. Such direct dependencies make your code less reusable and more difficult to test and debug.

● The current view controller on top of the navigation-controller stack creates the next view controller in the sequence and, before it pushes it onto the stack, sets the data that this view controller, acting as data source, needs to populate its table view.

Creating and Configuring a Table View

Your app must present a table view to users before it can manage it in response to taps on rows and other actions. This chapter shows what you must do to create a table view, configure it, and populate it with data. Most of the code examples shown in this chapter come from the sample projects TableViewSuite and TheElements.

Basics of Table View Creation

To create a table view, several entities in an app must interact: the view controller, the table view itself, and the table view’s data source and delegate. The view controller, data source, and delegate are usually the same object. The view controller starts the calling sequence, diagrammed in Figure 4-1 (page 37).

1. The view controller creates a UITableView instance in a certain frame and style. It can do this either programmatically or in a storyboard. The frame is usually set to the screen frame, minus the height of the status bar or, in a navigation-based app, to the screen frame minus the heights of the status bar and the navigation bar. The view controller may also set global properties of the table view at this point, such as its autoresizing behavior or a global row height. To learn how to create table views in a storyboard and programmatically, see “Creating a Table View Using a Storyboard” (page 38) and “Creating a Table View Programmatically” (page 43).

2. The view controller sets the data source and delegate of the table view and sends a reloadData message to it. The data source must adopt the UITableViewDataSource protocol, and the delegate must adopt the UITableViewDelegate protocol.

3. The data source receives a numberOfSectionsInTableView: message from the UITableView object and returns the number of sections in the table view. Although this is an optional protocol method, the data source must implement it if the table view has more than one section.

4. For each section, the data source receives a tableView:numberOfRowsInSection: message and responds by returning the number of rows for the section.

5. The data source receives a tableView:cellForRowAtIndexPath: message for each visible row in the table view. It responds by configuring and returning a UITableViewCell object for each row. The UITableView object uses this cell to draw the row. (A minimal skeleton of the data source methods for steps 3 through 5 follows this list.)
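Steps 3 through 5 map directly onto data source methods; a minimal skeleton, assuming a sectionData array-of-arrays property that holds one array of strings per section:

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    // Step 3: the number of sections is the count of the outer array.
    return [self.sectionData count];
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    // Step 4: the number of rows is the count of the inner array for this section.
    return [[self.sectionData objectAtIndex:section] count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    // Step 5: configure and return a cell for this row.
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:@"Cell"];
    }
    cell.textLabel.text = [[self.sectionData objectAtIndex:indexPath.section] objectAtIndex:indexPath.row];
    return cell;
}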
Figure 4-1 Calling sequence for creating and configuring a table view (the client creates the table view with initWithFrame:style: and sets the data source and delegate; the table view then sends the data source numberOfSectionsInTableView:, tableView:numberOfRowsInSection:, and tableView:cellForRowAtIndexPath:)

The diagram in Figure 4-1 shows the required protocol methods as well as the numberOfSectionsInTableView: method. Populating the table view with data occurs in steps 3 through 5. To learn how to implement the methods in these steps, see “Populating a Dynamic Table View with Data” (page 44).

The data source and the delegate may implement other optional methods of their protocols to further configure the table view. For example, the data source might want to provide titles for each of the sections in the table view by implementing the tableView:titleForHeaderInSection: method. For more on some of these optional table view customizations, see “Optional Table View Configurations” (page 52).

You create a table view in either the plain style (UITableViewStylePlain) or the grouped style (UITableViewStyleGrouped). (You specify the style in a storyboard.) Although the procedure for creating a table view in either style is identical, you may want to perform different kinds of configurations. For example, because a grouped table view generally presents item detail, you may also want to add custom accessory views (for example, switches and sliders) or custom content (for example, text fields). For an example, see “A Closer Look at Table View Cells” (page 55).

Recommendations for Creating and Configuring Table Views

There are many ways to put together a table view app. For example, you can use an instance of a custom NSObject subclass to create, configure, and manage a table view. However, you will find the task much easier if you adopt the classes, techniques, and design patterns that the UIKit framework offers for this purpose. The following approaches are recommended:

● Use an instance of a subclass of UITableViewController to create and manage a table view. Most apps use a custom UITableViewController object to manage a table view. As described in “Navigating a Data Hierarchy with Table Views” (page 25), UITableViewController automatically creates a table view, assigns itself as both delegate and data source (and adopts the corresponding protocols), and initiates the procedure for populating the table view with data. It also takes care of several other “housekeeping” details of behavior. The behavior of UITableViewController (a subclass of UIViewController) within the navigation controller architecture is described in “Table View Controllers” (page 31).

● If your app is largely based on table views, select the Master-Detail Application template provided by Xcode when you create your project. As described in “Creating a Table View Using a Storyboard” (page 38), the template includes stub code and a storyboard defining an app delegate, the navigation controller, and the master view controller (which is an instance of a custom subclass of UITableViewController).

● For successive table views, you should implement custom UITableViewController objects. You can either load them from a storyboard or create the associated table views programmatically. Although either option is possible, the storyboard route is generally easier.
● If the view to be managed is a composite view in which a table view is one of multiple subviews, you must use a custom subclass of UIViewController to manage the table view (and other views). Do not use UITableViewController, because this controller class sizes the table view to fill the screen between the navigation bar and the tab bar (if either is present).

Creating a Table View Using a Storyboard

Create an app with a table view using Xcode. When you create your project, select a template that contains stub code and a storyboard that, by default, supply the structure for setting up and managing table views.

To create an app structured around table views

1. In Xcode, choose File > New > Project.
2. In the iOS section at the left side of the dialog, select Application.
3. In the main area of the dialog, select Master-Detail Application and then click Next.
4. Choose your project options (make sure Use Storyboard is selected), and then click Next.
5. Choose a save location for your project and then click Create.

Depending on which device family you chose in step 4, the project has one or two storyboards. To display the storyboard canvas, double-click a storyboard file in the project navigator. If the device family is iPhone, for example, your storyboard should contain a table view controller that looks similar to the one in Figure 4-2.

Figure 4-2 The master view controller in the Master-Detail Application storyboard

To make sure that the scene on the canvas represents the master view controller class in your code

1. On the canvas, click the scene’s title bar to select the table view controller.
2. Click the Identity button at the top of the utility area to open the Identity inspector.
3. Verify that the Class field contains the project’s custom subclass of UITableViewController.

Choose the Table View’s Display Style

As described in “Table View Styles” (page 12), every table view has a display style: plain or grouped.

To choose the display style of a table view in a storyboard

1. Click the center of the scene to select the table view.
2. In the utility area, display the Attributes inspector.
3. In the Table View section of the Attributes inspector, use the Style pop-up menu to choose Plain or Grouped.

Choose the Table View’s Content Type

Storyboards introduce two convenient ways to design a table view’s content:

● Dynamic prototypes. Design a prototype cell and then use it as the template for other cells in the table. Use a dynamic prototype when multiple cells in a table should use the same layout to display information. Dynamic content is managed by the table view data source (the table view controller) at runtime, with an arbitrary number of cells. Figure 4-3 shows a plain table view with one prototype cell.

Figure 4-3 A dynamic table view

Note: If a table view in a storyboard is dynamic, the custom subclass of UITableViewController that contains the table view needs to implement the data source protocol. For more information, see “Populating a Dynamic Table View with Data” (page 44).
● Static cells. Use static content to design the overall layout of the table, including the total number of cells. A table view with static content has a fixed set of cells that you can configure at design time. You can also configure other static data elements such as section headers. Use static cells when a table does not change its layout, regardless of the specific information it displays. Figure 4-4 shows a grouped table view with three static cells.

Figure 4-4 A static table view

Note: If a table view in a storyboard is static, the custom subclass of UITableViewController that contains the table view should not implement the data source protocol. Instead, the table view controller should use its viewDidLoad method to populate the table view’s data. For more information, see “Populating a Static Table View With Data” (page 46).

By default, when you add a table view controller to a storyboard, the controller contains a table view that uses prototype-based cells. If you want to use static cells:

1. Select the table view.
2. Display the Attributes inspector.
3. In the Content pop-up menu, choose Static Cells.

If you’re designing a prototype cell, the table view needs a way to identify the prototype when the data source dequeues reusable cells for the table at runtime. Therefore you must assign a reuse identifier to the cell. In the Table View Cell section of the Attributes inspector, enter an ID in the Identifier text field. By convention, a cell’s reuse identifier should describe what the cell contains, such as BirdSightingCell.

Design the Table View’s Rows

As described in “Standard Styles for Table View Cells” (page 17), UIKit defines four styles for the cells that a table view uses to draw its rows. You can use one of the four standard styles, design a custom style, or subclass UITableViewCell to define additional behavior or properties for the cell. This topic is covered in detail in “A Closer Look at Table View Cells” (page 55).

A table view cell can also have an accessory, as described in “Accessory Views” (page 21). An accessory is a standard user interface element that UIKit draws at the right end of a table cell. For example, the disclosure indicator, which looks similar to a right angle bracket (>), tells users that tapping an item reveals related information in a new screen. In the Attributes inspector, use the Accessory pop-up menu to select a cell’s accessory.

Create Additional Table Views

If your app displays and manages more than one table view, add those table views to your storyboard. You add a table view by adding a custom UITableViewController object, which contains the table view it manages.

To add custom class files to your project

1. In Xcode, choose File > New > File.
2. In the iOS section at the left side of the dialog, select Cocoa Touch.
3. In the main area of the dialog, select Objective-C class, and then click Next.
4. Enter a name for your new class, choose subclass of UITableViewController, and then click Next.
5. Choose a save location for your class files, and then click Create.

To add a table view controller to a storyboard

1. Display the storyboard to which you want to add the table view controller.
2. Drag a table view controller out of the object library and drop it on the storyboard.
3. With the new scene still selected on the canvas, click the Identity button in the utility area to open the Identity inspector.
4. In the Custom Class section, choose the new custom class in the Class pop-up menu.
5. Set the new table view’s style and cell content (dynamic or static).
6. Create a segue to the new scene.

The details of step 6 vary depending on the project. To learn more about adding segues, see Xcode 4 User Guide.

Note: Populating a table view with data and configuring a table view are discussed in “Populating a Dynamic Table View with Data” (page 44) and “Optional Table View Configurations” (page 52).

Learn More by Creating a Sample App

The tutorial Your Second iOS App: Storyboards shows how to create a sample app that is structured around table views. After you complete the steps in this tutorial, you’ll have a working knowledge of how to create both dynamic and static table views using a storyboard. The tutorial creates a basic navigation-based app called BirdWatching that uses table view controllers connected by both push and modal segues.

Creating a Table View Programmatically

If you choose not to use UITableViewController for table view management, you must replicate what this class gives you “for free.”

Adopt the Data Source and Delegate Protocols

The class creating the table view typically makes itself the data source and delegate by adopting the UITableViewDataSource and UITableViewDelegate protocols. The adoption syntax appears just after the superclass in the @interface directive, as shown in Listing 4-1.

Listing 4-1 Adopting the data source and delegate protocols

@interface RootViewController : UIViewController <UITableViewDelegate, UITableViewDataSource>

@property (nonatomic, strong) NSArray *timeZoneNames;

@end

Create and Configure a Table View

The next step is for the client to allocate and initialize an instance of the UITableView class. Listing 4-2 gives an example of a client that creates a UITableView object in the plain style, specifies its autoresizing characteristics, and then sets itself to be both data source and delegate. Again, keep in mind that the UITableViewController does all of this for you automatically.

Listing 4-2 Creating a table view

- (void)loadView {
    UITableView *tableView = [[UITableView alloc] initWithFrame:[[UIScreen mainScreen] applicationFrame] style:UITableViewStylePlain];
    tableView.autoresizingMask = UIViewAutoresizingFlexibleHeight|UIViewAutoresizingFlexibleWidth;
    tableView.delegate = self;
    tableView.dataSource = self;
    [tableView reloadData];
    self.view = tableView;
}

Because in this example the class creating the table view is a subclass of UIViewController, it assigns the created table view to its view property, which it inherits from that class. It also sends a reloadData message to the table view, causing the table view to initiate the procedure for populating its sections and rows with data.

Populating a Dynamic Table View with Data

Just after a table view object is created, it receives a reloadData message, which tells it to start querying the data source and delegate for the information it needs for the sections and rows it displays. The table view immediately asks the data source for its logical dimensions—that is, the number of sections and the number of rows in each section.
It then repeatedly invokes the tableView:cellForRowAtIndexPath: method to get a cell object for each visible row; it uses this UITableViewCell object to draw the content of the row. (Scrolling a table view also causes an invocation of tableView:cellForRowAtIndexPath: for each newly visible row.)

As noted in “Choose the Table View’s Content Type” (page 40), if the table view is dynamic then you need to implement the required data source methods. Listing 4-3 shows an example of how the data source and the delegate could configure a dynamic table view.

Listing 4-3 Populating a dynamic table view with data

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    return [regions count];
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    // Number of rows is the number of time zones in the region for the specified section.
    Region *region = [regions objectAtIndex:section];
    return [region.timeZoneWrappers count];
}

- (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
    // The header for the section is the region name -- get this from the region at the section index.
    Region *region = [regions objectAtIndex:section];
    return [region name];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *MyIdentifier = @"MyReuseIdentifier";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:MyIdentifier];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:MyIdentifier];
    }
    Region *region = [regions objectAtIndex:indexPath.section];
    TimeZoneWrapper *timeZoneWrapper = [region.timeZoneWrappers objectAtIndex:indexPath.row];
    cell.textLabel.text = timeZoneWrapper.localeName;
    return cell;
}

The data source, in its implementation of the tableView:cellForRowAtIndexPath: method, returns a configured cell object that the table view can use to draw a row. For performance reasons, the data source tries to reuse cells as much as possible. It first asks the table view for a specific reusable cell object by sending it a dequeueReusableCellWithIdentifier: message. If no such object exists, the data source creates it, assigning it a reuse identifier. The data source sets the cell’s content (in this example, its text) and returns it. “A Closer Look at Table View Cells” (page 55) discusses this data source method and UITableViewCell objects in more detail.

If the dequeueReusableCellWithIdentifier: method asks for a cell that’s defined in a storyboard, the method always returns a valid cell. If there is not a recycled cell waiting to be reused, the method creates a new one using the information in the storyboard itself. This eliminates the need to check the return value for nil and create a cell manually.

The implementation of the tableView:cellForRowAtIndexPath: method in Listing 4-3 includes an NSIndexPath argument that identifies the table view section and row. UIKit declares a category of the NSIndexPath class, which is defined in the Foundation framework. This category extends the class to enable the identification of table view rows by section and row number. For more information on this category, see NSIndexPath UIKit Additions.
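Pulling these pieces together: when the cell is defined as a storyboard prototype, the final method of Listing 4-3 collapses to a sketch like this (the identifier must match the one entered in the storyboard):

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    // A storyboard-backed dequeue always returns a valid cell; no nil check is needed.
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"MyReuseIdentifier"];
    Region *region = [regions objectAtIndex:indexPath.section];
    TimeZoneWrapper *timeZoneWrapper = [region.timeZoneWrappers objectAtIndex:indexPath.row];
    cell.textLabel.text = timeZoneWrapper.localeName;
    return cell;
}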
Populating a Static Table View With Data

As noted in “Choose the Table View’s Content Type” (page 40), if a table view is static then you should not implement any data source methods. The configuration of the table view is known at compile time, so UIKit can get this information from the storyboard at runtime. However, you still need to populate a static table view with data from your data model. Listing 4-4 shows an example of how a table view controller could load user data in a static table view. This example is adapted from Your Second iOS App: Storyboards.

Listing 4-4 Populating a static table view with data

- (void)viewDidLoad {
    [super viewDidLoad];
    BirdSighting *theSighting = self.sighting;
    static NSDateFormatter *formatter = nil;
    if (formatter == nil) {
        formatter = [[NSDateFormatter alloc] init];
        [formatter setDateStyle:NSDateFormatterMediumStyle];
    }
    if (theSighting) {
        self.birdNameLabel.text = theSighting.name;
        self.locationLabel.text = theSighting.location;
        self.dateLabel.text = [formatter stringFromDate:(NSDate *)theSighting.date];
    }
}

The table view is populated with data in the UIViewController method viewDidLoad, which is called after the view is loaded into memory. The data is passed to the table view controller in the sighting object, which is set in the previous view controller’s prepareForSegue:sender: method. The properties birdNameLabel, locationLabel, and dateLabel are outlets connected to labels in the static table view (see Figure 4-4 (page 41)).

Populating an Indexed List

An indexed list (see Figure 1-2 (page 14)) is ideally suited for navigating large amounts of data organized by a conventional ordering scheme such as an alphabet. An indexed list is a table view in the plain style that is specially configured through three UITableViewDataSource methods:

● sectionIndexTitlesForTableView: Returns an array of the strings to use as the index entries (in order).
● tableView:titleForHeaderInSection: Maps these index strings to the titles of the table view’s sections (they don’t have to be the same).
● tableView:sectionForSectionIndexTitle:atIndex: Returns the section index related to the entry the user tapped in the index.

The data you use to populate an indexed list should be organized to reflect this indexing model. Specifically, you need to build an array of arrays. Each inner array corresponds to a section in the table. Section arrays are sorted (or collated) within the outer array according to the prevailing ordering scheme, which is often an alphabetical scheme (for example, A through Z). Additionally, the items in each section array are sorted. You can build and sort this array of arrays yourself, but fortunately the UILocalizedIndexedCollation class greatly simplifies the tasks of building and sorting these data structures and providing data to the table view. The class also collates items in the arrays according to the current localization. However you internally manage this array-of-arrays structure is up to you. The objects to be collated should have a property or method that returns a string value that the UILocalizedIndexedCollation class uses in collation; if it is a method, it should have no parameters.
You might find it convenient to define a custom model class whose instances represent the rows in the table view. These model objects not only return a string value but also define a property that holds the index of the section array to which the object is assigned. Listing 4-5 illustrates the definition of a class that declares a name property and a sectionNumber property.

Listing 4-5 Defining the model-object interface

@interface State : NSObject

@property(nonatomic,copy) NSString *name;
@property(nonatomic,copy) NSString *capitol;
@property(nonatomic,copy) NSString *population;
@property NSInteger sectionNumber;

@end

Before your table view controller is asked to populate the table view, you load the data to be used (from whatever source) and create instances of your model class from this data. The example in Listing 4-6 loads data defined in a property list and creates the model objects from that. It also obtains the shared instance of UILocalizedIndexedCollation and initializes the mutable array (states) that will contain the section arrays.

Listing 4-6 Loading the table-view data and initializing the model objects

- (void)viewDidLoad {
    [super viewDidLoad];
    UILocalizedIndexedCollation *theCollation = [UILocalizedIndexedCollation currentCollation];
    self.states = [NSMutableArray arrayWithCapacity:1];
    NSString *thePath = [[NSBundle mainBundle] pathForResource:@"States" ofType:@"plist"];
    NSArray *tempArray;
    NSMutableArray *statesTemp;
    if (thePath && (tempArray = [NSArray arrayWithContentsOfFile:thePath])) {
        statesTemp = [NSMutableArray arrayWithCapacity:1];
        for (NSDictionary *stateDict in tempArray) {
            State *aState = [[State alloc] init];
            aState.name = [stateDict objectForKey:@"Name"];
            aState.population = [stateDict objectForKey:@"Population"];
            aState.capitol = [stateDict objectForKey:@"Capitol"];
            [statesTemp addObject:aState];
        }
    } else {
        return;
    }

After the data source has this “raw” array of model objects, it can process it with the facilities of the UILocalizedIndexedCollation class. In Listing 4-7, the code is annotated with numbers.

Listing 4-7 Preparing the data for the indexed list

    // viewDidLoad continued...
    // (1)
    for (State *theState in statesTemp) {
        NSInteger sect = [theCollation sectionForObject:theState collationStringSelector:@selector(name)];
        theState.sectionNumber = sect;
    }
    // (2)
    NSInteger highSection = [[theCollation sectionTitles] count];
    NSMutableArray *sectionArrays = [NSMutableArray arrayWithCapacity:highSection];
    for (int i = 0; i < highSection; i++) {
        NSMutableArray *sectionArray = [NSMutableArray arrayWithCapacity:1];
        [sectionArrays addObject:sectionArray];
    }
    // (3)
    for (State *theState in statesTemp) {
        [(NSMutableArray *)[sectionArrays objectAtIndex:theState.sectionNumber] addObject:theState];
    }
    // (4)
    for (NSMutableArray *sectionArray in sectionArrays) {
        NSArray *sortedSection = [theCollation sortedArrayFromArray:sectionArray collationStringSelector:@selector(name)];
        [self.states addObject:sortedSection];
    }
} // end of viewDidLoad

Here’s what the code in Listing 4-7 does:
1. The data source enumerates the array of model objects and sends sectionForObject:collationStringSelector: to the collation manager on each iteration. This method takes as arguments a model object and a property or method of the object that it uses in collation. Each call returns the index of the section array to which the model object belongs, and that value is assigned to the sectionNumber property.

2. The data source then creates a (temporary) outer mutable array and mutable arrays for each section; it adds each created section array to the outer array.

3. It then enumerates the array of model objects and adds each object to its assigned section array.

4. The data source enumerates the array of section arrays and calls sortedArrayFromArray:collationStringSelector: on the collation manager to sort the items in each array. It passes in a section array and a property or method that is to be used in sorting the items in the array. Each sorted section array is added to the final outer array.

Now the data source is ready to populate its table view with data. It implements the methods specific to indexed lists as shown in Listing 4-8. In doing this it calls two UILocalizedIndexedCollation methods: sectionIndexTitles and sectionForSectionIndexTitleAtIndex:. Also note that in tableView:titleForHeaderInSection: it suppresses any headers from appearing in the table view when the associated section does not have any items.

Listing 4-8 Providing section-index data to the table view

- (NSArray *)sectionIndexTitlesForTableView:(UITableView *)tableView {
    return [[UILocalizedIndexedCollation currentCollation] sectionIndexTitles];
}

- (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
    if ([[self.states objectAtIndex:section] count] > 0) {
        return [[[UILocalizedIndexedCollation currentCollation] sectionTitles] objectAtIndex:section];
    }
    return nil;
}

- (NSInteger)tableView:(UITableView *)tableView sectionForSectionIndexTitle:(NSString *)title atIndex:(NSInteger)index {
    return [[UILocalizedIndexedCollation currentCollation] sectionForSectionIndexTitleAtIndex:index];
}

Finally, the data source should implement the UITableViewDataSource methods that are common to all table views. Listing 4-9 gives examples of these implementations, and illustrates how to use the section and row properties of the table view–specific category of the NSIndexPath class described in NSIndexPath UIKit Additions.

Listing 4-9 Populating the rows of an indexed list

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    return [self.states count];
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [[self.states objectAtIndex:section] count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"StateCell";
    UITableViewCell *cell;
    cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];
    }
    State *stateObj = [[self.states objectAtIndex:indexPath.section] objectAtIndex:indexPath.row];
    cell.textLabel.text = stateObj.name;
    return cell;
}

For table views that are indexed lists, when the data source assigns cells for rows in tableView:cellForRowAtIndexPath:, it should ensure that the accessoryType property of the cell is set to UITableViewCellAccessoryNone.

After initially populating the table view following the procedure outlined above, you can reload the contents of the index by calling the reloadSectionIndexTitles method.

Optional Table View Configurations

The table view API allows you to configure various visual and behavioral aspects of a table view, including specific rows and sections. The following examples serve to give you some idea of the options available to you.

Add a Custom Title

In the same block of code that creates the table view, you can apply global configurations using certain methods of the UITableView class. The code example in Listing 4-10 adds a custom title for the table view (using a UILabel object).

Listing 4-10 Adding a title to the table view

- (void)loadView {
    CGRect titleRect = CGRectMake(0, 0, 300, 40);
    UILabel *tableTitle = [[UILabel alloc] initWithFrame:titleRect];
    tableTitle.textColor = [UIColor blueColor];
    tableTitle.backgroundColor = [self.tableView backgroundColor];
    tableTitle.opaque = YES;
    tableTitle.font = [UIFont boldSystemFontOfSize:18];
    tableTitle.text = [curTrail objectForKey:@"Name"];
    self.tableView.tableHeaderView = tableTitle;
    [self.tableView reloadData];
}

Provide a Section Title

The example in Listing 4-11 returns a title string for a section.

Listing 4-11 Returning a title for a section

- (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
    // Returns section title based on physical state: [solid, liquid, gas, artificial]
    return [[[PeriodicElements sharedPeriodicElements] elementPhysicalStatesArray] objectAtIndex:section];
}

Indent a Row

The code in Listing 4-12 moves a specific row to the next level of indentation.

Listing 4-12 Custom indentation of a row

- (NSInteger)tableView:(UITableView *)tableView indentationLevelForRowAtIndexPath:(NSIndexPath *)indexPath {
    if (indexPath.section == TRAIL_MAP_SECTION && indexPath.row == 0) {
        return 2;
    }
    return 1;
}

Vary a Row’s Height

The example in Listing 4-13 varies the height of a specific row based on its index value.

Listing 4-13 Varying row height

- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath {
    CGFloat result;
    switch ([indexPath row]) {
        case 0: {
            result = kUIRowHeight;
            break;
        }
        case 1: {
            result = kUIRowLabelHeight;
            break;
        }
        default: {
            // Fall back to the standard row height for any other row.
            result = kUIRowHeight;
            break;
        }
    }
    return result;
}

Customize Cells

You can also affect the appearance of rows by returning custom UITableViewCell objects with specially formatted subviews for content in tableView:cellForRowAtIndexPath:. Cell customization is discussed in “A Closer Look at Table View Cells” (page 55).
A Closer Look at Table View Cells

A table view uses cell objects to draw its visible rows and then caches those objects as long as the rows are visible. Cells inherit from the UITableViewCell class. The table view’s data source provides the cell objects to the table view by implementing the tableView:cellForRowAtIndexPath: method, a required method of the UITableViewDataSource protocol. In this chapter, you’ll learn about:

● The characteristics of cells
● How to use the default capabilities of UITableViewCell for setting cell content
● How to create custom UITableViewCell objects

Characteristics of Cell Objects

A cell object has various parts, which can change depending on the mode of the table view. Normally, most of a cell object is reserved for its content: text, image, or any other kind of distinctive identifier. Figure 5-1 shows the major parts of a cell.

Figure 5-1 Parts of a table view cell (cell content on the left, accessory view on the right)

The smaller area on the right side of the cell is reserved for accessory views: disclosure indicators, detail disclosure controls, control objects such as sliders or switches, and custom views.

When the table view goes into editing mode, the editing control for each cell object (if it’s configured to have such a control) appears on its left side, in the area shown in Figure 5-2.

Figure 5-2 Parts of a table-view cell in editing mode (editing control on the left, cell content in the middle, reordering control on the right)

The editing control can be either a deletion control (a red minus sign inside a circle) or an insertion control (a green plus sign inside a circle). The cell’s content is pushed toward the right to make room for the editing control. If the cell object is configured for reordering (that is, relocation within the table view), the reordering control appears in the right side of the cell, next to any accessory view specified for editing mode. The reordering control is a stack of horizontal lines; to relocate a row within its table view, users press on the reordering control and drag the cell.

If a cell object is reusable—the typical case—you assign it a reuse identifier (an arbitrary string) in the storyboard. At runtime, the table view stores cell objects in an internal queue. When the table view asks the data source to configure a cell object for display, the data source can access the queued object by sending a dequeueReusableCellWithIdentifier: message to the table view, passing in a reuse identifier. The data source sets the content of the cell and any special properties before returning it. This reuse of cell objects is a performance enhancement because it eliminates the overhead of cell creation.

With multiple cell objects in a queue, each with its own identifier, you can have table views constructed from cell objects of different types. For example, some rows of a table view can have content based on the image and text properties of a UITableViewCell in a predefined style, while other rows can be based on a customized UITableViewCell that defines a special format for its content. When providing cells for the table view, there are three general approaches you can take. You can use ready-made cell objects in a range of styles, you can add your own subviews to the cell object’s content view (which can be done in Interface Builder), or you can use cell objects created from a custom subclass of UITableViewCell. Note that the content view is a container of other views and so displays no content itself.
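To illustrate the multiple-identifier point, here is a minimal sketch that keeps two kinds of cells in the reuse queue; the identifiers and the rule for choosing between them are assumptions:

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    // Choose an identifier (and a style) based on the kind of row.
    BOOL isHeaderRow = (indexPath.row == 0);
    NSString *identifier = isHeaderRow ? @"HeaderCell" : @"ItemCell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:identifier];
    if (cell == nil) {
        UITableViewCellStyle style = isHeaderRow ? UITableViewCellStyleSubtitle : UITableViewCellStyleDefault;
        cell = [[UITableViewCell alloc] initWithStyle:style reuseIdentifier:identifier];
    }
    // ...configure the cell's content here before returning it...
    return cell;
}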
Using Cell Objects in Predefined Styles

Using the UITableViewCell class directly, you can create “off-the-shelf” cell objects in a range of predefined styles. “Standard Styles for Table View Cells” (page 17) describes these standard cells and provides examples of how they look in a table view. These cells are associated with the following enum constants, declared in UITableViewCell.h:

typedef enum {
    UITableViewCellStyleDefault,
    UITableViewCellStyleValue1,
    UITableViewCellStyleValue2,
    UITableViewCellStyleSubtitle
} UITableViewCellStyle;

These cell objects have two kinds of content: one or more text strings and, in some cases, an image. Figure 5-3 shows the approximate areas for image and text. As an image expands to the right, it pushes the text in the same direction.

Figure 5-3 Default cell content in a UITableViewCell object (an image at the left edge of the cell content area, text to its right, and an accessory view at the right end)

The UITableViewCell class defines three properties for this cell content:

● textLabel—A label for the title (a UILabel object)
● detailTextLabel—A label for the subtitle if there is additional detail (a UILabel object)
● imageView—An image view for an image (a UIImageView object)

Because the first two of these properties are labels, you can set the font, alignment, line-break mode, and color of the associated text through the properties defined by the UILabel class (including the color of text when the row is highlighted). For the image view property, you can also set an alternative image for when the cell is highlighted using the highlightedImage property of the UIImageView class.

Figure 5-4 gives an example of a table view whose rows are drawn using a UITableViewCell object in the UITableViewCellStyleSubtitle style; it includes both an image and, for textual content, a title and a subtitle.

Figure 5-4 A table view with rows showing both images and text

Listing 5-1 shows the implementation of tableView:cellForRowAtIndexPath: that creates the table view rows in Figure 5-4 (page 58). The first thing the data source should do is send dequeueReusableCellWithIdentifier: to the table view, passing in a reuse identifier. If a prototype for the cell exists in a storyboard, the table view returns a reusable cell object. Then it sets the cell object’s content, both text and image.

Listing 5-1 Configuring a UITableViewCell object with both image and text

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"MyIdentifier"];
    if (cell == nil) {
        cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:@"MyIdentifier"];
        cell.selectionStyle = UITableViewCellSelectionStyleNone;
    }
    NSDictionary *item = (NSDictionary *)[self.content objectAtIndex:indexPath.row];
    cell.textLabel.text = [item objectForKey:@"mainTitleKey"];
    cell.detailTextLabel.text = [item objectForKey:@"secondaryTitleKey"];
    NSString *path = [[NSBundle mainBundle] pathForResource:[item objectForKey:@"imageKey"] ofType:@"png"];
    UIImage *theImage = [UIImage imageWithContentsOfFile:path];
    cell.imageView.image = theImage;
    return cell;
}

The table view’s data source implementation of tableView:cellForRowAtIndexPath: should always reset all content when reusing a cell.

When you configure a UITableViewCell object, you can also set various other properties, including (but not limited to) the following:

● selectionStyle—Controls the appearance of the cell when selected.
● accessoryType and accessoryView—Allow you to set one of the standard accessory views (disclosure indicator or detail disclosure control) or a custom accessory view for a cell in normal (nonediting) mode. For a custom view, you may provide any UIView object, such as a slider, a switch, or a custom view.
● editingAccessoryType and editingAccessoryView—Allow you to set one of the standard accessory views (disclosure indicator or detail disclosure control) or a custom accessory view for a cell in editing mode. For a custom view, you may provide any UIView object, such as a slider, a switch, or a custom view.
● showsReorderControl—Specifies whether it shows a reordering control when in editing mode. The related but read-only editingStyle property specifies the type of editing control the cell has (if any). The delegate returns the value of the editingStyle property in its implementation of the tableView:editingStyleForRowAtIndexPath: method.
● backgroundView and selectedBackgroundView—Provide a background view (when a cell is unselected and selected) to display behind all other views of the cell.
● indentationLevel and indentationWidth—Specify the indentation level for cell content and the width of each indentation level.

Because a table view cell inherits from UIView, you can also affect its appearance and behavior by setting the properties defined by that superclass. For example, to affect a cell’s background color, you could set its backgroundColor property. Listing 5-2 shows how you might use the delegate method tableView:willDisplayCell:forRowAtIndexPath: to alternate the background color of rows (via their backing cells) in a table view.

Listing 5-2 Alternating the background color of cells

- (void)tableView:(UITableView *)tableView willDisplayCell:(UITableViewCell *)cell forRowAtIndexPath:(NSIndexPath *)indexPath {
    if (indexPath.row % 2 == 0) {
        UIColor *altCellColor = [UIColor colorWithWhite:0.7 alpha:0.1];
        cell.backgroundColor = altCellColor;
    } else {
        // Reset the color so a reused, previously tinted cell doesn't keep it.
        cell.backgroundColor = [UIColor whiteColor];
    }
}

Listing 5-2 also illustrates an important aspect of the table view API. A table view sends a tableView:willDisplayCell:forRowAtIndexPath: message to its delegate just before it draws a row. If the delegate chooses to implement this method, it can make last-minute changes to the cell object before it is displayed. With this method, the delegate should change only state-based properties that were set earlier by the table view, such as selection and background color, and not content.

Customizing Cells

The four predefined styles of UITableViewCell objects suffice for most of the rows that table views display.
With these ready-made cell objects, rows can include one or two styles of text, often an image, and an accessory view of some sort. The application can modify the text in its font, color, and other characteristics, and it can supply an image for the row in its selected state as well as its normal state.

As flexible and useful as this cell content is, it might not satisfy the requirements of all applications. For example, the labels permitted by a native UITableViewCell object are pinned to specific locations within a row, and the image must appear on the left side of the row. If you want the cell to have different content components and to have these laid out in different locations, or if you want different behavioral characteristics for the cell, you have two alternatives:

● Add subviews to a cell’s content view.
● Create a custom subclass of UITableViewCell.

The following sections discuss both approaches.

Loading Table View Cells from a Storyboard

In a storyboard, the cells in a table view are dynamic or static. With dynamic content, the table view is a list with a large (and potentially unbounded) number of rows. With static content, the number of rows is a finite quantity that’s known at compile time. A table view that presents a detail view of an item is a good candidate for static content.

You can design dynamic or static cell content directly inside a table view object. Figure 5-5 shows the master and detail table views in a simple storyboard. In this example, the master table view contains dynamic prototype cells, and the detail table view contains static cells.

Figure 5-5 Table view cells in a storyboard

The following sections demonstrate how to load data into table views that contain custom-configured cells.

The Technique for Dynamic Row Content

In this section, you compose a custom prototype cell in a storyboard. At runtime, the data source dequeues cells, prepares them, and gives them to its table view for drawing the rows depicted in Figure 5-6.

Figure 5-6 Table view rows drawn with a custom prototype cell

The data source can use two different ways to access the subviews of the cells. One approach uses the tag property defined by UIView and the other approach uses outlets. Using tags is convenient, although it makes the code more fragile because it introduces a coupling between the tag numbers in the storyboard and the code. Using outlets requires a little more work because you need to define a custom subclass of UITableViewCell. Both approaches are described here.

To create a project that uses a storyboard to load custom table view cells

1. Create a project using the Master-Detail Application template and select the Use Storyboards option.
2. On the storyboard canvas, select the master view controller.
3. In the Identity inspector, verify that Class is set to the custom MasterViewController class.
4. Select the table view inside the master view controller.
5. In the Attributes inspector, verify that the Content pop-up menu is set to Dynamic Prototypes.
6. Select the prototype cell.
7. In the Attributes inspector, choose Custom in the Style pop-up menu.
8. Enter a reuse identifier in the Identifier text field.
This is the same reuse identifier you send to the table view in the dequeueReusableCellWithIdentifier: message. For an example, see Listing 5-3.

9. Choose Disclosure Indicator in the Accessory pop-up menu.
10. Drag objects from the Library onto the cell. For this example, drag two label objects and position them near the ends of the cell (leaving room for the accessory view).
11. Select the objects and set their attributes, sizes, and autoresizing characteristics. An important attribute to set for the programmatic portion of this procedure is each object’s tag property. Find this property in the View section of the Attributes inspector and assign each object a unique integer.

Now write the code you would normally write to obtain the table view’s data. (For this example, the only data you need is the row number of each cell.) Implement the data source method tableView:cellForRowAtIndexPath: to create a new cell from the prototype and populate it with data, in a manner similar to Listing 5-3.

Listing 5-3 Adding data to a cell using tags

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"MyIdentifier"];
    UILabel *label;
    label = (UILabel *)[cell viewWithTag:1];
    label.text = [NSString stringWithFormat:@"%d", indexPath.row];
    label = (UILabel *)[cell viewWithTag:2];
    label.text = [NSString stringWithFormat:@"%d", NUMBER_OF_ROWS - indexPath.row];
    return cell;
}

There are a few aspects of this code to note:

● The string identifier you assign to the prototype cell is the same string you pass to the table view in dequeueReusableCellWithIdentifier:.
● Because the prototype cell is defined in a storyboard, the dequeueReusableCellWithIdentifier: method always returns a valid cell. You don’t need to check the return value against nil and create a cell manually.
● The code gets the labels in the cell by calling viewWithTag:, passing in their tag integers. It can then set the textual content of the labels.

If you prefer not to use tags, you can use an alternative method for setting the content in the cell. Define a custom UITableViewCell subclass with outlet properties for the objects you want to set. In the storyboard, associate the new class with the prototype cell and connect the outlets to the corresponding objects in the cell.

To use outlets for the custom cell content

1. Add an Objective-C class named MyTableViewCell to your project.
2. Add the following code to the interface in MyTableViewCell.h:

@interface MyTableViewCell : UITableViewCell
@property (nonatomic, weak) IBOutlet UILabel *firstLabel;
@property (nonatomic, weak) IBOutlet UILabel *secondLabel;
@end

3. Add the following code to the implementation in MyTableViewCell.m:

@synthesize firstLabel, secondLabel;

4. Add the following line of code to the source file that implements the data source:

#import "MyTableViewCell.h"

5. Use the Identity inspector to set the Class of the prototype cell to MyTableViewCell.
6. Use the Connections inspector to connect the two outlets in the prototype cell to their corresponding labels.
7. Implement the data source method tableView:cellForRowAtIndexPath: in a manner similar to Listing 5-4.
Listing 5-4 Adding data to a cell using outlets

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    MyTableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"MyIdentifier"];
    cell.firstLabel.text = [NSString stringWithFormat:@"%d", indexPath.row];
    cell.secondLabel.text = [NSString stringWithFormat:@"%d", NUMBER_OF_ROWS - indexPath.row];
    return cell;
}

The code gains access to the labels in the cell using accessor methods (dot notation is used here). The code can then set the textual content of the labels.

The Technique for Static Row Content

In this section, you compose several cells in a table view with static content. At runtime, when the table view is loaded from the storyboard, the table view controller has immediate access to these cells and composes the sections and rows of the table view with them, as depicted in Figure 5-7.

Figure 5-7 Table view rows drawn with multiple cells

As with the procedure for dynamic content, start by adding a subclass of UITableViewController to your project. Define outlet properties for the master row label in the first cell and the slider value label in the last cell, as shown in Listing 5-5.

Listing 5-5 Defining outlet properties for static cell objects

@interface DetailViewController : UITableViewController
@property (strong, nonatomic) id detailItem;
@property (weak, nonatomic) IBOutlet UILabel *masterRowLabel;
@property (weak, nonatomic) IBOutlet UILabel *sliderValueLabel;
@property (weak, nonatomic) IBOutlet UISlider *slider;
- (IBAction)logHello;
- (IBAction)sliderValueChanged:(UISlider *)slider;
@end

In the storyboard, drag a Table View Controller object from the Library onto the canvas. Select the table view and set the following attributes in the Attributes inspector:

1. Set the Content pop-up menu to Static Cells.
2. Set the number of sections to 2.
3. Set the Style pop-up menu to Grouped.

For each section in the table view, use the Attributes inspector to enter a string in the Header field. Then for the cells, complete the following steps:

1. Delete two of the three cells in the first table-view section and one cell in the second section.
2. Increase the height of each remaining cell as needed. It isn’t necessary to assign reuse identifiers to these cells, because you’re not going to implement the data source method tableView:cellForRowAtIndexPath:.
3. Drag objects from the Library to compose the subviews of each cell as depicted in Figure 5-7.
4. Set any desired attributes of these objects. The slider in this example has a range of values from 0 to 10 with an initial value of 7.5.

Select the table view controller and display the Connections inspector. Make connections between the three outlets in your table view controller and the corresponding objects, as shown in Figure 5-8. While you’re at it, implement the two action methods declared in Listing 5-5 and make target-action connections to the button and the slider; a sketch of these action methods follows Figure 5-8.

Figure 5-8 Making connections to your static cell content
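The two action methods are declared in Listing 5-5 but never shown in the guide; here is a minimal sketch of what they might do. The NSLog message is an assumption, and the label format matches the configureView method shown next:

- (IBAction)logHello {
    // Hypothetical body; the guide only declares this action.
    NSLog(@"Hello");
}

- (IBAction)sliderValueChanged:(UISlider *)slider {
    // Mirror the slider's current value in the label.
    self.sliderValueLabel.text = [NSString stringWithFormat:@"%1.1f", slider.value];
}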
To populate the data in the static cells, implement a method called configureView in the detail view controller, as shown in Listing 5-6. In this example, detailItem is an NSString object passed in by the master view controller in its prepareForSegue:sender: method. The string contains the master row number.

Listing 5-6 Setting the data in the user interface

- (void)configureView {
    if (self.detailItem) {
        self.masterRowLabel.text = [self.detailItem description];
    }
    self.sliderValueLabel.text = [NSString stringWithFormat:@"%1.1f", self.slider.value];
}

The detail view controller calls the configureView method in viewDidLoad and setDetailItem:, as illustrated in the Xcode template Master-Detail Application.
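For context, here is a minimal sketch of the two methods mentioned above, patterned after the Master-Detail template. The segue identifier, the _detailItem ivar name, and the use of ARC are assumptions:

// In DetailViewController (sketch, assuming ARC and a synthesized _detailItem ivar).
- (void)setDetailItem:(id)newDetailItem {
    if (_detailItem != newDetailItem) {
        _detailItem = newDetailItem;
        // Refresh the static cells whenever the item changes.
        [self configureView];
    }
}

// In MasterViewController (sketch; @"showDetail" is a hypothetical segue identifier).
- (void)prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender {
    if ([[segue identifier] isEqualToString:@"showDetail"]) {
        NSIndexPath *indexPath = [self.tableView indexPathForSelectedRow];
        NSString *rowNumber = [NSString stringWithFormat:@"%d", indexPath.row];
        [[segue destinationViewController] setDetailItem:rowNumber];
    }
}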
Programmatically Adding Subviews to a Cell’s Content View

A cell that a table view uses for displaying a row is a view (UITableViewCell inherits from UIView). As a view, a cell has a content view—a superview for cell content—that it exposes as a property. To customize the appearance of rows in a table view, add subviews to the cell’s content view, which is accessible through its contentView property, and lay them out in the desired locations in their superview’s coordinates. You can configure and lay them out programmatically or in Interface Builder. (The approach using Interface Builder is discussed in “Loading Table View Cells from a Storyboard.”)

One advantage of this approach is its relative simplicity; it doesn’t require you to create a custom subclass of UITableViewCell and handle all of the implementation details required for custom views. However, if you do take this approach, avoid making the views transparent, if you can. Transparent subviews affect scrolling performance because of the increased compositing cost. Subviews should be opaque, and typically should have the same background color as the cell. And if the cell is selectable, make sure that the cell content is highlighted appropriately when selected. The content is highlighted automatically if the subview implements (where appropriate) the accessor methods for the highlighted property.

Suppose you want a cell with text and image content in custom locations. For example, you want the image on the right side of the cell and the title and subtitle of the cell right-aligned against the left side of the image. Figure 5-9 shows how a table view with rows drawn with such a cell might look. (This example is for illustration only, and is not intended as a human-interface model.)

Figure 5-9 Cells with custom content as subviews

The code example in Listing 5-7 illustrates how the data source programmatically composes the cell with which this table view draws its rows. In tableView:cellForRowAtIndexPath:, it first checks to see whether the table view already has a cell object with the given reuse identifier. If there is no such object, the data source creates two label objects and one image view with specific frames within the coordinate system of their superview (the content view). It also sets attributes of these objects. Having acquired an appropriate cell to use, the data source sets the cell’s content before returning the cell.

Listing 5-7 Adding subviews to a cell’s content view

#define MAINLABEL_TAG 1
#define SECONDLABEL_TAG 2
#define PHOTO_TAG 3

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"ImageOnRightCell";
    UILabel *mainLabel, *secondLabel;
    UIImageView *photo;
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
        cell.accessoryType = UITableViewCellAccessoryDetailDisclosureButton;
        mainLabel = [[[UILabel alloc] initWithFrame:CGRectMake(0.0, 0.0, 220.0, 15.0)] autorelease];
        mainLabel.tag = MAINLABEL_TAG;
        mainLabel.font = [UIFont systemFontOfSize:14.0];
        mainLabel.textAlignment = UITextAlignmentRight;
        mainLabel.textColor = [UIColor blackColor];
        mainLabel.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleHeight;
        [cell.contentView addSubview:mainLabel];
        secondLabel = [[[UILabel alloc] initWithFrame:CGRectMake(0.0, 20.0, 220.0, 25.0)] autorelease];
        secondLabel.tag = SECONDLABEL_TAG;
        secondLabel.font = [UIFont systemFontOfSize:12.0];
        secondLabel.textAlignment = UITextAlignmentRight;
        secondLabel.textColor = [UIColor darkGrayColor];
        secondLabel.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleHeight;
        [cell.contentView addSubview:secondLabel];
        photo = [[[UIImageView alloc] initWithFrame:CGRectMake(225.0, 0.0, 80.0, 45.0)] autorelease];
        photo.tag = PHOTO_TAG;
        photo.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin | UIViewAutoresizingFlexibleHeight;
        [cell.contentView addSubview:photo];
    } else {
        mainLabel = (UILabel *)[cell.contentView viewWithTag:MAINLABEL_TAG];
        secondLabel = (UILabel *)[cell.contentView viewWithTag:SECONDLABEL_TAG];
        photo = (UIImageView *)[cell.contentView viewWithTag:PHOTO_TAG];
    }
    NSDictionary *aDict = [self.list objectAtIndex:indexPath.row];
    mainLabel.text = [aDict objectForKey:@"mainTitleKey"];
    secondLabel.text = [aDict objectForKey:@"secondaryTitleKey"];
    NSString *imagePath = [[NSBundle mainBundle] pathForResource:[aDict objectForKey:@"imageKey"] ofType:@"png"];
    UIImage *theImage = [UIImage imageWithContentsOfFile:imagePath];
    photo.image = theImage;
    return cell;
}

When the data source creates the cells, it assigns each subview an identifier called a tag. With tags, you can locate a view in its view hierarchy by calling the viewWithTag: method. If the delegate later gets the designated cell from the table view’s queue, it uses the tags to obtain references to the three subviews prior to assigning them content.

This code creates a UITableViewCell object in the predefined default style (UITableViewCellStyleDefault). Because the content properties of the standard cells—textLabel, detailTextLabel, and imageView—are nil until assigned content, you may use any predefined cell as the template for customization.

Note: Another approach is to subclass UITableViewCell and create instances in the UITableViewCellStyleSubtitle style. Then override the layoutSubviews method to reposition the textLabel, detailTextLabel, and imageView subviews (after calling super).

One way to achieve “attributed string” effects with textual content is to lay out UILabel subviews of the UITableViewCell content view. The text of each label can have its own font, color, size, alignment, and other characteristics. If you want that kind of variation within a label object, create multiple labels and lay them out relative to each other.
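Returning to the subclassing approach mentioned in the note above, a minimal sketch might look as follows; the class name and frame values are illustrative assumptions:

@interface MySubtitleCell : UITableViewCell
@end

@implementation MySubtitleCell
// Create instances with initWithStyle:UITableViewCellStyleSubtitle.
- (void)layoutSubviews {
    // Let UITableViewCell perform its own layout first.
    [super layoutSubviews];
    // Then reposition the standard subviews (hypothetical geometry:
    // image on the right, labels right-aligned beside it).
    self.imageView.frame = CGRectMake(225.0, 0.0, 80.0, 45.0);
    self.textLabel.frame = CGRectMake(0.0, 0.0, 220.0, 15.0);
    self.detailTextLabel.frame = CGRectMake(0.0, 20.0, 220.0, 25.0);
}
@end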
Cells and Table View Performance

The proper use of table view cells, whether off-the-shelf or custom cell objects, is a major factor in the performance of table views. Ensure that your application does the following three things:

● Reuse cells. Object allocation has a performance cost, especially if the allocation has to happen repeatedly over a short period—say, when the user scrolls a table view. If you reuse cells instead of allocating new ones, you greatly enhance table view performance.
● Avoid relayout of content. When reusing cells with custom subviews, refrain from laying out those subviews each time the table view requests a cell. Lay out the subviews once, when the cell is created.
● Use opaque subviews. When customizing table view cells, make the subviews of the cell opaque, not transparent.

Managing Selections

When users tap a row of a table view, usually something happens as a result. Another table view could slide into place, the row could display a checkmark, or some other action could be performed. The following sections describe how to respond to selections and how to make selections programmatically.

Selections in Table Views

There are a few human-interface guidelines to keep in mind when dealing with cell selection in table views:

● You should never use selection to indicate state. Instead, use checkmarks and accessory views for showing state.
● When the user selects a cell, you should respond by deselecting the previously selected cell (by calling the deselectRowAtIndexPath:animated: method) as well as by performing any appropriate action, such as displaying a detail view.
● If you respond to the selection of a cell by pushing a new view controller onto the navigation controller’s stack, you should deselect the cell (with animation) when the view controller is popped off the stack; see the sketch at the end of this section.

You can control whether rows are selectable when the table view is in editing mode by setting the allowsSelectionDuringEditing property of UITableView. In addition, beginning with iOS 3.0, you can control whether cells are selectable when editing mode is not in effect by setting the allowsSelection property.
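Here is the sketch referred to above: one common way to honor the deselect-on-pop guideline when a plain UIViewController manages the list. It assumes self.tableView is an outlet to the table view; UITableViewController performs this for you when its clearsSelectionOnViewWillAppear property is YES.

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // Deselect, with animation, the row that was selected when the
    // detail view controller was pushed.
    NSIndexPath *selectedRow = [self.tableView indexPathForSelectedRow];
    if (selectedRow != nil) {
        [self.tableView deselectRowAtIndexPath:selectedRow animated:YES];
    }
}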
Responding to Selections

Users tap a row in a table view either to signal to the application that they want to know more about what that row signifies or to select what the row represents. In response to the user tapping a row, an application could do any of the following:

● Show the next level in a data-model hierarchy.
● Show a detail view of an item (that is, a leaf node of the data-model hierarchy).
● Show a checkmark in the row to indicate that the represented item is selected.
● If the touch occurred in a control embedded in the row, respond to the action message sent by the control.

To handle most selections of rows, the table view’s delegate must implement the tableView:didSelectRowAtIndexPath: method. In the sample implementation shown in Listing 6-1, the delegate first deselects the selected row. Then it allocates and initializes an instance of the next table-view controller in the sequence. It sets the data this view controller needs to populate its table view and then pushes this object onto the stack maintained by the application’s UINavigationController object.

Listing 6-1 Responding to a row selection

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    [tableView deselectRowAtIndexPath:indexPath animated:NO];
    BATTrailsViewController *trailsController = [[BATTrailsViewController alloc] initWithStyle:UITableViewStylePlain];
    trailsController.selectedRegion = [regions objectAtIndex:indexPath.row];
    [[self navigationController] pushViewController:trailsController animated:YES];
    [trailsController release];
}

If a row has a disclosure control—the white chevron over a blue circle—for an accessory view, clicking the control results in the delegate receiving a tableView:accessoryButtonTappedForRowWithIndexPath: message (instead of tableView:didSelectRowAtIndexPath:). The delegate responds to this message in the same general way as it does for other kinds of selections; a sketch of such a handler appears at the end of this section.

A row can also have a control object as its accessory view, such as a switch or a slider. This control object functions as it would in any other context: Manipulating the object in the proper way results in an action message being sent to a target object. Listing 6-2 illustrates a data source object that adds a UISwitch object as a cell’s accessory view and then responds to the action messages sent when the switch is “flipped.”

Listing 6-2 Setting a switch object as an accessory view and responding to its action message

- (UITableViewCell *)tableView:(UITableView *)tv cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tv dequeueReusableCellWithIdentifier:@"CellWithSwitch"];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:@"CellWithSwitch"] autorelease];
        cell.selectionStyle = UITableViewCellSelectionStyleNone;
        cell.textLabel.font = [UIFont systemFontOfSize:14];
    }
    UISwitch *switchObj = [[UISwitch alloc] initWithFrame:CGRectMake(1.0, 1.0, 20.0, 20.0)];
    switchObj.on = YES;
    [switchObj addTarget:self action:@selector(toggleSoundEffects:) forControlEvents:(UIControlEventValueChanged | UIControlEventTouchDragInside)];
    cell.accessoryView = switchObj;
    [switchObj release];
    cell.textLabel.text = @"Sound Effects";
    return cell;
}

- (void)toggleSoundEffects:(id)sender {
    self.soundEffectsOn = [(UISwitch *)sender isOn];
    [self reset];
}

You may also define controls as accessory views of table-view cells created in Interface Builder. Drag a control object (switch, slider, and so on) into a nib document window containing a table-view cell. Then, using the connection window, make the control the accessory view of the cell. “Loading Table View Cells from a Storyboard” describes the procedure for creating and configuring table-view cell objects in nib files.
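As the sketch of the disclosure-control handler mentioned above, reusing the classes from Listing 6-1 purely for illustration:

- (void)tableView:(UITableView *)tableView accessoryButtonTappedForRowWithIndexPath:(NSIndexPath *)indexPath {
    // Respond much as tableView:didSelectRowAtIndexPath: does; here the
    // detail disclosure button leads to the same drill-down view.
    BATTrailsViewController *trailsController = [[BATTrailsViewController alloc] initWithStyle:UITableViewStylePlain];
    trailsController.selectedRegion = [regions objectAtIndex:indexPath.row];
    [[self navigationController] pushViewController:trailsController animated:YES];
    [trailsController release];
}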
Selection management is also important with selection lists. There are two kinds of selection lists:

● Exclusive lists, where only one row is permitted the checkmark
● Inclusive lists, where more than one row can have a checkmark

Listing 6-3 illustrates one approach to managing an exclusive selection list. It first deselects the currently selected row and returns if the same row is selected; otherwise, it sets the checkmark accessory type on the newly selected row and removes the checkmark on the previously selected row.

Listing 6-3 Managing a selection list—exclusive list

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    [tableView deselectRowAtIndexPath:indexPath animated:NO];
    NSInteger catIndex = [taskCategories indexOfObject:self.currentCategory];
    if (catIndex == indexPath.row) {
        return;
    }
    NSIndexPath *oldIndexPath = [NSIndexPath indexPathForRow:catIndex inSection:0];
    UITableViewCell *newCell = [tableView cellForRowAtIndexPath:indexPath];
    if (newCell.accessoryType == UITableViewCellAccessoryNone) {
        newCell.accessoryType = UITableViewCellAccessoryCheckmark;
        self.currentCategory = [taskCategories objectAtIndex:indexPath.row];
    }
    UITableViewCell *oldCell = [tableView cellForRowAtIndexPath:oldIndexPath];
    if (oldCell.accessoryType == UITableViewCellAccessoryCheckmark) {
        oldCell.accessoryType = UITableViewCellAccessoryNone;
    }
}

Listing 6-4 illustrates how to manage an inclusive selection list. As the comments in this example indicate, when the delegate adds a checkmark to a row or removes one, it typically also sets or unsets any associated model-object attribute.

Listing 6-4 Managing a selection list—inclusive list

- (void)tableView:(UITableView *)theTableView didSelectRowAtIndexPath:(NSIndexPath *)newIndexPath {
    [theTableView deselectRowAtIndexPath:[theTableView indexPathForSelectedRow] animated:NO];
    UITableViewCell *cell = [theTableView cellForRowAtIndexPath:newIndexPath];
    if (cell.accessoryType == UITableViewCellAccessoryNone) {
        cell.accessoryType = UITableViewCellAccessoryCheckmark;
        // Reflect selection in data model
    } else if (cell.accessoryType == UITableViewCellAccessoryCheckmark) {
        cell.accessoryType = UITableViewCellAccessoryNone;
        // Reflect deselection in data model
    }
}

In tableView:didSelectRowAtIndexPath: you should always deselect the currently selected row.

Programmatically Selecting and Scrolling

Occasionally the selection of a row originates within the application itself rather than from a tap in a table view. There could be an externally induced change in the data model. For example, the user adds a new person to an address book and then returns to the list of contacts; the application wants to scroll this list to the recently added person. For situations like these, you can use the UITableView methods selectRowAtIndexPath:animated:scrollPosition: and (if the row is already selected) scrollToNearestSelectedRowAtScrollPosition:animated:. You may also call scrollToRowAtIndexPath:atScrollPosition:animated: if you want to scroll to a specific row without selecting it.

The code in Listing 6-5 (somewhat whimsically) programmatically selects and scrolls to a row 20 rows away from the just-selected row using the selectRowAtIndexPath:animated:scrollPosition: method.

Listing 6-5 Programmatically selecting a row

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)newIndexPath {
    NSIndexPath *scrollIndexPath;
    if (newIndexPath.row + 20 < [timeZoneNames count]) {
        scrollIndexPath = [NSIndexPath indexPathForRow:newIndexPath.row + 20 inSection:newIndexPath.section];
    } else {
        scrollIndexPath = [NSIndexPath indexPathForRow:newIndexPath.row - 20 inSection:newIndexPath.section];
    }
    [tableView selectRowAtIndexPath:scrollIndexPath animated:YES scrollPosition:UITableViewScrollPositionMiddle];
}
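For comparison, scrolling to a specific row without selecting it takes a single call to scrollToRowAtIndexPath:atScrollPosition:animated: (the index path here is arbitrary):

// Scroll so that row 5 of section 0 sits at the top of the visible area.
NSIndexPath *rowToShow = [NSIndexPath indexPathForRow:5 inSection:0];
[self.tableView scrollToRowAtIndexPath:rowToShow atScrollPosition:UITableViewScrollPositionTop animated:YES];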
Inserting and Deleting Rows and Sections

A table view has an editing mode as well as its normal (selection) mode. When a table view goes into editing mode, it displays the editing and reordering controls associated with its rows. The editing controls, which appear on the left side of the row, allow the user to insert and delete rows in the table view. The editing controls have distinctive appearances: the deletion control is a red minus sign in a circle, and the insertion control is a green plus sign in a circle.

When a table view enters editing mode and when users click an editing control, the table view sends a series of messages to its data source and delegate, but only if they implement these methods. These methods allow the data source and delegate to refine the appearance and behavior of rows in the table view; the messages also enable them to carry out the deletion or insertion operation.

Even if a table view is not in editing mode, you can insert or delete a number of rows or sections as a group and have those operations animated. The first section below shows you how, when a table is in editing mode, to insert new rows and delete existing rows in a table view in response to user actions. The second section, “Batch Insertion, Deletion, and Reloading of Rows and Sections,” discusses how you can insert and delete multiple sections and rows animated as a group.

Note: The procedure for reordering rows when in editing mode is described in “Managing the Reordering of Rows.”

Inserting and Deleting Rows in Editing Mode

When a Table View Is Edited

A table view goes into editing mode when it receives a setEditing:animated: message. Typically (but not necessarily) the message originates as an action message sent when the user taps an Edit button in the navigation bar. In editing mode, a table view displays any editing (and reordering) controls that its delegate has assigned to each row. The delegate assigns the controls as a result of returning the editing style for a row in the tableView:editingStyleForRowAtIndexPath: method.

Note: If a UIViewController object is managing the table view, it automatically receives a setEditing:animated: message when the Edit button is tapped. In its implementation of this message, it can update button state or do any other kind of task before invoking the table view’s version of the method.

When the table view receives setEditing:animated:, it sends the same message to the UITableViewCell object for each visible row. Then it sends a succession of messages to its data source and its delegate (if they implement the methods) as depicted in the diagram in Figure 7-1.

Figure 7-1 Calling sequence for inserting or deleting rows in a table view
After resending setEditing:animated: to the cells corresponding to the visible rows, the sequence of messages is as follows:

1. The table view invokes the tableView:canEditRowAtIndexPath: method if its data source implements it. This method allows the application to exclude rows in the table view from being edited even when their cell’s editingStyle property indicates otherwise. Most applications do not need to implement this method. (A sketch of this method appears at the end of this section.)
2. The table view invokes the tableView:editingStyleForRowAtIndexPath: method if its delegate implements it. This method allows the application to specify a row’s editing style and thus the editing control that the row displays. At this point, the table view is fully in editing mode: it displays the insertion or deletion control for each eligible row.
3. The user taps an editing control (either the deletion control or the insertion control). If he or she taps a deletion control, a Delete button is displayed on the row. The user then taps that button to confirm the deletion.
4. The table view sends the tableView:commitEditingStyle:forRowAtIndexPath: message to the data source. Although this protocol method is marked as optional, the data source must implement it if it wants to insert or delete a row. It must do two things:
● Send deleteRowsAtIndexPaths:withRowAnimation: or insertRowsAtIndexPaths:withRowAnimation: to the table view to direct it to adjust its presentation.
● Update the corresponding data-model array by either deleting the referenced item from the array or adding an item to the array.

When the user swipes across a row to display the Delete button for that row, there is a variation in the calling sequence diagrammed in Figure 7-1. When the user swipes a row to delete it, the table view first checks to see if its data source has implemented the tableView:commitEditingStyle:forRowAtIndexPath: method; if that is so, it sends setEditing:animated: to itself and enters editing mode. In this “swipe to delete” mode, the table view does not display the editing and reordering controls. Because this is a user-driven event, it also brackets the messages to the delegate inside of two other messages: tableView:willBeginEditingRowAtIndexPath: and tableView:didEndEditingRowAtIndexPath:. By implementing these methods, the delegate can update the appearance of the table view appropriately.

Note: The data source should not call setEditing:animated: from within its implementation of tableView:commitEditingStyle:forRowAtIndexPath:. If for some reason it must, it should invoke it after a delay by using the performSelector:withObject:afterDelay: method.

Although you can use an insertion control as the trigger to insert a new row in a table view, an alternative approach is to have an Add (or plus sign) button in the navigation bar. Tapping the button sends an action message to the view controller, which overlays the table view with a modal view for entering the new item. Once the item is entered, the controller adds it to the data-model array and reloads the table. “An Example of Adding a Table-View Row” discusses this approach.
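Here is the sketch promised in step 1 of the sequence above: a tableView:canEditRowAtIndexPath: implementation that keeps, say, the first row from being edited (which row to exclude is an illustrative choice):

- (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath {
    // The first row shows no editing control; all other rows are editable.
    if (indexPath.row == 0) {
        return NO;
    }
    return YES;
}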
An Example of Deleting a Table-View Row

This section gives a guided tour through the parts of a project that work together to set up a table view for editing mode and delete rows from it. This project uses the navigation controller and view controller architecture to manage its table views.

In its loadView method, the custom view controller creates the table view and sets itself to be the data source and delegate. It also sets the right item of the navigation bar to be the standard Edit button.

self.navigationItem.rightBarButtonItem = self.editButtonItem;

This button is preconfigured to send setEditing:animated: to the view controller when tapped; it toggles the button title (between Edit and Done) and the Boolean editing parameter on alternating taps. In its implementation of the method, as shown in Listing 7-1, the view controller invokes the superclass implementation, sends the same message to the table view, and updates the enabled state of the other button in the navigation bar (a plus-sign button, for adding items).

Listing 7-1 View controller responding to setEditing:animated:

- (void)setEditing:(BOOL)editing animated:(BOOL)animated {
    [super setEditing:editing animated:animated];
    [tableView setEditing:editing animated:YES];
    if (editing) {
        addButton.enabled = NO;
    } else {
        addButton.enabled = YES;
    }
}

When its table view enters editing mode, the view controller specifies a deletion control for every row except the last, which has an insertion control. It does this in its implementation of the tableView:editingStyleForRowAtIndexPath: method (Listing 7-2).

Listing 7-2 Customizing the editing style of rows

- (UITableViewCellEditingStyle)tableView:(UITableView *)tableView editingStyleForRowAtIndexPath:(NSIndexPath *)indexPath {
    SimpleEditableListAppDelegate *controller = (SimpleEditableListAppDelegate *)[[UIApplication sharedApplication] delegate];
    if (indexPath.row == [controller countOfList] - 1) {
        return UITableViewCellEditingStyleInsert;
    } else {
        return UITableViewCellEditingStyleDelete;
    }
}

When the user taps the deletion control in a row, the view controller receives a tableView:commitEditingStyle:forRowAtIndexPath: message from the table view. As shown in Listing 7-3, it handles this message by removing the item corresponding to the row from a model array and sending deleteRowsAtIndexPaths:withRowAnimation: to the table view.

Listing 7-3 Updating the data-model array and deleting the row

- (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath {
    // If row is deleted, remove it from the list.
    if (editingStyle == UITableViewCellEditingStyleDelete) {
        SimpleEditableListAppDelegate *controller = (SimpleEditableListAppDelegate *)[[UIApplication sharedApplication] delegate];
        [controller removeObjectFromListAtIndex:indexPath.row];
        [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
    }
}
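Listings 7-2 and 7-3 (and Listing 7-6, later) call KVC-style collection accessors on the app delegate. The guide doesn't show their bodies, but they presumably wrap a mutable array along these lines; the list ivar is an assumption:

// In SimpleEditableListAppDelegate (sketch), where list is an NSMutableArray ivar.
- (NSUInteger)countOfList {
    return [list count];
}

- (void)removeObjectFromListAtIndex:(NSUInteger)index {
    [list removeObjectAtIndex:index];
}

- (void)insertObject:(NSString *)object inListAtIndex:(NSUInteger)index {
    [list insertObject:object atIndex:index];
}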
An Example of Adding a Table-View Row

This section shows project code that inserts a row in a table view. Instead of using the insertion control as the trigger for inserting a row, it uses an Add button (visually a plus sign) in the navigation bar above the table view. This code also is based on the navigation controller and view controller architecture. In its loadView method implementation, the view controller assigns the Add button as the right-side item of the navigation bar using the code shown in Listing 7-4.

Listing 7-4 Adding an Add button to the navigation bar

addButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(addItem:)];
self.navigationItem.rightBarButtonItem = addButton;

Note that the view controller sets the action selector and the target object for the button.

When the user taps the Add button, the addItem: message is sent to the target (the view controller). It responds as shown in Listing 7-5. It creates a navigation controller with a single view controller whose view is put onscreen modally—it animates upward to overlay the table view. The view controller calls the presentModalViewController:animated: method to do this.

Listing 7-5 Responding to a tap on the Add button

- (void)addItem:sender {
    if (itemInputController == nil) {
        itemInputController = [[ItemInputController alloc] init];
    }
    UINavigationController *navigationController = [[UINavigationController alloc] initWithRootViewController:itemInputController];
    [[self navigationController] presentModalViewController:navigationController animated:YES];
    [navigationController release];
}

The modal, or overlay, view consists of a single custom text field. The user enters text for the new table-view item and then taps a Save button. This button sends a save: action message to its target: the view controller for the modal view. As shown in Listing 7-6, the view controller extracts the string value from the text field and updates the application’s data-model array with it.

Listing 7-6 Adding the new item to the data-model array

- (void)save:sender {
    UITextField *textField = [(EditableTableViewTextField *)[tableView cellForRowAtIndexPath:[NSIndexPath indexPathForRow:0 inSection:0]] textField];
    SimpleEditableListAppDelegate *controller = (SimpleEditableListAppDelegate *)[[UIApplication sharedApplication] delegate];
    NSString *newItem = textField.text;
    if (newItem != nil) {
        [controller insertObject:newItem inListAtIndex:[controller countOfList]];
    }
    [self dismissModalViewControllerAnimated:YES];
}

After the modal view is dismissed, the table view is reloaded, and it now reflects the added item.
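The guide doesn't show where that reload happens; one plausible spot, sketched here, is the list view controller's viewWillAppear:, which runs again when the modal view is dismissed:

- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    // Pick up any item the modal input view added to the data model.
    [tableView reloadData];
}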
Batch Insertion, Deletion, and Reloading of Rows and Sections

The UITableView class allows you to insert, delete, and reload a group of rows or sections at one time, animating the operations simultaneously in specified ways. The eight methods shown in Listing 7-7 pertain to batch insertion and deletion. Note that you can call these insertion and deletion methods outside of an animation block (as you do in the data source method tableView:commitEditingStyle:forRowAtIndexPath:, as discussed in “Inserting and Deleting Rows in Editing Mode”).

Listing 7-7 Batch insertion and deletion methods

- (void)beginUpdates;
- (void)endUpdates;
- (void)insertSections:(NSIndexSet *)sections withRowAnimation:(UITableViewRowAnimation)animation;
- (void)deleteSections:(NSIndexSet *)sections withRowAnimation:(UITableViewRowAnimation)animation;
- (void)reloadSections:(NSIndexSet *)sections withRowAnimation:(UITableViewRowAnimation)animation;
- (void)insertRowsAtIndexPaths:(NSArray *)indexPaths withRowAnimation:(UITableViewRowAnimation)animation;
- (void)deleteRowsAtIndexPaths:(NSArray *)indexPaths withRowAnimation:(UITableViewRowAnimation)animation;
- (void)reloadRowsAtIndexPaths:(NSArray *)indexPaths withRowAnimation:(UITableViewRowAnimation)animation;

Note: The reloadSections:withRowAnimation: and reloadRowsAtIndexPaths:withRowAnimation: methods, which were introduced in iOS 3.0, allow you to request that the table view reload the data for specific sections and rows instead of reloading the entire visible table view by calling reloadData.

To animate a batch insertion, deletion, or reloading of rows and sections, call the corresponding methods within an animation block defined by successive calls to beginUpdates and endUpdates. If you don’t call the insertion, deletion, and reloading methods within this block, row and section indexes may be invalid. Calls to beginUpdates and endUpdates can be nested; all indexes are treated as if there were only the outer update block.

At the conclusion of a block—that is, after endUpdates returns—the table view queries its data source and delegate as usual for row and section data. Thus the collection objects backing the table view should be updated to reflect the new or removed rows or sections.
An Example of Batched Insertion and Deletion Operations

To insert and delete a group of rows and sections in a table view, first prepare the array (or arrays) that are the source of data for the sections and rows. After rows and sections are deleted and inserted, the resulting rows and sections are populated from this data store. Next, call the beginUpdates method, followed by invocations of insertRowsAtIndexPaths:withRowAnimation:, deleteRowsAtIndexPaths:withRowAnimation:, insertSections:withRowAnimation:, or deleteSections:withRowAnimation:. Conclude the animation block by calling endUpdates. Listing 7-8 illustrates this procedure.

Listing 7-8 Inserting and deleting a block of rows in a table view

- (IBAction)insertAndDeleteRows:(id)sender {
    // original rows: Arizona, California, Delaware, New Jersey, Washington
    [states removeObjectAtIndex:4]; // Washington
    [states removeObjectAtIndex:2]; // Delaware
    [states insertObject:@"Alaska" atIndex:0];
    [states insertObject:@"Georgia" atIndex:3];
    [states insertObject:@"Virginia" atIndex:5];
    NSArray *deleteIndexPaths = [NSArray arrayWithObjects:
        [NSIndexPath indexPathForRow:2 inSection:0],
        [NSIndexPath indexPathForRow:4 inSection:0],
        nil];
    NSArray *insertIndexPaths = [NSArray arrayWithObjects:
        [NSIndexPath indexPathForRow:0 inSection:0],
        [NSIndexPath indexPathForRow:3 inSection:0],
        [NSIndexPath indexPathForRow:5 inSection:0],
        nil];
    UITableView *tv = (UITableView *)self.view;
    [tv beginUpdates];
    [tv insertRowsAtIndexPaths:insertIndexPaths withRowAnimation:UITableViewRowAnimationRight];
    [tv deleteRowsAtIndexPaths:deleteIndexPaths withRowAnimation:UITableViewRowAnimationFade];
    [tv endUpdates];
    // ending rows: Alaska, Arizona, California, Georgia, New Jersey, Virginia
}

This example removes two strings from an array (and their corresponding rows) and inserts three strings into the array (along with their corresponding rows). The next section, “Ordering of Operations and Index Paths,” explains particular aspects of the row (or section) insertion and deletion behavior.

Ordering of Operations and Index Paths

You might have noticed something in the code shown in Listing 7-8 that seems peculiar. The code calls the deleteRowsAtIndexPaths:withRowAnimation: method after it calls insertRowsAtIndexPaths:withRowAnimation:. However, this is not the order in which UITableView completes the operations. It defers any insertions of rows or sections until after it has handled the deletions of rows or sections. The table view behaves the same way with reloading methods called inside an update block—the reload takes place with respect to the indexes of rows and sections before the animation block is executed. This behavior happens regardless of the ordering of the insertion, deletion, and reloading method calls.

Deletion and reloading operations within an animation block specify which rows and sections in the original table should be removed or reloaded; insertions specify which rows and sections should be added to the resulting table. The index paths used to identify sections and rows follow this model. Inserting or removing an item in a mutable array, on the other hand, may affect the array index used for the successive insertion or removal operation; for example, if you insert an item at a certain index, the indexes of all subsequent items in the array are incremented.

An example is useful here. Say you have a table view with three sections, each with three rows. Then you implement the following animation block:

1. Begin updates.
2. Delete row at index 1 of section at index 0.
3. Delete section at index 1.
4. Insert row at index 1 of section at index 1.
5. End updates.

Figure 7-2 (Deletion of section and row and insertion of row) illustrates what takes place after the animation block concludes: the row and the section are removed from the original three-section table, and the new row lands at index 1 of what is now section 1 of the resulting table.
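Rendered as code, the five steps above might look like this (a sketch; tableView stands for the table view being updated):

NSArray *rowToDelete = [NSArray arrayWithObject:[NSIndexPath indexPathForRow:1 inSection:0]];
NSIndexSet *sectionToDelete = [NSIndexSet indexSetWithIndex:1];
NSArray *rowToInsert = [NSArray arrayWithObject:[NSIndexPath indexPathForRow:1 inSection:1]];

// 1. Begin updates.
[tableView beginUpdates];
// 2. Delete the row at index 1 of the section at index 0 (original table).
[tableView deleteRowsAtIndexPaths:rowToDelete withRowAnimation:UITableViewRowAnimationFade];
// 3. Delete the section at index 1 (original table).
[tableView deleteSections:sectionToDelete withRowAnimation:UITableViewRowAnimationFade];
// 4. Insert a row at index 1 of the section at index 1 (resulting table).
[tableView insertRowsAtIndexPaths:rowToInsert withRowAnimation:UITableViewRowAnimationRight];
// 5. End updates.
[tableView endUpdates];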
Managing the Reordering of Rows

A table view has an editing mode as well as its normal (selection) mode. When a table view goes into editing mode, it displays the editing and reordering controls associated with its rows. The reordering control allows the user to move a row to a different location in the table. As shown in Figure 8-1, the reordering control appears on the right side of the row.

Figure 8-1 Reordering a row

When a table view enters editing mode and when users drag a reordering control, the table view sends a series of messages to its data source and delegate, but only if they implement these methods. These methods allow the data source and delegate to restrict whether and where a row can be moved, as well as to carry out the actual move operation. The following sections show you how to move rows around in a table view.

What Happens When a Row Is Relocated

A table view goes into editing mode when it receives a setEditing:animated: message. Typically (but not necessarily) the message originates as an action message sent when the user taps an Edit button in the navigation bar. In editing mode, a table view displays any reordering (and editing) controls that its delegate has assigned to each row. The controls are assigned in tableView:cellForRowAtIndexPath: by setting the showsReorderControl property of UITableViewCell objects to YES (a sketch appears after the numbered sequence below).

Note: If a UIViewController object is managing the table view, it automatically receives a setEditing:animated: message when the Edit button is tapped. UITableViewController, a subclass of UIViewController, implements this method to update button state and invoke the table view’s version of the method. If you are using UIViewController rather than UITableViewController to manage a table view, you need to implement the same behavior yourself.

When the table view receives the setEditing:animated: message, it resends the same message to the cell objects corresponding to its visible rows. Then it sends a succession of messages to its data source and its delegate (if they implement the methods) as depicted in the diagram in Figure 8-2.

Figure 8-2 Calling sequence for reordering a row in a table view

After that, the sequence of messages is as follows:

1. The table view sends a tableView:canMoveRowAtIndexPath: message to its data source (if it implements the method). In this method the data source may selectively exclude certain rows from showing the reordering control.
2. The user drags a row by its reordering control up or down the table view. As the dragged row hovers over a part of the table view, the underlying row slides downward to show where the destination would be.
3. Every time the dragged row is over a destination, the table view sends tableView:targetIndexPathForMoveFromRowAtIndexPath:toProposedIndexPath: to its delegate (if it implements the method). In this method the delegate may reject the current destination for the dragged row and specify an alternative one.
4. The table view sends tableView:moveRowAtIndexPath:toIndexPath: to its data source (if it implements the method). In this method the data source updates the data-model array that is the source of items for the table view, moving the item to a different location in the array.
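Here is the minimal sketch of the showsReorderControl assignment referred to above (the reuse identifier is a placeholder, and self.reorderingRows is borrowed from Listing 8-2):

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"ReorderableCell"];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:@"ReorderableCell"] autorelease];
        // Ask the cell to display a reordering control while in editing mode.
        cell.showsReorderControl = YES;
    }
    cell.textLabel.text = [self.reorderingRows objectAtIndex:indexPath.row];
    return cell;
}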
Examples of Moving a Row

This section comments on some sample code that illustrates the reordering steps enumerated in “What Happens When a Row Is Relocated.” Listing 8-1 shows an implementation of tableView:canMoveRowAtIndexPath: that excludes the first row in the table view from being reordered (this row does not have a reordering control).

Listing 8-1 Excluding a row from relocation

- (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath {
    if (indexPath.row == 0) // Don't move the first row
        return NO;
    return YES;
}

When the user finishes dragging a row, it slides into its destination in the table view, which sends tableView:moveRowAtIndexPath:toIndexPath: to its data source. Listing 8-2 shows an implementation of this method. Note that it retains the data item fetched from the array (the item to be relocated) because the item is automatically released when it is removed from the array.

Listing 8-2 Updating the data-model array for the relocated row

- (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)sourceIndexPath toIndexPath:(NSIndexPath *)destinationIndexPath {
    NSString *stringToMove = [[self.reorderingRows objectAtIndex:sourceIndexPath.row] retain];
    [self.reorderingRows removeObjectAtIndex:sourceIndexPath.row];
    [self.reorderingRows insertObject:stringToMove atIndex:destinationIndexPath.row];
    [stringToMove release];
}

The delegate can also retarget the proposed destination for a move to another row by implementing the tableView:targetIndexPathForMoveFromRowAtIndexPath:toProposedIndexPath: method. The following example restricts rows to relocation in their own group and prevents moves to the last row of a group (which is reserved for the add-item placeholder).

Listing 8-3 Retargeting the destination row of a move operation

- (NSIndexPath *)tableView:(UITableView *)tableView targetIndexPathForMoveFromRowAtIndexPath:(NSIndexPath *)sourceIndexPath toProposedIndexPath:(NSIndexPath *)proposedDestinationIndexPath {
    NSDictionary *section = [data objectAtIndex:sourceIndexPath.section];
    NSUInteger sectionCount = [[section valueForKey:@"content"] count];
    if (sourceIndexPath.section != proposedDestinationIndexPath.section) {
        NSUInteger rowInSourceSection = (sourceIndexPath.section > proposedDestinationIndexPath.section) ? 0 : sectionCount - 1;
        return [NSIndexPath indexPathForRow:rowInSourceSection inSection:sourceIndexPath.section];
    } else if (proposedDestinationIndexPath.row >= sectionCount) {
        return [NSIndexPath indexPathForRow:sectionCount - 1 inSection:sourceIndexPath.section];
    }
    // Allow the proposed destination.
    return proposedDestinationIndexPath;
}
Document Revision History

This table describes the changes to Table View Programming Guide for iOS.

Date        Notes
2012-09-19  Added new information for iOS 5.
2011-01-05  Made some minor corrections.
2010-09-14  Made several minor corrections.
2010-08-03  States now that beginUpdates...endUpdates calls can be nested.
2010-07-08  Changed the title from "Table View Programming Guide for iPhone OS."
2010-05-20  Made many minor corrections in diagrams and text. Added description of reload behavior in batch updates.
2010-03-24  Made the introduction more informative. Explained how to set the background color of cells, emphasized the purpose of tableView:willDisplayCell:forRowAtIndexPath:, and made minor corrections.
2009-08-19  Updated to describe the 3.0 API, especially predefined cell styles and related properties. Also describes the use of nib files with table views and table-view cells and includes an updated chapter on view controllers and design patterns and strategies.
2009-05-28  Warned against calling setEditing:animated: when in tableView:commitEditingStyle:forRowAtIndexPath:. Updated illustration.
2008-10-15  Added section on batch insertions and deletions, added related classes to TOC frame, added guidelines on clearing selection, and made minor corrections.
2008-09-09
2008-06-25  First version of this document.

© 2012 Apple Inc. All Rights Reserved.
Une fois prêt à profiter de votre iPhone, vous pouvez obtenir des renseignements plus détaillés sur www.apple.com/fr/iphone ou www.apple.com/iphone/countries. À vos marques, configurez, partez ! 1. Téléchargement d’iTunes. Rendez-vous sur www.itunes.com/fr/download et téléchargez la dernière version d’iTunes à installer sur votre Mac ou PC. 2. Connexion à votre ordinateur. Connectez votre iPhone au port USB de votre ordinateur. 3. Synchronisation. Lorsque l’iPhone est connecté, iTunes s’ouvre et vous guide dans l’installation. Sélectionnez les contacts, les calendriers, la musique, les vidéos et les photos que vous souhaitez synchroniser, puis cliquez sur Appliquer dans l’angle droit inférieur. Si vous n’avez jamais utilisé iTunes ou que vous souhaitez vous informer sur la synchronisation, vous pouvez consulter un guide d’initiation rapide sur www.apple.com/fr/itunes/tutorials. Bouton Marche/Veille. Pour allumer l’iPhone, appuyez fermement sur le bouton Marche/Veille. Pour l’éteindre ou le redémarrer, maintenez le bouton Marche/Veille enfoncé pendant quelques secondes, puis faites glisser le curseur pour confirmer. Pour mettre l’iPhone en mode veille, appuyez une fois sur le bouton Marche/Veille, ce qui a pour effet d’éteindre l’écran tout en permettant à l’iPhone de recevoir des appels. Pour mettre en silence un appel entrant, appuyez une fois sur ce bouton. Pour envoyer un appel directement vers la messagerie vocale, appuyez deux fois dessus. Accueil. Lorsque vous utilisez une application, appuyez sur le bouton principal pour fermer celle-ci et revenir à son écran d’accueil. Pour accéder rapidement à l’écran d’accueil principal, appuyez sur le bouton principal depuis n’importe quel autre écran d’accueil. À partir de l’écran de verrouillage, double-cliquez sur le bouton d’accueil pour faire apparaître les commandes iPod. Créer des dossiers. Organiser ses applications. Touchez une icône et maintenez le doigt dessus jusqu’à ce qu’elle s’agite. Faites ensuite glisser une icône sur une autre afin de créer un dossier. Les dossiers sont nommés automatiquement par catégorie mais vous pouvez les renommer. Vous pouvez personnaliser votre écran d’accueil en faisant glisser des icônes et dossiers à différents emplacements et sur différents écrans. Une fois que vous avez terminé, il vous suffit d’appuyer sur le bouton principal. Rechercher. Pour effectuer une recherche sur votre iPhone ou sur le Web, allez à l’écran d’accueil et appuyez sur le bouton principal, ou passez le doigt sur l’écran de gauche à droite. Saisissez ce que vous souhaitez rechercher : un nom, une application, un morceau, un artiste, un film ou un mot-clé. L’iPhone propose des suggestions au fur et à mesure que vous écrivez, pour accélérer encore davantage votre recherche. Pour lancer une recherche depuis une application comme Mail, Contacts, Messages ou iPod, touchez la barre d’état. Effectuer un appel. Pour effectuer un appel, touchez un numéro de téléphone dans Contacts, Favoris, un courrier électronique, un SMS ou MMS, ou pratiquement n’importe où sur l’iPhone. Vous pouvez sinon toucher le bouton du clavier numérique afin de composer un numéro manuellement. Pour répondre à un appel alors que vous utilisez les écouteurs de l’iPhone, appuyez une fois sur le bouton central. Appuyez à nouveau dessus pour mettre fin à l’appel. Pour régler le volume, appuyez sur les boutons « + » et « - » situés au-dessus et en dessous du micro. FaceTime. Pour lancer une vidéoconférence lors d’un appel vocal, touchez le bouton FaceTime. 
You can also tap the FaceTime button in the Contacts app. During a video call, you can switch to the rear camera to show what's around you. To access your email, the web, or other apps, press the Home button.

Multitasking. During a call, you can access your email, calendar, or other apps, and even browse the web when you're connected via Wi-Fi or 3G. To switch quickly between apps, double-click the Home button to reveal your most recently used apps. Scroll right to see more apps, then tap an icon to reopen that app. Scroll all the way to the left to access the iPod controls or lock the screen orientation.

Voice Control. Use Voice Control to make a call or play music hands-free. To activate it, hold down the iPhone Home button or the center button on the headset until the Voice Control screen appears. After the tone, say a command such as "Call Emmanuelle" or "Dial 06 62 12 98 54," saying each digit. You can also ask iPhone to play a specific album, artist, or playlist, or to "play more songs like this one." You can even ask iPhone "What's playing?" or say "Play songs by the Rolling Stones."

[Illustration callouts: Ring/Silent switch; Volume Up/Down; On/Off Sleep/Wake button; status bar; Home button.]

*Requires a second-generation Apple TV. *Visual voicemail and MMS may not be available in all areas. For more information, contact your wireless service provider. Some services and features are not available in all areas.

© 2010 Apple Inc. All rights reserved. Apple, AirPlay, Apple TV, Cover Flow, FaceTime, iPhone, iPod, iTunes, Mac, and Safari are trademarks of Apple Inc., registered in the U.S. and other countries. AirPrint is a trademark of Apple Inc. iTunes Store is a service mark of Apple Inc., registered in the U.S. and other countries. App Store and iBookstore are service marks of Apple Inc. Other product and company names mentioned herein may be trademarks of their respective owners. Designed by Apple in California. Printed in China. F034-5753-A

Learn more. You can learn more about iPhone features at www.apple.com/fr/iphone or www.apple.com/iphone/countries. To view the iPhone User Guide on your iPhone, download it from the iBookstore, or find it at help.apple.com/iphone or in the Safari bookmarks. For downloadable versions of the iPhone User Guide and the Important Product Information Guide, go to support.apple.com/fr_FR/manuals/iphone.

Get support. Contact your wireless service provider for support on network-related services, including visual voicemail and billing.* Visit www.apple.com/fr/support/iphone for technical support on iPhone and iTunes.

Find a location. Search your surroundings.
To see where you are on a map, tap the Location button. A blue dot appears, marking your current position. To see which way you're facing, tap the Location button again to turn on compass view. Find specific places nearby by typing words like "Starbucks" or "pizza" in the search field. Double-tap the screen to zoom in; tap once with two fingers to zoom out. You can also get directions or bring up additional view options by tapping the page-curl button.

App Store. Tap the App Store icon to wirelessly browse hundreds of thousands of apps in categories like games, business, travel, and social networking. Browse by Featured, Categories, or Top 25, or search by name. To buy and download an app directly to your iPhone, tap Buy. Many apps are free.

iTunes Store. You can access the iTunes Store wirelessly by tapping the iTunes icon. Browse the Store for music, movies, TV shows, music videos, and more. Browse, buy, and download from the Store directly to your iPhone. Tap any item to hear or see a preview.

Intelligent keyboard. iPhone automatically corrects and suggests words as you type, so if you hit a wrong letter, just keep typing. To accept the suggested word, tap the space bar. Tap the "x" to reject the suggested word and have iPhone learn the word you typed. The keyboard automatically inserts apostrophes into contractions where appropriate. Double-tap the space bar to add a period. To switch to the number and symbol keyboard, tap the ".?123" button.

Cut, copy, and paste. Tap the text you want to edit, or touch and hold it to bring up the magnifying glass, then drag your finger to move the insertion point. To select a word, double-tap it quickly, then drag the grab points to select more or less text. Then tap Cut, Copy, or Paste. To copy text from webpages, email messages, or SMS and MMS messages, touch and hold the text to select it. To undo an edit, shake the iPhone, then tap the Undo button.

Photos. Load your favorite photos onto iPhone from your computer using iTunes, or use the built-in camera to take photos. Tap Photos to view your photos. Swipe right or left to move between images. Double-tap quickly, or pinch, to zoom. Tap once to display the onscreen controls. Tap the action button to send a photo in an MMS or email message. You can also use a photo as wallpaper, assign it to a contact, or print it wirelessly on an AirPrint-compatible printer.

HD video. To record HD video, tap Camera, then set the Photo/Video switch to Video. Tap the Record button to start recording.
Tap the button again to stop recording. Tap to Focus lets you control focus and exposure by tapping any element on the screen. You can record in either landscape or portrait. You can even turn on the camera light if you're filming somewhere dark.

Video and song controls. Tap the screen to show the onscreen controls. Tap it again to hide them. Double-tap a video to switch between widescreen and full screen. While listening to music, rotate iPhone to flip through your album art in Cover Flow. Tap an album to see its track list, then tap a track to play it. To return to the album art, tap the screen outside the track list. When listening to music with the iPhone headset, press the center button once to pause or play, and press it twice quickly to skip to the next song. Tap the AirPlay button to stream your music or video to an Apple TV.*

See a webpage up close. In Safari, double-tap any element on a webpage (a picture or text) to zoom in. Double-tap again to zoom back out. Tap the multi-page button to flip between multiple webpages or open a new one. Rotate iPhone to view the webpage in widescreen.

Google, the Google logo, and Google Maps are trademarks of Google Inc. © 2010. All rights reserved. The App Store is available only in certain countries. The iTunes Store is available only in certain countries.

AirPort Extreme Base Station Setup Guide

Contents
Chapter 1, Introduction to AirPort: About the AirPort Extreme Base Station; The AirPort Extreme Base Station at a Glance; AirPort Extreme Base Station Ports; About the AirPort Software
Chapter 2, Setting Up Your AirPort Extreme Base Station: Mounting the AirPort Extreme Base Station on a Wall; Setup Overview
Chapter 3, Using Your AirPort Extreme Base Station: Configuring the Base Station; Monitoring the AirPort Extreme Base Station's Internet Connection Status; Monitoring AirPort Extreme Base Station Communication; Connecting to the Internet Via the AirPort Network; Connecting Additional Base Stations to Your AirPort Network; Connecting Multiple Base Stations to Power Sourcing Equipment (PSE); Extending the Range of Your AirPort Network; Controlling the Range of Your AirPort Network; More Information About AirPort
Chapter 4, Basic Network Designs: Setting Up a Home Office Network; Setting Up a Network at School; Connecting AirPort Base Stations Using Power Over Ethernet (PoE)
Chapter 5, Troubleshooting
Appendix: AirPort Extreme Base Station Specifications; Communications Regulation Information

Chapter 1: Introduction to AirPort

AirPort is a simple and fast way to access the Internet from anywhere in your home, classroom, or office without cables, additional phone lines, or complicated networking software.
AirPort is a wireless local area network (WLAN) technology that provides high-performance wireless communication between multiple computers and the Internet. When you connect to the Internet using AirPort, you can share a single Internet connection with many computers at the same time and share files among them. To use AirPort to access the Internet, you may need an account with an Internet service provider (fees may apply) and a way to access the Internet—either through a DSL or cable modem, or an Ethernet network. If your base station has an internal modem and you have a PPP dial-up connection with an ISP, you can connect to the Internet using the base station's internal modem.

Note: This manual includes information for setting up the AirPort Extreme Base Station using Mac OS X, Windows XP, and Windows 2000. The screenshots and general instructions are based on Mac OS X. For more detailed Windows XP and Windows 2000 instructions, see AirPort Help in the AirPort Admin Utility on computers using Windows.

About the AirPort Extreme Base Station
The AirPort Extreme Base Station establishes a wired connection to the Internet or a network, and wireless connections to wireless client computers. Once the base station is connected to the network, all wireless client computers can connect to the Internet by joining the AirPort network. Computers connected to the AirPort network by Ethernet can also share the base station's Internet connection. The base station manages communications between the Internet and the wireless client computers.

The AirPort Extreme Base Station has the following ports:
• 10/100Base-T Ethernet WAN port for connecting a DSL or cable modem, or for connecting to an existing Ethernet network with Internet access
• 10/100Base-T Ethernet LAN (G) port for high-speed connection to local printers and Ethernet computers that don't have Internet access
• USB port for connecting a printer to the base station
Some models of the base station also have a built-in 56K modem port (W) for dial-up Internet access with a standard telephone line. Some models of the base station can also receive power over Ethernet (PoE). When the base station Ethernet WAN port is connected to IEEE 802.3af-compliant Power Sourcing Equipment (PSE), such as a line-powered Ethernet switch or hub, with a CAT 5 Ethernet cable, it receives power over the Ethernet cable.

The AirPort Extreme Base Station at a Glance
[Illustration callouts: status lights, internal modem port, Ethernet (LAN) port, power adapter port, security slot, Ethernet (WAN) port, USB printer port, reset button, external antenna port.]

AirPort Extreme Base Station Ports
Your AirPort Extreme Base Station may have six ports, depending on which model you purchased.

Note: If this AirPort Extreme Base Station did not come with a power adapter and you don't plan to use PoE, you can purchase a base station power adapter from your Apple-authorized dealer, Apple retail stores, or the Apple Store at www.apple.com/store. If the base station supports PoE, it and its mounting bracket conform to UL Standard 2043, "Fire Test for Heat and Visible Smoke Release for Discrete Products and Their Accessories Installed in Air-Handling Spaces," for placement in the air-handling space above suspended ceilings. Using PoE allows you to install a base station in places away from a standard electrical outlet.
For more information about using PoE, see the document "Designing AirPort Extreme Networks" or "AirPort Networks for Windows" that came on the AirPort CD. The documents are also available at www.apple.com/airportextreme. To determine if your base station supports PoE, check the label on the bottom of the base station.

Note: To use the base station in an air-handling space above suspended ceilings, you must connect the Ethernet WAN port to an 802.3af-compliant PSE with a plenum-rated Ethernet cable. You cannot use the AC power adapter to power a base station installed in an air-handling space. If you connect an external antenna to a base station mounted in an air-handling space, make sure it is plenum-rated. See the documentation that came with the antenna.

• 10/100Base-T Ethernet WAN port: Connect a DSL or cable modem, or connect to an existing Ethernet network with Internet access.
• 10/100Base-T Ethernet LAN port (G): Connect local Ethernet computers (computers without Internet access) and printers, or other Ethernet devices, such as a hub or a switch.
• Internal modem port (W, on some models): Connect one end of a phone cord to the internal modem port and the other end to a standard telephone jack.
• Universal Serial Bus (USB) printer port: Connect a USB printer so that computers connected to the AirPort network can share the printer.
• External antenna port: Connect an Apple-certified external antenna to extend the range of the wireless network.
• Power adapter port: Connect one end of the AirPort Extreme Base Station power adapter to the port and the other end to an electrical outlet.
• Security slot: You can purchase a security cable and lock to secure your AirPort Extreme Base Station.

About the AirPort Software
To extend the range of your network, you can use AirPort Admin Utility to set up multiple base stations in your network connected to one another wirelessly, known as a Wireless Distribution System (WDS), or over Ethernet. You can also extend the range of your wireless network by connecting an Apple-certified external antenna to the antenna port. If you connect a USB printer to the base station, computers on the AirPort network can print to it by selecting the printer via Rendezvous in Printer Setup Utility, located in Applications/Utilities on a Macintosh. You must use Mac OS X v10.2.3 or later, or Windows XP or Windows 2000, to print to a USB printer via an AirPort Extreme Base Station. For information about setting up a computer using Windows XP or Windows 2000, see the document "AirPort Networks for Windows" that came on the AirPort CD.

Note: If the base station is set up to receive power over the Ethernet WAN port, do not connect a printer to the USB port. You cannot print to a USB printer if the base station is powered over Ethernet.

AirPort Setup Assistant
Use the AirPort Setup Assistant to configure the AirPort Extreme Base Station and to set up your computer to use AirPort. The Assistant is located in Applications/Utilities on a computer using Mac OS X.

AirPort Admin Utility
AirPort Admin Utility is an advanced tool for setting up and managing the AirPort Extreme Base Station. Use AirPort Admin Utility to adjust network, routing, and security settings and other advanced options. AirPort Admin Utility is located in Applications/Utilities on a computer using Mac OS X, and in Start > All Programs > AirPort on a computer using Windows XP or Windows 2000.
AirPort status menu in the menu bar
Use the AirPort status menu to switch quickly between AirPort networks, monitor the signal quality of the current network, create a Computer-to-Computer network, and turn AirPort on and off. The AirPort status menu in the menu bar is part of AirPort for Mac OS X.

If your base station supports Power over Ethernet, the following Mac OS X applications are included on the AirPort Management Tools CD.

AirPort Management Utility
AirPort Management Utility allows network administrators to set up and manage multiple base stations from a single location.

AirPort Client Monitor
The AirPort Client Monitor application monitors the signal strength and transmit rate of wireless client computers.

Chapter 2: Setting Up Your AirPort Extreme Base Station

Use the information in this chapter to set up your AirPort Extreme Base Station. Before you set up the AirPort Extreme Base Station for Internet access, make sure that:
• You have a computer with an AirPort Card or an AirPort Extreme Card, or a compatible Wi-Fi card installed in a computer using Windows XP or Windows 2000.
• Your computer has the latest version of the AirPort software installed. For the latest information on AirPort software, check Software Update in System Preferences, the Apple AirPort website at www.apple.com/airportextreme, or the Apple Support website at www.apple.com/support.
• You have an account with an Internet service provider (fees may apply) or you have Internet access through a network. For more information on using AirPort with your Internet account, contact your Internet service provider (ISP) or go to the Apple Service & Support website at support.apple.com.
• You have a suitable location for your AirPort Extreme Base Station. You can place your AirPort Extreme Base Station on a desk, bookcase, or other flat surface, or you can mount it on a wall. Place your base station in the center of your home or office, away from any source of interference or signal blockage, such as a microwave oven or large metal appliances, and close to a power outlet.

If the base station supports PoE, it is suitable for use in environmental air-handling spaces (in accordance with section 300.22(C) of the National Electrical Code and 12-010 of the Canadian Electrical Code), and capable of receiving power over Ethernet. You can install it in a ceiling air-handling space, away from a power outlet. If you install the base station in an air-handling space, you need to connect the Ethernet WAN port to 802.3af-compliant Power Sourcing Equipment (PSE) with a plenum-rated Ethernet cable. If you connect the base station power adapter to an outlet, the Ethernet WAN port no longer receives power from a PSE.

If you use an Ethernet LAN for Internet access, such as in a school or office, connect the Ethernet cable to the 10/100Base-T Ethernet LAN (G) port on the AirPort Extreme Base Station.

Note: The "Distribute IP address" checkbox in the Network pane of AirPort Admin Utility is deselected for AirPort Extreme Base Stations that support Power over Ethernet. By default these base stations are set to be used as a bridge, rather than to distribute IP addresses to AirPort clients.
For more information on AirPort Admin Utility and using the 10/100Base-T Ethernet LAN (G) port, see the document "Designing AirPort Extreme Networks" or "AirPort Networks for Windows," located on the AirPort CD or at www.apple.com/airportextreme. You can use the AirPort Extreme Base Station to provide Internet access to non-AirPort computers that are not otherwise connected to the Internet by connecting them to the 10/100Base-T Ethernet LAN (G) port on the AirPort Extreme Base Station. The base station must be connected to the Internet by the 10/100Base-T Ethernet WAN port.

Mounting the AirPort Extreme Base Station on a Wall
You can use the mounting bracket provided with your AirPort Extreme Base Station to mount the base station on a wall. Follow these steps:
1. Select a location close to power and a network connection. If the base station is UL rated and certified for use in suspended ceilings and air-handling spaces, the base station can be mounted in a ceiling space, away from a power outlet, and powered over Ethernet. If you mount the base station in an air-handling space, plug the base station into 802.3af-compliant Power Sourcing Equipment with a plenum-rated Ethernet cable.
2. Screw the mounting bracket into a wall stud using the two screws that came with the base station.
3. Locate the two mounting bracket holes on the bottom of the base station.
4. Feed the cables through the mounting bracket and then connect them to the base station. The base station is designed to mount with the ports on the top (Apple logo right side up), with the cables passing behind it through the mounting bracket. Note: The mounting bracket has enough space for six cables (power, two Ethernet cables, USB printer cable, a telephone cable, and external antenna cable). In most cases, only two or three cables are used.
5. Carefully insert the bottom two prongs on the mounting bracket into the mounting bracket holes on the bottom of the base station. Clip the top prongs on the mounting bracket around the bottom lip of the base station.

Setup Overview
Once you're ready, you can set up the AirPort Extreme Base Station in a few steps:
1. Plug the AirPort Extreme Base Station in to a power outlet and connect it to your Internet networking interface.
2. Use the AirPort Setup Assistant on a Macintosh, or use AirPort Admin Utility on a Windows XP or Windows 2000 computer.

Step 1: Connect the AirPort Extreme Base Station
1. Connect the power adapter to the AirPort Extreme Base Station power adapter port and an electrical outlet.
Important: Use only the power adapter that came with your AirPort Extreme Base Station. Adapters for other electronic devices may look similar, but they may damage the base station.
The AirPort Extreme Base Station turns on when the power adapter is plugged into an electrical outlet. There is no power switch. When you plug in the base station, the status lights glow while the base station starts up. Only the middle light glows when startup is complete. The startup process takes about 30 seconds. See "Monitoring AirPort Extreme Base Station Communication" on page 17 for a complete explanation of the lights on the AirPort Extreme Base Station.
2. Connect the AirPort Extreme Base Station to your DSL or cable modem, Ethernet network, or, if your base station has an internal modem, a standard phone line.
• If you have an Internet account that uses a device such as a DSL or cable modem, connect the device to the 10/100Base-T Ethernet WAN port on the AirPort Extreme Base Station.
• If you use an Ethernet LAN for Internet access, such as in a school or office, connect the Ethernet cable to the 10/100Base-T Ethernet LAN (G) port on the AirPort Extreme Base Station.
• If you use a standard modem and analog telephone line (the type of telephone line found in most residences) to access the Internet, connect one end of the phone cord to the internal modem (W) port and the other end to your telephone jack.
Important: Do not connect the base station to a digital telephone line, such as a PBX telephone system.
If your base station has a built-in modem and you connect to the Internet using it, the base station can provide Internet access to computers connected to both Ethernet ports (WAN and LAN G).

Step 2: Use the AirPort Setup Assistant on a Macintosh computer
The AirPort Setup Assistant:
• Sets up your AirPort network
• Configures your computer to access the AirPort network created by the AirPort Extreme Base Station
Note: You can't use the AirPort Setup Assistant to set up some advanced features. Use AirPort Admin Utility, located in Applications/Utilities.
To use the AirPort Setup Assistant to configure the AirPort Extreme Base Station:
1. Make sure you have plugged in the base station and the middle light is on.
2. Open the AirPort Setup Assistant (in Applications/Utilities on a Mac) and follow the onscreen instructions.

Use AirPort Admin Utility on a Windows XP or Windows 2000 computer
See the document "AirPort Networks for Windows" that came on the AirPort CD for detailed instructions for setting up your AirPort Extreme Base Station using AirPort Admin Utility.

Chapter 3: Using Your AirPort Extreme Base Station

The information in this chapter will help you understand how to use your base station and how to get the most from your AirPort network. Use the information provided in this chapter to:
• Configure your base station's Internet connection
• Use AirPort Admin Utility to modify advanced base station settings
• Monitor your AirPort Extreme Base Station status
• Connect to and disconnect from the AirPort network
• Connect additional base stations to your AirPort network
• Extend the range of your AirPort network

Configuring the Base Station
The AirPort Setup Assistant provides complete configuration options for most AirPort networks. For advanced settings, you can use AirPort Admin Utility (in Applications/Utilities) to configure your AirPort Extreme Base Station. You can use AirPort Admin Utility to do the following:
• Configure your AirPort network, including changing the network name and password and specifying whether users need a password to join your network.
• Change the AirPort Extreme Base Station name and password.
• Set advanced security settings, like Wi-Fi Protected Access (WPA).
• Enter the TCP/IP settings for your AirPort Extreme Base Station.
• Set up the way Internet access is provided to computers on the AirPort network.
• Set up multiple base stations on a single AirPort network.

Note: If your base station does not support PoE, by default it is set to use Dynamic Host Configuration Protocol (DHCP) and Network Address Translation (NAT) to share a single IP address. If your base station supports PoE, by default it is set up as a bridge, and the "Distribute IP address" checkbox is deselected in the Network pane of AirPort Admin Utility.
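The difference between these two defaults is easy to see in miniature. The sketch below is our illustration only, with made-up example addresses; it is not AirPort's implementation. It shows the core idea of NAT: many privately addressed clients share the base station's single public IP address through a translation table. A bridge keeps no such table; it passes traffic through unchanged and leaves address assignment to the rest of the network.

# Minimal sketch of NAT as described in the note above. The addresses and
# port numbers are hypothetical, chosen only for illustration.
PUBLIC_IP = "203.0.113.5"   # the single address assigned by the ISP

nat_table = {}              # maps (private IP, private port) -> public port
next_public_port = 40000

def translate_outgoing(private_ip, private_port):
    # Rewrite a client's source address so it appears to come from PUBLIC_IP.
    global next_public_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_incoming(public_port):
    # Route a reply arriving on the shared address back to the right client.
    for key, port in nat_table.items():
        if port == public_port:
            return key
    return None  # unknown mapping: the packet is dropped

# Two clients that received private addresses over DHCP:
print(translate_outgoing("10.0.1.2", 51000))  # ('203.0.113.5', 40000)
print(translate_outgoing("10.0.1.3", 51000))  # ('203.0.113.5', 40001)
print(translate_incoming(40001))              # ('10.0.1.3', 51000)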
Important: If you use AirPort Admin Utility instead of the AirPort Setup Assistant to configure your base station for the first time, you may be asked for a password. The initial password for the AirPort Extreme Base Station is public.

See the document "Designing AirPort Extreme Networks" or "AirPort Networks for Windows," located on the AirPort CD and at www.apple.com/airportextreme, for in-depth information on designing and setting up your AirPort network using the AirPort Setup Assistant and AirPort Admin Utility. If you are setting up larger AirPort Extreme networks with base stations that support PoE, you can also use AirPort Management Utility and AirPort Client Monitor (on the Management Tools CD) to set up and manage multiple base stations. See the document "Managing AirPort Extreme Networks," located on the Management Tools CD and at www.apple.com/airportextreme, for information and instructions for setting up, managing, and monitoring larger AirPort Extreme networks.

Monitoring the AirPort Extreme Base Station's Internet Connection Status
Use the Internet Connect application, located in the Applications folder on a Macintosh, to monitor the wireless signal level and status of your AirPort Extreme Base Station's Internet connection. Use the Wireless Connection Status menu on a Windows XP or Windows 2000 computer to monitor the wireless signal level.

Monitoring AirPort Extreme Base Station Communication
The base station's three indicator lights report its status:
• Light 1, flashing: The AirPort Extreme Base Station is communicating via AirPort.
• Light 2, steady glow: The AirPort Extreme Base Station is receiving power and is in normal operating mode.
• Light 3, flashing: The AirPort Extreme Base Station is communicating via the LAN port.
For more information about the base station's indicator lights, see "Designing AirPort Extreme Networks" or "AirPort Networks for Windows," located on the AirPort CD or at www.apple.com/airportextreme.

Connecting to the Internet Via the AirPort Network
If your computer is connected to an AirPort network that has continuous Internet access via Ethernet, DSL, or a cable modem, you may already be connected to the Internet and can open and use any application that requires an Internet connection, such as a web browser or email application. If you are not connected, open Internet Connect, located in the Applications folder, click AirPort in the toolbar, and click Connect.

Connecting Additional Base Stations to Your AirPort Network
You can connect additional AirPort Extreme Base Stations to extend the range of your wireless network. You can connect the base stations wirelessly or using Ethernet. A network with base stations connected using Ethernet is known as a roaming network.
Connecting base stations wirelessly creates what is known as a Wireless Distribution System (WDS). See the document "Designing AirPort Extreme Networks" or "AirPort Networks for Windows" for more information about setting up a roaming network or extending your network with WDS.

Connecting Multiple Base Stations to Power Sourcing Equipment (PSE)
If your base stations support PoE, you can connect multiple base stations to an 802.3af-compliant Ethernet device (known as a PSE) and send power and a network or Internet connection over category 5 Ethernet cables. Receiving power over the base station's Ethernet connection eliminates extra cables and the need to locate the base station near a power outlet. Base stations that support PoE meet flammability classification standards and are UL listed for use above suspended ceilings and in air-handling spaces. The US National Electrical Code (NEC) and the Canadian Electrical Code (CEC) require that you use plenum-rated Ethernet cables in air-handling spaces.

Extending the Range of Your AirPort Network
In addition to adding base stations to your network, you can attach an Apple-certified external antenna to the base station to extend your network's range. External antennas are available from your Apple-authorized dealer, Apple retail stores, or the Apple Store at store.apple.com. External antennas may not be permitted in some regions outside the US. If your base station supports PoE, is mounted in an air-handling space, and receives power over the Ethernet WAN port, do not connect an external antenna unless it is plenum-rated and conforms to UL Standard 2043.
Note: Before connecting or disconnecting an external antenna, you must unplug the base station's power adapter, connect or disconnect the antenna, and then plug the base station back in to its power source.

Controlling the Range of Your AirPort Network
You can also shorten the range of your AirPort network by adjusting the power transmitted to the radio in the base station. This might be useful if you want to control access to the network by restricting the range to a single room, for example. To shorten the range of your AirPort network:
1. Open AirPort Admin Utility, in Applications/Utilities on a Macintosh and in Start > All Programs > AirPort on a Windows XP or Windows 2000 computer.
2. Select your base station and click Configure.
3. On a Macintosh, click Wireless Options. On a Windows XP or Windows 2000 computer, click AirPort.
4. Choose a percentage from the Transmitter Power slider.

More Information About AirPort
You can find more information about AirPort in the following locations:
• AirPort Help: Look in AirPort Help for information on setting up an AirPort network, using an AirPort Base Station, editing base station settings, avoiding sources of interference, locating additional information on the Internet, and more. Choose Help > Mac Help, and then choose Library > AirPort Help.
• "Designing AirPort Extreme Networks": For in-depth information on configuring AirPort networks, see this document, located at www.apple.com/airportextreme.
• "Managing AirPort Extreme Networks": For in-depth information on setting up and managing multiple base stations in AirPort networks, see this document, located at www.apple.com/support/airportextreme.
• "AirPort Networks for Windows": For in-depth information on configuring AirPort networks from a Microsoft Windows computer, see this document, located at www.apple.com/airportextreme.
• AirPort website: www.apple.com/airportextreme
• Apple Support website: www.apple.com/support

Chapter 4: Basic Network Designs

You can set up your AirPort Extreme Base Station just about anywhere and use it for Internet access and wireless networking. You need only a connection to the Internet and a computer with wireless capabilities. You can even add non-wireless computers to the network by connecting them to the base station through the built-in Ethernet LAN (G) port. Connect a USB printer to the base station, and all the computers on the network using Mac OS X v10.2.3 or later, both wired and wireless, can share the printer. If you want to extend the range of your AirPort network, connect an Apple-certified external antenna to the base station antenna port. Apple-certified external antennas for the AirPort Extreme Base Station are available from your Apple-authorized dealer, Apple retail stores, or the Apple Store at store.apple.com.

Note: External antennas may not be permitted in some regions. Do not connect an external antenna to a base station that supports PoE and is mounted in the air space above a ceiling.

This chapter explains how to set up your base station:
• In your home or small office with an Ethernet or dial-up connection to the Internet
• In school, where you might have both a broadband and an Ethernet connection
• In a business or school using Power over Ethernet

Setting Up a Home Office Network
If you are setting up an AirPort network in your home or small office, and you have a broadband DSL or cable modem connection to the Internet and an existing Ethernet network, you may need the following items:
• An AirPort Extreme Base Station or multiple base stations
• A DSL or cable modem with Internet access
• AirPort- or other wireless-equipped computers
• An optional Ethernet network

The following illustration is an example of an AirPort network in an office. The AirPort Extreme Base Station is connected by the Ethernet WAN port to the DSL or cable modem (if your base station came with a built-in modem, you can use it to connect). The base station shares its Internet connection with the AirPort-equipped computers wirelessly and with computers connected to the Ethernet LAN (G) port. For more information on AirPort Extreme network designs, see "Designing AirPort Extreme Networks," located at www.apple.com/airport. For information on managing larger AirPort Extreme networks, see "Managing AirPort Extreme Networks," located at www.apple.com/support/airport.
[Illustration callouts: to the Internet; to Ethernet (LAN); to USB printer; power adapter; USB.]

Setting Up a Network at School
If you are setting up a network at school, and you have a broadband DSL or cable modem connection to the Internet and an existing Ethernet network, you may need the following items:
• An AirPort Extreme Base Station or multiple base stations
• A DSL or cable modem with Internet access
• AirPort- or other wireless-equipped computers
• An optional Apple-certified external antenna

The following illustration is an example of an AirPort network in a school with multiple rooms or buildings. The AirPort Extreme Base Stations are set up as a Wireless Distribution System (WDS), with the main base station connected by the Ethernet WAN port to the DSL or cable modem. The main base station shares its Internet connection with the wireless computers in the room, or with computers connected to the main base station's Ethernet LAN (G) port. The main base station also shares the Internet connection with the relay base station in the other room or building. The relay base station transfers the Internet connection to a remote base station set up in a third building. The relay and remote base stations can be set up to share the Internet connection with wireless computers in the room, or with computers connected to the base station's Ethernet LAN (G) port.

[Illustration callouts: main, relay, and remote base stations; to the Internet; to Ethernet (LAN); power adapter; to USB printer.]

Connecting AirPort Base Stations Using Power Over Ethernet (PoE)
You can connect multiple base stations that support PoE to 802.3af-compliant Power Sourcing Equipment (PSE), and send power and a network or Internet connection over Ethernet cables. Receiving power over the Ethernet connection eliminates extra cables and the need to locate the base station near a power outlet. The following illustration is an example of an AirPort network in a business or school with multiple rooms or buildings. Plenum-rated Ethernet cables connect to the Ethernet WAN ports on the base stations and to an 802.3af-compliant PSE. The base stations are mounted in the ceiling air-handling space, and are secure and out of sight. When the base stations receive power and a network connection over the WAN port, the USB port is disabled. You can connect the Ethernet LAN port to a computer or other Ethernet device, but power does not travel to the Ethernet LAN port.

Important: Do not connect an external antenna to a base station mounted in an air-handling space above a suspended ceiling unless it is plenum-rated and conforms to UL Standard 2043.

[Illustration callouts: base stations mounted in air-handling space; 802.3af-compliant Power Sourcing Equipment (PSE) connected to a network; plenum-rated Ethernet cables; AC power outlet.]

Chapter 5: Troubleshooting

Use the information in this chapter if you are having trouble setting up your AirPort Extreme Base Station.

If the AirPort Setup Assistant can't detect the proper AirPort hardware
Make sure that the computer you are using has an AirPort Card or an AirPort Extreme Card installed.
If you recently installed the card, shut down your computer and make sure the card is properly installed. Make sure that the AirPort antenna cable is securely connected to the card (you should hear a click when the antenna is connected securely). Make sure that the other end of the card is firmly inserted into the connector in the AirPort Card slot.

If you forget your AirPort network or base station password
You can clear the AirPort network or base station password by resetting the base station. Follow these steps:
1. On a Mac, open Network preferences, choose AirPort from the Show pop-up menu, and choose Using DHCP from the Configure pop-up menu. On a computer using Windows XP or Windows 2000, open Control Panel from the Start menu, right-click Wireless Network Connection, and choose Properties. Click Internet Protocol (TCP/IP) and click Properties. Make sure "Obtain an IP address automatically" is selected.
2. Press and hold the reset button for one full second. The middle light flashes, indicating that the base station is in reset mode. The base station remains in reset mode for five minutes. If you do not make your changes within five minutes of pressing the reset button, you must reset it again.
3. Use the AirPort status menu in the menu bar to select the network created by the base station (the network name does not change).
4. Open AirPort Admin Utility (in Applications/Utilities on a Mac, and in Start > All Programs > AirPort on a Windows computer).
5. Select your base station and click Configure.
6. In the dialog that appears, make the following changes:
• Reset the AirPort Extreme Base Station password.
• Turn encryption on to activate password protection for your AirPort network. If you turn on encryption, enter a new password for your AirPort network.
7. Click OK. The base station restarts to load the new settings.
Note: While the base station is in reset mode, access control and RADIUS settings are temporarily interrupted. All of the base station settings will be available after the base station has restarted.

If your base station isn't responding
Try unplugging the base station and plugging it back in to a power outlet. If power is supplied over Ethernet, make sure the cables are properly connected and the PSE is plugged in and working correctly. If your base station stops responding completely, you may need to reset it to the factory default settings. This erases all of the settings you've made and resets them to the settings that came with the base station.
To return the base station to the factory settings, press and hold the reset button for five full seconds. The base station restarts with the following settings:
• The base station receives its IP address using DHCP.
• The network name reverts to Apple Network XXXXXX (where X is a letter or number).
• The base station password returns to public.
Important: Resetting the base station to factory defaults erases all the settings you have entered for the base station, including access control and RADIUS settings.

If you move your AirPort Extreme Base Station to a location on your network with a different subnet and lose communication with the base station
Your AirPort Extreme Base Station may have an invalid IP address.
1. Make sure that your computer is set to access the network from the new location (where you moved the AirPort Extreme Base Station) and that it is in range of the base station.
2. Make sure that the computer is set to use AirPort.
3. Use the AirPort Setup Assistant to reconfigure the base station.
Important: You cannot use the AirPort Setup Assistant if you have used AirPort Admin Utility to turn off Internet sharing for your base station. If Internet sharing is turned off, you need to reset your base station and enter a new IP address. See "If you forget your AirPort network or base station password" on page 27.

If your printer isn't responding
If you connected a printer to the USB port on the base station and the computers on the AirPort network can't print, try doing the following:
1. Make sure the printer is plugged in and turned on.
2. Make sure the cables are securely connected to the printer and to the base station's USB port.
3. Make sure the printer is selected in the Printer List on client computers. To do this:
a. Open Printer Setup Utility, located in Applications/Utilities.
b. If the printer is not in the list, click Add.
c. Choose Rendezvous from the pop-up menu.
d. Select the printer and click Add.
Note: If the base station is set up to receive power over Ethernet, the USB port is disabled. You cannot print to a USB printer connected to the USB port if the base station is powered over Ethernet.

Appendix: AirPort Extreme Base Station Specifications

AirPort Specifications
• Wireless Data Rate: Up to 54 megabits per second (Mbps)
• Range: Up to 150 feet (45 meters) in typical use (varies with building)
• Frequency Band: 2.4 gigahertz (GHz)
• Radio Output Power: 15 dBm (nominal)
• Standards: Compliant with the 802.11 HR Direct Sequence Spread Spectrum (DSSS) 11 Mbps standard, the 802.11 DSSS 1 and 2 Mbps standard, and the 802.11g specification

Interfaces
• RJ-45 Ethernet WAN connector for built-in 10/100Base-T. The WAN port may accept power as a Class 0 IEEE 802.3af-compliant Powered Device (PD).
• RJ-45 Ethernet LAN connector for built-in 10/100Base-T (G)
• Universal Serial Bus (USB) printing
• AirPort

Environmental Specifications
• Operating Temperature: 32° F to 95° F (0° C to 35° C)
• Storage Temperature: –13° F to 140° F (–25° C to 60° C)
• Relative Humidity (Operational): 20% to 80% relative humidity
• Relative Humidity (Storage): 10% to 90% relative humidity, noncondensing
• Operating Altitude: 0 to 10,000 feet (0 to 3,048 m)
• Maximum Storage Altitude: 15,000 feet (4,572 m)

Size and Weight
• Diameter: 6.9 inches (175 mm)
• Height: 3.2 inches (80 mm)
• Weight: 1.25 pounds (565 grams), not including the mounting bracket

Base Station LED Sequences
The following light sequences (left, center, and right status lights) indicate base station status:
• Off / Off / Off: The base station is unplugged or has failed. If the base station is plugged in and all lights are off, contact your Apple-authorized service provider.
• On / On / On: The base station is in self-check mode.
• Rapid sequenced flashing, right to left: The base station is starting up.
• Slowly flashing / Slowly flashing / Slowly flashing: The base station has failed the power-on self-test. Contact your Apple-authorized service provider.
• Off / Flashing slowly / Off: The base station is being reset;
the network and base station passwords are reset to public.
• Off / Flashing three times / Off: The base station is being reset, and all settings are returned to their factory defaults.
• Off or flashing / On / Off or flashing: Left and right flashing indicates normal network activity. The left LED flashing indicates AirPort wireless activity, and the right LED flashing indicates Ethernet or network activity.

Communications Regulation Information

FCC Declaration of Conformity
This device complies with part 15 of the FCC rules. Operation is subject to the following two conditions: (1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation. See instructions if interference to radio or television reception is suspected.

Radio and Television Interference
The equipment described in this manual generates, uses, and can radiate radio-frequency energy. If it is not installed and used properly—that is, in strict accordance with Apple's instructions—it may cause interference with radio and television reception. This equipment has been tested and found to comply with the limits for a Class B digital device in accordance with the specifications in Part 15 of FCC rules. These specifications are designed to provide reasonable protection against such interference in a residential installation. However, there is no guarantee that interference will not occur in a particular installation. You can determine whether your computer system is causing interference by turning it off. If the interference stops, it was probably caused by the computer or one of the peripheral devices. If your computer system does cause interference to radio or television reception, try to correct the interference by using one or more of the following measures:
• Turn the television or radio antenna until the interference stops.
• Move the computer to one side or the other of the television or radio.
• Move the computer farther away from the television or radio.
• Plug the computer into an outlet that is on a different circuit from the television or radio. (That is, make certain the computer and the television or radio are on circuits controlled by different circuit breakers or fuses.)
If necessary, consult an Apple-authorized service provider or Apple. See the service and support information that came with your Apple product. Or, consult an experienced radio/television technician for additional suggestions.
Important: Changes or modifications to this product not authorized by Apple Computer, Inc., could void the FCC Certification and negate your authority to operate the product. This product was tested for FCC compliance under conditions that included the use of Apple peripheral devices and Apple shielded cables and connectors between system components. It is important that you use Apple peripheral devices and shielded cables and connectors between system components to reduce the possibility of causing interference to radios, television sets, and other electronic devices. You can obtain Apple peripheral devices and the proper shielded cables and connectors through an Apple-authorized dealer. For non-Apple peripheral devices, contact the manufacturer or dealer for assistance.
Responsible party (contact for FCC matters only): Apple Computer, Inc., Product Compliance, 1 Infinite Loop M/S 26-A, Cupertino, CA 95014-2084, 408-974-2000.
Use in Air-Handling Spaces
This device has been designed and tested for use in environmental air-handling spaces, in accordance with Section 300.22(C) of the National Electrical Code, and Sections 2-128, 12-010(3), and 12-100 of the Canadian Electrical Code, Part 1, C22.1.

Industry Canada Statement
This Class B device meets all requirements of the Canadian interference-causing equipment regulations.

VCCI Class B Statement

Europe — EU Declaration of Conformity
Complies with European Directives 72/23/EEC, 89/336/EEC, and 1999/5/EC. See http://www.apple.com/euro/compliance/.

© 2004 Apple Computer, Inc. All rights reserved. Apple, the Apple logo, AirPort, AppleTalk, Mac, and Mac OS are trademarks of Apple Computer, Inc., registered in the U.S. and other countries. Rendezvous is a trademark of Apple Computer, Inc. AppleCare and Apple Store are service marks of Apple Computer, Inc., registered in the U.S. and other countries. Wi-Fi is a registered certification mark, and Wi-Fi Protected Access is a certification mark, of the Wi-Fi Alliance.

www.apple.com/airport
www.apple.com/support/airport
034-2870-A Printed in XXXX

LiveType 2 User Manual

Apple Inc. Copyright © 2005 Apple Inc. All rights reserved. Your rights to the software are governed by the accompanying software license agreement. The owner or authorized user of a valid copy of Final Cut Studio software may reproduce this publication for the purpose of learning to use such software. No part of this publication may be reproduced or transmitted for commercial purposes, such as selling copies of this publication or providing paid-for support services. The Apple logo is a trademark of Apple Inc., registered in the U.S. and other countries. Use of the "keyboard" Apple logo (Shift-Option-K) for commercial purposes without the prior written consent of Apple may constitute trademark infringement and unfair competition in violation of federal and state laws. Every effort has been made to ensure that the information in this manual is accurate. Apple is not responsible for printing or clerical errors.

Note: Because Apple frequently releases new versions and updates to its system software, applications, and Internet sites, images shown in this book may be slightly different from what you see on your screen.

Apple Inc., 1 Infinite Loop, Cupertino, CA 95014-2084, 408-996-1010, www.apple.com

Apple, the Apple logo, AppleWorks, Final Cut, Final Cut Pro, Final Cut Studio, FireWire, Keynote, LiveType, Mac, Macintosh, and QuickTime are trademarks of Apple Inc., registered in the U.S. and other countries. Finder is a trademark of Apple Inc. AppleCare is a service mark of Apple Inc., registered in the U.S. and other countries. Helvetica is a registered trademark of Heidelberger Druckmaschinen AG, available from Linotype Library GmbH. Other company and product names mentioned herein are trademarks of their respective companies.
Mention of third-party products is for informational purposes only and constitutes neither an endorsement nor a recommendation. Apple assumes no responsibility with regard to the performance or use of these products.

Contents
Preface, An Introduction to LiveType: How Does Titling Work?; A Realm of Creative Possibilities; Workflow for Creating Titles; About This Manual; LiveType Onscreen User Manual; Apple Websites
Chapter 1, The LiveType Interface: Canvas; Inspector; Media Browser; Timeline; LiveType Media Files
Chapter 2, Setting Up a Project: Templates; Starting a New Project and Setting Defaults; Setting Project Properties
Chapter 3, Adding a Background: Setting a Background Color; Adding a Background Texture; Importing a Background Movie or Still Image; Considerations for Rendering the Background
Chapter 4, Working With Tracks: Positioning Tracks in the Canvas; Creating Angles and Curves; Linking Endpoints; Adding, Copying, and Deleting Tracks; Working With Tracks in the Timeline
Chapter 5, Working With Text: Inserting Text; Adjusting the Timing of LiveFonts; Formatting Text; Enhancing Text With Styles; Creating a Matte; Modifying Individual Characters; Disabling Fonts in Mac OS X
Chapter 6, Working With Objects, Textures, and Imported Elements: Working With LiveType Objects; Working With LiveType Textures; Importing Graphics, Images, and Movies; Transforming Objects, Textures, and Imported Elements
Chapter 7, Working With Effects and Keyframe Animation: Preset Effects; Applying Preset Effects; Adjusting the Timing of an Effect; Changing the Order of Effects; Duplicating Effects and Tracks; Modifying a Preset Effect; Creating a New Effect From Scratch
Chapter 8, Previewing and Fully Rendering Your Titling Movie: Previewing Your Work; Optimizing Preview Performance; Rendering, Saving, and Exporting Your Titling Movie
Chapter 9, Advanced Design Techniques: Words Within Words; Warping Shadows and Glows; Track Curves; Creative Use of Special Characters; LiveFonts and Layers; Creating Scrolls and Crawls
Appendix A, Solutions to Common Problems and Customer Support: Frequently Asked Questions; Apple Applications Page for Pro Apps Developers; Calling AppleCare Support
Appendix B, Creating and Editing EffectScripts: Header; Default Timing; Keyframes; Sample EffectScripts
Glossary
Index

Preface: An Introduction to LiveType

Welcome to LiveType, a special-effects titling application that's powerful, easy to use, and completely versatile—whether you're creating movie titles and credits, broadcast ads, or web banners. Producing dynamic video titles—titles that really pop—can be a painstaking process, fraught with manual adjustments and keyframe stacks daunting even to experienced animators. With LiveType, you can create phenomenal results, in the output format you require, with a fraction of the effort.

How Does Titling Work?
Traditionally, titling was the term for adding text to film. The evolution of digital graphics and video technologies has expanded the definition, which now includes just about any combination of text and images you want to add to a movie. Titling is the process of creating a digital overlay, which is added to edited footage in your nonlinear editor (NLE) or compositing program. LiveType is the design studio where you generate titles to import into Final Cut Pro.

Alpha channel technology is the basis of titling. Most compositing and animation programs allow you to create art with an alpha channel. In addition, most NLEs use alpha channels they detect in an image or movie to properly lay the element over video. An alpha channel represents eight bits of grayscale pixel information in a 32-bit file. The eight grayscale bits determine which portions of the image to superimpose over other layers. White alpha-channel pixels make the superimposed image completely opaque, while black pixels make the overlay completely transparent, or invisible. Gray levels represent varying levels of opacity. LiveType automatically creates an alpha channel for your project when you render it with a transparent background.
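As a rough numeric illustration of the alpha arithmetic just described (this sketch is ours, not part of LiveType or any particular NLE), compositing one title pixel over one video pixel with an 8-bit alpha value works like this:

# Blend a title pixel over a video pixel using an 8-bit alpha value.
# alpha = 255 (white) makes the title fully opaque; alpha = 0 (black)
# makes it fully transparent; in-between values mix the two layers.
def composite_over(title_px, video_px, alpha):
    a = alpha / 255.0
    return tuple(round(a * t + (1 - a) * v) for t, v in zip(title_px, video_px))

# A white title pixel over a dark blue video pixel at roughly 50% opacity:
print(composite_over((255, 255, 255), (0, 0, 80), 128))  # -> (128, 128, 168)

At alpha 255 the title pixel would replace the video pixel entirely; at alpha 0 the video would show through unchanged.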
Titling is the process of creating a digital overlay, which is added to edited footage in your nonlinear editor (NLE) or compositing program. LiveType is the design studio where you generate titles to import into Final Cut Pro.

Alpha channel technology is the basis of titling. Most compositing and animation programs allow you to create art with an alpha channel, and most NLEs use the alpha channels they detect in an image or movie to lay the element properly over video. An alpha channel represents eight bits of grayscale pixel information in a 32-bit file. The eight grayscale bits determine which portions of the image to superimpose over other layers: white alpha-channel pixels make the superimposed image completely opaque, black pixels make the overlay completely transparent (invisible), and gray levels represent varying levels of opacity. LiveType automatically creates an alpha channel for your project when you render it with a transparent background.

A Realm of Creative Possibilities

LiveType has revolutionized titling in two major ways. First, it introduced 32-bit LiveFonts, a new approach to text animation in which every character of a font is a separate, animated movie. Second, LiveType handles effects and animation with greater ease than any other titling application.

Animated Fonts, Objects, and Textures
Three types of animated media are included in LiveType:
- LiveType objects are animated graphics.
- LiveType textures are moving images used to fill backgrounds, text, or objects.
- LiveFonts are complete, animated character sets.
All of these elements move inherently, even before you apply motion paths and special effects to them. LiveType comes with dozens of LiveFonts and hundreds of objects and textures. What's more, you can create your own animated fonts with the LiveType FontMaker utility, building characters from virtually any graphical object—from 3D animations and Photoshop images to video clips—and apply effects to them just as you would to words.

Effects Handling
Effects in LiveType are handled as separate entities—"packages" encompassing movement, transformation, and timing parameters—that can be applied to any number of elements in the Canvas. You can take advantage of more than 100 customizable effects that come with LiveType, including fades, zooms, rotations, and motion paths. Or you can create your own styles by adjusting existing effects or building them from scratch.

From an animation standpoint, LiveType is easier to work with than other titling applications, since one keyframe marker contains all the parameters for an element at a point in time, eliminating the complexity of long keyframe stacks. And powerful timing features allow you to control every aspect of your animation. In addition to basic functions such as loop, speed, and duration, LiveType allows you to sequence your effects. Sequencing lets you animate characters in a line of text individually, with their own timing elements, so you're not constrained to blocks of text that fly around the screen as a unit.

Whether you're combining prebuilt elements or generating all the pieces yourself, you can create wholly original, eye-catching compositions with surprisingly little effort.

Workflow for Creating Titles

Video production is typically approached in layers from back to front, starting with shooting and editing the footage, then building in effects, then applying titles and sound.
Likewise, title creation is best approached loosely from back to front. Of course, because the design process is fluid, there is no hard-and-fast prescription, but the following steps give you a sense of what's involved in a typical project.

Step 1: Configure the working environment
- Set the output resolution, frame rate, and other project properties.
- Set up the grid, guides, and rulers in the Canvas, according to your working preferences.

Step 2: Apply a background, if any

Step 3: Create elements (text or objects) in the Canvas, one by one
- Position and shape a track for the element.
- Add an element to the track.
- Select a font.
- Adjust attributes and apply styles to the element.

Step 4: Animate the elements
- Define the movie duration.
- Apply effects and adjust the timing.
- Customize the animation with keyframe adjustments.

Step 5: Preview and fine-tune the movie

Step 6: Render the final movie for compositing into your video

Step 7: Export the movie to an alternative format, if needed

You may be able to save considerable time by taking advantage of LiveType templates—project files provided with the software that offer many examples of titling formats. One might suit your needs with few changes, or you may find that certain elements within a template are useful and can be copied into your own project. More about templates can be found in Chapter 2, "Setting Up a Project," on page 31.

About This Manual

Because LiveType is a creative tool, documentation can only go so far in describing its potential. This manual provides a detailed description of the LiveType interface, features, and functionality, and introduces you to the built-in resources and templates to give you a sense of the versatility of this product. In the end, you are limited only by your own creative vision, and the way to push the limits of LiveType is to jump in and start creating. This manual begins with a description of the interface, followed by a series of chapters that explain the tasks you'll need to perform, as well as advanced techniques.

Note: This user manual is written for people with a rudimentary understanding of film or video production. Experienced users will already be familiar with most of the terminology used here. Others will find that most terms are defined in context, and the glossary at the end of this manual may be helpful as well.

LiveType Onscreen User Manual

The LiveType onscreen user manual allows you to access information directly onscreen while you're working in LiveType. To view the onscreen user manual, choose Help > LiveType User Manual. The onscreen user manual is a fully hyperlinked version of the user manual, enhanced with many features that make locating information quick and easy.
- The home page provides quick access to various features, including Late-Breaking News, the index, and the LiveType website.
- A comprehensive bookmark list allows you to quickly choose what you want to see and takes you there as soon as you click the link.
In addition to these navigational tools, the onscreen user manual gives you other means to locate information quickly:
- All cross-references in the text are linked. You can click any cross-reference and jump immediately to that location. Then, you can use the navigation bar's Back button to return to where you were before you clicked the cross-reference.
- The table of contents and index are also linked. If you click an entry in either of these sections, you jump directly to that section of the user manual.
- You can also use the Find dialog to search the text for specific words or a phrase.

LiveType Help also contains information about issues with third-party software and known bugs. This information is found in the Late-Breaking News section of LiveType Help.

To access Late-Breaking News, choose Help > Late-Breaking News.
Note: You must be connected to the Internet to download the Late-Breaking News file.

Additionally, LiveType Help contains a link to the Creating LiveFonts PDF file, which details the process of creating custom LiveFonts for use with LiveType. To access it, choose Help > Creating LiveFonts.

Apple Websites

There are a variety of discussion boards, forums, and educational resources related to LiveType on the web.

LiveType Website
For general information and updates, as well as the latest news on LiveType, go to: http://www.apple.com/finalcutpro/livetype.html

Apple Service and Support Website
For software updates and answers to the most frequently asked questions for all Apple products, including LiveType, go to: http://www.apple.com/support. You'll also have access to product specifications, reference documentation, and Apple and third-party technical articles. For LiveType support information, go to: http://www.apple.com/support/livetype/index.html

Other Apple Websites
Start at the Apple homepage to find the latest information about Apple products: http://www.apple.com

QuickTime is industry-standard technology for handling video, sound, animation, graphics, text, music, and 360-degree virtual reality (VR) scenes. QuickTime provides a high level of performance, compatibility, and quality for delivering digital video. Go to the QuickTime website for information on the types of media supported, a tour of the QuickTime interface, specifications, and more: http://www.apple.com/quicktime

FireWire is one of the fastest peripheral standards ever developed, which makes it great for use with multimedia peripherals such as video camcorders and the latest high-speed hard disk drives. Visit this website for information about FireWire technology and available third-party FireWire products: http://www.apple.com/firewire

For information about seminars, events, and third-party tools used in web publishing, design and print, music and audio, desktop movies, digital imaging, and the media arts, go to: http://www.apple.com/pro

For resources, stories, and information about projects developed by users in education using Apple software, including LiveType, go to: http://www.apple.com/education

Go to the Apple Store to buy software, hardware, and accessories direct from Apple, and to find special promotions and deals that include third-party hardware and software products: http://www.apple.com/store

Chapter 1: The LiveType Interface

The LiveType interface consists of four primary windows—the Canvas, the Inspector, the Media Browser, and the Timeline.
- Canvas: This is where projects take shape. You use it to position text and objects, create motion paths, and view the results as you design.
- Inspector: A toolbox of settings and parameters, including virtually every option for building and customizing your titling creations.
- Media Browser: This area provides access to all the fonts, textures, objects, and effects you'll use to create your titles.
- Timeline: This is where you manage the frame-by-frame action of your titling projects.
Animation keyframes are created and adjusted in the Timeline, allowing you to orchestrate the movement of your titling elements.

The four windows float freely, and can be moved and resized to suit your working preferences. To restore the default layout of LiveType, choose Window > Apply Default Layout.

Canvas

The Canvas is your creative working environment, reflecting the output dimensions you configure in the Project Properties dialog. (See "Setting Project Properties" on page 34.) Whether you are working in HDTV, NTSC, PAL, or any other format, the Canvas is designed to help you lay out and view your titling project easily.

About the Canvas Interface
There are various interface elements and controls in the Canvas, outlined below.

Background
When you first open LiveType, the default checkerboard pattern in the Canvas represents a transparent background, allowing alpha channel titles to overlay video footage when composited in a nonlinear editor (NLE) such as Final Cut Pro. You can set the background as any combination of the following:
- Transparent
- Solid color
- Animated texture or object
- Still image
- Movie

Backgrounds often cover the entire Canvas. However, when used with the matte feature in the Attributes tab of the Inspector, an element can appear to "punch through" an underlying element to reveal the background color, image, or movie. See "Creating a Matte" on page 72 for more about creating mattes.

Tracks
The dark blue horizontal line that appears in the default Canvas is a track. Tracks are the foundation of any LiveType composition. Every element of a project resides on a track. Tracks define:
- The position of text and objects in the Canvas
- The layering of elements
- In some cases, the path taken by moving elements

Tracks have two endpoints, and can have any number of "control points," which are nodes that create angles and curves in the track. When more than one track is in the Canvas, only the endpoints of the selected, or active, track are visible. This identification is helpful when you're applying attributes to a track.

Action Safe and Title Safe Guidelines
The green hairline boxes in the Canvas represent the "action safe" and "title safe" areas. The action safe area, defined by the outer line, is the extent of the screen where the image is readily visible, given the curvature of the cathode-ray tube (CRT). The title safe area, represented by the inner line, is the boundary beyond which text is not easily read. To turn the action safe and title safe guidelines off or on, choose View > Title Safe.

Canvas Zoom Pop-up Menu
At the bottom of the Canvas is a pop-up menu for changing the magnification of the Canvas. To change the Canvas zoom, do one of the following:
- Open the Canvas zoom pop-up menu at the bottom of the Canvas and choose one of the magnification options.
- Choose Fit to Window from the Canvas zoom pop-up menu, then resize the Canvas window to a new magnification.
- Choose View > Zoom In or Zoom Out.
- With the Canvas active, use the Command-Z keyboard shortcut for Fit to Window.
- With the Canvas active, use the Command-+ or Command-– keyboard shortcut to zoom in or out.
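The alpha channel compositing described above (and in "How Does Titling Work?" in the Preface) can be expressed in a few lines of code. The following is a minimal sketch in Swift, not LiveType's actual implementation; the RGBA type and the over function are hypothetical names, and straight (non-premultiplied) 8-bit alpha is assumed.

    // One pixel of a title overlay composited over background video.
    // A white alpha value (255) makes the overlay fully opaque, black (0)
    // makes it fully transparent, and gray values blend proportionally.
    struct RGBA {
        var r: UInt8
        var g: UInt8
        var b: UInt8
        var a: UInt8
    }

    func over(_ fg: RGBA, _ bg: RGBA) -> RGBA {
        let alpha = Double(fg.a) / 255.0
        func mix(_ f: UInt8, _ b: UInt8) -> UInt8 {
            UInt8((Double(f) * alpha + Double(b) * (1.0 - alpha)).rounded())
        }
        return RGBA(r: mix(fg.r, bg.r), g: mix(fg.g, bg.g), b: mix(fg.b, bg.b), a: 255)
    }

An NLE such as Final Cut Pro applies this kind of blend for every pixel of every frame, which is why a title rendered with a transparent background overlays footage cleanly.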
Transport Controls
The transport controls—Previous Frame, Play, Next Frame, and Loop—allow you to generate a RAM preview of your project, so you can preview your titling movie right in the Canvas. When you click the Play button, LiveType renders each frame into RAM; this feature is referred to as a RAM preview.

To render a RAM preview of your project in the Canvas:
1. Click the Play button (or press the Space bar when the Canvas or Timeline is active). The preview renders each frame, then runs through the preview in real time.
2. Stop the preview by clicking anywhere in the Canvas.

The Play icon turns into a Pause icon when the RAM preview is playing. The Loop button is a toggle that gives you the choice of a single run-through or repeating the preview in a continuous loop. See Chapter 8, "Previewing and Fully Rendering Your Titling Movie," on page 109 for more about previewing your work.

Customizing the Canvas
Most Canvas settings can be customized from the View menu, allowing you to configure guidelines and magnification, and choose which elements appear in the Canvas. The grid, rulers, and guides are helpful for precisely aligning and positioning elements in the Canvas.

To show the rulers or the grid:
- Choose View > Rulers.
- Choose View > Grid.

You can set the number of pixels between each grid line in the Project Properties dialog.

To customize the grid:
1. Choose Edit > Project Properties.
2. In the Ruler and Grid Settings area at the bottom, enter a new value in the Grid Width field.

To add a guide to the Canvas, click inside one of the rulers to insert a guide, marked by its horizontal or vertical pixel position.

To add crosshairs to the Canvas, click inside one of the rulers and drag the pointer onto the Canvas.

To remove guides from the Canvas, do one of the following:
- Drag guide markers off either end of the ruler.
- Choose View > Clear Guides, which removes all guides.

You can isolate a single track and display all other elements as bounding boxes—rectangles that roughly show the size, position, and orientation of an element. This option is useful for cleaning up the Canvas as you work on a single track, and it saves preview-rendering time, because only one item of your composition is being rendered.

To isolate a single track in the Canvas, select the track you want to continue working on, then choose View > Selected Only. Revert to the normal view by choosing View > Selected Only again.

The Proxy Frame Only option in the View menu—which applies only when you're using installed LiveType media—renders LiveFonts, textures, and objects as proxy frames in the Canvas, essentially freezing their inherent animation. Particularly when the animated element has highly variable content from frame to frame (such as Particles objects, which contain few if any pixels in the beginning and ending frames), the proxy frame is easier to work with, because it shows a more representative shape of the object regardless of the playhead position.

Inspector

The Inspector is your toolbox for transforming elements—text, objects, or images.
There are unlimited combinations of parameters and attributes you can use to make your titles dynamic and original. The Inspector consists of a text-entry box and Live Wireframe Preview at the top of the window, and five tabs of parameters. Inspector settings always apply to the track, character, or effect that is currently selected in the Canvas or Timeline.

Text-Entry Boxes
There are two areas in the Inspector where you can add text to a track. One of these is in the upper-left corner of the Inspector. Because this text-entry box is visible no matter which Inspector tab is selected, it is a convenient way to identify the active track, as well as to add or change the text on a track, as you can type directly into it. The text-entry box at the bottom of the Text tab is larger, making it easier to insert and edit larger amounts of text.

The text-entry boxes also allow you to select individual letters or words on a track. When you highlight text in the text-entry box, those characters are selected in the Canvas. This is particularly useful when the text you want to modify is obscured in the Canvas by other elements.

Live Wireframe Preview
In the upper-right corner of the Inspector, the Live Wireframe Preview continually plays your titling movie, with small bounding boxes indicating the movement of each character or object. This feature gives you a quick indication of how your adjustments have changed the overall animation, without rendering a full preview with every change you make. To freeze or unfreeze the Live Wireframe Preview, click inside the preview area.

Inspector Tabs
There are five tabs in the Inspector.
- Text tab: This is where you enter text and adjust the size, alignment, and spacing of text on the active track.
- Style tab: This tab provides options for the Shadow, Glow, Outline, and Extrude treatments, which can be applied to text or objects. These are often used to add depth and highlight the text or object, although a wide variety of graphical outcomes are possible.
- Effects tab: This tab lists the effects that have been applied to the active track, and is used to view and change effect parameters at any point in your titling movie. Effects are combinations of movement and transformation that can be applied to any track. The On column of the Effects tab allows you to turn an effect off or on for individual characters on the track.
- Timing tab: Timing parameters for tracks and effects are controlled in this tab. While the Timeline provides a frame-by-frame diagram of tracks and effects with their associated keyframes, the Timing tab is a single pane that allows you to adjust the overall timing and modify the parameters of your animation. Some timing adjustments are made more easily by moving elements in the Timeline, rather than entering values in the Timing tab. However, the Timing tab gives you access to the full range of timing variables, as well as effect parameters that let you fine-tune your animation, creating exactly the look you want.
- Attributes tab: This is where you assign a variety of attributes—opacity, blur, scale, offset, rotation, and color—to elements in the Canvas. Attributes can be applied to entire tracks or individual characters on a track. Glyph settings include attributes such as the shape, color, and position of text and objects. The Attributes tab also contains a Matte pane, with settings for creating cutouts and textured fills: an element appears as a cut-out window that reveals the element below it. A simple line of text, for example, can be matted to a movie clip, which essentially "fills" the text. The Matte to Texture option lets you fill track contents—even individual characters—with an animated texture, without having to add the texture to your project as a separate element.

Media Browser

Most of the installed resources available for your titling projects are available through the Media Browser—except for LiveType templates, and images and movies you import from other sources. There are various tabs representing different elements installed on your computer: LiveFonts, system fonts, textures, objects, and effects. Using the Media Browser, you can scroll through and view representations of all these elements before you apply them to your project.

The Media Browser preview is the only way to see how LiveType media—LiveFonts, textures, and objects—move and transform until you install the full data file onto your computer. When you first apply one of these elements to the Canvas, a single representative frame is displayed, not the entire animated sequence. Installing the data component allows you to see a true representation of the LiveFont in each frame of your movie. See "LiveType Media Files" on page 28 for more about LiveType file management.

Timeline

The Timeline depicts the frame-by-frame orchestration of your titling project, and provides many tools for designing the movement and timing of your titles. The Timeline allows you to do the following:
- Set the timing and duration of tracks and effects
- Manage the track order, or layers
- Group tracks to maintain their relative position
- Enable and disable tracks and effects
- Work with keyframes to customize your animation
- Select specific frames to view or adjust
- Set markers to render only a portion of your movie for previews or final output

About the Timeline Interface
The following are the interface elements and controls in the Timeline.

Project Tabs
Tabs at the upper-left corner of the Timeline indicate which projects are currently open, and which one is active.

Playhead, Timecode, and Frame Ruler
The playhead and timecode on the frame ruler indicate which frame is showing in the Canvas. The playhead moves along the frame ruler when you play your project, and it can be dragged to any given frame. To view a specific frame, do one of the following:
- Drag the playhead to the desired frame.
- Click a frame in the frame ruler.
The Canvas always reflects the frame under the playhead.
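The relationship between the playhead's frame number and the timecode shown on the frame ruler is simple arithmetic. The sketch below is a hypothetical Swift helper, not anything LiveType exposes; it converts a zero-based frame index to a non-drop timecode string. SMPTE Drop, one of the Time Format options covered in Chapter 2, requires additional frame-skipping logic not shown here.

    import Foundation

    // Convert a zero-based frame index to a non-drop timecode string.
    // Example: timecode(frame: 1800, fps: 30) returns "00:01:00:00".
    func timecode(frame: Int, fps: Int) -> String {
        let ff = frame % fps
        let seconds = frame / fps
        return String(format: "%02d:%02d:%02d:%02d",
                      seconds / 3600, (seconds / 60) % 60, seconds % 60, ff)
    }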
Render Selection Markers
The In Point and Out Point markers in the frame ruler allow you to determine the portion of your movie you want to render. Using these markers, you can:
- Save time rendering previews when you don't need to see the entire movie
- Choose the precise number of frames you want to include in your final output

To change the render selection, do one of the following:
- Drag the In Point and Out Point markers in the frame ruler.
- Position the playhead and press the I key on your keyboard to set the Render Selection In Point, or the O key to set the Render Selection Out Point. The Timeline must be active for these hot keys to work.

As you constrain the range of frames to be rendered, the information box in the upper-left corner of the Timeline reflects the modified duration and number of frames.
Note: To quickly locate the Out Point marker when it is beyond the end of the visible Timeline, move the Timeline zoom slider all the way to the right.

Tracks and Effects
Tracks are numbered according to their layer position in the left column of the Timeline, and Track 1 is always the top layer. Effects are depicted as separate bars underneath the track they apply to. An effect may extend for the entire duration of the track, or only a portion of it. One track may have several effects applied to it, in sequence or overlapping.

Background Bar
Any item that falls below the background bar is a background element. You can drag the background bar up or down to any position between tracks in the Timeline. For more information, see "About the Background Bar" on page 43.

Keyframes
The basis of most digital animation, keyframes contain the parameters that elements in the Canvas reflect at a specific point in time. When a movie is rendered, LiveType interpolates the movement of the elements in between keyframes for smooth, continuous motion. When an effect is increased in duration, or stretched, the keyframes spread out with it, and the effect takes longer to complete. See Chapter 7, "Working With Effects and Keyframe Animation," on page 87 to learn more about keyframes.

Grouping Buttons
Grouping buttons let you group tracks together in the Canvas, locking their relative position while allowing you to move the group as a unit.

Enable/Disable Buttons
The Enable/Disable buttons turn tracks and effects off or on. When a track is disabled, its contents are removed from the Canvas, although the blue track line remains. Disabled tracks are not rendered in previews or movies. Similarly, effects can be disabled.

Timeline Zoom Controls
You typically use the zoom controls to adjust the amount of time represented in the Timeline window, which is helpful when you're timing a long or complex composition. The main zoom control is the Timeline zoom slider, which zooms in and out around the playhead as you drag. You can also use the Command-+ or Command-– keyboard shortcut to zoom in or out on the playhead when the Timeline is active. Another helpful command is Shift-Z, which adjusts the Timeline to show the entire project at once.

LiveType Media Files

LiveType includes hundreds of media and effects files, which are the resources available to you within the Media and Template Browsers. Animated files include LiveFonts, objects, and textures.
Preset effects and templates, as well as various other LiveType resources, are also included with the application. These files are collectively known as LiveType media files. LiveType now uses a single file format for media such as LiveFonts, textures, and objects, but media using the earlier "pair format" is still supported.

Shortcuts and Hot Keys
The LiveType interface includes numerous menu items and shortcuts to help you use the application easily and efficiently. It's important to note that the function of these options depends on which LiveType window is active. For example, when the Canvas is active, the arrow keys nudge the active track in small increments. However, when the Timeline is active, the right and left arrow keys advance the playhead or move it back one frame.

Locating LiveType Media Files
When you install LiveType, a folder hierarchy is placed in /Library/Application Support/LiveType/. This is where LiveType looks first for media files such as LiveFonts, objects, textures, effects, images, movies, and templates. LiveType media files can also be installed on other hard disks, including a network server. You can assign any location for media files from the Preferences dialog.

About Installing LiveType Media Files
The LiveType installation process allows you to install LiveType media files in any location. For more details concerning installing LiveType and the LiveType media files, see the Installing Your Software document that is enclosed with the installation discs.

Managing LiveType Media Files
Any element in the LiveFonts, Textures, Objects, and Effects tabs of the Media Browser has a corresponding media file, which contains the components needed to work with LiveType. Once these media files are installed in the /Library/Application Support/LiveType folder, you can move them to a different disk. To use LiveType media that is located outside the application support folder, assign its location from within LiveType by choosing LiveType > Preferences.

If you have media installed from a previous version of LiveType, the Installed column of the Media Browser reads "Yes" or "No" for that media, indicating whether or not the media files are installed. This convention applies only to the earlier "file pair" format. New content comes in the form of media files that appear in the Media Browser with a double dash in the Installed column, indicating they are installed. The Install and Uninstall buttons in the lower-left corner of the Media Browser do not apply to the newer content, only to LiveType media files from previous versions of the application.

The media file types, their filename extensions, and their contents are as follows:
- Effects (.LTFX): Effects files and sample movies for each effect
- LiveFonts (.LTLF): Animated font characters
- Objects (.LTOB): Pre-rendered animations with an alpha channel
- Templates (.LTTM): LiveType projects
- Textures (.LTTX): Full-screen animated backgrounds; these animations can also be matted to any font character or element on the LiveType Canvas

Creating Custom Categories for LiveType Media Files
You can create custom categories for LiveType media by simply creating a new folder within the media folder, such as LiveFont/My folder/My font.
LiveType recognizes only one folder level after the original media category. You can move "file pair" media files from previous versions of LiveType to another disk, but they must remain in the same folder hierarchy they were previously located in.

Using Imported Files
When you use graphics or movies from other sources in your project, LiveType needs to refer to the source files for these external elements. Therefore, once you've placed an image or movie, it's best not to move or rename the source file. The Images folder in the LiveType folder hierarchy is a convenient place to store images associated with your projects.

Chapter 2: Setting Up a Project

The most important step as you begin any LiveType project is to configure the project properties. As tempting as it may be to jump right in and start designing, you should define your output parameters and save the project to disk at the outset, to be sure your titles are generated at the size and resolution you need. If you generate a titling movie without first configuring the project, you're bound to run into trouble: although these settings can be changed at any time, a titling composition created for standard broadcast, for example, will fill only a portion of the screen if it's changed to a high definition format after the fact.

First, you need to open a new project. You have two options:
- Start with a LiveType template.
- Start with an existing LiveType project you've already created.

Templates

LiveType includes dozens of templates, which are LiveType project files organized by category. You can use templates in several ways:
- As the starting point for your own creations
- As repositories of preconfigured elements you can paste into your own projects
- As a resource for sparking ideas and seeing what's possible with LiveType
The templates comprise many types of prebuilt projects, all of which you can revise for your own purposes.

To open a template:
1. Choose File > Open Template.
2. Browse the categories of templates in the Template Browser.
3. In the Template Browser, choose NTSC, PAL, or HD from the Format pop-up menu.
4. Choose a template, then click OK.

Whenever you open a template, make sure to set your project properties immediately. See "Setting Project Properties" on page 34.

You can save your own projects as templates, so they're accessible through the Template Browser.

To save your project as a template:
1. Place the project file (.ipr) in a folder in /Library/Application Support/LiveType/Templates.
2. Generate a short QuickTime movie of the project, with the same name but an appropriate movie extension, such as .mov or .mp4. Once you do this, the template appears in the preview window of the Template Browser.

Starting a New Project and Setting Defaults

When you open LiveType, an untitled default project appears in the interface. If you want to start a new project when LiveType is already open, you need to open a new default project.

To open a new project, choose File > New. A new project with an empty Canvas appears, and an "Untitled" project tab is added to the Timeline.
Note: At least one LiveType project must be open at any time, so if you close the only open project, a new default project automatically opens.

You can configure your LiveType interface and save your settings as the default.
Default settings include project properties, font and media choices, Canvas options, the tabs that are revealed, and various other settings. This is particularly useful for saving your preferred output format, so you don't have to reconfigure the project properties each time you open a project.

To save your default settings:
1. Set up a LiveType project with the settings and configuration you want.
2. Choose LiveType > Settings > Remember Settings.
Every time you subsequently open a new project or open LiveType, the current default settings apply. Content elements in the original project are not saved as part of the default project.

It is possible to find yourself with a default configuration that's undesirable and difficult to get out of. You can easily wipe your settings clean and revert to the original LiveType settings. To erase your project settings, choose LiveType > Settings > Clear Settings.

Setting Project Properties

Once you've opened a new project and saved it to disk, you need to define the output you want to create. All of the essential project settings are accessed through the Project Properties dialog.

To open the Project Properties dialog:
1. Choose Edit > Project Properties.
2. Make the desired changes, then click OK.
For detailed information, see the next section, "Settings in the Project Properties Dialog."

Settings in the Project Properties Dialog

There are various pop-up menus, colors, and settings you can select when specifying the properties for your project.

Presets
Presets establish the width, height, frame rate, and pixel aspect defined by the selected standard.
- Presets: This pop-up menu lists the most common output formats, and selecting one automatically sets the project resolution, frame rate, and so on. After you select a different preset, notice how the settings change, and how different presets affect the shape and size of the Canvas. If none of the presets conform to your project, you can configure the dimensions and frame rate manually; "Custom" then automatically appears in the Presets field.
Note: Web banner and multimedia options are included among the presets, since LiveType is effective for building animations for the web or for Keynote, for example, which imports QuickTime movies directly. Keep in mind that web banners are typically created in GIF format. To create a GIF, you need to use another program to convert your LiveType output.
- Width: This is the width, in pixels and inches.
- Height: This is the height, in pixels and inches.
- Frame Rate: This is the frame rate, in frames per second.
- Field Dominance: When your project is intended for interlaced video output, choose either Upper (Odd) or Lower (Even) for the smoothest animation; LiveType then renders with either the upper or the lower field first. Choose None for footage that is non-interlaced. DV footage is typically lower field first, while certain video capture cards may need to be rendered with the Upper (Odd) option chosen. In all cases, use the fielding option that matches your video system settings. For more about this, see "Choosing a Field Order" on page 38.
- Pixel Aspect: The ratio of width to height of a single pixel, or pixel aspect, can differ from format to format.
The pixel aspect is set by preset properties, or you can enter a custom pixel aspect value.
- Start Time: You can map the start time of your project to a precise point in your edited video, making it easy to overlay your video at the compositing stage. Start Time units reflect the time format entered in the field to the right.
- Time Format: This setting defines how the position of audio or video is marked in time. There are several choices: Frames, SMPTE, and SMPTE Drop.

Description Field
The description field is a useful place to store notes about the project, as well as a description of any nonstandard output parameters you've configured, for future reference.

Quality Settings
Quality settings can have a big impact on the amount of time you invest in a project. As you design your titling animation and try out different effects, you will preview your movie many times, and each time, your system has to render the movie frame by frame. The settings in this area allow you to configure the quality of three different items:
- Canvas: A RAM preview in the Canvas
- Movie Render: A full movie render
- Preview: A standard preview accessed via the File menu

How you preview your movie depends on where you are in the design process. You may find yourself changing these settings several times as you design your titles, particularly if it's a complex composition that takes considerable time to render. A Wireframe preview, which displays rectangular bounding boxes representing each character, renders very quickly. When you're focusing on the motion of your Canvas elements, not their visual attributes, previewing in Wireframe mode is highly efficient. The Wireframe, Draft, and Normal settings render your project at increasing resolution levels.

Background Settings
These settings allow you to select a Canvas color and its opacity level.
- Color: This allows you to choose a background color in the Canvas.
- Opacity: This setting reflects the opacity of the color selected above. An opacity level of 0 equates to no background color, and the Canvas shows the checkerboard pattern indicating a transparent background.
- Render Background: When this checkbox is selected, any background color, as well as other background elements, render in previews or final movies.
Important: The Render Background checkbox applies to all elements that fall below the background bar in the Timeline, as well as the background color, which is not represented in the Timeline. If the checkbox is not selected, the background color and other background elements are not rendered in previews or final movie renders. For more information, see Chapter 3, "Adding a Background," on page 41, which presents a complete explanation of working with backgrounds in LiveType.

Ruler and Grid Settings
You can display or hide the Canvas rulers and grid, and set the space between gridlines, at the bottom of the Project Properties dialog. See "Customizing the Canvas" on page 17 for more about the rulers and grid.

Choosing a Field Order
You can run a simple test to determine the proper field order for your system. When you make a movie, the rendering order (upper field first or lower field first) should correspond to the method used by your equipment, or your movie will appear distorted.
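To make the idea of field order concrete, the sketch below separates a frame's scan lines into two fields and orders them by field dominance. This is an illustrative Swift fragment under assumed conventions (exactly which physical lines count as upper versus lower varies by system), not LiveType's renderer.

    // Split a frame (an array of scan lines) into two interlaced fields,
    // ordered by field dominance. With upper field first, the field built
    // from every other line starting at the top is displayed first.
    func interlacedFields(frame: [[UInt8]], upperFieldFirst: Bool) -> [[[UInt8]]] {
        let upper = stride(from: 0, to: frame.count, by: 2).map { frame[$0] }
        let lower = stride(from: 1, to: frame.count, by: 2).map { frame[$0] }
        return upperFieldFirst ? [upper, lower] : [lower, upper]
    }

Rendering the fields in the wrong order displays each half-frame out of sequence, which is the stuttering distortion the test below is designed to reveal.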
Note: The field order with which you record to video equipment can be altered by changes in the hardware or software of your production setup. For example, changing your video board, device control software, or VCR after setting the field order can reverse your fields. Therefore, any time you make a change to your setup, you should run this test for field rendering order.

To test your system, render two LiveType projects, one with the upper field first and one with the lower field first.
Important: You may need to familiarize yourself with the basic functions of LiveType before you go through these steps.

To test the field rendering order:
1. Start a new LiveType project.
2. Choose Edit > Project Properties.
3. In the Project Properties dialog, do the following:
   a. Choose an NTSC or PAL setting from the Presets pop-up menu.
   b. Choose Lower (Even) from the Field Dominance pop-up menu. Do not choose Upper (Odd). In this case, you are rendering the lower field first.
4. In the Inspector, do the following:
   a. Type a capital "O" in a system font on Track 1.
   b. Increase its size to fill most of the Canvas.
   c. Color the letter red, then choose black as your background color.
5. Apply a fast-moving effect to the track, such as Jumpy, in the Caricature effect category in the Media Browser.
6. In the Timing tab, set the speed of the effect to 100 percent.
7. Choose File > Render Movie.
8. Enter lower.mov as the filename in the Save As field, then click Save. The movie renders to your chosen location.
9. Now change the color of the capital "O" to blue, and select Upper (Odd) in the Project Properties dialog.
10. Save the file, naming it upper.mov.
11. Render the second movie.
12. Import the rendered files into Final Cut Pro, then play back both movies on an NTSC monitor.

One of the two movies will look distorted; the other will play correctly, with sharply defined edges. Whenever you render a LiveType movie for that system, use the settings you used for the undistorted output.

Chapter 3: Adding a Background

Backgrounds in LiveType generally serve one of two purposes: either they are an integral part of the titling composition, or they are used as an aid to position elements and key the timing of the titling movie. Although anything can be a background, a background is usually a uniform color, animated texture, still image, or movie that fills the Canvas. While background images, movies, and textures tend to fill the Canvas and aren't extensively manipulated, they can be sized, positioned, and transformed in many ways. For more information, see Chapter 6, "Working With Objects, Textures, and Imported Elements," on page 79.

Setting a Background Color

The most basic kind of background is a background color, which covers the Canvas and cannot be manipulated except for its opacity level. It's best to think of the background color as a project property, not an element that can be moved or changed. The default background has an opacity of 0 percent, which means the Canvas displays a transparent background, represented by the white and gray checkerboard pattern.

To choose a background color:
1. Choose Edit > Project Properties.
2. Click the Color button in the Project Properties dialog.
3. In the Colors window, choose a color, then close the window.
Note: Make sure you always close this window after you have selected a color.
4. Drag the opacity slider or enter a value greater than 0 percent in the field, then click OK.
The background color appears in the Canvas.

Adding a Background Texture

LiveType textures make vibrant, animated backgrounds. They are also frequently used with the matte feature, which allows you to apply a textured fill to text or an object. See Chapter 5, "Working With Text," on page 57 for information about creating a matte.

To create a textured background:
1. Click the Textures tab in the Media Browser.
2. Browse the texture categories (using the Category pop-up menu) and select a texture.
3. Click the Apply To New Track button.
The texture fills the Canvas and appears as a background track in the Timeline.

Importing a Background Movie or Still Image

You can import images or movies from other sources and use them as backgrounds for your titling project. For any given project, you might choose to use the following:
- A single frame or movie clip as a temporary background, to accurately position and time the action of your titles
- A movie to embed as part of your titles
- A static image or graphic

About the Background Bar
The background color is different from background elements in your project. Background elements are represented in the Timeline and can be manipulated in various ways. A project can have many background elements, or none. The only definitive way to distinguish a background element, be it a LiveType object or texture, a movie, or an image, is that it falls below the background bar in the Timeline. You can drag the background bar up or down to any position between tracks in the Timeline. Any element below the background bar is subject to the Render Background checkbox in the Project Properties dialog.

LiveType can import background elements in a variety of formats:
- AVI
- BMP
- DV
- GIF
- JPEG
- MPEG-2 and MPEG-4
- Photoshop
- PICS
- PICT
- PLS
If you’re creating a standalone animation, web banner, or multimedia component, for example, you might want to include a full background as part of your movie output. If you’re creating a titling overlay, you won’t want to render the background in most cases. Consider these options:  To include a background image, movie, or animated texture in your final output, leave the background color opacity at 0 percent and select the Render Background checkbox.  If you want an opaque or semitransparent solid color background in your final output, choose a color and opacity level, and select the Render Background checkbox.  If you don’t want background elements in your final output, deselect the Render Background checkbox. This option allows you to do the following:  Import a background movie or image for placement and timing reference only, without incorporating it into your titling output.  Define a Canvas color other than the default checkerboard pattern, according to your working preferences. Marker imported as part of a Final Cut Pro movie4 47 4 Working With Tracks To create anything in LiveType, you need to be familiar with tracks. Every element of a titling composition is part of a track, and each track can contain one or more lines of text, an image, a movie, or an animated object or texture. Tracks are “containers” of content, represented by dark blue lines in the Canvas with corresponding bars in the Timeline. A track comprises all of the information about its content:  Position, shape, and baseline  Attributes such as color, shadow, font, and spacing  Effects and timing This chapter explains how tracks are moved and shaped in the Canvas, and how they can be manipulated in the Timeline. The next three chapters describe in detail how to apply text, objects, and effects to tracks to assemble your composition. Empty track48 Chapter 4 Working With Tracks Positioning Tracks in the Canvas When you first open LiveType, the default Canvas contains a single empty track with two endpoints. The shape of a track defines the default baseline on which its contents sit. Tracks can be manipulated at any time, whether or not they contain an element. If you’re creating a track along which to slide text, or if you want your text to conform to a specific shape, you might want to shape and position the track before you add text to it. To position a track: m Drag a track to move it anywhere in the Canvas, or partially off the Canvas. Tracks can extend beyond the boundaries of the Canvas, allowing elements to slide in and out of the viewable area. To create a sloping track or to resize it: m Drag one of the track’s endpoints. Note: Hold down the Shift key when you position an item in the Canvas to constrain its horizontal, diagonal, or vertical position. This applies to tracks, endpoints, objects, and characters. Active track, extending off the Canvas Endpoint Title safe boundary Action safe boundary White Canvas background Canvas Zoom pop-up menu set to 50 percent TracksChapter 4 Working With Tracks 49 Creating Angles and Curves Tracks can take any linear path. You can even link the endpoints of a track so that an element can flow around it in a continuous loop. To add an angle to a track, you must add a control point to it. A track can have any number of control points. To create an angle on a track: 1 Hold down the Control key and click the track anywhere between the endpoints, then choose Add Control Point from the shortcut menu. 2 Drag the new control point and the endpoints to create the angle you want. 
Control points are also necessary for creating curves. If you’re familiar with Bezier curves, this will be a snap. If not, it may take a bit of experimentation.50 Chapter 4 Working With Tracks To create a curved track: 1 Follow steps 1–2 above to create an angle on a track. 2 Hold down the Control key and click the control point, then choose Curve In from the shortcut menu. 3 Drag the Bezier handle to adjust the curve. 4 Hold down the Control key and click the same control point, then choose Curve Out from the shortcut menu. Another Bezier handle appears, and the curve is smooth at the control point. A Bezier handle appears as a small point on the track near the control point.Chapter 4 Working With Tracks 51 Note: You can apply a curve to an endpoint as well, but clicking an endpoint brings up only the Curve In or Curve Out option—not both—since the track extends in only one direction away from the endpoint. Linking Endpoints The Slide parameter, used in several preset effects, allows text to move along a track. If the endpoints are linked, the text can move around the track on a continuous path. See Chapter 7, “Working With Effects and Keyframe Animation,” on page 87 for more about effects and motion paths. To create a motion path that is a continuous loop, you need to link the endpoints of a track. The endpoints do not need to overlap. In fact, they can be positioned at opposite ends of the Canvas, and still be linked. Linking the endpoints allows text or objects to loop immediately from the end to the beginning of the track when an effect using the Slide parameter is applied to them. To link the endpoints of a track: m Control-click one of the endpoints of any track, then choose Link Endpoints from the shortcut menu. You can unlink endpoints using the same method.52 Chapter 4 Working With Tracks Adding, Copying, and Deleting Tracks There are numerous ways to add a track to the Canvas. To add a new, empty track, do one of the following: m Choose Track > New Text Track (or press Command-T). m Choose a font in the Media Browser, then select Apply To New Track. Sometimes it’s useful to create a duplicate track, with the identical position, shape, contents, timing, and effects as a track you’ve already built. To duplicate a track: 1 Select the track you want to duplicate. 2 Choose Track > Duplicate Track (or press Command-D). The duplicate overlays the original track precisely, so at first, you can only tell that a duplicate has been made by the addition of a new track in the Timeline. Overlaying tracks with identical elements but different effects and parameters is a great way to produce sophisticated title animations. Drag the duplicate off the original to see both tracks. To delete a track: 1 Select the track you want to delete. 2 Do one of the following:  Choose Track > Delete Track.  Press the Delete key. You can also copy a track from another project, such as a LiveType template or a project you’ve created previously, into your current project. To copy a track from one project to another: 1 Open both the source and destination projects. A tab for each of the projects appears in the Timeline. 2 In the source project, select the track you want to copy, then choose Edit > Copy. 3 Click the tab of the destination project in the Timeline, then choose Edit > Paste.Chapter 4 Working With Tracks 53 Working With Tracks in the Timeline As you add tracks to the Canvas, they appear as numbered bars in the Timeline. As you apply effects to each track, they appear as unnumbered bars below the track. 
Adjusting the Timing of a Track
When you add a track to the Canvas, by default it begins at the frame indicated by the playhead. The duration of a track varies, depending on its contents. A track containing text in a system font or a static image defaults to a duration of two seconds. The duration of a track containing LiveType media or any imported movie depends on the length of the movie. These basic timing parameters are easily changed in the Timeline window by stretching and moving the track bars. Delay and duration can also be defined in the Timing tab of the Inspector, as can many other timing parameters.

To adjust the duration of a track, do one of the following:
- Drag either edge of the track bar to the right or left.
Note: Changing the duration of tracks that contain movies or LiveType media changes the speed at which the movie plays. If you shorten the duration of a LiveFont track, for instance, it plays faster.
- Select the track; then, for static content, adjust the Duration parameter in the Timing tab of the Inspector, or, for movies and animated content, adjust the Speed parameter in the Timing tab.

If you like, you can make the track contents appear later than the first frame.

To delay the appearance of a track, do one of the following:
- Click inside the track bar and drag it to the right.
- Select the track and adjust the Delay slider in the Timing tab of the Inspector.

You can also reposition more than one track at the same time, which is a useful way to maintain the relative position of tracks as you change their delay times. This is known as a ripple drag.

To move two or more tracks in the Timeline at once, press the Option key and drag the left-most track (the track with the earliest starting time) of the group you want to move. All tracks to the right of the selected track (tracks with later starting times), including their associated effects, move as a block with the selected track.

Layers and Track Order
Elements in the Canvas invariably overlap, which is why it's important to manage track layers. When you create a new track, it is always the top layer. Any content you add to that track is in front of all other elements in the Canvas.
Note: In the Timeline, tracks are displayed in front-to-back order, with Track 1 in front.

To change a track's front-to-back position, do one of the following:
- Click inside the track bar in the Timeline and drag it up or down, to a new position.
- Select the track you want to move, either in the Canvas or in the Timeline, then choose one of the options from the Layout menu: Bring to Front, Send to Back, Bring Forward (one layer), or Send Backward.
The tracks renumber to accommodate the new order.

Disabling Tracks
You can disable tracks, as well as effects applied to tracks, in the Timeline window. This can be useful for reducing clutter in the Canvas, and it saves preview-rendering time when you only need to preview one or a few elements. Deactivating elements is also useful for comparing different design choices.

To disable a track or effect, click the Enable/Disable button immediately to the left of the track or effect in the Timeline.
If you like, you can make the track contents appear later than the first frame.

To delay the appearance of a track, do one of the following:
• Click inside the track bar and drag it to the right.
• Select the track and adjust the Delay slider in the Timing tab of the Inspector.

You can also reposition more than one track at the same time, which is a useful way to maintain the relative position of tracks as you change their delay times. This is known as a ripple drag.

To move two or more tracks in the Timeline at once:
• Press the Option key, and drag the left-most track (the track with the earliest starting time) of the group you want to move.
All tracks to the right of the selected track (tracks with later starting times), including their associated effects, move as a block with the selected track.

Layers and Track Order

Elements in the Canvas invariably overlap, which is why it's important to manage track layers. When you create a new track, it is always the top layer, so any content you add to that track is in front of all other elements in the Canvas.
Note: In the Timeline, tracks are displayed in front-to-back order, with Track 1 in front.

To change a track's front-to-back position, do one of the following:
• Click inside the track bar in the Timeline and drag it up or down to a new position.
• Select the track you want to move, either in the Canvas or in the Timeline, then choose one of the options from the Layout menu: Bring to Front, Send to Back, Bring Forward [one layer], or Send Backward.
The tracks renumber to accommodate the new order.

Disabling Tracks

You can disable tracks, as well as effects applied to tracks, in the Timeline window. This can be useful for reducing clutter in the Canvas, and it saves preview-rendering time when you need to preview only one or a few elements. Deactivating elements is also useful for comparing different design choices.

To disable a track or effect:
• Click the Enable/Disable button immediately to the left of the track or effect in the Timeline.
While the blue baseline of a disabled track remains in the Canvas, its contents no longer appear in the Canvas, are not represented in the Inspector's Live Wireframe Preview, and do not render when you generate a preview or final movie.

Grouping Tracks

It is often useful to group two or more tracks together to maintain their relative position in the Canvas. Grouped tracks can be moved in the Canvas, but they stay together as a group. When tracks are stacked on top of each other, grouping is the only way to move the stack as a unit.

For example, you might want to create a two-layer effect where a word fades out to nothing, revealing the same word underneath with an animated texture applied to it. To do this, you have to create a track that precisely overlays the original, using the Duplicate Track command in the Track menu. Then, if you want to reposition the tracks in the Canvas, you need to group them together.

To group two or more tracks:
1 Make sure you have more than one track in the Canvas.
2 Select a track in the Canvas or Timeline.
This is now the active track, and the grouping button to the far left of its track bar is dimmed.
3 Click the grouping button of a different track.
The link icon appears, indicating that the track is grouped with the active track (the track you selected in step 2).
4 In the Canvas, move either of the grouped tracks, and notice that they move together.
5 In the Timeline, click the grouping button of a third track.
Now three tracks are grouped together.

To ungroup tracks:
• Select one of the grouped tracks, then click the grouping button of the track you want to ungroup.
The link icon disappears, and the tracks can now be moved independently.
Note: Grouped tracks maintain their relative position, but their contents can still be altered and moved. If you drag a grouped track, the other tracks belonging to the group move too. However, if you drag a glyph that resides on one of the grouped tracks, the glyph moves independently; its Offset parameter changes while the track itself stays put.

Working With Text

Titles can incorporate all kinds of visual elements, but their traditional function is to display text. This chapter describes how to insert and format text, including manipulating individual characters on the same track. Adding movement to text—that is, beyond the inherent animation of LiveFonts—is covered in Chapter 7, "Working With Effects and Keyframe Animation," on page 87.

Inserting Text

Like any Canvas element, text must reside on a track. There are three approaches to adding text, in a particular font, to the Canvas:
• Create a track, select a font, and then add text to the track.
• Create a track, add text to it, and then apply a font.
• Choose a font first, click the Apply To New Track button in the Media Browser, and type in the text.

The steps below describe the first approach.

To add text to the Canvas:
1 Create a new track by choosing Track > New Text Track (or press Command-T).
A corresponding track appears in the Timeline.
2 Choose a font:
a Click either the LiveFonts or Fonts tab in the Media Browser.
LiveType comes with a variety of LiveFonts. Click the Category pop-up menu to access different sets of LiveFonts, including third-party and custom LiveFonts that you can create.
b Select a system font or LiveFont.
c Click the Apply button.
3 Enter text onto the active track by doing one of the following:
• Type into one of the text-entry boxes in the Inspector.
• Cut and paste text from another application into a text-entry box. (Formatting from other applications does not carry over into LiveType.)
Note: If you add text to a track before selecting a font, the new text appears in the Canvas in the default font, size, color, and spacing.

To change the font of an existing text track:
1 Select the text track.
2 Choose a font from the LiveFonts or Fonts tab of the Media Browser.
3 Do one of the following:
• Click the Apply button.
• Double-click the font name.
Note: The Apply option does not cross genres of track content. That is, you cannot apply a texture or object to a track that already has text on it. Likewise, you cannot apply a font to a track that contains a texture, object, image, or movie.

Multiple lines of text can exist on a single track. This enables you to create a long text element governed by one set of parameters. If you're designing credits, for example, you can generate the copy in another program, cut and paste it into the text-entry box, and apply the font and attributes along with a scrolling effect.

LiveFonts vs. System Fonts

The two kinds of fonts available in LiveType are very different. LiveFonts have more "life" to them, because they are fully designed animations. System fonts, on the other hand, are more like blank slates you can modify to achieve a wide range of appearances. Both kinds of fonts can be transformed using all the parameters described in this chapter, but keep in mind that some parameters will not make much visual sense when applied to LiveFonts.
Note: The Use LiveFont Defaults button in the Text tab of the Inspector restores the original attributes of LiveFonts, objects, and textures, including timing, color, and other characteristics. This can be a valuable way to revert to the original design of these LiveType elements when you're experimenting with different formatting combinations.

LiveFonts and system fonts also have several practical differences in LiveType:
• You can apply two or more system fonts to the same track, while only one LiveFont can be applied to a track.
• LiveFonts are digital movies, and therefore have timing options you can control through the Timing tab of the Inspector. See "Adjusting the Timing of LiveFonts" on page 60.
• System fonts are always vector-based, while LiveFonts can be either raster-based or vector-based. So it is possible to use LiveFonts at such a large size (in excess of 500 points) that the edges begin to degrade.
• LiveFonts have a much greater impact on previewing and rendering time.

To apply a second system font to text on a track:
1 Create a text track with one or two words on it, in a system font.
2 Select one or more characters on the track by highlighting them in the text-entry box or selecting them in the Canvas.
3 In the Fonts tab of the Media Browser, choose a system font different from the one you've already used.
4 Click the Apply button at the bottom of the Fonts tab.
LiveFont Character Set

The LiveFonts included in LiveType consist of 127 characters, which include all standard English, French, German, and Spanish characters:

Aa Bb Cc Dd Ee Ff Gg Hh Ii Jj Kk Ll Mm Nn Oo Pp Qq Rr Ss Tt Uu Vv Ww Xx Yy Zz
0 1 2 3 4 5 6 7 8 9
! # $ % & ( ) , . < > @ + = : ; _ - ? “ ‘ / *
Áá Àà Ââ Ää Çç Èè Éé Êê Ëë Îî Íí Ïï Ññ Ôô Öö Óó ß Üü Úú Ùù Ûû €

To access characters that aren't represented on your keyboard, use the Keyboard Viewer, which you can enable in the Input Menu pane of the International pane of System Preferences.

Adjusting the Timing of LiveFonts

When you create a system font track, its default duration is always two seconds. LiveFonts, on the other hand, have various durations, as shown in the middle column of the LiveFonts tab of the Media Browser. Because they are movies, LiveFonts are subject to several timing parameters, available in the Timing tab of the Inspector.

Settings for LiveFonts in the Timing tab:
• Random and Sequence: Let you apply the LiveFont movie to each character on the track in a different order, with a variable delay between each letter.
• Speed: Allows you to play the LiveFont movie more quickly or slowly. Notice that as you change the speed, the duration of the track in the Timeline increases or decreases. Likewise, if you change the duration of the track in the Timeline, the Speed parameter changes in the Timing tab.
• Delay: Allows you to set the starting time of the track.
• Loop: Determines how many times the LiveFont movie plays through. The default setting is 1, meaning that the LiveFont plays one time. A value of 2 means that it plays through twice, and in most cases the duration of the track doubles.
Note: Several LiveFonts, including Burn Barrel, Cool, and Gutter, take advantage of "segmented animation," which defines beginning, middle, and ending segments of the movie. When you adjust the Loop parameter for these fonts, only the middle segment of the animation is looped.
• Duration: Does not apply to LiveFont tracks.
• Hold First and Hold Last: Hold First lets the first frame of the LiveFont appear for a designated amount of time before the movie begins to play; likewise, Hold Last perpetuates the last frame. (See the sketch after this list for how these settings combine.)
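Taken together, these settings determine the overall duration of a LiveFont track. The sketch below is a rough model of that arithmetic, assuming Speed acts as a simple percentage and each loop replays the whole animation (segmented LiveFonts loop only the middle segment); the function is illustrative, not part of LiveType.

    # Hypothetical model of a LiveFont track's total duration, in seconds.
    # Assumes Speed is a percentage of normal playback rate and each loop
    # replays the full animation (segmented LiveFonts loop only the middle).
    def livefont_track_duration(native_s, speed_pct=100.0, loops=1,
                                hold_first_s=0.0, hold_last_s=0.0):
        playback_s = native_s / (speed_pct / 100.0) * loops
        return hold_first_s + playback_s + hold_last_s

    # A 1.5-second LiveFont at 50 percent speed, looped twice,
    # held one second on its first frame:
    print(livefont_track_duration(1.5, speed_pct=50, loops=2, hold_first_s=1.0))
    # 7.0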
Formatting Text

After you've selected the font, you have countless formatting options, available through the Inspector, to change the appearance of the text. As you adjust formatting parameters, the contents of the active track change dynamically in the Canvas, making it easy to see what you're doing. These options can apply to the entire track or to one or more individual characters on a track.

To format any element in the Canvas, you must first select its track.

To select the entire track, do one of the following:
• Click the blue track line in the Canvas.
• Click the corresponding track in the Timeline.
Note: If you click the text itself, a bounding box appears around the character you clicked, and your modifications affect only that character. See "Modifying Individual Characters" on page 77.

After you have selected the track you want to format, use the Text, Style, and Attributes tabs of the Inspector to specify options such as alignment, size, tracking, leading, and color.

Alignment, Size, Tracking, and Leading

In the Text tab of the Inspector, you can adjust the size, tracking, and leading of a text track, as well as its horizontal and vertical alignment. Size values are in points; tracking and leading values are percentages of the font's default spacing.

• Alignment: With the alignment options, text can be set to run horizontally as well as vertically on a track. The Left, Center, and Right Alignment buttons apply to both text orientations. The position of the track itself is not affected by alignment settings.
The alignment options are also important for positioning text appropriately when the track is used with an effect that uses the Slide parameter. For example, if you want to slide text onto the screen from left to right, create a track that begins to the left of the Canvas. The text should be left-aligned, so that it starts from the left end of the track, off the Canvas. Then apply the Slide Right effect from the Motion Path category.
• Size: Text size is adjusted by dragging the slider, clicking within the slider track, or entering a value in the box to the right of the slider.
Note: Because LiveFonts are raster images made up of pixels, their edges start to degrade at very large sizes, usually 500 points and larger. System fonts are vector-based, and therefore retain their integrity at any size.
• Tracking: Character spacing is adjusted with the tracking setting. The value for normal character spacing is 100 percent. A setting of 110 percent adds a modest amount of space between letters. When tracking is set to 0 percent, all characters overlay each other.
• Leading: Leading sets the amount of space between the baseline of one line of text and the next. This setting applies only to tracks with more than one line of text, not to the spacing between different tracks. The default leading value is 100 percent. At 0 percent, all lines of text on a track overlay each other.

Color

Color options are in the Glyph pane of the Inspector's Attributes tab. The lower portion of the tab contains the color controls.

• Color: The Color parameter replaces existing pixels of color in the selected element with the color indicated in the Color box, while keeping the luminosity values intact. A setting of 100 percent completely replaces the existing colors, whereas a setting of 20 percent combines some of the new color with the original. Click the Color box to choose a different color.
Note: Once you have selected a color, close the Colors window. You need to reopen the Colors window to make any subsequent color choices.
• Hue, Saturation, and Lightness (HSL): These three sliders work together to establish the color of the selected element. Hue defines the shift in color value, in degrees, on a 360-degree spectrum. Saturation defines the intensity or vividness of the color, in percentage points. Lightness defines the intensity along the black-and-white axis, in percentage points. The default color of a system font is black, which renders the Hue, Saturation, and Lightness sliders ineffective. The HSL sliders are useful for adjusting raster-based elements such as LiveFonts; as a rule of thumb, system fonts and other vector-based elements should be colored using the Color parameter.
• Alpha pop-up menu: When you have a clip or imported graphic in a LiveType composition, you can choose the type of alpha channel it uses from this menu: Premultiply White, Premultiply Black, or Straight. (A sketch of what these choices mean follows this list.)
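These three choices describe how a source's color values relate to its alpha channel. The sketch below shows the standard math for recovering straight (unmatted) color from premultiplied pixels; it is a general illustration of the concept, not LiveType's internal code, and values are assumed normalized to 0–1.

    # Standard alpha-channel math (illustrative, normalized 0-1 values).
    # Straight: color channels are stored unmatted; alpha is applied at composite time.
    # Premultiplied: color channels were already multiplied by alpha, over black or white.
    def unpremultiply(color: float, alpha: float, matte: float) -> float:
        """Recover a straight color value from one premultiplied against a matte
        color (0.0 for Premultiply Black, 1.0 for Premultiply White)."""
        if alpha == 0.0:
            return 0.0  # fully transparent; the color is undefined
        return (color - (1.0 - alpha) * matte) / alpha

    # A 50%-opaque mid-gray premultiplied over white stores 0.75 in each channel:
    print(unpremultiply(0.75, 0.5, matte=1.0))  # 0.5
    # The same pixel premultiplied over black stores 0.25:
    print(unpremultiply(0.25, 0.5, matte=0.0))  # 0.5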
To change the color of a system font:
1 Select a track that contains a black system font.
2 In the Glyph pane of the Attributes tab, click inside the Color box and choose a color from the Colors window (preferably a bright, primary color).
3 Set Color to 100 percent.
The contents of the track change to this color.

LiveFonts are typically built using primary colors, which means that the Hue, Saturation, and Lightness sliders can be used effectively, in addition to the Color parameter.

Transforming Text

Beyond basic text formatting, LiveType gives you many additional treatments with which to stretch, blur, fade, reposition, and rotate your text. These features are all in the Glyph pane of the Attributes tab.

• Opacity: Opacity defines how much of the underlying content shows through. An opacity setting of 0 makes text completely transparent and, in most instances, a setting of 100 makes it opaque. When the blur attribute is off (set to 0), a 50 percent opaque character has sharp edges and is somewhat transparent.
Note: LiveType allows opacity values higher than 100 percent. This can be desirable when you're working with glow parameters (Style tab), or with LiveFonts and elements that are blurred or partially transparent. For example, the Charge LiveFont still reveals some of the background at 100 percent opacity. At 150 percent (which you must enter in the opacity field), the font reveals very little background.
• Blur: The blur attribute is similar to opacity, but it fades and expands the outer edges as if the text is out of focus. Blur can be applied equally to the X and Y axes, or unequally for different outcomes. A blur setting of 0 is off, with no blurring effect. The maximum blur setting is 25.
• Scale: Scale stretches or squeezes text on the X and Y axes, with 100 being the same size as the original text. Note that scale parameters are applied independently to each character around its pivot point, not to the entire track as a unit, so a scaled glyph expands around its pivot point without affecting the position of other glyphs on the track.
Note: Unlike the Scale parameter, the Size parameter in the Text tab scales text from the baseline and also takes text tracking into consideration; a glyph sized with Size keeps its original baseline and tracks with adjacent characters.
• Offset: Offset repositions the text relative to its original position on the track. An offset of 0 indicates no position shift on that axis.
• Rotate: With the rotation dial, you can not only position an element within the 360-degree range but also configure any number of revolutions in the context of an effect. For example, you can set an early keyframe at 45 degrees and a later keyframe at four revolutions plus 180 degrees. When you play the movie, the element spins clockwise four full turns plus an additional 135 degrees between those two keyframes. Positive values reflect clockwise motion, and negative values reflect counterclockwise motion. (The arithmetic is worked through in the sketch below.)
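To see why the example above comes out to four turns plus 135 degrees, express each keyframe as a total angle and take the difference. The snippet below is just that arithmetic written out; the variable names are illustrative.

    # Worked example: rotation between two keyframes, in degrees.
    # One "revolution" on the rotation dial is 360 degrees.
    start_angle = 45.0                # early keyframe: 45 degrees
    end_angle = 4 * 360.0 + 180.0     # later keyframe: 4 revolutions + 180 = 1620

    delta = end_angle - start_angle   # 1575 degrees of clockwise motion
    full_turns, remainder = divmod(delta, 360.0)
    print(full_turns, remainder)      # 4.0 135.0 -> four spins plus 135 degrees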
Working with keyframes is covered fully in Chapter 7, "Working With Effects and Keyframe Animation."

Enhancing Text With Styles

The Style tab in the Inspector offers four options for enhancing your text. Styles allow you to add depth and emphasis to text—as well as to objects—mostly by altering the space around each character. The Shadow, Glow, Outline, and Extrude buttons in the Style tab each reveal the settings applicable to that treatment.

Shadow and Glow

The Shadow and Glow styles are essentially two "flavors" of the same style, using identical parameters to create quite different looks.

• Character: The Character setting allows you to make the original element invisible, isolating the style treatment in the Canvas. This can be a helpful way to eliminate clutter as you compose your treatment, or you may choose to leave the original element invisible in the finished product.
• Enable: The Enable checkbox allows you to turn a style on or off without affecting the settings you've established. Again, this is a helpful tool for eliminating clutter as you design, or for comparing different styles and combinations of styles.
• Layer: The Layer pop-up menu allows you to place the shadow or glow treatment in front of or behind the original element. The In Front Only setting restricts the shadow or glow effect to the boundaries of the original element, without extending beyond the edges of the letters or object.
• Opacity: Opacity sets the intensity of the shadow or glow. An opacity setting of 0 makes the shadow or glow completely transparent, that is, invisible; at 100 it is completely opaque, with no background showing through. If blur is turned off (set to 0), a 50 percent opaque shadow has sharp edges that match the original element, but whatever lies behind the shadow shows through it.
• Blur: The blur parameter is similar to the opacity setting, but it fades and expands the outer edges as if the glow or shadow is out of focus. A blur setting of 0 is off, with no blurring effect. The maximum blur setting is 25. Blur can be applied independently to the X and Y axes; a Y-direction blur creates the look of up-and-down motion, even in a static image.
• Scale: Scale stretches or squeezes the glow or shadow on the X and Y axes, with 100 being the same size as the original element. Note that scale parameters are applied independently to each character of text on a track, not to the entire shadow or glow as a unit.
• Offset: Offset repositions the shadow or glow relative to the original element. An offset of 0 indicates no position shift on that axis.
• Color: The Color box lets you select the color of the shadow or glow.
• Warp: The warp area allows you to stretch and reshape the shadow or glow by dragging the four corner points, or by entering X and Y coordinates for each corner. A simple application of the warp feature is to stretch shadows to represent different lighting situations.

Outline

This style adds an outline to the contents of any track.
See the preceding section for a definition of the opacity, blur, color, and warp parameters. Click the Outline button at the top of the tab to adjust the outline settings.

• Weight: The weight value, which defines the thickness of the outline, is set in pixels.
• Show Outline Only: Show Outline Only eliminates the original element, creating an outline effect that allows the background to show through.
Note: The Show Outline Only setting is used with the Character parameter set to Visible. Otherwise, the outline is rendered invisible along with the character itself.
• Outline Extrusion: When the text has been extruded (see below), selecting this checkbox extends the outline around the extrusion.

Extrude

Extrude settings consist of direction, length, and color. The direction setting determines which way to "pull" the extrusion, and the length determines how far to pull it.
• Length: The length value is set in pixels.
• Direction: The direction value is set in degrees, from 0 to 360.
• Color: The Color box lets you select the color of the extrusion.

Creating a Matte

The matte feature in LiveType allows you to reveal a background element in the area defined by a foreground element, seemingly cutting a hole through any layers in between. When you create a matte, every pixel of the foreground element is replaced by a corresponding pixel in the background element. In other words, a matte acts as a window into another layer; the sketch below illustrates the per-pixel idea.

In LiveType, you have three options for creating mattes, available in the Matte pane of the Attributes tab in the Inspector. The first option—Matte to Background—allows you either to fill the foreground element with a background element, or to create an empty window, which can remain transparent when you render your titling movie.
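Conceptually, the foreground element contributes only its coverage (alpha), and the background supplies the color wherever that coverage is nonzero. The sketch below expresses that idea in a few lines; it is a simplified illustration of the concept, not LiveType's renderer.

    # Simplified per-pixel model of a matte: the foreground's shape (alpha)
    # acts as a window onto the background layer, hiding layers in between.
    def matte_pixel(fg_alpha, background_rgb, middle_rgb):
        """Where the foreground covers the frame, show the background;
        elsewhere, show whatever layers lie in between."""
        return tuple(fg_alpha * bg + (1.0 - fg_alpha) * mid
                     for bg, mid in zip(background_rgb, middle_rgb))

    # Inside a fully opaque glyph, the background texture shows through:
    print(matte_pixel(1.0, background_rgb=(0.9, 0.4, 0.1), middle_rgb=(0.0, 0.0, 0.3)))
    # Outside the glyph, the intervening layer remains visible:
    print(matte_pixel(0.0, background_rgb=(0.9, 0.4, 0.1), middle_rgb=(0.0, 0.0, 0.3)))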
To create a matte with two project elements:
1 Create the background element by placing a texture, image, movie, or object as the bottom layer reflected in the Timeline.
See Chapter 3, "Adding a Background," on page 41 and Chapter 6, "Working With Objects, Textures, and Imported Elements," on page 79 for information about placing these kinds of elements in the Canvas.
2 Make sure the background is beneath the background bar in the Timeline, and that no other elements are below the background bar.
3 Create a texture or any other element that obscures the background.
4 Create a foreground element, that is, the text or object you want the background to "fill."
5 With the foreground track selected, click the Matte button in the Attributes tab of the Inspector.
6 Choose Background from the "Matte to" pop-up menu.
The background image appears to fill the foreground element.

To create a window into a transparent background:
1 Create a texture or any combination of elements that covers the Canvas.
For information about placing textures and other elements in the Canvas, see Chapter 3, "Adding a Background," on page 41 and Chapter 6, "Working With Objects, Textures, and Imported Elements," on page 79.
2 Create a foreground element, that is, the text or object that defines the shape of your window.
3 If any elements are below the background bar in the Timeline, drag the bar below all elements.
4 With the foreground track selected, click the Attributes tab of the Inspector, then click the Matte button.
5 Choose Background from the "Matte to" pop-up menu.
The transparent Canvas (or the background color, if one is defined in the Project Properties dialog) appears to fill the foreground element.

The other two matte options, Matte to Movie or Image and Matte to Texture, differ because the background doesn't appear as a discrete project element that's reflected in the Timeline, and there's no need for a layer that the matte has to "punch through." Instead, the track contents are simply filled with the designated image. These two matte alternatives have scale, speed, and sequencing options in the Matte pane: Scale adjusts the size of the background image, Speed adjusts the speed of the background movie or texture, and Sequencing allows you to offset the timing of the background for each letter residing on the foreground track.
Note: When you matte a word to a movie or image, LiveType calculates which portion of the image "underlies" each letter, imitating a true window into a lower layer. When you reposition the letters on the track, they retain the same image content. This feature can create an interesting look as you apply movement to the text, particularly when the matte is to a movie.

To fill the track contents with an image or movie:
1 Select a track that contains the text or object you want to fill with an image or movie.
2 Click the Attributes tab of the Inspector.
3 In the Matte pane, choose Movie or Image from the "Matte to" pop-up menu.
4 Locate the file in the Choose Movie or Image dialog, then click Open.
The track contents fill with the background movie or image.
Note: If you want to reveal a specific portion of the image or movie within your foreground element, you may find that this matte option is not appropriate, since you cannot adjust the relative position of the image and your foreground element. If this is the case, use the Matte to Background option described above, which allows you to position the two components independently.

To fill the track contents with a texture:
1 Select a track that contains the text or object you want to fill with a texture.
2 Choose a texture from the Media Browser, then click the Apply to Matte button.
The default texture fills the contents of the active track.
Note: A variety of mattes is available in the Objects tab of the Media Browser, from the Category pop-up menu. The blue areas of a LiveType matte define the area where the texture will play back. Text or glyphs from the character palette may also be used as mattes.
Note: Individual characters on the same track can be matted to different textures, movies, or images. See "Modifying Individual Characters," next.

You can get a nice effect by combining the matte function with the outline style.

To fill an invisible element's outline:
1 Create or select a text track.
2 In the Style tab of the Inspector, click the Visible button.
3 Make sure the Enable checkbox is deselected for the Shadow, Glow, and Extrude styles.
4 Click the Outline button in the Style tab, and select the Enable checkbox.
5 Select the Show Outline Only checkbox, and increase the weight of the outline so it's fairly thick.
6 Choose a texture to fill the outline, then click the Apply to Matte button.
The outline is now filled with the texture.

Modifying Individual Characters

You can also assign attributes to individual characters on a track. All of the attributes discussed in this chapter can apply to one, or more than one, character on the same track. This is a powerful option in LiveType, particularly because it allows you to reposition individual characters, or glyphs, without breaking their relationship to the track.

For example, with individual character adjustments, you could make one word in a phrase float above the track, expand and glow, and then return to the track. This would take only a few moments to animate. Or you could make a series of characters do similar transformations, one after the other.

To modify one or more characters on a track, try the following steps:
1 Select or create a track that contains some text.
2 Select one or more letters by doing one of the following:
• Select one of the letters in the Canvas and, to modify more than one character at a time, hold down the Shift key while selecting additional, contiguous letters. A bounding box appears around the selected letter(s), with a handle in each of the upper corners.
• Highlight one or more letters in one of the text-entry areas of the Inspector. Bounding boxes appear around the selected letters in the Canvas.
• Marquee-select one or more letters in the Canvas by dragging a box with the cursor. Any letter that touches the marquee area is selected and reveals its bounding box.
3 Click inside the bounding box and drag the letter anywhere in the Canvas.
4 Drag the upper-left handle to rotate the letter.
5 Drag the upper-right handle to change the letter's size.
Note: You can restore a letter to its original size and placement by choosing Layout > Reset Position.
6 Change the letter's attributes in the Attributes tab or the Style tab of the Inspector.
7 Click in the Canvas away from the track.
The bounding box around the character disappears, but the track is still selected.
8 Reposition the track in the Canvas and modify its attributes.
Note: These adjustments affect the entire track, including the letter you've just modified.

Disabling Fonts in Mac OS X

In Mac OS X v10.3, you can use the Font Book application to disable fonts. However, LiveType requires that certain fonts—Geneva and Helvetica—are always available, so these two fonts should not be disabled. If you disable them, you may experience unpredictable behavior in LiveType.

Working With Objects, Textures, and Imported Elements

Titling compositions often center around words, but all kinds of additional elements are used to frame, enhance, and accompany them. For the purpose of this manual, these elements fall into three categories:
• Objects included with LiveType
• Textures included with LiveType
• Static images and movies originating from other sources

All of these elements are modified and moved around in the Canvas in the same way. They do not rest on a linear track, as text does, unless you add multiple, identical objects to a track (see "Creating Strings or Stacks of Elements" on page 84). Instead, when selected, they display a bounding box, like a single character selected on a text track.
At the upper-right and upper-left corners of the bounding box are scale and rotation handles.

Working With LiveType Objects

Objects in LiveType are graphical elements with an alpha channel, designed to frame or emphasize text. Most of them are animated and, much like LiveFonts, can be sized, rotated, colored, and stretched. You can add a shadow, glow, or extrusion, and you can apply effects to them. Objects placed in the Canvas are represented as tracks in the Timeline, like any other titling element.

To add a LiveType object to the Canvas:
1 Click the Objects tab in the Media Browser.
2 Browse the categories of objects displayed in the Category pop-up menu, and select an object in the Name column of the Objects tab.
3 Click the Apply To New Track button.
The object appears in the Canvas, and a corresponding track appears in the Timeline.

Working With LiveType Textures

Textures in LiveType are colorful animated patterns that can be used as full-screen or partial backgrounds, or as animated fills when used with the matte function, described in Chapter 5, "Working With Text," on page 57. Textures are versatile and can be transformed in the same ways an object is transformed, particularly if the texture is reduced in size to take up only a portion of the Canvas.

To add a texture to the Canvas:
1 Click the Textures tab in the Media Browser.
2 Browse the categories of textures displayed in the Category pop-up menu, and select a texture in the Name column of the Textures tab.
3 Click the Apply To New Track button at the bottom of the Textures tab.
The texture fills the Canvas, and a track appears in the Timeline, just above the background bar.

Importing Graphics, Images, and Movies

Graphical elements in a wide range of formats can be incorporated into a LiveType project. Scanned images, photos, and illustrations, as well as movies and animations, can be used as part of your titling composition. And, like objects and textures, they can be modified and placed in numerous ways.

LiveType can import elements in the following formats: AVI, BMP, DV, GIF, JPEG, MPEG-2 and MPEG-4, Photoshop, PICS, PICT, PLS, PNG, QuickTime image file, QuickTime movie, SGI, Targa, and TIFF.

To import a graphic, image, or movie:
1 Choose File > Place.
2 Locate the file, then click Open.
The element appears in the Canvas, and a corresponding track appears in the Timeline.

Transforming Objects, Textures, and Imported Elements

Imported elements can be positioned, changed, and animated as easily as text. A photo can be made to bounce around the Canvas, fade in and out, grow and shrink, or take on a purple hue, for example.

Sizing and Positioning Objects, Textures, and Imported Elements

When you first place a movie or texture in the Canvas, its position is locked by default. These types of elements are frequently used as full-screen background elements that don't need to be sized or moved. However, you can unlock them easily.

To unlock the position of a texture or imported movie:
1 Select the track you want to unlock. (Sometimes this is easiest to do in the Timeline.)
2 Choose Layout > Lock Position.
The checkmark next to Lock Position disappears, and the bounding box handles on the element are now active.

When you select a non-text element in the Canvas, a bounding box appears around it, the same as an individual character on a text track. If you select a full-screen element, it's easier to see the bounding box if you zoom out in the Canvas.

To resize, rotate, and reposition a non-text element in the Canvas:
• Drag the bounding box and its upper-left and upper-right handles.
Non-text elements can also be transformed with any of the attributes available to text characters: shadow, color, blur, and so on.

Creating Strings or Stacks of Elements

In a way, LiveType looks at textures, objects, and imported elements as special kinds of glyphs, or text characters. More to the point, individual elements are treated like fonts whose character set consists of only one glyph. This allows you to do an unusual thing in LiveType: you can create strings, or multiple copies, of these elements on what, for all intents and purposes, amounts to a text track. Anything you can do with one letter of a text font, you can do with objects, textures, and imported elements.
Note: Objects cannot, however, be formatted as multiple lines on one track.

To create a string of elements on one track:
1 Add an object, texture, or imported element to the Canvas.
2 Make the object a reasonably small size to duplicate in the Canvas:
a Click the Attributes tab of the Inspector, then click the Glyph button.
b Make sure the lock icon next to the Scale sliders is closed, or locked, for proportional scaling. If it appears to be unlocked, click the icon to lock it.
c Adjust the Scale sliders, or enter a value in one of the Scale fields.
3 Click inside the text-entry box in the upper-left corner of the Inspector.
Note that a single bullet is in the box, representing the object as a single glyph.
4 With the blinking cursor in the text-entry box, press the Space bar or type any key.
A second bullet appears in the text-entry box, and now two identical objects are on a linear track in the Canvas. Add as many objects as you like.
5 Adjust the tracking and alignment in the Text tab of the Inspector, and any other attributes that you might apply to a string of letters, including formatting individual elements on the track separately.

Changing Attributes and Styles

Just as non-text elements can be treated as glyphs, they can take on all of the same styles and attributes available to text. Chapter 5, "Working With Text," on page 57 describes all the transformations available in the Inspector. You might want to try the following for a digital image or movie:
• Reduce the size of the image, position it in the Canvas, and rotate it 20 degrees, using the Scale, Offset, and Rotate controls in the Glyph pane of the Attributes tab in the Inspector.
• Add a shadow, outline, or extrusion to the image, using Style functions.
• Shift the color of the image using the Color controls in the Glyph pane of the Attributes tab in the Inspector.
• Apply a preset effect to the image, or animate it yourself by building your own effect. Chapter 7, "Working With Effects and Keyframe Animation," on page 87 explains how to do this.

Other imported elements—logos, line art, or simple graphical elements—are even more versatile.
They might lend themselves to a matte treatment, a glow or blur, or an outline with the original element rendered invisible. There's no end to the possibilities.

Replacing Media in a Track

You can easily replace any movie or image in a track on the Timeline at any time.

To replace any movie or image with new content:
1 Control-click the chosen track, then choose Reconnect Media from the shortcut menu.
2 Navigate to the new file in the Open dialog, then click Open.
The existing media is replaced with the new movie or image.

Working With Effects and Keyframe Animation

Effects are what make your Canvas elements move and transform. They are "packages" of animation, encapsulating the parameters that govern motion and timing, as well as an element's attributes in any given frame. The key ideas about effects are as follows:
• All motion and transformations built into your titling movie are controlled by effects, whether you create your own or take advantage of the preconfigured effects in LiveType.
• Effects are applied to tracks. They appear in the Timeline as bars underneath the track they're applied to.
• More than one effect can be applied to the same track, even at the same time.
• You can change an effect once it is applied to a track, and you can save the modified effect so it's available to use on other tracks and in other projects.
• When a track and an effect have conflicting parameters, the effect parameter overrides the track parameter. For example, if an effect specifically turns the Glow style off, the track's glow settings are irrelevant.
• When a track and an effect have complementary parameters, the two values are combined. For example, if a track has an opacity of 50 percent, applying an effect with 50 percent opacity results in an opacity of 25 percent in the Canvas, or half of the track's 50 percent opacity. (The sketch after this list works through the multiplication.)
• You can edit or build an effect outside of the LiveType interface. Instructions for writing EffectScript code are in Appendix B, "Creating and Editing EffectScripts," on page 135.
• Effects can be applied to individual characters on a track and managed from the Effects tab in the Inspector.
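For complementary parameters like opacity, "combined" means the two percentages are multiplied, as the example above describes. A minimal sketch of that rule (the function name and multiplicative model are illustrative, but the 50-on-50 result matches the manual's example):

    # Complementary track and effect values combine multiplicatively.
    def combined_opacity(track_pct: float, effect_pct: float) -> float:
        return track_pct / 100.0 * effect_pct / 100.0 * 100.0

    print(combined_opacity(50, 50))   # 25.0 -- half of the track's 50 percent
    print(combined_opacity(100, 50))  # 50.0 -- a fully opaque track takes the effect's value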
Preset Effects

LiveType includes 41 preset effects, located in the Effects tab of the Media Browser and grouped into the following categories:
• Caricature: Baffle, Bounce Track, Bounce, Bouncy, Elastica, Quick Twist
• Fades: Cluster, Foreground, Pile Up
• Fantasy: Chatter Out, Dispersion, Expand, Ideas, Invent, Send Off, Z Space
• Glows: Blue Light, Combustible, Exhale, Follower, Impression, Light Beam In, Light Beam Out, Morph, Peace, Rising Sun, Slider, Spectral, Zapper
• Grunge: Text Static
• Mechanical: Assembly, Buzz Saw, TV Off
• Motion Path: Departing, Escape, Parting, Random Drop, Slide Hang, Vent
• Shadows: Pit Stop
• Zooms: Viewpoint

Applying Preset Effects

The preset effects in LiveType add personality to your titles and can be used to set the tone of your composition. Browse through the available effects in the Media Browser to get a sense of what they can do.

To apply a preset effect to a track:
1 Create a track that contains text or any other kind of element. Make sure it is the active track.
2 Click the Effects tab in the Media Browser.
The Category pop-up menu reveals the categories of installed effects, corresponding to the subfolders on your computer at /Library/Application Support/LiveType/Effects.
3 Choose an effect in the Name column of the Effects tab.
The Browser preview depicts how the effect works, the Duration column shows the default effect length, and the Description field contains notes about how best to apply the effect.
4 Do one of the following:
• Select the effect name and click Apply.
• Double-click the effect.
• Drag the effect into the Effects tab of the Inspector.

When you apply an effect, a bar appears under the active track in the Timeline, labeled with the effect name. If the effect includes motion, you immediately see the movement reflected in the Live Wireframe Preview in the Inspector. Depending on the position of the playhead in the Timeline, the Canvas itself may or may not change noticeably. Move the playhead to see how the track elements change at different points in time, or click the Play button in the Canvas to view a RAM render.

The effect also appears in the Effects tab of the Inspector, which shows a list of the effects that have been applied to a track. The stack order does not affect the sequence, or timing, of any effect.

To disable an effect:
• Click the Enable/Disable button in the Timeline next to the effect.

To disable an effect for one or more glyphs on a track:
1 Select the track.
2 Select the character(s) that you don't want the effect to apply to, either by highlighting them in the text-entry box or selecting them in the Canvas.
Because you cannot select noncontiguous characters at the same time, you may have to repeat these steps more than once.
3 In the Effects tab of the Inspector, deselect the checkbox next to the effect you want to turn off for the selected characters.

Adjusting the Timing of an Effect

All of the timing parameters applicable to tracks that contain LiveType media elements also apply to effects. Just as you can drag a track in the Timeline to adjust its starting point and duration, you can do the same with an effect. Settings in the Timing tab of the Inspector determine how an effect is applied to each letter on a track, how fast it runs, and how many times it repeats. The Timing tab includes an array of options that allow you to orchestrate the movement of your Canvas elements. Timing tab settings apply to the effect that is selected in the Timeline; Canvas elements reflect the current frame, as determined by the playhead position.

To adjust an effect's timing parameters:
1 In the Timeline, select an effect that has been applied to a track.
2 Click the Timing tab in the Inspector.
The current timing parameters for the selected effect are displayed.
The Timing tab contains the following timing options:
• Random: A randomized effect treats each character on a track separately, as opposed to applying the effect parameters to the entire track at once. With this setting, the effect transforms each character in a random order, separated by the designated number of frames or seconds. The Seed field allows you to select alternative random orders, up to 255, if the order doesn't look quite right.
• Sequence: A sequenced effect starts by transforming one character, then moves to the next adjacent character, and so on. A sequence value of 0 indicates that the effect plays simultaneously for all characters. With a value of 25, the effect begins to transform the first character; when the effect is 25 percent into that transformation, it begins to transform the next character, and so on. (See the sketch after this list for the resulting stagger.) The Start pop-up menu below the Sequence slider defines the direction from which the sequence begins.
• Speed: You can change the speed of an effect, as a percentage of its default speed. Increasing an effect's speed decreases its duration. The Start pop-up menu below the Speed slider allows you to run the effect in reverse.
• Delay: The delay setting sets the start or end time of the effect in relation to the beginning or end of the track. Using this setting is an alternative to positioning the effect directly in the Timeline.
• Loop: The loop setting determines how many times the effect repeats. A loop value of 2 doubles the duration of the effect. The To End checkbox makes the effect loop continuously for the duration of the track.
• Duration: This setting governs the duration of a track containing system font text or any other static element. The duration of effects, as well as of tracks containing dynamic elements such as a movie clip or LiveFont, is adjusted with the Speed slider.
• Hold First: With this option, the parameters for the first frame of an effect are maintained for the designated period of time before the effect kicks in. For example, if you want a track to fade in after two seconds, you can set Hold First to two seconds, during which time the track is invisible (0 percent opaque), before the Fade In effect begins.
• Hold Last: This works the same way as Hold First, but at the end of the effect, extending the final-frame parameters of the effect for a designated amount of time.
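The Sequence value effectively staggers each character's copy of the effect by that percentage of the effect's length. Here is a small sketch of that stagger under the simplest reading of the rule; the names and the linear model are assumptions for illustration.

    # Illustrative stagger produced by a Sequence value, assuming each
    # character's start is offset by sequence% of the effect's duration.
    def sequence_start_times(num_glyphs: int, effect_duration_s: float,
                             sequence_pct: float) -> list:
        step = sequence_pct / 100.0 * effect_duration_s
        return [i * step for i in range(num_glyphs)]

    # A 2-second effect on a 4-glyph track with Sequence set to 25:
    print(sequence_start_times(4, 2.0, 25.0))
    # [0.0, 0.5, 1.0, 1.5] -- each glyph starts when the previous one
    # is 25 percent of the way through its transformation.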
To adjust the timing of an effect in the Timeline, do one of the following:
• Drag in the middle of the effect bar (but not on a keyframe) to position the effect without changing its speed. This changes the effect's Delay value, as seen in the Timing tab of the Inspector.
Note: You can position an effect so that it extends beyond the boundaries of the track, in which case the extraneous effect parameters aren't used.
• Drag either edge of the effect to adjust its speed. An effect that is shorter in duration runs through its motions more quickly. Resizing from the left edge of an effect whose Delay Start value is set to "From Start" changes the effect's speed and delay. The same is true when resizing the right edge of a "From End" effect.

Repositioning Groups of Effects Within a Track to Adjust Timing

If you have multiple effects in a single track, you can move them in unison to adjust their timing.

To reposition groups of effects within a track:
• In a track with multiple effects, hold down the Option key as you drag any single effect.
The effects move up and down the Timeline in unison. All effects in the track maintain their relative positions but occur at an earlier or later point in time.

Changing the Order of Effects

In a track with more than one effect, you can change the order (precedence) of an effect by dragging it vertically. If the effect has timing information, its position in the new order may be adjusted.

To change the order of an effect:
• Drag the effect up or down within the track.
The order of the effects is now changed.

Duplicating Effects and Tracks

You can easily duplicate effects and tracks, including duplicating an effect from one track to another.

To duplicate an effect or track:
• In the Timeline, Option-drag the effect or track to the new location or track.
Holding down the Option key while dragging an effect makes a copy of the effect in a new effects track.

Modifying a Preset Effect

In addition to adjusting the timing parameters of an effect, you can change what the effect actually does; that is, how it transforms the track it's applied to. Altering an effect used in your project does not alter the original preset effect. Once you have applied the effect, you are free to adjust it, and the changes you make are saved as part of your project.

Keyframes and Sequencing Markers

Computer animation is based on the concept of keyframes. Animators define a graphical element's parameters—position, color, size, shape, and so on—at periodic intervals, and the software interpolates the parameters for each frame in between. (A minimal sketch of this interpolation follows this section.) Keyframes are represented in the Timeline as diamond-shaped markers in effects. When you select a keyframe, the playhead moves to that frame, and the Canvas reveals the state of the project elements at that point in time.

To view the parameters defined by a keyframe:
1 Select the keyframe in the Timeline.
2 Click the Effects tab in the Inspector.
The parameters defined by that keyframe appear in the Active Parameters window.

The Clockwise effect, for example, has only one active parameter for its keyframes. Regardless of the track attributes or other effects that may affect the track, the Clockwise effect is concerned only with making the letters on the track spin. The sequence timing parameter for the effect applies the rotation to each character on the track one after the other, from left to right.

Sequencing markers, vertical lines in the light purple area of an effect bar, show when each glyph starts to be acted on by the effect. The number of sequencing markers, including the first frame of the effect and the beginning keyframe of the effect (depicted by half diamonds), always equals the number of glyphs on the track.
Note: Not all effects are sequenced or randomized; therefore, not all effects have sequencing markers.
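Between two keyframes, the software computes each frame's value from the surrounding keyframe values. The sketch below shows the simplest case, straight linear interpolation; LiveType's actual interpolation (curved motion paths, for instance) can be more elaborate, so treat this as an illustration of the concept rather than its implementation.

    # Minimal linear interpolation between two keyframes.
    def interpolate(kf0, kf1, t):
        """kf0 and kf1 are (time_s, value) pairs; t is the current time in seconds."""
        (t0, v0), (t1, v1) = kf0, kf1
        fraction = (t - t0) / (t1 - t0)
        return v0 + (v1 - v0) * fraction

    # Opacity keyframed from 0 at t=0s to 100 at t=2s (a fade-in):
    print(interpolate((0.0, 0.0), (2.0, 100.0), 0.5))  # 25.0
    print(interpolate((0.0, 0.0), (2.0, 100.0), 1.0))  # 50.0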
Adjusting Keyframe Parameters

To change what an effect does, you have to alter its keyframes. While you can change an effect's parameters through the Effects tab by entering numeric values, it is usually easier to make changes in a more visual way, using the full LiveType interface.

To adjust a keyframe by changing parameters in the Inspector:
1 Select a keyframe in the Timeline.
The playhead moves over the keyframe, and the Canvas reflects the appearance of the composition at that frame.
Note: If you change an effect parameter when the playhead is not over a keyframe, a new keyframe is added at the current playhead position.
2 Adjust the attributes of the track.
The LED indicators in the Inspector show which attributes can be changed in the context of an effect—they are all in the Text, Style, and Attributes tabs of the Inspector.
3 Click the Play button in the Canvas to see the results of your modification.

To adjust a keyframe by changing parameters in the Canvas:
1 Select a keyframe in the Timeline.
2 Click a letter or the object to reveal its bounding box.
3 Manipulate the selected glyph to change its position, rotation, or scale.
When you drag the glyph, the entire word moves with it, and a motion path with small incremental dots appears. Each dot on the motion path represents the pivot point of the selected letter at every frame of the movie. Notice that if you select a different letter, a slightly different motion path appears, representing the center position of that letter for each frame.
4 Click the Play button in the Canvas to see the results of your modification.

LED Indicators in the Inspector

When you select an effect or keyframe, the Text, Style, and Attributes tabs in the Inspector reveal small round lights, or LEDs, to the left of all attributes that can be modified in an effect. The LEDs serve three purposes:
• They indicate which parameters in the tab are active in the selected effect, allowing you to see the pertinent values at a glance.
• They allow you to activate a new parameter for an effect.
• They let you apply an attribute evenly across all keyframes in the effect. This is a very useful feature, as it lets you make global changes to an effect without having to select and modify each keyframe.

To apply an attribute evenly across all keyframes in an effect:
1 Select the effect.
2 Change an attribute in the Text, Style, or Attributes tab.
3 Hold down the Option key and click the LED indicator next to the attribute you just changed.

Active Parameters

The Active Parameters area of the Effects tab of the Inspector is a valuable resource for identifying which parameters are active in an effect, and what their values are at any point in time, as defined by the playhead position. Active parameters are displayed with the values associated with the current frame. Parameter variables are further described in Appendix B, "Creating and Editing EffectScripts," on page 135.

To change a parameter value in the Effects tab:
1 Select a keyframe.
2 Double-click a parameter in the Active Parameters stack.
3 Enter a value in the parameter dialog, then click OK.
Note: If you change a parameter when the playhead is not at a keyframe, a keyframe is added to the effect at the playhead position.

The Parameter pop-up menu lists all of the keyframe parameters.

To add a new parameter to the Active Parameters stack of an effect:
1 Select the effect.
2 Do one of the following:
• Click the LED next to the parameter in the Text, Style, or Attributes tab of the Inspector. The selected LED illuminates.
• In the Effects tab of the Inspector, make a selection from the Parameter pop-up menu and click the + button. The parameter appears in the Active Parameters stack.
• Change the parameter for one keyframe in the Text, Style, or Attributes tab of the Inspector.
• Add Offset, Rotate, or Scale to the stack by modifying a glyph of the active track, using the bounding box handles or dragging the glyph to a new position.

Example: Modifying an Effect

The following example shows how easy it is to change an effect and create a dramatically different look. In this case, you add motion to the Fade In effect.
1 Set up a new project as follows:
a Choose File > New.
b Type "Adventure" into one of the text-entry boxes in the Inspector to add the word to the track.
c Apply any system font to the track, for simplicity.
d Set the Render Selection Out Point by positioning the playhead at one second, then pressing the O key.
2 Apply the Fade In effect to the track, which is in the Fades category in the Effects tab of the Media Browser.
Notice the following changes:
• The effect is immediately represented in the Live Wireframe Preview of the Inspector.
• If your playhead is on the first frame, the text disappears in the Canvas when you apply the effect. That's because the Fade In effect begins with an opacity of 0.
3 If your playhead is not at the first frame, move it there.
4 In the text-entry box of the Inspector, highlight the "A" of Adventure.
Even though the text is invisible in the Canvas, a bounding box appears, allowing you to adjust the glyph. Notice also that the first keyframe of the effect is now at the first frame, with the sequencing markers behind it, representing the other letters in the word.
5 Modify the glyph in the Canvas as follows, and watch the results in the Live Wireframe Preview as you go:
a Drag the sizing handle in the upper-right corner of the bounding box to make the glyph quite large, about one-third of the width of the Canvas.
b Using the rotation handle in the upper-left corner of the bounding box, tilt the glyph about 45 degrees counterclockwise.
c Drag the glyph so its pivot point is in the lower-left corner of the Canvas, allowing part of the glyph to extend off the Canvas.
The stack (in the Effects tab of the Inspector) is updated automatically: the Scale, Rotate, and Offset parameters now apply to this effect, in addition to the original Opacity parameter.
6 Click the Play button in the Canvas or press the Space bar to play a RAM preview.

Moving, Deleting, Adding, and Copying Keyframes

The more you experiment with effects, the more you'll want to create and change them to suit your own tastes. For example, you can change the placement of keyframes in an effect to make it play out differently. Or you can add or delete a keyframe entirely.

To move a keyframe in the Timeline:
• Drag the keyframe marker left or right within the effect bar.

To delete a keyframe:
1 Select the keyframe you want to remove.
2 Choose Track > Delete Keyframe.
Moving, Deleting, Adding, and Copying Keyframes
The more you experiment with effects, the more you'll want to create and change them to suit your own tastes. For example, you can change the placement of keyframes in an effect to make it play out differently, or add or delete a keyframe entirely.

To move a keyframe in the Timeline:
• Drag the keyframe marker left or right within the effect bar.

To delete a keyframe:
1 Select the keyframe you want to remove.
2 Choose Track > Delete Keyframe.
Note: If you select a keyframe and press the Delete key, the entire effect is deleted.

To add a keyframe to an effect:
1 Select the effect you want to add a keyframe to.
2 Drag the playhead to the frame where you want to insert a keyframe, or click that frame's position in the frame ruler.
3 Do one of the following:
• With the playhead in position and the effect selected, choose Track > Add Keyframe (or press Command-K).
• Change any parameter in the Text, Style, or Attributes tab of the Inspector, or adjust a glyph of the active track using the bounding box handles or by dragging the glyph to a new position.
A keyframe marker appears in the effect bar.

To copy a keyframe:
1 Select the keyframe you want to copy, then choose Edit > Copy.
2 Position the playhead over the frame where you want to insert the duplicate keyframe, then choose Edit > Paste.
You can copy and paste keyframes from other effects, even in other projects.

Copying and Pasting Keyframes, Effects, and Tracks Between Projects
You can easily copy and paste keyframes, effects, and tracks from one project to another.

To copy and paste a keyframe between projects:
1 Open the project you want to copy the keyframe from.
2 In the Timeline, do one of the following:
• Select a keyframe, then choose Edit > Copy Keyframe (or press Command-C).
• Control-click the keyframe, then choose Copy Track, Copy Effect, or Copy Keyframe from the shortcut menu.
3 Open the second project, and in the Timeline, position the playhead where you want the new keyframe to appear.
4 Do one of the following:
• Choose Edit > Paste (or press Command-V).
• Control-click the track, then choose Paste from the shortcut menu.
The keyframe is copied into the second project.

To copy and paste effects or tracks between projects:
1 Open the project you want to copy from.
2 In the Timeline, do one of the following:
• Select the effect or track, then choose Edit > Copy (or press Command-C).
• Control-click the effect or track, then choose Copy Track or Copy Effect from the shortcut menu.
3 Open the second project, click in the Timeline, then do one of the following:
• Choose Edit > Paste (or press Command-V).
• Control-click a track, then choose Paste from the shortcut menu.
The new effect or track is copied into the second project.

Renaming and Saving Modified Effects
When you change an effect in a LiveType project, it no longer has the same attributes as the preset effect accessed through the Media Browser. You might even use different versions of the same preset effect in one project. There are two ways to keep track of these changes: rename effects within your project to distinguish them from the original presets, or save them as new effects you can reuse at any time.

To rename an effect within a project:
1 Select the effect or the track it's applied to.
2 Click the Effects tab of the Inspector.
3 Select the effect whose name you want to change, and edit the name.
The new name is reflected in the Timeline.

To save a new or modified effect:
1 Select the effect.
2 Choose Track > Save Effect.
3 In the Save Effect dialog, name the effect and select the category you want to save it into, or create a new category.
The effect appears in the Effects tab of the Media Browser.
Note: For the Media Browser to display a preview of the saved effect, you must create a preview clip at 160 x 120 pixels and give it the same name as the effect, with the appropriate extension. Preview clips can be in any QuickTime format, but if you plan to create many of them, MPEG-4 is a good choice because it saves considerable disk space. Save preview clips into the Effects folder located at /Library/Application Support/LiveType, where saved effects are stored.
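Because a saved effect is written as a plain text EffectScript file (described in Appendix B, "Creating and Editing EffectScripts"), you can open it later and inspect or edit it directly. As a rough, hand-written illustration only, and not necessarily the exact file LiveType writes, a renamed, slowed-down fade variant might read:

EffectScript 1.0
Name "My Fade Variant"
Desc "Fade In, renamed and slowed down for this project"
DefOffset 0 % Start
DefSequence 0 0 L
DefRandStart 0 0 %
DefLoop 1
DefSpeed 50    -- plays at half the preset's speed
Time 0.0
Opacity 0
Time 2.0
Opacity 100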
Creating a New Effect From Scratch
Sometimes the most efficient way to create the effect you want is to start from scratch rather than change an existing one. The workflow goes like this:
1 Decide, roughly, what you want to animate and how you want it to move and transform.
2 Create a track that contains the element you want to animate, preferably in its first-frame state.
3 Configure the timing of the track: its starting frame and duration.
4 Add a new, blank effect to the track.
5 Add keyframes to the effect, and adjust the parameters for each.
6 Save the effect, if desired, for use with other tracks or in other projects.

To add a new, blank effect to a track:
1 Select the track you want to add the effect to.
2 Choose Track > New Effect (or press Command-E).
A new effect appears in the Timeline below the active track.
3 Name the new effect, if you want, by double-clicking the New Effect name in the Effects tab of the Inspector.

Example: Creating a New Effect
The following example demonstrates how to build a new effect that makes part of the text on a single track bounce around the Canvas. It also highlights how motion paths are built into an effect.
1 Start a new default project, and add a few words of text to the empty track in any font.
One of these words is going to move around the screen, independent of the other word(s) on the track.
2 Position the track in the Canvas as you like. This does not affect the movement of the bouncing word.
3 Set the duration of the track by dragging its right edge in the Timeline. Two or three seconds is plenty.
4 Create a new, blank effect, which enables you to apply movement to the text. Make the effect duration match the track duration in the Timeline.
5 Before you build the effect, make it apply to only one word on the track. That is, turn off the effect, as described in "Preset Effects" on page 88, for the words that won't be moving.
6 Add the first of three or four keyframes spaced evenly across the effect.
a Click in the frame ruler to position the playhead.
b With the playhead in position and the effect selected, choose Track > Add Keyframe (or press Command-K).
7 Position the word at the point of its first "bounce." You're adding x and y offset parameters to the effect.
a With the keyframe selected, select one of the letters you want to move. A bounding box appears around it.
b Drag the letter to a new position in the Canvas.
The entire word (all the letters that the effect applies to, in this case) moves with the selected letter, and the motion path appears.
c If you like, change the size, color, or any other attribute of the text for this keyframe.
8 Create a second keyframe, and drag the text to another location. Now the motion path is a triangle. Create a few more "bounces" for the word. A script-level sketch of this kind of bounce follows below.
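At the EffectScript level (see Appendix B), each drag you performed simply wrote Offset values into a keyframe. The following hand-written sketch shows the shape of such an effect; the times and pixel offsets are illustrative, not values recorded from the example above.

EffectScript 1.0
Name "Bounce Sketch"
Desc "Bounce the glyphs between a few points on the Canvas"
DefOffset 0 % Start
DefSequence 0 0 L
DefRandStart 0 0 %
DefLoop 1
DefSpeed 100
Time 0.0
Offset 0 0          -- offsets are in pixels
Time 1.0
Offset 150 -100     -- first "bounce" point
Time 2.0
Offset -120 -80     -- second "bounce" point
Time 3.0
Offset 0 0          -- back where it started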
Motion paths can also be curves. The process is similar to creating curved tracks, as described in Chapter 4, "Working With Tracks," on page 47.
9 Add curves to your motion path by doing the following:
a With the effect selected in the Timeline, click a keyframe, or move the playhead over a keyframe.
b Select one glyph in the "bouncing" word.
c Hold down the Control key and drag the pivot point of the glyph, which sits over the keyframe point in the motion path.
Bezier handles extend away from the point, allowing you to adjust the curve.
10 Click Play in the Canvas, or press the Space bar, to see the results.

Creating Effects for Individual Glyphs
A unique and powerful feature of LiveType is the ability to instantly assign an effect to individual glyphs, or characters, on a track, or to selected groups of them. This method can be used with new effects as well as preset effects.
To assign an effect to individual characters:
• Select the character or characters you want to apply the effect to, then do one of the following:
• Choose a preset effect from the Effects tab of the Media Browser, then click Apply.
• Create your own effect by choosing Track > New Effect, then applying effects from the parameter menu in the Effects tab of the Inspector.
The selected effect is automatically turned off for all characters that are deselected. If the track is selected and no characters are selected, the effect is applied to the entire group of characters on the track. You can reassign all characters to a single effect at any time.

Finding Effects and Media Using the Timeline
You can quickly find effects, movies, and images from the Timeline using a shortcut menu.
To find effects and media using the Timeline:
• In the Timeline, Control-click a track, then do one of the following:
• To find effects, choose Reveal in Media Browser from the shortcut menu.
• To find movies or images, choose Reveal in Finder from the shortcut menu.
The effect is selected in the appropriate tab in the Media Browser, or a Finder window appears with the movie or image selected.

Chapter 8  Previewing and Fully Rendering Your Titling Movie
As your project progresses, you'll want to view the results of your changes every step of the way, until you're ready to generate the final output. LiveType offers several modes and choices for managing the time it takes to render previews.

Previewing Your Work
Viewing a frame of your titling movie is as simple as moving the playhead in the Timeline to any frame marker and looking at the Canvas elements. Obviously, you also need to be able to see the action of your movie. LiveType offers several ways to do this.

Live Wireframe Preview
The Live Wireframe Preview window in the upper-right corner of the Inspector continually scrolls through your animation, with small bounding boxes indicating the movement of each character or object. This feature gives you an indication of your project's motion and timing at any moment.
To freeze or unfreeze the Live Wireframe Preview:
• Click inside the Live Wireframe Preview window in the Inspector.

RAM Preview in the Canvas
The transport controls at the bottom of the Canvas allow you to play a preview of your titling movie right in your working environment.
A RAM preview displays all elements that are visible and enabled in the Canvas, as well as the Canvas guides, rulers, and so on. In this it differs from a preview movie, which reflects the final movie output more closely.
To play a RAM preview, do one of the following:
• Click the Play button in the transport controls at the bottom of the Canvas.
• Press the Space bar.
At first, the frames are rendered and loaded into memory one by one; then the preview plays in real time. The Pause button is displayed during this process.
The right-most transport control, the Loop button, is a toggle that sets the RAM preview to either play once through or loop continually through the movie. When the Loop button is activated (blue), the RAM preview loops until you click anywhere in the LiveType interface.
To stop a looping RAM preview:
• Click anywhere in the LiveType interface.
To pause a RAM preview:
• Click the Pause button at any time during a RAM preview.
The Play button appears and the RAM preview stops. The RAM preview resumes when you click the Play button once more.

Preview Movie
A preview movie is basically a limited render of your titling movie.
To render a preview movie:
1 Choose File > Render Preview, then choose Wireframe or Normal.
The Normal setting renders your preview at the level defined in the Project Properties dialog.
If you have used any LiveType media in your composition, LiveType looks for the .afd files in your /Library/Application Support/LiveType/LiveType Data folder. If the data files have not yet been installed, the Missing AFD dialog appears, giving you the option to install the full data files or to use proxy frames (from the corresponding .afp files) in the preview.
2 Do one of the following:
• Select "Install missing LiveType Data now." This allows you to install the .afd files at a location other than the LiveType Data folder, but still access them to render previews and final movies. See "Managing LiveType Media Files" on page 29 for instructions.
• Select "Use Poster Frames for Tracks with missing Data."
The preview appears in a separate window. You can save a preview movie by choosing File > Save As. Otherwise, LiveType deletes the preview movie when you close the window.

Optimizing Preview Performance
LiveType works with bitmapped elements that consist of pixels of information, as opposed to vector-based data. While this format is what makes possible the wide range of effects offered in LiveType, file sizes are inevitably large, and the time it takes to render a preview can become lengthy. Rendering time is affected by each layer of complexity added to a project, including the number and file size of project elements, the number of effects applied to each element, and the duration of the movie (that is, the number of frames to render).

Quality Settings for Previews and Movie Output
LiveType offers four levels of rendering quality, set in the Project Properties dialog, to help you manage the amount of time you spend generating previews. Naturally, a lower-quality preview takes less time to render. A wireframe-quality preview represents each element as an empty bounding box, much like the small Live Wireframe Preview in the Inspector. The Draft, Normal, and High Quality settings differ only in the resolution of the preview. A draft-quality Canvas appears slightly grainy at 100 percent zoom.
A draft-quality preview movie appears small on the screen. As you build your project, you may find it useful to adjust the quality settings several times to suit your preferences.
To adjust the quality settings for viewing the Canvas, generating preview movies, and rendering a final movie:
1 Choose Edit > Project Properties.
2 In the Quality area of the dialog, choose the quality level for each of the three modes.

Strategies for Improving Render Times
In addition to the quality settings, LiveType offers numerous strategies to avoid excessive waiting for frames and previews to render:
• The Render Selection markers in the frame ruler of the Timeline limit the number of frames that are rendered in preview movies and in the final output.
• The Selected Only option in the View menu reveals only the contents of the active track in the Canvas, in preview movies, and in final movie output. This can be useful when you're focusing on the movement of a single element.
• The Enable/Disable buttons in the Timeline allow you to temporarily disable effects and remove tracks from the Canvas. This is another way to reduce complexity when you only require a partial preview.
• The file size of imported elements affects system performance. For example, instead of importing a large movie as a background for keying titles, consider importing a single frame or a small clip. If an imported element is to be used in your final output, generate the original file at or near the needed resolution, as opposed to bringing in a large image and shrinking it down in LiveType.
• The amount of RAM in your system may be a factor. If saving time is critical, consider increasing your available RAM.

Rendering, Saving, and Exporting Your Titling Movie
There are a couple of ways to handle rendering, saving, and exporting your LiveType project once you have completed it. The most practical method depends largely on whether you are going to work with your project in Final Cut Pro or in another application.
• If you are working with Final Cut Pro, import the LiveType project directly into Final Cut Pro for final rendering.
• If you are working with another application, render within LiveType first, then import the rendered movie into that application.

Importing a LiveType Project Into Final Cut Pro for Rendering
Typically, a saved LiveType project file is imported into Final Cut Pro for rendering. This saves time because, unlike with third-party applications, you do not have to render the file in LiveType before importing it.
To import a LiveType project into Final Cut Pro for rendering:
1 In Final Cut Pro, choose File > Import > Files (or press Command-I), select the LiveType project file, then click Choose.
The LiveType movie is imported into Final Cut Pro, appearing as a clip.
2 Edit the clip into a Final Cut Pro sequence.
3 Render the movie as you would any other clip.

Making Changes to a LiveType Movie from Final Cut Pro
If you have imported a LiveType movie into Final Cut Pro and need to make a change, you can make the change in LiveType and have it update in Final Cut Pro.
To make changes to a LiveType movie already imported into Final Cut Pro:
1 Select the LiveType clip in the Final Cut Pro Timeline.
2 Control-click the clip, then choose Open in Editor from the shortcut menu.
LiveType opens with the movie ready for adjustment.
3 In LiveType, make any changes you want, then choose File > Save.
The change immediately updates in Final Cut Pro.
Note: You will have to re-render any changes you have made within Final Cut Pro.

Rendering a LiveType Movie for Export
When working with a third-party application, you need to render your movie within LiveType before importing it.
To render a full-resolution movie of your project for export:
1 Choose File > Render Movie.
2 Choose a filename and location in the Save dialog, then click "Create new movie file."
3 Just as with preview movies, LiveType requests that you install any missing LiveType Data files. Do one of the following:
• Select "Install missing LiveType Data now." This allows you to install LiveType media files to a location other than the LiveType Data folder, but still access them to render previews and final movies. See "Managing LiveType Media Files" on page 29 for instructions.
• Select "Use Poster Frames for Tracks with missing Data."
Note: By default, a QuickTime movie with an alpha channel is created with the Animation codec. If you prefer another codec, use the options in File > Export Movie instead.
Once LiveType has finished rendering your project, it appears in a new window.

LiveType Export Formats
LiveType natively generates QuickTime movies with the Animation 32-bit codec for proper keying to your video. If your NLE or compositing program imports QuickTime 4 or later movies, you should be able to import these movies directly. You can also export to a variety of motion and still-image formats. Keep in mind, however, that if you want to retain the alpha channel, you must use a format that supports 32-bit data, such as Photoshop, Targa, TIFF, or AVI.

QuickTime codecs:
Animation, BMP, Cinepak, Component Video, DV/DVCPRO-NTSC, DVCPRO-PAL, Graphics, H.261, H.263, Intel Indeo Video r3.2, Intel Raw Video, Motion JPEG A, Motion JPEG B, None (No compression), PhotoJPEG, Planar RGB, PNG, Sorenson Video, Sorenson Video 3, TGA, TIFF

Image sequence formats:
BMP, JPEG, JPEG 2000 Image, MacPaint, Photoshop, PICT, PNG, QuickTime image, SGI image, TGA, TIFF

Other formats:
AVI, DV Stream, FLC, Heuris MPEG, MPEG-2, MPEG-4

To export a rendered LiveType movie to a new format:
1 Open your movie output so it appears in the viewer window.
If you just rendered your project, the QuickTime movie is open already. If you previously rendered and saved the movie, open it using File > Open.
2 Choose File > Export Movie.
The dialog prompts you for a new name and file location, and offers a variety of file formats to export to.
3 In the Export pop-up menu, choose the category of output you want to create.
4 In the Use pop-up menu, choose the appropriate file format or protocol.
5 Click the Options button to reveal additional settings pertinent to the format you selected.

Chapter 9  Advanced Design Techniques
The key to designing great titles is to combine the capabilities and media in LiveType in creative ways. This chapter includes a few "recipes" for interesting looks. The examples assume a general familiarity with the basic functions of LiveType; because each step is not explained in great detail, you may need to refer to earlier chapters to perform some of the tasks.
Words Within Words
The Matte to Background option can be used to create some very interesting titling compositions. Unlike Matte to Texture and Matte to Movie or Image, this matting option creates a "window" into any background, even backgrounds composed of several elements. For example, you can create words inside of words. In this case, foreground text defines a window into background text, which slides right to left behind it. Follow these steps:
1 Create the foreground text to define the shape of the matte.
a Add a text track in a heavy system font such as Helvetica Bold.
b Type a word onto the track, then set the size so the word fills the width of the Canvas.
2 Create an intermediate layer to obscure the background.
a Choose a texture from the Media Browser, then click Apply To New Track.
b Make sure the texture is underneath the text, but above the background bar in the Timeline.
3 Create a dynamic background that's visible through the window created by the foreground word.
a Add a new text track to the Canvas, then enter some text that's smaller than the foreground word you created in step 1.
b Position the track over the foreground text, and format the background text as you like.
c Apply a crawl or slide effect to the track, to make the text move right to left.
d Drag the track below the background bar in the Timeline.
e Temporarily disable the texture and the foreground text track in the Timeline.
f Define a background color in the Project Properties dialog, or place a different background behind the background text.
g Enable the texture and the foreground text track in the Timeline.
4 Select the foreground text and choose Background from the "Matte to" pop-up menu in the Matte pane of the Attributes tab of the Inspector.
The matted foreground text now reveals the moving word in the background.

Warping Shadows and Glows
The Warp feature in the Style tab of the Inspector can be used to create a surprising variety of shapes to enhance your titles. This section describes how the Needle Drop effect takes advantage of the Warp parameter, in combination with several other parameters, to create a unique look.
1 Open a new project, and enter some text in a system font onto the track.
2 Change the text to a bright color in the Glyph pane of the Attributes tab of the Inspector, then close the Colors window.
3 In the Project Properties dialog, change the background color to black, at 100 percent opacity.
4 Apply the Needle Drop effect to the track, which is in the Glows category in the Effects tab of the Media Browser.
5 Set the track and effect durations to 1 second, set the Render Selection Out Point at 1 second, then click the Play button to render a RAM preview.
6 With the playhead over the effect in the Timeline, click the Effects tab of the Inspector to view the active parameters.
The essential parameters used to create the Needle Drop effect are as follows:
• Glyph settings: At the beginning keyframe, the glyphs on the track are small, transparent, and blurred. At the ending keyframe, the characters are normal. The middle keyframe simply makes the letters larger than normal.
• Glow settings: At the beginning keyframe, the glow is invisible, with 0 percent opacity and a vertical offset of –200 pixels.
At the middle keyframe, the glow opacity is set at 500 percent, with some Scale and Blur adjustments and no offset. At the ending keyframe, the glow is invisible again, and the vertical offset is 200 pixels.
• Shadow settings: The shadow is what creates the "needles." The shadow color is set to white, and the scale is set to 10 percent on the x axis, making the shadows very thin. The warp settings accentuate the narrow tips of the needles, and the shadow blur is set to 2 percent, which is essential for this effect. At the ending keyframe, the shadow goes to 0 percent opacity.
• Timing settings: The Random parameter in the Timing tab makes this effect apply to each glyph in a random order.
One track, one effect, three keyframes: it's actually fairly easy to re-create this effect. And even with the numerous parameters involved, the LiveType Timeline remains remarkably clean, since one keyframe encapsulates all the parameters at a point in time.
For another example of an effective use of Warp parameters, take a look at the Screech effect, in the Caricature category of the Effects tab in the Media Browser. This effect is created by making the glyphs invisible and using the glow channel to display the letters, which are distorted using Warp parameters.
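Expressed as an EffectScript (see Appendix B), the treatment described above looks roughly like the sketch below. This is a hand-written approximation, not the Needle Drop script that ships with LiveType: the exact values are illustrative, the warp points are omitted, and the comment on ShadBlur flags a unit ambiguity.

EffectScript 1.0
Name "Needle Sketch"
Desc "White, pencil-thin shadows sweep through the glyphs as they fade in"
DefOffset 0 % Start
DefSequence 0 0 L
DefRandStart 1 50 %     -- random per-glyph start, like the Random timing parameter
DefLoop 1
DefSpeed 100
Time 0.0
Scale 20 20             -- small, transparent, blurred glyphs
Opacity 0
Blur 8
DoGlow 1
GlowOpacity 0           -- glow invisible, pushed 200 pixels vertically
GlowOffset 0 -200
DoShadow 1
ShadColor 255 255 255   -- white shadow
ShadScale 10 100        -- 10 percent on the x axis makes the "needles"
ShadBlur 2              -- the manual says 2 percent; ShadBlur takes pixels, so treat this value as illustrative
ShadOpacity 100
Time 0.5
Scale 120 120           -- letters larger than normal
Opacity 100
Blur 0
DoGlow 1
GlowOpacity 500         -- glow flares, with no offset
GlowOffset 0 0
DoShadow 1
ShadColor 255 255 255
ShadScale 10 100
ShadBlur 2
ShadOpacity 100
Time 1.0
Scale 100 100           -- characters back to normal
Opacity 100
Blur 0
DoGlow 1
GlowOpacity 0           -- glow invisible again, offset 200 pixels the other way
GlowOffset 0 200
DoShadow 1
ShadColor 255 255 255
ShadScale 10 100
ShadBlur 2
ShadOpacity 0           -- shadow fades out at the end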
Track Curves
Using a Slide effect along a curved track can create a three-dimensional effect. This example explains how to combine these features to send text into a vortex in only a few steps.
1 Open a new project, and enter some text onto the track.
2 Left-justify the text on the track.
3 Move the track up toward the top of the Canvas.
4 Add a control point in the middle of the track by holding down the Control key and clicking the track line in the Canvas. Control-click the control point, then choose Curve Out.
You want to leave the left half of the track more or less in the same position, and create a curved path arcing down and around clockwise from that point. Only a couple of additional control points are needed. See Chapter 4, "Working With Tracks," on page 47 for more about making curved tracks.
5 Add a new effect to the track.
6 Select the ending keyframe of the effect.
7 In the Effects tab of the Inspector, add the Slide parameter to the Active Parameters stack, using the Parameter pop-up menu and the + button. Double-click the Slide parameter and set the value to 100, which is a percentage of the track's length.
When you assign the Slide value to the ending keyframe, the beginning keyframe defaults to a Slide value of 0.
8 While you're still on the ending keyframe, set the Size parameter to 0.
9 In the Timing tab of the Inspector, set the Sequence value to 10, and choose From Right from the Start pop-up menu.
10 Adjust the ending keyframe Slide value as needed for the right look, which can vary depending on the length of the track and the text sliding on it.
The text appears to spiral down into a vortex.
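At the script level (see Appendix B), steps 5 through 9 amount to two keyframes that ramp the Slide and Size parameters, with sequencing starting from the right. The sketch below is a hand-written illustration of that idea, not a script exported from the example; the duration is arbitrary.

EffectScript 1.0
Name "Vortex Sketch"
Desc "Slide and shrink glyphs along a curved track"
DefOffset 0 % Start
DefSequence 1 10 R    -- Sequence 10, starting from the right
DefRandStart 0 0 %
DefLoop 1
DefSpeed 100
Time 0.0
Slide 0               -- percent of the track's length
Size 100
Time 2.0
Slide 100
Size 0                -- glyphs shrink away as they reach the end of the track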
Creative Use of Special Characters
Symbols and other kinds of special characters can be very useful and convenient as titling elements. Because these characters are vector-based shapes, they have very small file sizes and no upper limit on their size in the Canvas. Plus, they're easy to access.
This example shows you how to create a pattern of boxes covering the Canvas that randomly change colors and fade away to reveal a message or image behind them.
1 Open a new LiveType project, and click in one of the text-entry areas.
2 Open the Character Palette.
• If the Character Palette is enabled in your Mac OS X System Preferences, it appears as a small icon on the right side of the LiveType menu bar.
• To enable the Character Palette, open System Preferences, click International, click the Input Menu button, and select Character Palette.
• Alternatively, in LiveType, you can Control-click inside one of the text-entry boxes in the Inspector, then choose Font > Show Fonts from the shortcut menu. When the Font dialog appears, choose Characters from the Extras pop-up menu in the bottom-left corner of the dialog.
3 Choose a solid square character, then click Insert to add the character to the text-entry box. Insert three lines of four boxes on the same text track.
4 Adjust the Size, Tracking, and Leading parameters in the Text tab of the Inspector to create a panel of evenly spaced squares.
5 In the Style tab, disable the shadow, and add a white outline thick enough for the outlines of the squares to touch each other, obscuring the Canvas background.
6 Add a new effect, and set the duration of both the track and the effect to 1 second in the Timeline.
7 Select the beginning keyframe of the effect, and choose a glyph color in the Attributes tab of the Inspector. Change the ending keyframe to a different color. Then position the playhead at several intermediate points, changing the color each time.
If you change an effect parameter when the playhead is not on a keyframe, a new keyframe is automatically added to the effect under the playhead. This step shows how automatic keyframe insertion can be a convenient time saver.
8 Set the glyph opacity to 0 percent at the ending keyframe, so that the squares fade out at the end.
9 In the Timing tab, set the Random setting to 15.
10 Add text or another element behind the panel of squares, so it is gradually revealed as the squares fade away.

LiveFonts and Layers
Several LiveFonts that come with LiveType are designed to work in tandem with other fonts. One of these is the Nitro font, which can make text look like it explodes. These steps explain how to use such fonts effectively.
Note: You need to install the Nitro data file to follow this example. See "Managing LiveType Media Files" on page 29 for information on installing LiveType media.
1 Create a text track, and apply a system font with any basic formatting you like.
2 Choose Track > Duplicate Track to create a copy positioned directly over the original track.
3 In the Timeline, lock the two tracks together using the grouping buttons.
4 Select Track 1, and apply the Nitro LiveFont.
5 In the Style tab of the Inspector, disable the shadow for Track 1.
6 In the Timing tab, set Sequence to 5 percent, then shorten the duration of the track, either by dragging the end of the track in the Timeline or by adjusting the Speed parameter in the Timing tab.
7 Apply the Fade Out effect to Track 2, since you want the letters to disappear once they've exploded.
The trick is that you want the letters to fade out just as they explode, and because they are exploding in sequence, you need to align the timing of the sequencing markers for the two tracks.
8 Using the sequencing markers in the Timeline as your guide, adjust the speed of the Fade Out effect to line up the sequencing markers of Track 1 and the Track 2 effect.
The combination of the top layer of text in the Nitro LiveFont and the underlying text that fades away makes the letters appear to explode from left to right.

Creating Scrolls and Crawls
Scroll and crawl effects are used to create credit rolls, or to slide strings of text across the screen like a stock ticker. These two kinds of effects use the Canvas Offset parameter to create a vertical or horizontal motion path long enough to move text onto the Canvas and fully off the opposite side. The offset value, which defines the length of the motion path, is based on the length of the element that's scrolling or crawling, so it's best to enter and format the text before applying the effect; that way you don't have to reposition the starting point of the element multiple times.
Note: From a design standpoint, scrolls and crawls are best used with system fonts, as opposed to LiveFonts. If you do choose to scroll a LiveFont, you'll need to work with the font's timing parameters, including Speed and the Hold First and Hold Last options in the Timing tab of the Inspector, to coordinate the LiveFont animation with the scrolling or crawling movement.
To create scrolling text:
1 Enter several lines of text onto a new track, using the Return key to create line breaks in the text-entry box.
2 Format the text, paying particular attention to any parameter affecting the total vertical length of the lines of text: font, size, leading, and so on.
3 Apply a scroll effect from the Scrolls and Crawls category in the Effects tab of the Media Browser.
4 Adjust the speed of the scroll, which is now visible in the Live Wireframe Preview, by dragging the right edge of the effect bar in the Timeline, or by changing the Speed setting in the Timing tab of the Inspector.
5 In the Timeline, drag the right edge of the track to match the duration of the effect.
6 Move the text to its starting position.
a Make sure the playhead is over the first frame in the Timeline.
b Set the Canvas zoom to 25 percent, to see outside the boundaries of the Canvas.
c Drag the track in the Canvas to set the starting position of the scrolling text. (Text that is beyond the edge of the Canvas is represented by blue bounding boxes.) Hold down the Shift key as you drag to constrain the horizontal position of the track.
If you are using the Scroll Up effect, for example, you might want to set the starting position of the first line of text just below the bottom edge of the Canvas.
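Internally, the Scrolls and Crawls presets drive the CanvasOffset parameter described in Appendix B, which is expressed as a percentage of the Canvas dimensions. The sketch below is a hand-written illustration of a simple crawl, not one of the shipped presets; in practice the offsets must be large enough to carry the whole formatted element across the Canvas, which is why the presets depend on the element's length.

EffectScript 1.0
Name "Crawl Sketch"
Desc "Carry the text from beyond the right edge of the Canvas to beyond the left edge"
DefOffset 0 % Start
DefSequence 0 0 L
DefRandStart 0 0 %
DefLoop 1
DefSpeed 100
Time 0.0
CanvasOffset 100 0     -- start one full Canvas width to the right
Time 5.0
CanvasOffset -100 0    -- end one full Canvas width to the left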
To create crawling text:
1 Enter text, typically several words or a sentence on one line, onto a new track.
2 Format the text, paying particular attention to any parameter affecting the total horizontal length of the text: font, size, tracking, and so on.
3 Apply a crawl effect from the Scrolls and Crawls category in the Effects tab of the Media Browser.
4 Adjust the speed of the crawl, which is now visible in the Live Wireframe Preview, by dragging the right edge of the effect bar in the Timeline, or by changing the Speed setting in the Timing tab of the Inspector.
5 In the Timeline, drag the right edge of the track to match the duration of the effect.
6 With the playhead over the first frame, drag the track in the Canvas to adjust its starting position.

Appendix A  Solutions to Common Problems and Customer Support
If you run into problems while working with LiveType, there are several resources you can use to find a solution.
• This appendix: This appendix includes information about some of the most frequent issues users encounter.
• Late-Breaking News: A late-breaking news page in the LiveType Help menu provides last-minute information that didn't make it into the manual. Be sure to consult this help page as soon as you install LiveType.
• AppleCare Knowledge Base: AppleCare Support maintains a database of common support issues that is updated and expanded as new issues arise. This is an excellent, free resource for LiveType users. To access the AppleCare Knowledge Base, go to the AppleCare support page at http://www.apple.com/support.
• AppleCare Support: There are a variety of support options available to LiveType customers. For more information, see the Apple Professional Software Service & Support Guide that comes with your Final Cut Pro documentation.

Frequently Asked Questions
Some fonts appear to shake in the preview movie.
• Because LiveType uses a small sampling for low-quality previews, some images may be missing pixel data that provides smoother movement. Increase your quality settings in the Project Properties dialog to produce smooth results.
My images appear pixelated.
• If you render a preview, your movie is displayed in low resolution and appears a bit pixelated. Also, if you size your elements beyond their original size, some pixelization may occur.
LiveType doesn't open anymore.
• It is possible to save a set of default settings that prevents LiveType from opening. Try erasing your default settings file: /Library/Preferences/LiveType Pro Defaults.dat. Your configuration reverts to the original LiveType settings. This is essentially the same as choosing LiveType > Settings > Clear Settings within the application.
The motion is not smooth on my NTSC monitor.
• Use the fielding option for the smoothest motion.
When I bring titles into my nonlinear editor (NLE) or compositing program, the characters appear squashed or the aspect is wrong.
• Make sure to set the project properties in LiveType according to the frame size and pixel aspect your NLE uses. Some NLEs require the correct frame size even if the title doesn't use the entire frame. Square pixels take a value of 1, and NTSC pixels take a value of .9.
A keyframe is "stuck" at the beginning or end of an effect, and I can't select it, move it, or delete it.
• Try increasing the Timeline zoom, to see whether you can select and drag the keyframe away from the beginning or end of the effect. Because the beginning and ending keyframes cannot be deleted, it is possible to slide an internal keyframe to the far end of the effect so that it cannot be moved, regardless of the Timeline magnification. To select an obscured keyframe, do the following:
1 Select the beginning or ending keyframe that's obscuring the other keyframe.
2 Choose View > Go To > Next Keyframe (or Previous Keyframe) to select the "lost" keyframe.
3 Choose Edit > Cut.
4 Move the playhead and choose Edit > Paste.
When I apply an effect with a glow or shadow change, I don't see a change in the Live Wireframe Preview in the Inspector.
• The wireframe boxes in the preview show the basic shape of each glyph and aren't changed by Style settings. Other preview options, such as a RAM preview or a preview movie, do reveal shadows and glows.
When I change certain attributes of a track, they don't seem to have any effect.
• Because effect parameters override track parameters, you may be trying to adjust a parameter that is being overridden. Disable the effects associated with the track to see whether the attributes become active again. If so, the solution is to change the effect parameters.
I can't select an element or character.
• Make sure that Lock Position is not selected in the Layout menu. When you add a texture or background movie, it is locked by default.
• You may also be clicking an element that uses the entire Canvas. Try zooming out to view beyond the edge of the Canvas to reveal its bounding box.
• Consider the layer order, too, when you want to select an element on the Canvas. If one element gets in the way of selecting another, use the Timeline to select the track underneath, and highlight glyphs in one of the text-entry boxes in the Inspector.
I keep accidentally selecting the texture, image, or movie I created on a track as a background image.
• Choose Layout > Lock Position to prevent the element from being selected.
When I have a lot of elements in the Canvas, everything slows down.
• See Chapter 8, "Previewing and Fully Rendering Your Titling Movie," on page 109 for ways to optimize preview performance.

Apple Applications Page for Pro Apps Developers
The Apple Developer Connection website includes an Apple Applications page that is a one-stop destination for developers creating content or extensions for professional applications. On this page, developers can find late-breaking news and technical resources such as developer documentation, special articles, and SDKs. Developers can also sign up for the Pro Apps Developer mailing list. The URL is http://developer.apple.com/appleapplications.

Calling AppleCare Support
Included in your LiveType package is documentation about the support options available from Apple. Several levels of support are available, depending on your needs.
Note: There are certain support situations in which AppleCare may require information about both your computer and how this particular application is configured. Choosing Help > Create Support Profile creates a file that contains the necessary information and can be emailed to AppleCare. You would not normally use this feature unless directed to by an AppleCare representative.
Whatever your issue, it's a good idea to have the following information immediately available. The more of this information you have ready to give to the support agents, the faster they will be able to address your issue.
• The Support ID number that came with Final Cut Pro. This number is different from the software serial number that is used to activate your copy of LiveType.
• Which version of Mac OS X you have installed. This information is available by choosing About This Mac from the Apple menu.
• The version of LiveType you have installed, including updates if applicable. The version number can be viewed by choosing LiveType > About LiveType.
• The model of computer you are using.
• How much RAM is installed in your computer, and how much is available to LiveType.
You can find out how much RAM is installed by choosing About This Mac from the Apple menu in the Finder.
• What other third-party hardware is connected to or installed in the computer, and who the manufacturers are. Include hard disks, video cards, and so on.
• Any third-party plug-ins or other software installed along with LiveType.
AppleCare Support can be reached online at http://www.apple.com/support/livetype/index.html.

Appendix B  Creating and Editing EffectScripts
Effects in LiveType are based on the EffectScript language. An effect consists of a plain text file and a representative QuickTime movie, which appears in the Media Browser in the LiveType interface. Each line of an EffectScript consists of a command followed by a set of command arguments. Tabs and spaces are skipped. In any command, two hyphens (--) can be followed by a comment. Comments are ignored by the EffectScript interpreter.

Header
The following header commands should appear at the beginning of each EffectScript.
EffectScript 1.0
• Use 1.0 as the EffectScript specification version number.
Name "effect name"
• Name the effect. Quotation marks can be any non-space delimiter (", ', /, and so on).
Desc "description"
• Describe the effect. The description may be a long string; the text wraps when displayed.

Default Timing
After the header, an EffectScript should have default timing settings.
DefOffset a b c
• a is a numeric value.
• b is %, Seconds, or Frames.
• c is Start or End.
DefReverse a
• a is 0 for forward, or 1 for reverse.
DefSequence a b c
• a is 0 for Off or 1 for On.
• b is a numeric % value; it may be floating point.
• c is L for left first, or R for right first.
DefRandStart a b c
• a is 0 for Off or 1 for On.
• b is a numeric value; it may be floating point.
• c is %, Seconds, or Frames.
DefLoop a
• a is a numeric value and must be an integer (use a large number like 9999 to loop forever).
DefSpeed a
• a is a numeric % value; it may be floating point.

Keyframes
After the default timing settings, an EffectScript defines a number of keyframes. A keyframe starts with a Time command:
Time t
• t is the time of the keyframe in seconds.
Each Time command is followed by parameter commands. For example, here is a keyframe:
Time 0.0
Scale 50
Tracking -50
This keyframe means that at time zero seconds, each glyph scales by 50 percent and its tracking decreases by 50 percent.
The first keyframe must be at time 0.0, and there must be at least one other keyframe after it. All keyframes must be listed in order, and all keyframes in a given effect should have the same set of parameter commands.
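Putting the pieces together, a complete EffectScript can be as small as a header, the default timing settings, and two keyframes. The following is a hand-written sketch in the format described above; the name and values are illustrative only.

EffectScript 1.0
Name "Fade Sketch"
Desc "Fade each glyph from invisible to normal over one second"
DefOffset 0 % Start
DefReverse 0
DefSequence 0 0 L
DefRandStart 0 0 %
DefLoop 1
DefSpeed 100
Time 0.0
Opacity 0      -- Opacity is one of the parameter commands listed below
Time 1.0
Opacity 100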
The following parameter commands are valid in a keyframe:
Accelerate n
• n is a percentage of acceleration. This affects how all the other keyframe parameter values are interpolated between this keyframe and the next. 0 means no acceleration, 100% means speed up, and -100% means slow down.
Blur x [y]
• x is the blur radius in pixels. If y is given, the horizontal and vertical blur amounts are distinct.
CanvasOffset x y
• x and y are the horizontal and vertical offsets, in percentage of the Canvas dimensions. This is the parameter used for scrolls and crawls.
Color r g b [n]
• r, g, and b are color values, in [0..255].
• n is optional, and is an opacity percentage.
DoExtrude x
• x is 0 for no extrusion, 1 for extrude.
DoGlow n
• n is 0 for No or 1 for Yes.
DoShadow n
• n is 0 for No or 1 for Yes.
ExtrudeDirection n
• n is an angle in degrees; 0 for up, 90 for right, and so on.
ExtrudeLength n
• n is the extrusion length in pixels.
ExtrudeColor r g b
• r, g, and b are the extrusion color, in [0..255].
ExtrudeOutline n
• n is 0 for no outline, 1 to outline the extrusion.
GlowBlur n
• n is the glow blur radius in pixels.
GlowColor r g b
• r, g, and b are the glow color, in [0..255].
GlowLayer n
• n is 0 for behind all, 1 for behind track, 2 for in front, 3 for in front matted to glyph.
GlowOffset x y
• x and y are the glow offsets in pixels.
GlowOpacity n
• n is the glow opacity percentage.
GlowScale x y
• x and y are the glow scale percentages.
GlowWarp x1 y1 x2 y2 x3 y3 x4 y4
• The x, y pairs are the four Warp points.
HideChar n
• n is 0 to show the glyph in addition to lighting effects (Outline, Shadow, Glow, Extrude), or 1 to hide it.
HSL h s l
• h is the hue angle adjustment in degrees; 0 means no change.
• s is the saturation adjustment in percent; 0 means no change.
• l is the lightness adjustment in percent; 0 means no change.
Leading n
• n is a % that adjusts the position of the next line in the track (for example, use 0 to put the next line on top of this one, 100 to leave it unchanged, or 200 to double it).
Matte n
• n is 0 or 1 for Off or On.
Offset x y
• x and y are the horizontal and vertical offsets, in pixels.
Opacity n
• n is the opacity percentage. This is multiplied by the glyph's opacity.
Outline n
• n is the pixel width of the outline.
Rotate n
• n is the rotation angle, in clockwise degrees.
Scale x y
• x and y are the horizontal and vertical scale percent multipliers. Scaling is done about the glyph pivot point.
SetOutlineColor r g b
• r, g, and b are the outline color, in [0..255].
SetOutlineBlur n
• n is a number of pixels.
SetOutlineOnly n
• n is 0 for No or 1 for Yes.
SetOutlineWarp x1 y1 x2 y2 x3 y3 x4 y4
• The x, y pairs are the four Warp points.
ShadBlur n
• n is the shadow blur radius in pixels.
ShadColor r g b
• r, g, and b are the shadow color, in [0..255].
ShadLayer n
• n is 0 for behind all, 1 for behind track, 2 for in front, 3 for in front matted to glyph.
ShadOffset x y
• x and y are the shadow offsets in pixels.
ShadOpacity n
• n is the shadow opacity percentage.
ShadScale x y
• x and y are the shadow scale percentages.
ShadWarp x1 y1 x2 y2 x3 y3 x4 y4
• The x, y pairs are the four Warp points.
Size n
• n is a percent modifier for the glyph size. This affects not only the size of the glyph but also the leading and tracking, which are based on the size. Glyphs are sized about the glyph center on the baseline.
Slide n
• n is the amount an element slides along its track, in percent of the track's length.
Tracking n
• n is a % that adjusts the position of the next glyph (for example, use 0 to put the next glyph on top of this one, 100 to leave it unchanged, or 200 to double it).

Sample EffectScripts
You can view an EffectScript simply by opening one of the effect files stored in the /Library/Application Support/LiveType/Effects folder on your hard drive. Some simple EffectScripts follow:

Zoom In
EffectScript 1.0
--------------------------------------------------------------------
-- "Zoom In" example
Name "Zoom In"
Desc "Zoom In each glyph linearly from zero to normal from its anchor point. Simultaneously increase the kerning from zero to normal."
DefOffset 0 % Start
DefSequence 0 0 L
DefRandStart 0 0 %
DefLoop 1
DefSpeed 100
Time 0.0
Tracking -100 -- -100% tracking means zero tracking.
Scale 0 0 -- 0% scale.
Time 2.0
Tracking 0 -- 0% tracking means normal.
Scale 100 100 -- 100% scale.
--------------------------------------------------------------------

Zoom Out
EffectScript 1.0
--------------------------------------------------------------------
-- "Zoom Out" example
Name "Zoom Out"
Desc "Zoom Out each glyph linearly from normal to zero from its anchor point. Simultaneously decrease the kerning from normal to zero."
DefOffset 0 % End
DefSequence 0 0 L
DefRandStart 0 0 %
DefLoop 1
DefSpeed 105
Time 0.0
Tracking 0
Scale 100 100
Time 2.0
Tracking -100
Scale 0 0
--------------------------------------------------------------------

Tinted Rotate
EffectScript 1.0
--------------------------------------------------------------------
-- "Tinted Rotate" example
Name "Tinted Rotate"
Desc "Rotate each glyph around its anchor point at 1 rev/sec. For fun, simultaneously mess around with the color"
DefOffset 0 % Start
DefSequence 0 0 L
DefRandStart 1 100 %
-- note the large loop count so that the effect loops through the whole duration.
DefLoop 9999
DefSpeed 100
Time 0
Color 255 0 0 -- Tint Red (R=255, G=0, B=0)
Rotate 0
Time 1
Color 0 255 0 -- Tint Green (R=0, G=255, B=0)
Rotate 120
Time 2
Color 0 0 255 -- Tint Blue (R=0, G=0, B=255)
Rotate 240
Time 3
Color 255 0 0 -- Tint Red (R=255, G=0, B=0)
Rotate 0
--------------------------------------------------------------------

Glossary
16:9  A widescreen aspect ratio for video. The ratio of the width to the height of the visible area of the video frame, also called the picture aspect ratio, is 16:9, or 1.78.
alpha channel  An additional image channel used to store transparency information for compositing. Alpha channels are often 8-bit, but some applications support 16-bit alpha channels. Only certain formats, such as PICT and the QuickTime Animation codec, support alpha channels.
aspect ratio  A video frame's width-to-height ratio on your viewing screen. The most common aspect ratio is 4:3, used for common television screens.
AVI  Acronym for Audio-Video Interleaved, Microsoft's standard format for digital video.
Bezier handles  Two-direction handles that control or influence the curve of the line segment between the handle and the next point on either side. The farther a direction handle is pulled out from its vertex point, the more force it applies to its line segment to bend or curve it. Direction handles are moved by dragging them.
bin  In Final Cut Pro, the window that contains your clips, transitions, effects, and generators. The bin lets you organize all of these elements, sort them, add comments, rename items, and so on.
Canvas  One of the four main windows in the LiveType interface, where you position text and objects, create motion paths, and view the results as you design.
channels  May refer to color channels or alpha channels. Color and transparency information for video and graphics clips is divided into individual channels.
CMYK  Abbreviation for Cyan Magenta Yellow Black, the color space commonly used for images that will be printed with 4-color ink on offset presses.
codec  Short for compressor/decompressor. A software component used to translate video or audio between its uncompressed form and the compressed form in which it is stored. Sorenson Video and Cinepak are common QuickTime video codecs. Also referred to as a compressor.
compositing  The process of combining two or more video or electronic images into a single frame.
This term can also describe the process of creating various video effects.
compression  The process by which video, graphics, and audio files are reduced in size by the removal of redundant or less important data. See also codec.
decompression  The process of creating a viewable image for playback from a compressed video, graphics, or audio file.
digital  A description of data that is stored or transmitted as a sequence of ones and zeros. Most commonly, this means binary data represented using electronic or electromagnetic signals. QuickTime movie files are digital.
digital video  Refers to the capturing, manipulation, and storage of video using a digital format, such as QuickTime. A digital video camcorder, for example, is a video camera that captures and stores images on a digital medium such as DV. Video can then be easily imported.
duration  The length of time that a track or effect exists in the Timeline.
DVD  A DVD disc looks much like a CD-ROM or audio disc, but uses higher-density storage methods to significantly increase its capacity.
effect  In LiveType, a set of attribute and timing parameters that animate an element.
element  In LiveType, anything that is placed on a track: an individual character, a block of text on a single track, an object, a movie, a texture, or an image.
field  Half of an interlaced video frame, consisting of the odd or the even scan lines. Alternating video fields are drawn every 1/60th of a second in NTSC video to create the perceived 30 frames per second of video. There are two fields for every frame, an upper field and a lower field.
FireWire  The Apple trademark name for the IEEE 1394 standard. FireWire is a fast and versatile interface used to connect DV cameras to computers. FireWire is well suited to applications that move large amounts of data, and can also be used to connect hard disks, scanners, and other kinds of computer peripherals.
font  A complete set of a single typeface. See also LiveFont.
frame  Video consists of a number of still-image frames which, when they play back over time, give the illusion of motion. NTSC video plays back 29.97 frames per second, and PAL video plays back 25 frames per second. Each broadcast video frame is made up of two fields, which is different from the way film handles frames. A film frame is a single photographic image, and does not have separate fields.
glyph  A single character on a track. A glyph frequently refers to a letter or symbol, but an object, texture, or imported element can also be referred to as a glyph in LiveType.
importing  The process of bringing files of various types into a project in LiveType. Imported files have usually been created or captured in another application.
Inspector  One of the four main windows in the LiveType interface, used to insert text and apply attributes, styles, and effect parameters to titling elements.
keyframe  A special-purpose marker that denotes a value change of one or more parameters in an applied effect. When two keyframes are set in LiveType, the application calculates a smooth transition based on their values.
LiveFont  LiveFonts are sets of 32-bit characters. Most LiveFonts are computer-based animations. However, they may also be composed of video footage or still photographs.
LiveType media  The collective term for LiveFonts, textures, and objects in LiveType, all of which are built using the 32-bit .afd format for animated fonts.
markers  In Final Cut Pro, markers refer either to the edit points that define the Start and End points of a clip, or to points of reference you can use to denote places of interest in your clips and sequences.
Media Browser  One of the four main windows in the LiveType interface, used for selecting fonts, objects, textures, and effects.
NTSC format  NTSC stands for National Television Standards Committee, the organization that defines North American broadcast standards. The term "NTSC video" refers to the video standard defined by the committee, which has a specifically limited color gamut, is interlaced, and is approximately 720 x 480 pixels at 29.97 frames per second.
object  In LiveType, objects are single 32-bit elements. Like LiveFonts, they may be computer-based animations, real-world video, or still photographs, as well as other elements such as lower thirds.
PAL format  Acronym for Phase Alternating Line format. A 25 fps (625 lines per frame) interlaced video format used by many European countries.
PICT  A still-image file format developed by Apple. PICT files can contain both vector images and bitmap images, as well as text and an alpha channel. PICT is a ubiquitous image format on Mac OS computers.
pixel  One dot in a video or still image. A typical low-resolution computer screen is 640 pixels wide and 480 pixels tall. Digital video movies are often 320 pixels wide and 240 pixels tall.
pixel aspect ratio  The ratio of width to height for the pixels that compose an image. NTSC pixels are square (1:1 ratio), but D-1 pixels are nonsquare.
postproduction  The process of editing film or video after acquiring the footage.
QuickTime  The Apple cross-platform multimedia technology. Widely used for CD-ROM, web video, editing, and more.
RAID  Acronym for Redundant Array of Independent Disks. A method of providing nonlinear editors with many gigabytes of high-performance data storage by teaming together a group of slower, smaller, cheaper hard disks.
RAM  Acronym for random-access memory. Your computer's memory capacity, measured in bytes, which determines the amount of data the computer can process and temporarily store at any moment.
render  In LiveType, the process of combining project elements with any applied effects, one frame at a time. Once rendered, your titling sequence can be played in real time.
RGB  Abbreviation for Red Green Blue, a color space commonly used on computers. Each color is described by the strength of its red, green, and blue components. This color space directly translates to the red, green, and blue phosphors used in computer monitors. The RGB color space has a very large gamut, meaning it can reproduce a very wide range of colors.
SECAM  Acronym for Sequential Couleur Avec Memoire. The French television standard for playback. Similar to PAL, the playback rate is 25 fps.
sequencing  An effect treatment in which each glyph on a track is transformed individually. A sequenced effect starts by transforming one character, then moves to the adjacent character, and so on.
texture  In LiveType, textures are full-screen animations, useful as backgrounds, texture mattes, or borders.
TIFF  Acronym for Tagged Image File Format. A widely used bitmapped graphics file format, developed by Aldus and Microsoft, that handles monochrome, grayscale, and 8- and 24-bit color.
timecode  A method of associating each frame of film or video in a clip with a unique, sequential unit of time. The format is hours:minutes:seconds:frames.
Timeline: One of the four main windows in the LiveType interface, which shows the timing of project elements and the effects applied to them.

title safe area: The part of the video image that is guaranteed to be visible on all televisions. The title safe area is 80 percent of the screen.

track: In LiveType, a track is what contains an element and its attributes. In the Canvas, a track appears as a dark blue line, usually at the base of the text, object, or image it contains. A track can be shaped to form a motion path for the track’s contents to move along. In the Timeline, a track is represented by a numbered bar, often with applied effects underlying it.

widescreen: A format for shooting and projecting a movie in theaters, in which the original footage is not cropped to fit the 4:3 aspect ratio. With the advent of high definition video, widescreen 16:9 video is coming into more popular use.

wireframe: The most elementary preview mode in LiveType, representing characters and objects as bounding boxes. Wireframe previews are useful because they render very quickly, showing the motion of elements.

x: Used to refer to the x coordinate in Cartesian geometry. The x coordinate describes horizontal placement.

y: Used to refer to the y coordinate in Cartesian geometry. The y coordinate describes vertical placement in motion effects.
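To make the timecode format defined above concrete, here is a minimal sketch (illustrative only, not from the manual; the function name and the whole-number 30 fps rate are assumptions, and real NTSC drop-frame timecode at 29.97 fps is more involved) of how an hours:minutes:seconds:frames value maps to an absolute frame count:

    # Illustrative sketch: convert an "HH:MM:SS:FF" timecode string to an
    # absolute frame count, assuming a whole-number frame rate such as 30 fps.
    def timecode_to_frames(timecode, fps=30):
        hours, minutes, seconds, frames = (int(part) for part in timecode.split(":"))
        return ((hours * 60 + minutes) * 60 + seconds) * fps + frames

    # Example: one minute and ten frames into a 30 fps program.
    print(timecode_to_frames("00:01:00:10"))  # prints 1810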
iPod nano User Guide

Contents
Preface: About iPod nano
Chapter 1, iPod nano Basics: iPod nano at a Glance; Using iPod nano Controls; Using iPod nano Menus; Disabling iPod nano Controls; Connecting and Disconnecting iPod nano; About the iPod nano Battery
Chapter 2, Setting Up iPod nano: Using iTunes; Importing Music into Your iTunes Library; Organizing Your Music; Using Genius in iTunes; Purchasing or Renting Videos and Downloading Video Podcasts; Adding Music to iPod nano; Adding Videos to iPod nano
Chapter 3, Listening to Music: Playing Music and Other Audio; Using Genius on iPod nano; Setting iPod nano to Shuffle Songs; Watching and Listening to Podcasts; Listening to Audiobooks; Listening to FM Radio
Chapter 4, Watching Videos: Watching and Listening to Videos on iPod nano; Watching Videos on a TV Connected to iPod nano
Chapter 5, Photo Features: Importing Photos; Viewing Photos
Chapter 6, More Settings, Extra Features, and Accessories: Using iPod nano as an External Disk; Using Extra Settings; Syncing Contacts, Calendars, and To-Do Lists; Storing and Reading Notes; Recording Voice Memos; Using Spoken Menus for Accessibility; Learning About iPod nano Accessories
Chapter 7, Tips and Troubleshooting: General Suggestions; Updating and Restoring iPod Software
Chapter 8, Safety and Cleaning: Important Safety Information; Important Handling Information
Chapter 9, Learning More, Service, and Support
Index

Preface: About iPod nano

Congratulations on choosing iPod nano. With iPod nano, you can take your music, video, and photo collections with you wherever you go. To use iPod nano, you put music, videos, photos, and other files on your computer and then add them to iPod nano. Read this guide to learn how to:
- Set up iPod nano to play music, music videos, movies, TV shows, podcasts, audiobooks, and more.
- Use iPod nano as your pocket photo album, portable hard drive, alarm clock, game console, and voice memo recorder.
- View video and photo slideshows on your TV.
- Get the most out of all the features in iPod nano.

What’s New in iPod nano
- Genius, which automatically creates playlists of songs from your library that go great together
- A motion sensor that lets you control certain functions by rotating or shaking iPod nano
- Full-screen photo viewing in portrait or landscape format
- Quick browsing for songs based on the album or artist you’re listening to
- Direct access to more options from the Now Playing screen
- New voice recording options
- Improved accessibility with spoken menus

Chapter 1: iPod nano Basics

Read this chapter to learn about the features of iPod nano, how to use its controls, and more.

iPod nano at a Glance
Get to know the controls on iPod nano: Dock connector, Menu, Previous/Rewind, Play/Pause, Hold switch, Headphones port, Click Wheel, Next/Fast-forward, and Center button.

Using iPod nano Controls
The controls on iPod nano are easy to find and use. Press any button to turn on iPod nano. The main menu appears. Use the Click Wheel and Center button to navigate through onscreen menus, play songs, change settings, and get information. Move your thumb lightly around the Click Wheel to select a menu item. To choose the item, press the Center button. To go back to the previous menu, press Menu on the Click Wheel. Here’s what else you can do with iPod nano controls:

- Turn on iPod nano: Press any button.
- Turn off iPod nano: Press and hold Play/Pause.
- Turn on the backlight: Press any button or use the Click Wheel.
- Disable the iPod nano controls (so nothing happens if you press them accidentally): Slide the Hold switch to HOLD (an orange bar appears).
- Reset iPod nano (if it isn’t responding): Slide the Hold switch to HOLD and back again, then press the Menu and Center buttons at the same time for about 6 seconds, until the Apple logo appears.
- Choose a menu item: Scroll to the item and press the Center button.
- Go back to the previous menu: Press Menu.
- Go directly to the main menu: Press and hold Menu.
- Browse for a song: From the main menu, choose Music.
- Browse for a video: From the main menu, choose Videos.
- Play a song or video: Select the song or video and press the Center button or Play/Pause. iPod nano must be ejected from your computer to play songs and videos.
- Pause a song or video: Press Play/Pause or unplug your headphones.
- Change the volume: From the Now Playing screen, use the Click Wheel.
- Play all the songs in a playlist or album: Select the playlist or album and press Play/Pause.
- Play all songs in random order: From the main menu, choose Shuffle Songs. You can also shuffle songs by shaking iPod nano.
- Enable or disable Shake for shuffling songs: Choose Settings > Playback, choose Shake, and then select Shuffle or Off.
- Skip to any point in a song or video: From the Now Playing screen, press the Center button to show the scrubber bar (a diamond icon on the bar shows the current location), and then scroll to any point in the song or video.
- Skip to the next song or chapter in an audiobook or podcast: Press Next/Fast-forward.
- Start a song or video over: Press Previous/Rewind.
- Play the previous song or chapter in an audiobook or podcast: Press Previous/Rewind twice.
- Fast-forward or rewind a song: Press and hold Next/Fast-forward or Previous/Rewind.
- Create a Genius playlist: Play or select a song, and then press and hold the Center button until a menu appears.
Select Start Genius, and then press the Center button (Start Genius appears only if there is Genius data for the song).
- Save a Genius playlist: Create a Genius playlist, select Save Playlist, and then press the Center button.
- Play a saved Genius playlist: From the Playlist menu, select a Genius playlist, and then press Play/Pause.
- Add a song to the On-The-Go playlist: Play or select a song, and then press and hold the Center button until a menu appears. Select “Add to On-The-Go,” and then press the Center button.
- Access additional options: Press and hold the Center button until a menu appears.
- Find the iPod nano serial number: From the main menu, choose Settings > About and press the Center button until you see the serial number, or look on the back of iPod nano.

Browsing Music Using Cover Flow
You can browse your music collection using Cover Flow, a visual way to flip through your library. Cover Flow displays your albums alphabetically by artist name. You can activate Cover Flow from the main menu, any music menu, or the Now Playing screen.

To use Cover Flow:
1. Rotate iPod nano 90 degrees to the left or the right. Cover Flow appears.
2. Use the Click Wheel to move through your album art.
3. Select an album and press the Center button.
4. Use the Click Wheel to select a song, and then press the Center button to play it.

You can also browse quickly through your albums in Cover Flow by moving your thumb quickly on the Click Wheel. Note: Not all languages are supported.

To browse quickly in Cover Flow:
1. Move your thumb quickly on the Click Wheel to display a letter of the alphabet on the screen.
2. Use the Click Wheel to navigate through the alphabet until you find the first letter of the artist you’re looking for. Albums by various artists and by artists beginning with a symbol or number appear after the letter “Z.”
3. Lift your thumb momentarily to return to normal browsing.
4. Select an album and press the Center button.
5. Use the Click Wheel to select a song, and then press the Center button to play it.

Scrolling Quickly Through Long Lists
You can scroll quickly through a long list by moving your thumb quickly on the Click Wheel. Note: Not all languages are supported.

To scroll quickly:
1. Move your thumb quickly on the Click Wheel to display a letter of the alphabet on the screen.
2. Use the Click Wheel to navigate through the alphabet until you find the first letter of the item you’re looking for. Items beginning with a symbol or number appear after the letter “Z.”
3. Lift your thumb momentarily to return to normal scrolling.
4. Use the Click Wheel to navigate to the item you want.

Searching Music
You can search iPod nano for songs, playlists, album titles, artist names, audio podcasts, and audiobooks. The search feature doesn’t search videos, notes, calendar items, contacts, or lyrics. Note: Not all languages are supported.

To search for music:
1. From the Music menu, choose Search.
2. Enter a search string by using the Click Wheel to navigate the alphabet and pressing the Center button to enter each character. iPod nano starts searching as soon as you enter the first character, displaying the results on the search screen. For example, if you enter “b,” iPod nano displays all music items containing the letter “b.” If you enter “ab,” iPod nano displays all items containing that sequence of letters. To enter a space, press the Next/Fast-forward button. To delete the previous character, press the Previous/Rewind button.
3. Press Menu to display the results list, which you can now navigate. Items appear in the results list with icons identifying their type: song, video, artist, album, audiobook, or podcast.

To return to Search (if Search is highlighted in the menu), press the Center button.

Using iPod nano Menus
When you turn on iPod nano, you see the main menu. Choose menu items to perform functions or go to other menus. Icons along the top of the screen show iPod nano status:

- Menu title: Displays the title of the current menu.
- Lock icon: The Lock icon appears when the Hold switch (on the top of iPod nano) is set to HOLD. This indicates that the iPod nano controls are disabled.
- Play icon: The Play icon appears when a song, video, or other item is playing. The Pause icon appears when the item is paused.
- Battery icon: The Battery icon shows the approximate remaining battery charge.
- Menu items: Use the Click Wheel to scroll through menu items. Press the Center button to choose an item. An arrow next to a menu item indicates that choosing it leads to another menu or screen.
- Preview panel: Displays album art, photos, and other items and information relating to the menu item selected.

Adding or Removing Items on the Main Menu
You might want to add often-used items to the iPod nano main menu. For example, you can add a Songs item to the main menu, so you don’t have to choose Music before you choose Songs.

To add or remove items on the main menu:
1. Choose Settings > General > Main Menu.
2. Select each item you want to appear in the main menu. A checkmark indicates which items have been added.

Turning Off the Preview Panel
The preview panel at the bottom of the main menu, which displays album art, photo thumbnails, available storage, and other information, can be turned off to allow more space for menu items.

To turn off the preview panel, choose Settings > General > Main Menu > Preview Panel, and then choose Off. To turn the preview panel on again, choose Settings > General > Main Menu > Preview Panel, and then choose On. The preview panel only displays art for a category if iPod nano contains at least four items with art in the category.

Setting the Font Size in Menus
iPod nano can display text in two different sizes, standard and large. To set the font size, choose Settings > General > Font Size, and then press the Center button to select Standard or Large.

Setting the Language
iPod nano can be set to use different languages. To set the language, choose Settings > Language, and then choose a language from the list.

Setting the Backlight Timer
You can set the backlight to turn on and illuminate the screen for a certain amount of time when you press a button or use the Click Wheel. The default is 10 seconds. To set the backlight timer, choose Settings > General > Backlight Timer, and then choose the time you want. Choose “Always On” to prevent the backlight from turning off (choosing this option decreases battery performance).

Setting the Screen Brightness
You can adjust the brightness of the iPod nano screen by moving a slider. To set the screen brightness, choose Settings > General > Brightness, and then use the Click Wheel to move the slider. Moving it to the left dims the screen; moving it to the right increases the screen brightness. You can also set the brightness during a slideshow or video.
Press the Center button to display or dismiss the brightness slider.

Turning Off the Click Wheel Sound
When you scroll through menu items, you can hear a clicking sound through the headphones and through the iPod nano internal speaker. If you like, you can turn off the Click Wheel sound through the headphones, the speaker, or both. To turn off the Click Wheel sound, choose Settings > General and set Clicker to Off. To turn the Click Wheel sound on again, set Clicker to Speaker, Headphones, or Both.

Getting Information About iPod nano
You can get details about your iPod nano, such as the amount of space available; the number of songs, videos, photos, and other items; and the serial number, model, and software version. To get information about iPod nano, choose Settings > About, and press the Center button to cycle through the screens of information.

Resetting All Settings
You can reset all the items on the Settings menu to their default setting. To reset all settings, choose Settings > Reset Settings, and then choose Reset.

Disabling iPod nano Controls
If you don’t want to turn iPod nano on or activate controls accidentally, you can disable them with the Hold switch. The Hold switch disables all Click Wheel controls, and also disables functions that are activated by movement, such as shaking to shuffle and rotating to enter or exit Cover Flow. To disable iPod nano controls, slide the Hold switch to HOLD (so you can see the orange bar). If you disable the controls while using iPod nano, the song, playlist, podcast, or video that’s playing continues to play. To stop or pause, slide the Hold switch to enable the controls again.

Connecting and Disconnecting iPod nano
You connect iPod nano to your computer to add music, videos, photos, and files, and to charge the battery. Disconnect iPod nano when you’re done. Important: The battery doesn’t charge when your computer is in sleep mode.

Connecting iPod nano
To connect iPod nano to your computer, plug the included iPod Dock Connector to USB 2.0 cable into a high-powered USB 2.0 port on your computer, and then connect the other end to iPod nano. If you have an iPod Dock, you can connect the cable to a USB 2.0 port on your computer, connect the other end to the Dock, and then put iPod nano in the Dock. Note: The USB port on most keyboards doesn’t provide enough power. Connect iPod nano to a USB 2.0 port on your computer.

By default, iTunes syncs songs on iPod nano automatically when you connect it to your computer. When iTunes is finished, you can disconnect iPod nano. You can sync songs while your battery is charging. If you connect iPod nano to a different computer and it’s set to sync music automatically, iTunes prompts you before syncing any music. If you click Yes, the songs and other audio files already on iPod nano will be erased and replaced with songs and other audio files on the computer iPod nano is connected to. For information about adding music to iPod nano and using iPod nano with more than one computer, see Chapter 2, “Setting Up iPod nano,” on page 20.

Disconnecting iPod nano
It’s important not to disconnect iPod nano while it’s syncing. You can see if it’s OK to disconnect iPod nano by looking at the iPod nano screen. Important: Don’t disconnect iPod nano if you see the “Connected” or “Synchronizing” messages. You could damage files on iPod nano. If you see one of these messages, you must eject iPod nano before disconnecting it.
If you set iPod nano to manage songs manually (see “Managing iPod nano Manually” on page 29) or enable iPod nano for disk use (see “Using iPod nano as an External Disk” on page 53), you must always eject iPod nano before disconnecting it. If you see the main menu or a large battery icon, you can disconnect iPod nano.

To eject iPod nano, click the Eject button next to iPod nano in the list of devices in the iTunes source list. If you’re using a Mac, you can also eject iPod nano by dragging the iPod nano icon on the desktop to the Trash. If you’re using a Windows PC, you can also eject iPod nano in My Computer or by clicking the Safely Remove Hardware icon in the Windows system tray and selecting iPod nano.

To disconnect iPod nano:
1. Unplug the headphones if they’re attached.
2. Disconnect the cable from iPod nano. If iPod nano is in the Dock, simply remove it.

About the iPod nano Battery
iPod nano has an internal, non-user-replaceable battery. For best results, the first time you use iPod nano, let it charge for about three hours or until the battery icon in the status area of the display shows that the battery is fully charged. If iPod nano isn’t used for a while, the battery might need to be charged. Note: iPod nano continues to use battery power after it’s been turned off.

The iPod nano battery is 80 percent charged in about one and a half hours, and fully charged in about three hours. If you charge iPod nano while adding files, playing music, watching videos, or viewing a slideshow, it might take longer.

Charging the iPod nano Battery
You can charge the iPod nano battery in two ways:
- Connect iPod nano to your computer.
- Use the Apple USB Power Adapter, available separately.

To charge the battery using your computer, connect iPod nano to a USB 2.0 port on your computer. The computer must be turned on and not in sleep mode. If the battery icon on the iPod nano screen shows the Charging screen, the battery is charging. If it shows the Charged screen, the battery is fully charged. If you don’t see the Charging screen, iPod nano might not be connected to a high-power USB port. Try another USB port on your computer.

Important: If a “Charging, Please Wait” or “Connect to Power” message appears on the iPod nano screen, the battery needs to be charged before iPod nano can communicate with your computer. See “If iPod nano displays a ‘Connect to Power’ message” on page 66.

If you want to charge iPod nano when you’re away from your computer, you can purchase the Apple USB Power Adapter.

To charge the battery using the Apple USB Power Adapter:
1. Connect the AC plug adapter to the power adapter (they might already be connected). The plug on your power adapter may look different.
2. Connect the iPod Dock Connector to USB 2.0 cable to the power adapter, and plug the other end of the cable into iPod nano.
3. Plug the power adapter into a working electrical outlet.

WARNING: Make sure the power adapter is fully assembled before plugging it into an electrical outlet.

Understanding Battery States
When iPod nano isn’t connected to a power source, a battery icon in the top-right corner of the iPod nano screen shows approximately how much charge is left.
If iPod nano is connected to a power source, the battery icon changes to show that the battery is charging or fully charged. You can disconnect and use iPod nano before it’s fully charged. The icon distinguishes these states: battery less than 20% charged, battery about halfway charged, battery fully charged, battery charging (lightning bolt), and battery fully charged (plug).

Note: Rechargeable batteries have a limited number of charge cycles and might eventually need to be replaced. Battery life and number of charge cycles vary by use and settings. For information, go to www.apple.com/batteries.

Improving Battery Performance with Energy Saver
Energy Saver can extend the time between battery charges by turning off the iPod nano screen when you aren’t using the controls. To turn Energy Saver on or off, choose Settings > Playback > Energy Saver, and then select On or Off.

Chapter 2: Setting Up iPod nano

To set up iPod nano, you use iTunes on your computer to import, buy, and organize your music, video, podcasts, audiobooks, games, and other media content. Then you connect iPod nano to your computer and sync it to your iTunes library.

Using iTunes
iTunes is the software application you use with iPod nano. iTunes can sync music, audiobooks, podcasts, and more with iPod nano. When you connect iPod nano to your computer, iTunes opens automatically. This guide explains how to use iTunes to download songs and other audio and video to your computer, create personal compilations of your favorite songs (called playlists), sync them to iPod nano, and adjust iPod nano settings.

iTunes also has a feature called Genius that creates instant playlists of songs from your iTunes library that go great together. You can sync Genius playlists that you create in iTunes to iPod nano, and you can create Genius playlists on iPod nano. To use Genius, you need iTunes 8.0 or later and an iTunes Store account.

iTunes has many other features. You can burn your own CDs that play in standard CD players (if your computer has a recordable CD drive); listen to streaming Internet radio; watch videos and TV shows; rate songs according to preference; and much more. For information about using these iTunes features, open iTunes and choose Help > iTunes Help.

If you already have iTunes 8.0 installed on your computer and you’ve set up your iTunes library, you can skip ahead to the next section, “Syncing iPod nano.” To learn how to set up Genius in iTunes, see “Using Genius in iTunes” on page 25.

Importing Music into Your iTunes Library
To listen to music on iPod nano, you first need to get that music into iTunes on your computer. There are three ways of getting music and other audio into iTunes:
- Purchase music, audiobooks, and videos, or download podcasts online from the iTunes Store.
- Import music and other audio from audio CDs.
- Add music and other audio that’s already on your computer to your iTunes library.

Purchasing Songs and Downloading Podcasts Using the iTunes Store
If you have an Internet connection, you can easily purchase and download songs, albums, audiobooks, and videos online using the iTunes Store. You can also subscribe to and download podcasts. To purchase music online using the iTunes Store, you set up an Apple account in iTunes, find the songs you want, and then buy them. If you already have an Apple account, or if you have an America Online (AOL) account (available in some countries only), you can use that account to sign in to the iTunes Store and buy songs.
You don’t need an iTunes Store account to download or subscribe to podcasts.

To sign in to the iTunes Store, open iTunes and then:
- If you already have an iTunes account, choose Store > Sign In.
- If you don’t already have an iTunes account, choose Store > Create Account and follow the onscreen instructions to set up an Apple account or enter your existing Apple account or AOL account information.

You can browse or search the iTunes Store to find the album, song, or artist you’re looking for. Open iTunes and select iTunes Store in the source list.
- To browse the iTunes Store, choose a category (for example, Music) on the left side of the main page in the iTunes Store. You can choose a genre, look at new releases, click one of the featured songs, look at Top Songs and more, or click Browse under Quick Links in the main iTunes Store window.
- To browse for podcasts, click the Podcasts link on the left side of the main page in the iTunes Store.
- To search the iTunes Store, type the name of an album, song, artist, or composer in the search field.
- To narrow your search, type something in the search field, press Return or Enter on your keyboard, and then click links in the Search Bar at the top of the results page. For example, to narrow your search to songs and albums, click the Music link.
- To search for a combination of items, click Power Search in the Search Results window.
- To return to the main page of the iTunes Store, click the Home button in the status line at the top of the window.

To buy a song, album, music video, or audiobook:
1. Select iTunes Store in the source list, and then find the item you want to buy. You can double-click a song or other item to listen to a portion of it and make sure it’s what you want. (If your network connection is slower than 128 kbps, choose iTunes > Preferences, and in the Store pane, select the “Load complete preview before playing” checkbox.)
2. Click Buy Song, Buy Album, Buy Video, or Buy Book. The song or other item is downloaded to your computer and charged to the credit card listed on your Apple or AOL account.

To download or subscribe to a podcast:
1. Select iTunes Store in the source list.
2. Click the Podcasts link on the left side of the main page in the iTunes Store.
3. Browse for the podcast you want to download. To download a single podcast episode, click the Get Episode button next to the episode. To subscribe to a podcast, click the Subscribe button next to the podcast graphic; iTunes downloads the most recent episode. As new episodes become available, they are automatically downloaded to iTunes when you connect to the Internet.

For more information, see “Adding Podcasts to iPod nano” on page 30 and “Watching and Listening to Podcasts” on page 42.

Adding Songs Already on Your Computer to Your iTunes Library
If you have songs on your computer encoded in file formats that iTunes supports, you can easily add the songs to iTunes. To add songs on your computer to your iTunes library, drag the folder or disk containing the audio files to Library in the iTunes source list (or choose File > Add to Library and select the folder or disk). If iTunes supports the song file format, the songs are automatically added to your iTunes library. You can also drag individual song files to iTunes.

Note: Using iTunes for Windows, you can convert nonprotected WMA files to AAC or MP3 format. This can be useful if you have a library of music encoded in WMA format.
For more information, open iTunes and choose Help > iTunes Help.

Importing Music From Your Audio CDs Into iTunes
Follow these instructions to get music from your CDs into iTunes.

To import music from an audio CD into iTunes:
1. Insert a CD into your computer and open iTunes. If you have an Internet connection, iTunes gets the names of the songs on the CD from the Internet (if available) and lists them in the window. If you don’t have an Internet connection, you can import your CDs and, later, when you’re connected to the Internet, choose Advanced > Get CD Track Names; iTunes will bring in the track names for the imported CDs. If the CD track names aren’t available online, you can enter the names of the songs manually (see “Entering Song Names and Other Details” below). With song information entered, you can browse for songs in iTunes or on iPod by title, artist, album, and more.
2. Click to remove the checkmark next to any song you don’t want to import.
3. Click the Import button. The display area at the top of the iTunes window shows how long it will take to import each song. By default, iTunes plays songs as they are imported. If you’re importing a lot of songs, you might want to stop the songs from playing to improve performance.
4. To eject the CD, click the Eject button. You cannot eject a CD until the import is done.
5. Repeat these steps for any other CDs with songs you want to import.

Entering Song Names and Other Details
To enter CD song names and other information manually:
1. Select the first song on the CD and choose File > Get Info.
2. Click Info.
3. Enter the song information.
4. Click Next to enter information for the next song.
5. Click OK when you finish.

Adding Lyrics
You can enter song lyrics in plain text format into iTunes so that you can view the song lyrics on iPod nano while the song is playing.

To enter lyrics into iTunes:
1. Select a song and choose File > Get Info.
2. Click Lyrics.
3. Enter song lyrics in the text box.
4. Click Next to enter lyrics for the next song.
5. When you finish, click OK.

For more information, see “Viewing Lyrics on iPod nano” on page 35.

Adding Album Artwork
Music you purchase from the iTunes Store includes album artwork, which iPod nano can display. You can add album artwork automatically for music you’ve imported from CDs, if the CDs are available from the iTunes Store. You can add album artwork manually if you have the album art on your computer.

To add album artwork automatically, choose Advanced > Get Album Artwork. You must have an iTunes Store account to add album artwork automatically.

To add album artwork to iTunes manually:
1. Select a song and choose File > Get Info.
2. Click Artwork.
3. Click Add, navigate to the artwork file, and click Choose.
4. Use the slider to adjust the size of the artwork.
5. Click Next to add artwork for the next song or album.
6. Click OK when you finish.

For more information, see “Viewing Album Artwork on iPod nano” on page 36.

Organizing Your Music
Using iTunes, you can organize songs and other items into lists, called playlists, in any way you want. For example, you can create playlists with songs to listen to while exercising, or playlists with songs for a particular mood. You can also create Smart Playlists that update automatically based on rules you define.
When you add songs to iTunes that match the rules, they are automatically added to the Smart Playlist.

You can create as many playlists as you like using any of the songs in your iTunes library. Adding a song to a playlist or later removing it doesn’t remove it from your library.

To create a playlist in iTunes:
1. Click the Add (+) button or choose File > New Playlist.
2. Type a name for the playlist.
3. Click Music in the Library list, and then drag a song or other item to the playlist. To select multiple songs, hold down the Shift key or the Command key on a Mac, or the Shift key or the Control key on a Windows PC, as you click each song.

To create a Smart Playlist, choose File > New Smart Playlist and define the rules for your playlist.

Note: To create playlists on iPod nano when iPod nano isn’t connected to your computer, see “Creating On-The-Go Playlists on iPod nano” on page 37.

Using Genius in iTunes
Genius automatically creates playlists containing songs in your library that go great together. To use Genius on iPod nano, you first need to set up Genius in iTunes. Genius is a free service, but an iTunes Store account is required (if you don’t have one, you can set one up when you turn on Genius).

To set up Genius:
1. In iTunes, choose Store > Turn On Genius.
2. Follow the onscreen instructions. iTunes collects anonymous information about your library and compares it with all songs available at the iTunes Store and with the libraries of other iTunes Store customers. The amount of time this takes can vary according to the size of your library, connection speed, and other factors.
3. Connect and sync iPod nano. You can now use Genius on iPod nano (see page 38).

To create a Genius playlist in iTunes:
1. Click Music in the Library list or select a playlist.
2. Select a song.
3. Click the Genius button at the bottom of the iTunes window.
4. To change the maximum number of songs included in the playlist, choose a number from the pop-up menu.
5. To save the playlist, click Save Playlist.

You can add and remove items from a saved Genius playlist. You can also click Refresh to create a new playlist based on the same original song. Genius playlists created in iTunes can be synced to iPod nano like any iTunes playlist. See “Syncing Music From Selected Playlists to iPod nano” on page 28.

Purchasing or Renting Videos and Downloading Video Podcasts
To purchase videos (movies, TV shows, and music videos) or rent movies online from the iTunes Store (part of iTunes and available in some countries only), you sign in to your iTunes Store account, find the videos you want, and then buy or rent them. A rented movie expires 30 days after you rent it or 24 hours after you begin playing it, whichever comes first. Expired rentals are deleted automatically. Note: These terms apply to U.S. rentals. Rental terms vary among countries.

To browse videos in the iTunes Store:
1. In iTunes, select iTunes Store in the source list.
2. Click an item (Movies, TV Shows, or Music Videos) in the iTunes Store list on the left. You can also find some music videos as part of an album or other offer. You can view movie trailers or TV show previews. Videos in iTunes and in the iTunes Store have a display icon next to them.

To buy or rent a video:
1. Select iTunes Store in the source list, and then find the item you want to buy or rent.
2. Click Buy Video, Buy Episode, Buy Season, Buy Movie, or Rent Movie.
Purchased videos appear when you select Movies or TV Shows (under Library) or Purchased (under Store) in the source list. Rented videos appear when you select Rented Movies (under Library). Some items have other options, such as TV shows that let you buy a season pass for all episodes.

To download a video podcast: Video podcasts appear alongside other podcasts in the iTunes Store. You can subscribe to them and download them just as you would other podcasts. You don’t need an iTunes Store account to download podcasts. See “Purchasing Songs and Downloading Podcasts Using the iTunes Store” on page 21.

Converting Your Own Videos to Work with iPod nano
You can view other video files on iPod nano, such as videos you create in iMovie on a Mac or videos you download from the Internet. Import the video into iTunes, convert it for use with iPod nano, if necessary, and then add it to iPod nano. iTunes supports all the video formats that QuickTime supports. For more information, choose Help > QuickTime Player Help from the QuickTime Player menu bar.

To import a video into iTunes, drag the video file to your iTunes library. Some videos may be ready for use with iPod nano after you import them to iTunes. If you try to add a video to iPod nano (see “Syncing Videos Automatically” on page 31), and a message says the video can’t play on iPod nano, then you must convert the video for use with iPod nano.

To convert a video for use with iPod nano:
1. Select the video in your iTunes library.
2. Choose Advanced > “Convert Selection to iPod.”

Depending on the length and content of a video, converting it for use with iPod nano can take several minutes to several hours. When you convert a video for use with iPod nano, the original video remains in your iTunes library. For more about converting video for iPod nano, go to www.info.apple.com/kbnum/n302758.

Adding Music to iPod nano
After your music is imported and organized in iTunes, you can easily add it to iPod nano. To set how music is added from your computer to iPod nano, you connect iPod nano to your computer, and then use iTunes preferences to choose iPod nano settings. You can set iTunes to add music to iPod nano in three ways:
- Sync all songs and playlists: When you connect iPod nano, it’s automatically updated to match the songs and other items in your iTunes library. Any other songs on iPod nano are deleted.
- Sync selected playlists: When you connect iPod nano, it’s automatically updated to match the songs in playlists you select in iTunes.
- Manually add music to iPod nano: When you connect iPod nano, you can drag songs and playlists individually to iPod nano, and delete songs and playlists individually from iPod nano. Using this option, you can add songs from more than one computer without erasing songs from iPod nano. When you manage music yourself, you must always eject iPod nano from iTunes before you can disconnect it.

Syncing Music Automatically
By default, iPod nano is set to sync all songs and playlists when you connect it to your computer. This is the simplest way to add music to iPod nano. You just connect iPod nano to your computer, let it add songs, audiobooks, videos, and other items automatically, and then disconnect it and go. If you added any songs to iTunes since the last time you connected iPod nano, they are synced with iPod nano. If you deleted songs from iTunes, they are removed from iPod nano.

To sync music with iPod nano, simply connect iPod nano to your computer.
If iPod nano is set to sync automatically, the update begins.

Important: The first time you connect iPod nano to a computer, a message asks if you want to sync songs automatically. If you accept, all songs, audiobooks, and videos are erased from iPod nano and replaced with songs and other items from that computer. If you don’t accept, you can still add songs to iPod nano manually without erasing any of the songs already on iPod nano.

While music is being synced from your computer to iPod nano, the iTunes status window shows progress, and you see a sync icon next to the iPod nano icon in the source list. When the update is done, a message in iTunes says “iPod update is complete.”

Syncing Music From Selected Playlists to iPod nano
Setting iTunes to sync selected playlists to iPod nano is useful if the music in your iTunes library doesn’t all fit on iPod nano. Only the music in the playlists you select is synced to iPod nano.

To set iTunes to sync music from selected playlists to iPod nano:
1. In iTunes, select iPod nano in the source list and click the Music tab.
2. Select “Sync music” and then choose “Selected playlists.”
3. Select the playlists you want.
4. To include music videos and display album artwork, select those options.
5. Click Apply.

If “Sync only checked songs and videos” is selected in the Summary pane, iTunes syncs only items that are checked.

Managing iPod nano Manually
Setting iTunes to let you manage iPod nano manually gives you the most flexibility for managing music and video on iPod nano. You can add and remove individual songs (including music videos) and videos (including movies and TV shows). Also, you can add music and video from multiple computers to iPod nano without erasing items already on iPod nano. Setting iPod nano to manually manage music and video turns off the automatic sync options in the Music, Movies, and TV Shows panes. You cannot manually manage one and automatically sync another at the same time.

To set iTunes to let you manage music and video on iPod nano manually:
1. In iTunes, select iPod nano in the source list and click the Summary tab.
2. In the Options section, select “Manually manage music and video.”
3. Click Apply.

When you manage songs and video yourself, you must always eject iPod nano from iTunes before you disconnect it.

To add a song, video, or other item to iPod nano:
1. Click Music or another Library item in the iTunes source list.
2. Drag a song or other item to iPod nano in the source list.

To remove a song, video, or other item from iPod nano:
1. In iTunes, select iPod nano in the source list.
2. Select a song or other item on iPod nano and press the Delete or Backspace key on your keyboard. If you manually remove a song or other item from iPod nano, it isn’t deleted from your iTunes library.

To create a new playlist on iPod nano:
1. In iTunes, select iPod nano in the source list, and then click the Add (+) button or choose File > New Playlist.
2. Type a name for the playlist.
3. Click an item, such as Music, in the Library list, and then drag songs or other items to the playlist.

To add songs to or remove songs from a playlist on iPod nano, drag a song to a playlist on iPod nano to add the song, or select a song in a playlist and press the Delete key on your keyboard to delete the song.

If you set iTunes to manage music manually, you can reset it later to sync automatically.
To reset iTunes to sync all music automatically on iPod nano:
1. In iTunes, select iPod nano in the source list and click the Music tab.
2. Select “Sync music” and then choose “All songs and playlists.”
3. Click Apply.

The update begins automatically. If “Only sync checked items” is selected in the Summary pane, iTunes syncs only items that are checked in your Music and other libraries.

Adding Podcasts to iPod nano

The settings for adding podcasts to iPod nano are unrelated to the settings for adding songs. Podcast update settings don’t affect song update settings, and vice versa. You can set iTunes to automatically sync all or selected podcasts, or you can add podcasts to iPod nano manually.

To set iTunes to update the podcasts on iPod nano automatically:
1. In iTunes, select iPod nano in the source list and click the Podcasts tab.
2. In the Podcasts pane, select “Sync … episodes” and choose the number of episodes you want in the pop-up menu.
3. Click “All podcasts” or “Selected podcasts.” If you click “Selected podcasts,” also select the podcasts in the list that you want to sync.
4. Click Apply.

When you set iTunes to sync iPod nano podcasts automatically, iPod nano is updated each time you connect it to your computer.

Note: If “Only sync checked items” is selected in the Summary pane, iTunes syncs only items that are checked in your Podcasts and other libraries.

To manually manage podcasts:
1. In iTunes, select iPod nano in the source list and click the Summary tab.
2. Select “Manually manage music and videos” and click Apply.
3. Select the Podcasts library in the source list and drag the podcasts you want to iPod nano.

Adding Videos to iPod nano

You add movies and TV shows to iPod nano much the same way you add songs. You can set iTunes to sync all movies and TV shows to iPod nano automatically when you connect iPod nano, or you can set iTunes to sync only selected playlists. Alternatively, you can manage movies and TV shows manually. Using this option, you can add videos from more than one computer without erasing videos already on iPod nano.

Note: Music videos are managed with songs, under the Music tab in iTunes. See “Adding Music to iPod nano” on page 27.

Important: You can view a rented movie on only one device at a time. For example, if you rent a movie from the iTunes Store and add it to iPod nano, you can only view it on iPod nano. If you transfer the movie back to iTunes, you can only view it there and not on iPod nano. All standard time limits apply to rented movies added to iPod nano.

Syncing Videos Automatically

By default, iPod nano is set to sync all videos when you connect it to your computer. This is the simplest way to add videos to iPod nano. You just connect iPod nano to your computer, let it add videos and other items automatically, and then disconnect it and go. If you added any videos to iTunes since the last time you connected iPod nano, they are added to iPod nano. If you deleted videos from iTunes, they are removed from iPod nano.

To sync videos to iPod nano:
- Simply connect iPod nano to your computer. If iPod nano is set to sync automatically, the syncing begins.

Important: The first time you connect iPod nano to a different computer and have the automatic sync option set, a message asks if you want to sync songs and videos automatically.
If you accept, all songs, videos, and other items are deleted from iPod nano and replaced with the songs, videos, and other items in the iTunes library on that computer. If you don’t accept, you can still add videos to iPod nano manually without deleting any of the videos already on iPod nano. iTunes includes a feature to sync purchased items from iPod nano to another computer. For more information, see iTunes Help.

While videos are being synced from your computer to iPod nano, the iTunes status window shows progress and the iPod nano icon in the source list flashes red. When the update is done, a message in iTunes says “iPod update is complete.”

Syncing Selected Videos to iPod nano

Setting iTunes to sync selected videos to iPod nano is useful if you have more videos in your iTunes library than will fit on iPod nano. Only the videos you specify are synced with iPod nano. You can sync selected videos or selected playlists that contain videos.

To set iTunes to sync unwatched or selected movies to iPod nano:
1. In iTunes, select iPod nano in the source list and click the Movies tab.
2. Select “Sync movies.”
3. Select the movies or playlists you want.
   - Unwatched movies: Select “… unwatched movies” and choose the number you want from the pop-up menu.
   - Selected movies or playlists: Click “Selected …,” choose “movies” or “playlists” from the pop-up menu, and then select the movies or playlists you want.
4. Click Apply.

If “Only sync checked items” is selected in the Summary pane, iTunes syncs only movies that are checked.

To set iTunes to sync most recent episodes or selected TV shows to iPod nano:
1. In iTunes, select iPod nano in the source list and click the TV Shows tab.
2. Select “Sync … episodes” and choose the number of episodes you want from the pop-up menu.
3. Click “Selected …” and choose “TV shows” or “playlists” from the pop-up menu.
4. Select the TV shows or playlists you want to sync.
5. Click Apply.

If “Only sync checked items” is selected in the Summary pane, iTunes syncs only TV shows that are checked.

Managing Videos Manually

Setting iTunes to let you manage iPod nano manually gives you the most flexibility for managing videos on iPod nano. You can add and remove movies, TV shows, and other items individually. You can also add videos from multiple computers to iPod nano without removing videos already on iPod nano. See “Managing iPod nano Manually” on page 29.

If you set iTunes to manage movies and TV shows manually, you can reset iTunes later to sync them automatically. If you set iTunes to sync automatically after you’ve been manually managing iPod nano, you lose any items on iPod nano that aren’t part of your iTunes library.

To set iTunes to sync all movies automatically on iPod nano:
1. In iTunes, select iPod nano in the source list and click the Movies tab.
2. Select “Sync movies” and then select “All movies.”
3. Click Apply.

If “Only sync checked items” is selected in the Summary pane, iTunes syncs only movies that are checked.

To set iTunes to sync all TV shows automatically on iPod nano:
1. In iTunes, select iPod nano in the source list and click the TV Shows tab.
2. Select “Sync … episodes” and choose “all” from the pop-up menu.
3. Select “All TV shows.”
4. Click Apply.

If “Only sync checked items” is selected in the Summary pane, iTunes syncs only TV shows that are checked.

Adding Video Podcasts to iPod nano

You add video podcasts to iPod nano the same way you add other podcasts (see “Adding Podcasts to iPod nano” on page 30).
If a podcast has a video component, the video plays when you choose it from Podcasts.

Listening to Music

After you set up iPod nano, you can listen to songs, podcasts, audiobooks, radio, and more. Read this chapter to learn about listening on the go.

Playing Music and Other Audio

Use the Click Wheel and Center button to browse for a song or music video.

To browse for and play a song:
- Choose Music, browse for a song or music video, and press the Play/Pause button.

Note: When you browse for music videos in the Music menu, you only hear the music. When you browse for them in the Videos menu, you also see the video.

When a song is playing, the Now Playing screen appears. The following list describes the elements on the Now Playing screen of iPod nano:
- Shuffle icon: Appears if iPod nano is set to shuffle songs or albums.
- Repeat icon: Appears if iPod nano is set to repeat all songs. The Repeat Once icon appears if iPod nano is set to repeat one song.
- Album art: Shows the album art, if it’s available.
- Song information: Displays the song title, artist, and album title.
- Song time progress bar: Shows the elapsed and remaining times for the song that’s playing.
- Scrubber bar: Allows you to quickly navigate to a different part of the track.
- Genius slider: Creates a Genius playlist based on the current song (doesn’t appear if Genius information isn’t available for the current song).
- Shuffle slider: Allows you to shuffle songs or albums directly from the Now Playing screen.
- Song rating: Displays stars if you rate the song.
- Lyrics: Displays the lyrics of the song that’s playing (doesn’t appear if you didn’t enter the song’s lyrics).

Click the Center button to see the scrubber bar, Genius or shuffle slider, song rating, and lyrics.

To change the playback volume:
- When you see the progress bar, use the Click Wheel to change the volume.

To listen to a different part of a song:
1. Press the Center button until you see the scrubber bar.
2. Use the Click Wheel to move the diamond along the scrubber bar.

To return to the previous menu:
- From any screen, press the Menu button to return to the previous menu.

Viewing Lyrics on iPod nano

If you enter lyrics for a song in iTunes (see “Adding Lyrics” on page 24) and then add the song to iPod nano, you can view the lyrics on iPod nano. Lyrics will not appear if you did not enter them.

To view lyrics on iPod nano while a song is playing:
- On the Now Playing screen, press the Center button until you see the lyrics. You can scroll through the lyrics as the song plays.

Rating Songs

You can assign a rating to a song (from 1 to 5 stars) to indicate how much you like it. You can use song ratings to help you create Smart Playlists automatically in iTunes.

To rate a song:
1. Start playing the song.
2. From the Now Playing screen, press the Center button until the five rating bullets appear.
3. Use the Click Wheel to choose a rating (represented by stars).

Note: You cannot assign ratings to video podcasts.

Viewing Album Artwork on iPod nano

By default, iTunes displays album artwork on iPod nano. If the artwork is available, you’ll see it on iPod nano in Cover Flow, in the album list, and when you play music from the album.

To set iTunes to display album artwork on iPod nano:
1. Connect iPod nano to your computer.
2. In iTunes, select iPod nano in the source list and click the Music tab.
3. Select “Display album artwork on your iPod.”

To see album artwork on iPod nano:
- Hold iPod nano horizontally to view Cover Flow, or play a song that has album artwork.

For more information about album artwork, open iTunes and choose Help > iTunes Help.

Accessing Additional Commands

Some iPod nano commands can be accessed directly from the Now Playing screen and some menus.

To access additional commands:
- Press and hold the Center button until a menu appears, select a command, and then press the Center button again.

Browsing Songs by Album or Artist

When you’re listening to a song, you can browse more songs by the same artist or all the songs in the current album.

To browse songs by album:
1. From the Now Playing screen, press and hold the Center button until a menu appears.
2. Choose Browse Album, and then press the Center button.

You see all the songs from the current album that are on iPod nano. You can select a different song or return to the Now Playing screen.

To browse songs by artist:
1. From the Now Playing screen, press and hold the Center button until a menu appears.
2. Choose Browse Artist, and then press the Center button.

You see all the songs by that artist that are on iPod nano. You can select a different song or return to the Now Playing screen.

Creating On-The-Go Playlists on iPod nano

You can create playlists on iPod nano, called On-The-Go Playlists, when iPod nano isn’t connected to your computer.

To create an On-The-Go playlist:
1. Select a song, and then press and hold the Center button until a menu appears.
2. Choose “Add to On-The-Go.”
3. To add more songs, repeat steps 1 and 2.
4. Choose Music > Playlists > On-The-Go to browse and play your list of songs.

You can also add a group of songs. For example, to add an album, highlight the album title, press and hold the Center button until a menu appears, and then choose “Add to On-The-Go.”

To play songs in the On-The-Go playlist:
- Choose Music > Playlists > On-The-Go, and then choose a song.

To remove a song from the On-The-Go playlist:
1. Select a song in the playlist and hold down the Center button until a menu appears.
2. Choose “Remove from On-The-Go,” and then press the Center button.

To clear the entire On-The-Go playlist:
- Choose Music > Playlists > On-The-Go > Clear Playlist, and then click Clear.

To save the On-The-Go playlist on iPod nano:
- Choose Music > Playlists > On-The-Go > Save Playlist.

The first playlist is saved as “New Playlist 1” in the Playlists menu. The On-The-Go playlist is cleared. You can save as many playlists as you like. After you save a playlist, you can no longer remove songs from it.

To copy On-The-Go playlists from iPod nano to your computer:
- If iPod nano is set to update songs automatically (see “Syncing Music Automatically” on page 28) and you create an On-The-Go playlist, the playlist is automatically copied to iTunes when you connect iPod nano. The new On-The-Go playlist appears in the list of playlists in iTunes. You can rename, edit, or delete the new playlist, just as you would any playlist.

Using Genius on iPod nano

When iPod nano isn’t connected to your computer, Genius can still automatically create instant playlists of songs that go great together. You can also create Genius playlists in iTunes and add them to iPod nano. To use Genius, you need to set up Genius in the iTunes Store, and then sync iPod nano to iTunes. To set up Genius in iTunes, see “Using Genius in iTunes” on page 25.
To make a Genius playlist with iPod nano:
1. Select a song, and then press and hold the Center button until a menu appears. You can select a song from a menu or playlist, or you can start from the Now Playing screen.
2. Choose Start Genius, and then press the Center button. The new playlist appears.
   Start Genius doesn’t appear if any of the following apply:
   - You haven’t set up Genius in iTunes and then synced iPod nano to iTunes.
   - Genius doesn’t recognize the song you’ve selected.
   - Genius recognizes the song but there aren’t at least ten similar songs in your library.
3. To keep the playlist, choose Save Playlist. The playlist is saved with the song title and artist of the song you used to make the playlist.
4. To change the playlist to a new one based on the same song, choose Refresh. If you refresh a saved playlist, the new playlist replaces the previous one. You can’t recover the previous playlist.

You can also start Genius from the Now Playing screen by pressing the Center button until you see the Genius slider, and then using the Click Wheel to move the slider to the right. The Genius slider won’t appear if Genius doesn’t recognize the song that’s playing.

Genius playlists saved on iPod nano sync to iTunes when you connect iPod nano to your computer.

To play a Genius playlist:
- Choose Music > Playlists and choose the playlist.

Setting iPod nano to Shuffle Songs

You can set iPod nano to play songs, albums, or your entire library in random order.

To set iPod nano to shuffle and play all your songs:
- Choose Shuffle Songs from the iPod nano main menu. iPod nano begins playing songs from your entire music library in random order, skipping audiobooks and podcasts.

To set iPod nano to always shuffle songs or albums:
1. Choose Settings from the iPod nano main menu.
2. Set Shuffle to either Songs or Albums.

When you set iPod nano to shuffle songs by choosing Settings > Shuffle, iPod nano shuffles songs within the list (for example, album or playlist) you choose to play. When you set iPod nano to shuffle albums, it plays all the songs on an album in order, and then randomly selects another album in the list and plays through it in order.

You can also set iPod nano to shuffle songs directly from the Now Playing screen by clicking the Center button until the shuffle slider appears, and then using the Click Wheel to set iPod nano to shuffle songs or albums.

To shuffle songs while a song is playing or paused:
- Shake iPod nano from side to side. A new song starts to play.

Shaking to shuffle doesn’t change your shuffle settings, whether you set them by choosing Settings > Shuffle or by using the shuffle slider.
To disable shaking:
- Choose Settings > Playback > Shake and select Off. To turn shaking on again, choose Settings > Playback > Shake, and then select On.

Shaking is also disabled when the Hold switch is in the HOLD position, or if the display is off. If iPod nano is off, you can’t turn it on by shaking it.

Setting iPod nano to Repeat Songs

You can set iPod nano to repeat a song over and over, or repeat songs within the list you choose to play.

To set iPod nano to repeat songs:
- Choose Settings from the iPod nano main menu.
  - To repeat all songs in the list, set Repeat to All.
  - To repeat one song over and over, set Repeat to One.

Customizing the Music Menu

You can add items to or remove them from the Music menu, just as you do with the main menu. For example, you can add a Compilations item to the Music menu, so you can easily choose compilations that are put together from various sources.

To add or remove items in the Music menu:
1. Choose Settings > General > Music Menu.
2. Select each item you want to appear in the Music menu. A checkmark indicates which items have been added. To revert to the original Music menu settings, choose Reset Menu.

Setting the Maximum Volume Limit

You can set a limit for the maximum volume on iPod nano and assign a combination to prevent the setting from being changed.

To set the maximum volume limit for iPod nano:
1. Choose Settings > Playback > Volume Limit. The volume control shows the current volume.
2. Use the Click Wheel to select the maximum volume limit.
3. Press the Center button to set the maximum volume limit.

A triangle on the volume bar indicates the maximum volume limit.

To require a combination to change the maximum volume:
1. After setting the maximum volume, use the Click Wheel to select Lock and then press the Center button.
2. In the screen that appears, enter a combination:
   - Use the Click Wheel to select a number for the first position. Press the Center button to confirm your choice and move to the next position.
   - Use the same method to set the remaining numbers of the combination. You can use the Next/Fast-forward button to move to the next position and the Previous/Rewind button to move to the previous position. Press the Center button in the final position to confirm the entire combination.

The volume of songs and other audio may vary depending on how the audio was recorded or encoded. See “Setting Songs to Play at the Same Volume Level” on page 42 for information about how to set a relative volume level in iTunes and on iPod nano. Volume level may also vary if you use different earphones or headphones. With the exception of the iPod Radio Remote, accessories that connect through the iPod Dock Connector don’t support volume limits.

If you set a combination, you must enter it before you can change or remove the maximum volume limit.

To change the maximum volume limit:
1. Choose Settings > Playback > Volume Limit.
2. If you set a combination, enter it by using the Click Wheel to select the numbers and pressing the Center button to confirm them.
3. Use the Click Wheel to change the maximum volume limit.
4. Press the Play/Pause button to accept the change.

To remove the maximum volume limit:
1. If you’re currently listening to iPod nano, press Pause.
2. Choose Settings > Playback > Volume Limit.
3. If you set a combination, enter it by using the Click Wheel to select the numbers and pressing the Center button to confirm them.
4. Use the Click Wheel to move the volume limit to the maximum level on the volume bar. This removes any restriction on volume.
5. Press the Play/Pause button to accept the change.

If you forget the combination, you can restore iPod nano. See “Updating and Restoring iPod Software” on page 69.

Setting Songs to Play at the Same Volume Level

iTunes can automatically adjust the volume of songs, so they play at the same relative volume level. You can set iPod nano to use the iTunes volume settings.

To set iTunes to play songs at the same sound level:
1. In iTunes, choose iTunes > Preferences if you’re using a Mac, or choose Edit > Preferences if you’re using a Windows PC.
2. Click Playback and select Sound Check, and then click OK.
To set iPod nano to use the iTunes volume settings:
- Choose Settings and set Sound Check to On.

If you haven’t activated Sound Check in iTunes, setting it on iPod nano has no effect.

Using the Equalizer

You can use equalizer presets to change the sound on iPod nano to suit a particular music genre or style. For example, to make rock music sound better, set the equalizer to Rock.

To use the equalizer to change the sound on iPod nano:
- Choose Settings > Playback > EQ, and then choose an equalizer preset.

If you assigned an equalizer preset to a song in iTunes and the iPod nano equalizer is set to Off, the song plays using the iTunes setting. See iTunes Help for more information.

Crossfading Between Songs

You can set iPod nano to fade out at the end of each song and fade in at the beginning of the song following it.

To turn on crossfading:
- Choose Settings > Playback > Audio Crossfade and select On.

Note: Songs that are grouped for gapless playback play without gaps even when crossfading is on.

Watching and Listening to Podcasts

Podcasts are downloadable audio or video shows you get at the iTunes Store. You can listen to audio podcasts and watch video podcasts. Podcasts are organized by shows, episodes within shows, and chapters within episodes. If you stop watching or listening to a podcast and go back to it later, the podcast begins playing from where you left off.

To watch or listen to a podcast:
1. From the main menu, choose Podcasts, and then choose a show. Shows appear in reverse chronological order so that you can watch or listen to the most recent one first. You see a blue dot next to shows and episodes you haven’t watched or listened to yet.
2. Choose an episode to play it.

The Now Playing screen displays the show, episode, and date information, along with elapsed and remaining time. Press the Center button to see more information about the podcast. If the podcast includes artwork, you also see a picture. Podcast artwork can change during an episode.

If the podcast has chapters, you can press the Next/Fast-forward or Previous/Rewind button to skip to the next chapter or the beginning of the current chapter in the podcast.

For more information about podcasts, open iTunes and choose Help > iTunes Help. Then search for “podcasts.”

Listening to Audiobooks

You can purchase and download audiobooks from the iTunes Store or from audible.com and listen to them on iPod nano. Use iTunes to add audiobooks to iPod nano the same way you add songs. If you stop listening to an audiobook on iPod nano and go back to it later, the audiobook begins playing from where you left off. iPod nano skips audiobooks when set to shuffle.

If the audiobook you’re listening to has chapters, you can press the Next/Fast-forward or Previous/Rewind button to skip to the next chapter or the beginning of the current chapter. You can play audiobooks at speeds faster or slower than normal.

To set audiobook play speed:
- Choose Settings > Playback > Audiobooks and choose a speed, or press and hold the Center button from the Now Playing window.

Setting the play speed affects only audiobooks purchased from the iTunes Store or audible.com.

Listening to FM Radio

You can listen to radio using the optional iPod Radio Remote accessory for iPod nano. iPod Radio Remote attaches to iPod nano using the Dock connector cable. When you’re using iPod Radio Remote, you see a Radio menu item on the iPod nano main menu.
For more information, see the iPod Radio Remote documentation.

Watching Videos

You can use iPod nano to watch TV shows, movies, video podcasts, and more. Read this chapter to learn about watching videos on iPod nano and on your TV.

You can view and listen to videos on iPod nano. If you have an AV cable from Apple (available separately at www.apple.com/ipodstore), you can watch videos from iPod nano on your TV.

Watching and Listening to Videos on iPod nano

Videos you add to iPod nano appear in the Videos menus. Music videos also appear in Music menus.

To watch a video on iPod nano:
- Choose Videos and browse for a video. Select a video and then press Play/Pause.

To watch the video, hold iPod nano horizontally. If you rotate iPod nano to the left or right, the video adjusts accordingly. When you play a video on iPod nano, you see and hear the video.

To just listen to a music video:
- Choose Music and browse for a music video.

When you play the video, you hear it but don’t see it. When you play a playlist that includes video podcasts, you hear the podcasts but don’t see them.

To watch a video podcast:
- From the main menu, choose Podcasts and then choose a video podcast. See “Watching and Listening to Podcasts” on page 42 for more information.

Watching Videos on a TV Connected to iPod nano

If you have an AV cable from Apple, you can watch videos on a TV connected to your iPod nano. First you set iPod nano to display videos on a TV, then connect iPod nano to your TV, and then play a video.

Note: Use the Apple Component AV Cable, the Apple Composite AV Cable, or the Apple AV Connection Kit. Other similar RCA-type cables might not work. You can purchase the cables at www.apple.com/ipodstore.

To set iPod nano to display videos on a TV:
- Choose Videos > Settings, and then set TV Out to Ask or On.

If you set TV Out to Ask, iPod nano gives you the option of displaying videos on TV or on iPod nano every time you play a video. You can also set video to display full screen or widescreen, and set video to display on PAL or NTSC devices.

To set TV settings:
- Choose Videos > Settings, and then follow the instructions below.
  - Video to display on a TV: Set TV Out to Ask or On.
  - Video to display on PAL or NTSC TVs: Set TV Signal to PAL or NTSC. PAL and NTSC refer to TV broadcast standards. Your TV might use either of these, depending on the region where it was purchased. If you aren’t sure which your TV uses, check the documentation that came with your TV.
  - The format of your external TV: Set TV Screen to Widescreen for 16:9 format or Standard for 4:3 format.
  - Video to fit to your screen: Set “Fit to Screen” to On. If you set “Fit to Screen” to Off, widescreen videos display in letterbox format on iPod nano or a standard (4:3) TV screen.
  - Alternate audio to play: Set Alternate Audio to On.
  - Captions to display: Set Captions to On.
  - Subtitles to display: Set Subtitles to On.

To use the Apple Component AV Cable to connect iPod nano to your TV:
1. Plug the red, green, and blue video connectors into the component video input (Y, Pb, and Pr) ports on your TV. You can also use the Apple Composite AV cable. If you do, plug the yellow video connector into the video input port on your TV. Your TV must have RCA video and audio ports.
2. Plug the white and red audio connectors into the left and right analog audio input ports, respectively, on your TV.
3. Plug the iPod Dock Connector into your iPod nano or Universal Dock.
4. Plug the USB connector into your USB Power Adapter or your computer to keep your iPod nano charged.
5. Turn on iPod nano and your TV or receiver to start playing. Make sure you set TV Out on iPod nano to On.

Note: The ports on your TV or receiver may differ from the ports in the illustration.

To watch a video on your TV:
1. Connect iPod nano to your TV (see above).
2. Turn on your TV and set it to display from the input ports connected to iPod nano. See the documentation that came with your TV for more information.
3. On iPod nano, choose Videos and browse for a video.

Photo Features

You can import digital photos to your computer and add them to iPod nano. You can view your photos on iPod nano or as a slideshow on your TV. Read this chapter to learn about importing and viewing photos.

Importing Photos

You can import digital photos from a digital camera to your computer, and then add them to iPod nano for viewing. You can connect iPod nano to your TV and view photos as a slideshow with music.

Importing Photos from a Camera to Your Computer

You can import photos from a digital camera or a photo card reader.

To import photos to a Mac using iPhoto:
1. Connect the camera or photo card reader to your computer. Open iPhoto (located in the Applications folder) if it doesn’t open automatically.
2. Click Import.

Images from the camera are imported into iPhoto. You can import other digital images into iPhoto, such as images you download from the web. For more information about importing and working with photos and other images, open iPhoto and choose Help > iPhoto Help.

iPhoto is available for purchase as part of the iLife suite of applications at www.apple.com/ilife. iPhoto might already be installed on your Mac, in the Applications folder. If you don’t have iPhoto, you can import photos using Image Capture.

To import photos to a Mac using Image Capture:
1. Connect the camera or photo card reader to your computer.
2. Open Image Capture (located in the Applications folder) if it doesn’t open automatically.
3. To choose specific items to import, click Download Some. Or to download all items, click Download All.

To import photos to a Windows PC:
- Follow the instructions that came with your digital camera or photo application.

Adding Photos From Your Computer to iPod nano

You can add photos to iPod nano from a folder on your hard disk. If you have a Mac and iPhoto 6 or later, you can sync iPhoto albums automatically. If you have a Windows PC and Adobe Photoshop Album 2.0 or later, or Adobe Photoshop Elements 4.0 or later, you can sync photo collections automatically. Adding photos to iPod nano the first time might take some time, depending on how many photos are in your photo library.

To sync photos from a Mac or Windows PC to iPod nano using a photo application:
1. In iTunes, select iPod nano in the source list and click the Photos tab.
2. Select “Sync photos from: …”
   - On a Mac, choose iPhoto from the pop-up menu.
   - On a Windows PC, choose Photoshop Album or Photoshop Elements from the pop-up menu.
   Note: Some versions of Photoshop Album and Photoshop Elements don’t support collections. You can still use them to add all your photos.
3. If you want to add all your photos, select “All photos and albums.” If you want to keep your photos organized by event, select “…events” and choose an option from the pop-up menu. If you want to add photos from only certain albums, select “Selected albums” and select the albums you want.
4. Click Apply.

Each time you connect iPod nano to your computer, photos are synced automatically.

To add photos from a folder on your hard disk to iPod nano:
1. Drag the images you want into a folder on your computer. If you want images to appear in separate photo albums on iPod nano, create folders inside the main image folder, and drag images into the new folders (see the sketch after these steps).
2. In iTunes, select iPod nano in the source list and click the Photos tab.
3. Select “Sync photos from: …”
4. Choose “Choose Folder” from the pop-up menu and select your image folder.
5. Click Apply.
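If you prefer to prepare the album folders with a script rather than by dragging, the layout is just nested folders. A minimal Python sketch; the source and destination paths and the album names are all hypothetical, and only the folder-per-album structure itself reflects the steps above:

```python
import shutil
from pathlib import Path

# Hypothetical paths and album names -- adjust for your own library.
SOURCE = Path.home() / "Pictures" / "Unsorted"
ALBUM_ROOT = Path.home() / "Pictures" / "iPod Albums"

# Each key becomes a folder inside the main image folder, and therefore
# a separate photo album on iPod nano after syncing.
albums = {
    "Vacation": ["beach.jpg", "sunset.jpg"],
    "Family": ["birthday.jpg"],
}

for album, filenames in albums.items():
    album_dir = ALBUM_ROOT / album
    album_dir.mkdir(parents=True, exist_ok=True)  # create the album folder
    for name in filenames:
        source_file = SOURCE / name
        if source_file.exists():
            # copy2 keeps the original timestamps on the copies
            shutil.copy2(source_file, album_dir / name)
```

In iTunes, point “Choose Folder” at the main image folder (“iPod Albums” in this sketch), and each subfolder becomes an album on iPod nano.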
When you add photos to iPod nano, iTunes optimizes the photos for viewing. Full-resolution image files aren’t transferred by default. Adding full-resolution image files is useful, for example, if you want to move them from one computer to another, but isn’t necessary for viewing the images at full quality on iPod nano.

To add full-resolution image files to iPod nano:
1. In iTunes, select iPod nano in the source list and click the Photos tab.
2. Select “Include full-resolution photos.”
3. Click Apply.

iTunes copies the full-resolution versions of the photos to the Photos folder on iPod nano.

To delete photos from iPod nano:
1. In iTunes, select iPod nano in the source list and click the Photos tab.
2. Select “Sync photos from: …”
   - On a Mac, choose iPhoto from the pop-up menu.
   - On a Windows PC, choose Photoshop Album or Photoshop Elements from the pop-up menu.
3. Choose “Selected albums” and deselect the albums you no longer want on iPod nano.
4. Click Apply.

Adding Photos from iPod nano to a Computer

If you add full-resolution photos from your computer to iPod nano using the previous steps, they’re stored in a Photos folder on iPod nano. You can connect iPod nano to a computer and put these photos on the computer. iPod nano must be enabled for disk use (see “Using iPod nano as an External Disk” on page 53).

To add photos from iPod nano to a computer:
1. Connect iPod nano to the computer.
2. Drag image files from the Photos folder or DCIM folder on iPod nano to the desktop or to a photo editing application on the computer.

You can also use a photo editing application, such as iPhoto, to add photos stored in the Photos folder. See the documentation that came with the application for more information.

To delete photos from the Photos folder on iPod nano:
1. Connect iPod nano to the computer.
2. Navigate to the Photos folder on iPod nano and delete the photos you no longer want.

Viewing Photos

You can view photos on iPod nano manually or as a slideshow. If you have an optional AV cable from Apple (for example, Apple Component AV Cable), you can connect iPod nano to your TV and view photos as a slideshow with music.

Viewing Photos on iPod nano

To view photos on iPod nano:
1. On iPod nano, choose Photos > All Photos. Or choose Photos and a photo album to see only the photos in the album. Thumbnail views of the photos might take a moment to appear.
2. Select the photo you want and press the Center button.
3. To view photos, hold iPod nano vertically for portrait format, or horizontally for landscape format.

From any photo-viewing screen, use the Click Wheel to scroll through photos (if you’re viewing a slideshow, the Click Wheel controls music volume only). Press the Next/Fast-forward or Previous/Rewind button to skip to the next or previous screen of photos.
Press and hold the Next/Fast-forward or Previous/Rewind button to skip to the last or first photo in the library or album.

Viewing Slideshows

You can view a slideshow, with music and transitions if you choose, on iPod nano. If you have an optional AV cable from Apple, you can view the slideshow on your TV.

To set slideshow settings:
- Choose Photos > Settings, and then follow these instructions:
  - How long each slide is shown: Choose Time Per Slide and pick a time.
  - The music that plays during slideshows: Choose Music and choose a playlist or Now Playing. If you’re using iPhoto, you can choose From iPhoto to copy the iPhoto music setting. Only the songs that you’ve added to iPod nano play.
  - Slides to repeat: Set Repeat to On.
  - Slides to display in random order: Set Shuffle Photos to On.
  - Slides to display with transitions: Choose Transitions and choose a transition type.
  - Slideshows to display on iPod nano: Set TV Out to Ask or Off.
  - Slideshows to display on TV: Set TV Out to Ask or On. If you set TV Out to Ask, iPod nano gives you the option of showing slideshows on TV or on iPod nano every time you start a slideshow.
  - Slides to show on PAL or NTSC TVs: Set TV Signal to PAL or NTSC. PAL and NTSC refer to TV broadcast standards. Your TV might use either of these, depending on the region where it was purchased. If you aren’t sure which your TV uses, check the documentation that came with your TV.

To view a slideshow on iPod nano:
- Select any photo, album, or roll, and press the Play/Pause button. Or select any full-screen photo and press the Center button. To pause, press the Play/Pause button. To skip to the next or previous photo, press the Next/Fast-forward or Previous/Rewind button.

When you view a slideshow, the Click Wheel controls just the music volume. You can’t use the Click Wheel to scroll through photos during a slideshow.

To connect iPod nano to your TV:
1. Connect the optional Apple Component or Composite AV cable to iPod nano. Use the Apple Component AV Cable, Apple Composite AV Cable, or Apple AV Connection Kit. Other similar RCA-type cables won’t work. You can purchase the cables at www.apple.com/ipodstore.
2. Connect the audio connectors to the ports on your TV (for an illustration, see page 46). Your TV must have RCA video and audio ports.

To view a slideshow on your TV:
1. Connect iPod nano to your TV (see above).
2. Turn on your TV and set it to display from the input ports connected to iPod nano. See the documentation that came with your TV for more information.
3. On iPod nano, select any photo or album and press the Play/Pause button. Or select any full-screen photo and press the Center button. To pause, press the Play/Pause button. To skip to the next or previous photo, press the Next/Fast-forward or Previous/Rewind button.

If you selected a playlist in Photos > Settings > Music, the playlist plays automatically when you start the slideshow. The photos display on your TV and advance automatically according to settings in the Slideshow > Settings menu.

More Settings, Extra Features, and Accessories

iPod nano can do a lot more than play songs. And you can do a lot more with it than listen to music. Read this chapter to find out more about the extra features of iPod nano, including how to use it as an external disk, alarm, or sleep timer; play games; show the time of day in other parts of the world; display notes; and sync contacts, calendars, and to-do lists. Learn about how to use iPod nano as a stopwatch and to lock the screen, and about the accessories available for iPod nano.

Using iPod nano as an External Disk

You can use iPod nano as an external disk to store data files. You won’t see songs you add using iTunes in the Mac Finder or in Windows Explorer. And if you copy music files to iPod nano in the Mac Finder or Windows Explorer, you won’t be able to play them on iPod nano.

To enable iPod nano as an external disk:
1. In iTunes, select iPod nano in the source list and click the Summary tab.
2. In the Options section, select “Enable disk use.”
3. Click Apply.

When you use iPod nano as an external disk, the iPod nano disk icon appears on the desktop on a Mac, or as the next available drive letter in Windows Explorer on a Windows PC.

Note: Clicking Summary and selecting “Manually manage music and videos” in the Options section also enables iPod nano to be used as an external disk.

Drag files to and from iPod nano to copy them. If you use iPod nano primarily as a disk, you might want to keep iTunes from opening automatically when you connect iPod nano to your computer.

To prevent iTunes from opening automatically when you connect iPod nano to your computer:
1. In iTunes, select iPod nano in the source list and click the Summary tab.
2. In the Options section, deselect “Open iTunes when this iPod is connected.”
3. Click Apply.
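Because an iPod nano with disk use enabled mounts like any other removable drive, ordinary file operations work on it. A minimal Python sketch, assuming a hypothetical Mac mount point of /Volumes/IPOD (on a Windows PC the iPod gets a drive letter instead); remember that music copied this way can’t be played on iPod nano:

```python
import shutil
from pathlib import Path

# Hypothetical mount point: on a Mac the iPod appears in /Volumes under
# the name you gave it; adjust this path for your own setup.
IPOD = Path("/Volumes/IPOD")

def copy_to_ipod(source: Path) -> None:
    """Copy a data file or folder onto an iPod nano with disk use enabled."""
    if not IPOD.exists():
        raise SystemExit("iPod nano isn't mounted; connect it and enable disk use.")
    destination = IPOD / source.name
    if source.is_dir():
        shutil.copytree(source, destination)  # copy a whole folder
    else:
        shutil.copy2(source, destination)     # copy one file, keeping timestamps

copy_to_ipod(Path.home() / "Documents" / "backup")  # hypothetical data folder
```

As the chapter notes, always eject iPod nano (in iTunes, the Finder, or Windows Explorer) before disconnecting it.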
Using Extra Settings

You can set the date and time, clocks in different time zones, and alarm and sleep features on iPod nano. You can use iPod nano as a stopwatch or to play games, and you can lock the iPod nano screen.

Setting and Viewing the Date and Time

The date and time are set automatically from your computer’s clock when you connect iPod nano, but you can change the settings.

To set date and time options:
1. Choose Settings > Date & Time.
2. Choose one or more of the following options:
   - Set the date: Choose Date. Use the Click Wheel to change the selected value. Press the Center button to move to the next value.
   - Set the time: Choose Time. Use the Click Wheel to change the selected value. Press the Center button to move to the next value.
   - Specify the time zone: Choose Time Zone and use the Click Wheel to select a city in another time zone.
   - Display the time in 24-hour format: Choose 24 Hour Clock and press the Center button to turn the 24-hour format on or off.
   - Display the time in the title bar: Choose Time in Title and press the Center button to turn the option on or off.

Adding Clocks for Other Time Zones

To add clocks for other time zones:
1. Choose Extras > Clocks.
2. On the Clocks screen, click the Center button and choose Add.
3. Choose a region and then choose a city.

The clocks you add appear in a list. The last clock you added appears last.

To delete a clock:
1. Choose Extras > Clocks.
2. Choose the clock.
3. Press the Center button.
4. Choose Delete.

Setting Alarms

You can set an alarm for any clock on iPod nano.

To use iPod nano as an alarm clock:
1. Choose Extras > Alarms.
2. Choose Create Alarm and set one or more of the following options:
   - Turn the alarm on: Choose Alarm and choose On.
   - Set the date: Choose Date. Use the Click Wheel to change the selected value. Press the Center button to move to the next value.
   - Set the time: Choose Time. Use the Click Wheel to change the selected value. Press the Center button to move to the next value.
   - Set a repeat option: Choose Repeat and choose an option (for example, “weekdays”).
   - Choose a sound: Choose Alerts or a playlist. If you choose Alerts, select Beep to hear the alarm through the internal speaker. If you choose a playlist, you’ll need to connect iPod nano to speakers or headphones to hear the alarm.
   - Name the alarm: Choose Label and choose an option (for example, “Wake up”).

If you sync calendar events with alarms to iPod nano, the events appear in the Alarms menu.

To delete an alarm:
1. Choose Extras > Alarms.
2. Choose the alarm and then choose Delete.

Setting the Sleep Timer

You can set iPod nano to turn off automatically after playing music or other content for a specific period of time.

To set the sleep timer:
1. Choose Extras > Alarms.
2. Choose Sleep Timer and choose how long you want iPod nano to play.

Using the Stopwatch

You can use the stopwatch as you exercise to track your overall time and, if you’re running on a track, your lap times. You can play music while you use the stopwatch.

To use the stopwatch:
1. Choose Extras > Stopwatch.
2. Press Play/Pause to start the timer.
3. Press the Center button to record lap times. The two most recent lap times appear above the overall time. All lap times are recorded in the log.
4. Press Play/Pause to stop the overall timer. To start the timer again, press Play/Pause. To start a new stopwatch session, press the Menu button and then choose New Timer.

To review or delete a logged stopwatch session:
1. Choose Extras > Stopwatch. The current log and a list of saved sessions appear.
2. Choose a log to view session information. iPod nano stores stopwatch sessions with dates, times, and lap statistics. You see the date and time the session started; the total time of the session; the shortest, longest, and average lap times; and the last several lap times.
3. Press the Center button and choose Delete Log to delete the chosen log, or Clear Logs to delete all current logs.

Playing Games

iPod nano comes with three games: Klondike, Maze, and Vortex.

To play a game:
- Choose Extras > Games and choose a game.

When you play a game created for previous versions of iPod nano, you’re first shown how iPod nano controls work in the game you’re about to play.

You can purchase additional games from the iTunes Store (in some countries) to play on iPod nano. After purchasing games in iTunes, you can add them to iPod nano by syncing them automatically or by managing them manually. Many games can be played in portrait or landscape mode.

To buy a game:
1. In iTunes, select iTunes Store in the source list.
2. Choose iPod Games from the iTunes Store list.
3. Select the game you want and click Buy Game.

To sync games automatically to iPod nano:
1. In iTunes, select iPod nano in the source list and click the Games tab.
2. Select “Sync games.”
3. Click “All games” or “Selected games.” If you click “Selected games,” also select the games you want to sync.
4. Click Apply.

Locking the iPod nano Screen

You can set a combination to prevent iPod nano from being used by someone without your permission. If you lock iPod nano while it isn’t connected to a computer, you must then enter a combination to unlock and use it. This combination is different from the Hold switch, which just prevents iPod nano buttons from being pressed accidentally. The combination prevents another person from using iPod nano.

To set a combination for iPod nano:
1. Choose Extras > Screen Lock.
2. On the New Combination screen, enter a combination:
   - Use the Click Wheel to select a number for the first position. Press the Center button to confirm your choice and move to the next position.
   - Use the same method to set the remaining numbers of the combination. You can use the Next/Fast-forward button to move to the next position and the Previous/Rewind button to move to the previous position. Press the Center button in the final position.
3. On the Confirm Combination screen, enter the combination to confirm it, or press Menu to exit without locking the screen.

When you finish, you return to the Screen Lock screen, where you can lock the screen or reset the combination. Press the Menu button to exit without locking the screen.

To lock the iPod nano screen:
- Choose Extras > Screen Lock > Lock.

If you just finished setting your combination, Lock will already be selected on the screen. Just press the Center button to lock iPod. When the screen is locked, you see a picture of a lock.

You might want to add the Screen Lock menu item to the main menu so that you can quickly lock the iPod nano screen. See “Adding or Removing Items on the Main Menu” on page 11.

When you see the lock on the screen, you can unlock the iPod nano screen in two ways:
- Press the Center button to enter the combination on iPod nano. Use the Click Wheel to select the numbers and press the Center button to confirm them. If you enter the wrong combination, the lock remains. Try again.
- Connect iPod nano to the primary computer you use it with, and iPod nano automatically unlocks.

Note: If you try these methods and you still can’t unlock iPod nano, you can restore iPod nano. See “Updating and Restoring iPod Software” on page 69.

To change a combination you’ve already set:
1. Choose Extras > Screen Lock > Reset.
2. On the Enter Combination screen, enter the current combination.
3. On the New Combination screen, enter and confirm a new combination.

If you can’t remember the current combination, the only way to clear it and enter a new one is to restore the iPod nano software. See “Updating and Restoring iPod Software” on page 69.

Syncing Contacts, Calendars, and To-Do Lists

iPod nano can store contacts, calendar events, and to-do lists for viewing on the go. You can use iTunes to sync the contact and calendar information on iPod nano with Address Book and iCal.

If you’re using Windows XP, and you use Windows Address Book or Microsoft Outlook 2003 or later to store your contact information, you can use iTunes to sync the address book information on iPod nano. If you use Microsoft Outlook 2003 or later to keep a calendar, you can also sync calendar information.

To sync contacts or calendar information using Mac OS X v10.4.11 or later:
1. Connect iPod nano to your computer.
2. In iTunes, select iPod nano in the source list and click the Contacts tab.
3. Do one of the following:
   - To sync contacts, in the Contacts section, select “Sync Address Book contacts,” and select an option:
     - To sync all contacts automatically, select “All contacts.”
     - To sync selected groups of contacts automatically, select “Selected groups” and select the groups you want to sync.
     - To copy contacts’ photos to iPod nano, when available, select “Include contacts’ photos.”
     When you click Apply, iTunes updates iPod nano with the Address Book contact information you specified.
   - To sync calendars, in the Calendars section, select “Sync iCal calendars,” and choose an option:
     - To sync all calendars automatically, choose “All calendars.”
     - To sync selected calendars automatically, choose “Selected calendars” and select the calendars you want to sync.
     When you click Apply, iTunes updates iPod nano with the calendar information you specified.

To sync contacts or calendars using Windows Address Book or Microsoft Outlook for Windows:
1. Connect iPod nano to your computer.
2. In iTunes, select iPod nano in the source list and click the Contacts tab.
3. Do one of the following:
   - To sync contacts, in the Contacts section, select “Sync contacts from” and choose Windows Address Book or Microsoft Outlook from the pop-up menu. Then select which contact information you want to sync.
   - To sync calendars from Microsoft Outlook, in the Calendars section, select “Sync calendars from Microsoft Outlook.”
4. Click Apply.

You can also add contact and calendar information to iPod nano manually. iPod nano must be enabled as an external disk (see “Using iPod nano as an External Disk” on page 53).

To add contact information manually:
1. Connect iPod nano and open your favorite email or contacts application. You can add contacts using Palm Desktop, Microsoft Outlook, Microsoft Entourage, and Eudora, among others.
2. Drag contacts from the application’s address book to the Contacts folder on iPod nano.

In some cases, you might need to export contacts and then drag the exported file or files to the Contacts folder. See the documentation for your email or contacts application.

To add appointments and other calendar events manually:
1. Export calendar events from any calendar application that uses the standard iCal format (filenames end in .ics) or vCal format (filenames end in .vcs).
2. Drag the files to the Calendars folder on iPod nano.

To add to-do lists to iPod nano manually, save them in a calendar file with an .ics or .vcs extension. (A sketch of these file formats follows the Notes section below.)

To view contacts on iPod nano:
- Choose Extras > Contacts.

To sort contacts by first or last name:
- Choose Settings > General > Sort Contacts, and then select First or Last.

To view calendar events:
- Choose Extras > Calendars > All Calendars, and then choose a calendar.

To view to-do lists:
- Choose Extras > Calendars > To Do’s.

Storing and Reading Notes

You can store and read text notes on iPod nano if it’s enabled as an external disk (see “Using iPod nano as an External Disk” on page 53).
1. Save a document in any word-processing application as a text (.txt) file.
2. Place the file in the Notes folder on iPod nano.

To view notes:
- Choose Extras > Notes.
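Since manually added contacts, calendar events, and notes are all plain files in the Contacts, Calendars, and Notes folders described above, you can also generate them with a script. A minimal Python sketch using the standard vCard and iCalendar formats; the mount point, names, dates, and file names are all made up for illustration:

```python
from pathlib import Path

# Hypothetical mount point; the Contacts, Calendars, and Notes folder names
# come from the sections above. Every name, date, and detail below is made up.
IPOD = Path("/Volumes/IPOD")

def write_crlf(path: Path, lines: list) -> None:
    """Write lines with CRLF endings, as the vCard/iCalendar specs require."""
    with open(path, "w", newline="") as f:
        f.write("\r\n".join(lines) + "\r\n")

# A minimal vCard 3.0 contact
write_crlf(IPOD / "Contacts" / "john.vcf", [
    "BEGIN:VCARD",
    "VERSION:3.0",
    "N:Appleseed;John;;;",
    "FN:John Appleseed",
    "TEL;TYPE=CELL:+1-408-555-0100",
    "END:VCARD",
])

# A minimal iCalendar (.ics) file with one event
write_crlf(IPOD / "Calendars" / "dentist.ics", [
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//Example//iPod manual sketch//EN",
    "BEGIN:VEVENT",
    "UID:dentist-20090315@example.com",
    "DTSTAMP:20090301T090000Z",
    "DTSTART:20090315T140000Z",
    "DTEND:20090315T150000Z",
    "SUMMARY:Dentist appointment",
    "END:VEVENT",
    "END:VCALENDAR",
])

# Notes are ordinary text files; no special format is needed.
(IPOD / "Notes" / "packing.txt").write_text("Passport, charger, earphones\n")
```

After ejecting and disconnecting iPod nano, the new entries appear under Extras > Contacts, Extras > Calendars, and Extras > Notes.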
Recording Voice Memos

You can record voice memos using an optional iPod nano–compatible microphone (available for purchase at www.apple.com/ipodstore). You can set chapter marks while you record, store voice memos on iPod nano and sync them with your computer, and add labels to voice memos. Voice memos cannot be longer than two hours. If you record for more than two hours, iPod nano automatically starts a new voice memo to continue your recording.

To record a voice memo:
1. Connect a microphone to the Dock connector port on iPod nano. The Voice Memos item appears in the main menu.
2. To begin recording, choose Voice Memo > Start Recording.
3. Hold the microphone a few inches from your mouth and speak. To pause recording, press the Menu button. Choose Resume to continue recording.
4. When you finish, press Menu and then choose “Stop and Save.” Your saved recording is listed by date and time.

To set chapter marks:
- While recording, press the Center button whenever you want to set a chapter mark.

During playback, you can go directly to the next chapter by pressing the Next/Fast-forward button. Press the Previous/Rewind button once to go to the start of the current chapter, and twice to go to the start of the previous chapter.

To label a recording:
1. Choose Voice Memos > Recordings, and then choose a saved recording.
2. Choose Label, and then choose a label for the recording. You can choose Podcast, Interview, Lecture, Idea, Meeting, or Memo. To remove a label from a recording, choose None.

To play a recording:
- In the main menu, choose Voice Memos and select the recording. You won’t see a Voice Memos menu item if you’ve never connected a microphone to iPod nano.

To sync voice memos with your computer:
- Voice memos are saved in a Recordings folder on iPod in the WAV file format. If you enable iPod nano for disk use, you can drag voice memos from the folder to copy them. If iPod nano is set to sync songs automatically (see “Syncing Music Automatically” on page 28), voice memos on iPod nano are automatically synced as an album in iTunes (and removed from iPod nano) when you connect iPod nano. The new Voice Memos playlist appears in the source list.
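Because voice memos are plain WAV files in the Recordings folder, a short script can inventory them before they’re synced and removed. A minimal sketch using Python’s standard wave module, assuming disk use is enabled and a hypothetical mount point:

```python
import wave
from pathlib import Path

# Hypothetical mount point; the Recordings folder and the WAV format are
# described above. Disk use must be enabled on iPod nano.
RECORDINGS = Path("/Volumes/IPOD") / "Recordings"

for memo in sorted(RECORDINGS.glob("*.wav")):
    with wave.open(str(memo), "rb") as w:
        seconds = w.getnframes() / w.getframerate()  # sample count / sample rate
    print(f"{memo.name}: {seconds / 60:.1f} minutes")
```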
Using Spoken Menus for Accessibility

iPod nano features optional spoken menus, enabling visually impaired users to browse through their iPod nano content more easily. iTunes generates spoken menus using voices that are included in your computer’s operating system or that you may have purchased from third parties. Not all voices from computer operating systems or third parties are compatible with spoken menus, and not all languages are supported.

You must enable spoken menus in iTunes before you can activate them on iPod nano.

To enable spoken menus in iTunes:
1. Connect iPod nano to your computer.
2. In iTunes, select iPod nano in the source list and click the Summary tab.
3. Select “Enable spoken menus for accessibility.” In Mac OS X, if you have VoiceOver turned on in Universal Access preferences, this option is selected by default.
4. Click Apply.

After iPod nano syncs with iTunes, spoken menus are enabled and activated on your iPod nano. iPod nano takes longer to sync if spoken menus are being enabled.

To deactivate spoken menus on iPod nano:
- Choose Settings > Spoken Menus and then choose Off. To turn spoken menus on again, choose Settings > General > Spoken Menus, and then choose On.

Note: The Spoken Menus option appears in the Settings menu on iPod nano only if spoken menus have been enabled in iTunes.

Learning About iPod nano Accessories

iPod nano comes with some accessories, and many other accessories are available. To purchase iPod nano accessories, go to www.apple.com/ipodstore.

Available accessories include:
- Apple Headphones with Remote and Mic
- Apple In-Ear Headphones with Remote and Mic
- Nike + iPod Sport Kit
- Apple Universal Dock
- Apple Component AV Cable
- Apple Composite AV Cable
- Apple AV Connection Kit
- Apple USB Power Adapter
- Apple USB/FireWire Adapter
- iPod In-Ear Headphones
- iPod Radio Remote
- World Travel Adapter Kit
- iPod Socks
- iPod Earphones
- Third-party accessories, such as speakers, headsets, cases, car stereo adapters, power adapters, and more

To use the earphones included with iPod nano:
- Plug the earphones into the Headphones port. Then place the earbuds in your ears as shown. The earphones cord is adjustable.

WARNING: Permanent hearing loss may occur if earbuds or headphones are used at high volume. You can adapt over time to a higher volume of sound that may sound normal but can be damaging to your hearing. If you experience ringing in your ears or muffled speech, stop listening and have your hearing checked. The louder the volume, the less time is required before your hearing could be affected. Hearing experts suggest that to protect your hearing:
- Limit the amount of time you use earbuds or headphones at high volume.
- Avoid turning up the volume to block out noisy surroundings.
- Turn the volume down if you can’t hear people speaking near you.
For information about setting a maximum volume limit on iPod, see “Setting the Maximum Volume Limit” on page 40.

Tips and Troubleshooting

Most problems with iPod nano can be solved quickly by following the advice in this chapter.

The 5 Rs: Reset, Retry, Restart, Reinstall, Restore

Remember these five basic suggestions if you have a problem with iPod nano. Try these steps one at a time until your issue is resolved. If one of the following doesn’t help, read on for solutions to specific problems.
- Reset iPod nano. See “General Suggestions,” below.
- Retry with a different USB port if you cannot see iPod nano in iTunes.
- Restart your computer, and make sure you have the latest software updates installed.
- Reinstall iTunes software from the latest version on the web.
- Restore iPod nano. See “Updating and Restoring iPod Software” on page 69.

General Suggestions

Most problems with iPod nano can be solved by resetting it. First, make sure iPod nano is charged.

To reset iPod nano:
1. Toggle the Hold switch on and off (slide it to HOLD and then back again).
2. Press and hold the Menu and Center buttons for at least 6 seconds, until the Apple logo appears.

If iPod nano won’t turn on or respond
- Make sure the Hold switch isn’t set to HOLD.
- The iPod nano battery might need to be recharged. Connect iPod nano to your computer or to an Apple USB Power Adapter and let the battery recharge. Look for the lightning bolt icon on the iPod nano screen to verify that iPod nano is receiving a charge. To charge the battery, connect iPod nano to a USB 2.0 port on your computer.
- Try the 5 Rs, one by one, until iPod nano responds.

If you want to disconnect iPod nano, but you see the message “Connected” or “Sync in Progress”
- If iPod nano is syncing music, wait for it to complete.
- Select iPod nano in the iTunes source list and click the Eject button.
- If iPod nano disappears from the list of devices in the iTunes source list, but you still see the “Connected” or “Sync in Progress” message on the iPod nano screen, disconnect iPod nano.
• If iPod nano doesn't disappear from the list of devices in the iTunes source list, drag the iPod nano icon from the desktop to the Trash (if you're using a Mac) or, if you're using a Windows PC, eject the device in My Computer or click the Safely Remove Hardware icon in the system tray and select iPod nano. If you still see the "Connected" or "Sync in Progress" message, restart your computer and eject iPod nano again.

If iPod nano isn't playing music
• Make sure the Hold switch isn't set to HOLD.
• Make sure the headphone connector is pushed in all the way.
• Make sure the volume is adjusted properly. A maximum volume limit might have been set. You can change or remove it by using Settings > Volume Limit. See "Setting the Maximum Volume Limit" on page 40.
• iPod nano might be paused. Try pressing the Play/Pause button.
• Make sure you're using iTunes 8.0 or later (go to www.apple.com/ipod/start). Songs purchased from the iTunes Store using earlier versions of iTunes won't play on iPod nano until you upgrade iTunes.
• If you're using the iPod Universal Dock, make sure iPod nano is seated firmly in the Dock and that all cables are connected properly.

If you connect iPod nano to your computer and nothing happens
• Make sure you have installed the latest iTunes software from www.apple.com/ipod/start.
• Try connecting to a different USB port on your computer. Note: A USB 2.0 port is recommended for connecting iPod nano. USB 1.1 is significantly slower than USB 2.0. If you have a Windows PC that doesn't have a USB 2.0 port, in some cases you can purchase and install a USB 2.0 card. For more information, go to www.apple.com/ipod.
• iPod nano might need to be reset (see page 64).
• If you're connecting iPod nano to a portable or laptop computer using the iPod Dock Connector to USB 2.0 Cable, connect the computer to a power outlet before connecting iPod nano.
• Make sure you have the required computer and software. See "If you want to double-check the system requirements" on page 68.
• Check the cable connections. Unplug the cable at both ends and make sure no foreign objects are in the USB ports. Then plug the cable back in securely. Make sure the connectors on the cables are oriented correctly. They can be inserted only one way.
• Try restarting your computer.
• If none of the previous suggestions solves your problem, you might need to restore the iPod nano software. See "Updating and Restoring iPod Software" on page 69.

If iPod nano displays a "Connect to Power" message
This message may appear if iPod nano is exceptionally low on power and the battery needs to be charged before iPod nano can communicate with your computer. To charge the battery, connect iPod nano to a USB 2.0 port on your computer. Leave iPod nano connected to your computer until the message disappears and iPod nano appears in iTunes or the Finder. Depending on how depleted the battery is, you may need to charge iPod nano for up to 30 minutes before it will start up. To charge iPod nano more quickly, use the optional Apple USB Power Adapter.

If iPod nano displays a "Use iTunes to restore" message
• Make sure you have the latest version of iTunes on your computer (download it from www.apple.com/ipod/start).
• Connect iPod nano to your computer. When iTunes opens, follow the onscreen prompts to restore iPod nano.
• If restoring iPod nano doesn't solve the problem, iPod nano may need to be repaired.
You can arrange for service at the iPod Service & Support website: www.apple.com/support/ipod

If songs or data sync more slowly over USB 2.0
• If you sync a large number of songs or a large amount of data using USB 2.0 and the iPod nano battery is low, iPod nano syncs the information at a reduced speed in order to conserve battery power.
• If you want to sync at higher speeds, you can stop syncing and keep iPod nano connected so that it can recharge, or connect it to the optional iPod USB 2.0 Power Adapter. Let iPod nano charge for about an hour, and then resume syncing your music or data.

If you can't add a song or other item to iPod nano
The song may have been encoded in a format that iPod nano doesn't support. The following audio file formats are supported by iPod nano. These include formats for audiobooks and podcasting:
• AAC (M4A, M4B, M4P, up to 320 Kbps)
• Apple Lossless (a high-quality compressed format)
• MP3 (up to 320 Kbps)
• MP3 Variable Bit Rate (VBR)
• WAV
• AA (audible.com spoken word, formats 2, 3, and 4)
• AIFF
A song encoded using Apple Lossless format has full CD-quality sound, but takes up only about half as much space as a song encoded using AIFF or WAV format. The same song encoded in AAC or MP3 format takes up even less space. When you import music from a CD using iTunes, it's converted to AAC format by default.
Using iTunes for Windows, you can convert nonprotected WMA files to AAC or MP3 format. This can be useful if you have a library of music encoded in WMA format. iPod nano doesn't support WMA, MPEG Layer 1, or MPEG Layer 2 audio files, or audible.com format 1.
If you have a song in iTunes that isn't supported by iPod nano, you can convert it to a format iPod nano supports. For more information, see iTunes Help.

If iPod nano displays a "Connect to iTunes to activate Genius" message
You haven't activated Genius in iTunes, or you haven't synced iPod nano since you activated Genius in iTunes. See "Using Genius in iTunes" on page 25.

If iPod nano displays a "Genius is not available for the selected song" message
Genius is activated but doesn't recognize the song you selected to start a Genius playlist. New songs are added to the iTunes Store Genius database all the time, so try again soon.

If you accidentally set iPod nano to use a language you don't understand
You can reset the language:
1 Press and hold Menu until the main menu appears.
2 Choose the sixth menu item (Settings).
3 Choose the last menu item (Reset Settings).
4 Choose the first item (Reset) and select a language.
Other iPod nano settings, such as song repeat, are also reset.
Note: If you added or removed items from the iPod nano main menu (see "Adding or Removing Items on the Main Menu" on page 11), the Settings menu item may be in a different place. If you can't find the Reset Settings menu item, you can restore iPod nano to its original state and choose a language. See "Updating and Restoring iPod Software" on page 69.

If you can't see videos or photos on your TV
• You must use RCA-type cables made specifically for iPod nano, such as the Apple Component or Apple Composite AV cables, to connect iPod nano to your TV. Other similar RCA-type cables won't work.
• Make sure your TV is set to display images from the correct input source (see the documentation that came with your TV for more information).
• Make sure all cables are connected correctly (see "Watching Videos on a TV Connected to iPod nano" on page 45).
• Make sure the yellow end of the Apple Composite AV Cable is connected to the video port on your TV.
• If you're trying to view a video, choose Videos > Settings and set TV Out to On, and then try again. If you're trying to view a slideshow, choose Photos > Slideshow Settings and set TV Out to On, and then try again.
• If that doesn't work, choose Videos > Settings (for video) or Photos > Settings (for a slideshow) and set TV Signal to PAL or NTSC, depending on which type of TV you have. Try both settings.

If you want to double-check the system requirements
To use iPod nano, you must have:
• One of the following computer configurations:
  • A Mac with a USB 2.0 port
  • A Windows PC with a USB 2.0 port or a USB 2.0 card installed
• One of the following operating systems:
  • Mac OS X v10.4.11 or later
  • Windows Vista
  • Windows XP Home or Professional with Service Pack 3 or later
• iTunes 8.0 or later (iTunes can be downloaded from www.apple.com/ipod/start)
If your Windows PC doesn't have a USB 2.0 port, you can purchase and install a USB 2.0 card. For more information about cables and compatible USB cards, go to www.apple.com/ipod.
On the Mac, iPhoto 6 or later is recommended for adding photos and albums to iPod nano. This software is optional. iPhoto might already be installed on your Mac; check the Applications folder.
On a Windows PC, iPod nano can sync photo collections automatically from Adobe Photoshop Album 2.0 or later and Adobe Photoshop Elements 4.0 or later, available at www.adobe.com. This software is optional.
On both Mac and Windows PC, iPod nano can sync digital photos from folders on your computer's hard disk.

If you want to use iPod nano with a Mac and a Windows PC
If you're using iPod nano with a Mac and you want to use it with a Windows PC, you must restore the iPod software for use with the PC (see "Updating and Restoring iPod Software" on page 69). Restoring the iPod software erases all data from iPod nano, including all songs. You cannot switch from using iPod nano with a Mac to using it with a Windows PC without erasing all data on iPod nano.

If you lock the iPod nano screen and can't unlock it
Normally, if you can connect iPod nano to the computer it's authorized to work with, iPod nano automatically unlocks. If the computer authorized to work with iPod nano is unavailable, you can connect iPod nano to another computer and use iTunes to restore the iPod software. See the next section for more information.
If you want to change the screen lock combination and you can't remember the current combination, you must restore the iPod software and then set a new combination.

Updating and Restoring iPod Software
You can use iTunes to update or restore the iPod software. It's recommended that you update iPod nano to use the latest software. You can also restore the software, which puts iPod nano back to its original state.
• If you choose to update, the software is updated, but your settings and songs aren't affected.
• If you choose to restore, all data is erased from iPod nano, including songs, videos, files, contacts, photos, calendar information, and any other data. All iPod nano settings are restored to their original state.

To update or restore iPod nano:
1 Make sure you have an Internet connection and have installed the latest version of iTunes from www.apple.com/ipod/start.
2 Connect iPod nano to your computer.
3 In iTunes, select iPod nano in the source list and click the Summary tab.
The Version section tells you whether iPod nano is up to date or needs a newer version of the software.
4 Click Update to install the latest version of the software.
5 If necessary, click Restore to restore iPod nano to its original settings (this erases all data from iPod nano). Follow the onscreen instructions to complete the restore process.

8 Safety and Cleaning

Read the following important safety and handling information for Apple iPods. Keep the iPod Safety Guide and the features guide for your iPod handy for future reference.

Read all safety information below and operating instructions before using iPod to avoid injury.
WARNING: Failure to follow these safety instructions could result in fire, electric shock, or other injury or damage.

Important Safety Information
Handling iPod Do not bend, drop, crush, puncture, incinerate, or open iPod.

Avoiding water and wet locations Do not use iPod in rain, or near washbasins or other wet locations. Take care not to spill any food or liquid into iPod. In case iPod gets wet, unplug all cables, turn iPod off, and slide the Hold switch (if available) to HOLD before cleaning, and allow it to dry thoroughly before turning it on again.

Repairing iPod Never attempt to repair iPod yourself. iPod does not contain any user-serviceable parts. For service information, choose iPod Help from the Help menu in iTunes or go to www.apple.com/support/ipod. The rechargeable battery in iPod should be replaced only by an Apple Authorized Service Provider. For more information about batteries, go to www.apple.com/batteries.

Using the Apple USB Power Adapter (available separately) If you use the Apple USB Power Adapter (sold separately at www.apple.com/ipodstore) to charge iPod, make sure that the power adapter is fully assembled before you plug it into a power outlet. Then insert the Apple USB Power Adapter firmly into the power outlet. Do not connect or disconnect the Apple USB Power Adapter with wet hands. Do not use any power adapter other than an Apple iPod power adapter to charge your iPod.
The iPod USB Power Adapter may become warm during normal use. Always allow adequate ventilation around the iPod USB Power Adapter and use care when handling. Unplug the iPod USB Power Adapter if any of the following conditions exist:
• The power cord or plug has become frayed or damaged.
• The adapter is exposed to rain, liquids, or excessive moisture.
• The adapter case has become damaged.
• You suspect the adapter needs service or repair.
• You want to clean the adapter.

Avoiding hearing damage Permanent hearing loss may occur if earbuds or headphones are used at high volume. Set the volume to a safe level. You can adapt over time to a higher volume of sound that may sound normal but can be damaging to your hearing. If you experience ringing in your ears or muffled speech, stop listening and have your hearing checked. The louder the volume, the less time is required before your hearing could be affected. Hearing experts suggest that to protect your hearing:
• Limit the amount of time you use earbuds or headphones at high volume.
• Avoid turning up the volume to block out noisy surroundings.
• Turn the volume down if you can't hear people speaking near you.
For information about how to set a maximum volume limit on iPod, see "Setting the Maximum Volume Limit" on page 40.

Using headphones safely Use of headphones while operating a vehicle is not recommended and is illegal in some areas. Be careful and attentive while driving.
Stop using iPod if you find it disruptive or distracting while operating any type of vehicle or performing any other activity that requires your full attention.

Avoiding seizures, blackouts, and eye strain If you have experienced seizures or blackouts, or if you have a family history of such occurrences, please consult a physician before playing video games on iPod (if available). Discontinue use and consult a physician if you experience convulsion, eye or muscle twitching, loss of awareness, involuntary movements, or disorientation. When watching videos or playing games on iPod (if available), avoid prolonged use and take breaks to prevent eye strain.

Important Handling Information
NOTICE: Failure to follow these handling instructions could result in damage to iPod or other property.

Carrying iPod iPod contains sensitive components, including, in some cases, a hard drive. Do not bend, drop, or crush iPod. If you are concerned about scratching iPod, you can use one of the many cases sold separately.

Using connectors and ports Never force a connector into a port. Check for obstructions on the port. If the connector and port don't join with reasonable ease, they probably don't match. Make sure that the connector matches the port and that you have positioned the connector correctly in relation to the port.

Keeping iPod within acceptable temperatures Operate iPod in a place where the temperature is always between 0° and 35° C (32° to 95° F). iPod play time might temporarily shorten in low-temperature conditions. Store iPod in a place where the temperature is always between -20° and 45° C (-4° to 113° F). Don't leave iPod in your car, because temperatures in parked cars can exceed this range.
When you're using iPod or charging the battery, it is normal for iPod to get warm. The exterior of iPod functions as a cooling surface that transfers heat from inside the unit to the cooler air outside.

Keeping the outside of iPod clean To clean iPod, unplug all cables, turn iPod off, and slide the Hold switch (if available) to HOLD. Then use a soft, slightly damp, lint-free cloth. Avoid getting moisture in openings. Don't use window cleaners, household cleaners, aerosol sprays, solvents, alcohol, ammonia, or abrasives to clean iPod.

Disposing of iPod properly For information about the proper disposal of iPod, including other important regulatory compliance information, see "Regulatory Compliance Information" on page 74.

9 Learning More, Service, and Support

You can find more information about using iPod nano in onscreen help and on the web. The following list describes where to get more iPod-related software and service information.

• Service and support, discussions, tutorials, and Apple software downloads: Go to www.apple.com/support/ipodnano
• Using iTunes: Open iTunes and choose Help > iTunes Help. For an online iTunes tutorial (available in some areas only), go to www.apple.com/support/itunes
• Using iPhoto (on Mac OS X): Open iPhoto and choose Help > iPhoto Help.
• Using iCal (on Mac OS X): Open iCal and choose Help > iCal Help.
• The latest information on iPod nano: Go to www.apple.com/ipodnano
• Registering iPod nano: Install iTunes on your computer and connect iPod nano.
• Finding the iPod nano serial number: Look at the back of iPod nano, or choose Settings > About and press the Center button. In iTunes (with iPod nano connected to your computer), select iPod nano in the source list and click the Settings tab.
• Obtaining warranty service: First follow the advice in this booklet, the onscreen help, and online resources. Then go to www.apple.com/support/ipodnano/service

Regulatory Compliance Information

FCC Compliance Statement
This device complies with part 15 of the FCC rules. Operation is subject to the following two conditions: (1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation. See instructions if interference to radio or TV reception is suspected.

Radio and TV Interference
This computer equipment generates, uses, and can radiate radio-frequency energy. If it is not installed and used properly, that is, in strict accordance with Apple's instructions, it may cause interference with radio and TV reception.
This equipment has been tested and found to comply with the limits for a Class B digital device in accordance with the specifications in Part 15 of FCC rules. These specifications are designed to provide reasonable protection against such interference in a residential installation. However, there is no guarantee that interference will not occur in a particular installation.
You can determine whether your computer system is causing interference by turning it off. If the interference stops, it was probably caused by the computer or one of the peripheral devices.
If your computer system does cause interference to radio or TV reception, try to correct the interference by using one or more of the following measures:
• Turn the TV or radio antenna until the interference stops.
• Move the computer to one side or the other of the TV or radio.
• Move the computer farther away from the TV or radio.
• Plug the computer in to an outlet that is on a different circuit from the TV or radio. (That is, make certain the computer and the TV or radio are on circuits controlled by different circuit breakers or fuses.)
If necessary, consult an Apple Authorized Service Provider or Apple. See the service and support information that came with your Apple product. Or, consult an experienced radio/TV technician for additional suggestions.
Important: Changes or modifications to this product not authorized by Apple Inc. could void the EMC compliance and negate your authority to operate the product.
This product was tested for EMC compliance under conditions that included the use of Apple peripheral devices and Apple shielded cables and connectors between system components. It is important that you use Apple peripheral devices and shielded cables and connectors between system components to reduce the possibility of causing interference to radios, TV sets, and other electronic devices. You can obtain Apple peripheral devices and the proper shielded cables and connectors through an Apple Authorized Reseller. For non-Apple peripheral devices, contact the manufacturer or dealer for assistance.

Responsible party (contact for FCC matters only):
Apple Inc. Corporate Compliance
1 Infinite Loop, M/S 26-A
Cupertino, CA 95014-2084

Industry Canada Statement
This Class B device meets all requirements of the Canadian interference-causing equipment regulations.
Cet appareil numérique de la classe B respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.

VCCI Class B Statement

Korea Class B Statement

Russia

European Community

Battery Replacement
The rechargeable battery in iPod nano should be replaced only by an authorized service provider.
For battery replacement services, go to: www.apple.com/support/ipod/service/battery

Disposal and Recycling Information
Your iPod must be disposed of properly according to local laws and regulations. Because this product contains a battery, the product must be disposed of separately from household waste. When your iPod reaches its end of life, contact Apple or your local authorities to learn about recycling options.
For information about Apple's recycling program, go to: www.apple.com/environment/recycling

Deutschland: This device contains batteries. Please do not dispose of it with household waste. Dispose of this device at the end of its life cycle in accordance with the applicable legal regulations.

Nederlands: Used batteries can be handed in at a chemical-waste collection point or deposited in a special battery container for small chemical waste (kca).

China:

Taiwan:

European Union—Disposal Information: This symbol means that according to local laws and regulations your product should be disposed of separately from household waste. When this product reaches its end of life, take it to a collection point designated by local authorities. Some collection points accept products for free. The separate collection and recycling of your product at the time of disposal will help conserve natural resources and ensure that it is recycled in a manner that protects human health and the environment.

Apple and the Environment
At Apple, we recognize our responsibility to minimize the environmental impacts of our operations and products. For more information, go to: www.apple.com/environment

© 2008 Apple Inc. All rights reserved.
Apple, the Apple logo, FireWire, iCal, iLife, iPhoto, iPod, iPod Socks, iTunes, Mac, Macintosh, and Mac OS are trademarks of Apple Inc., registered in the U.S. and other countries. Finder, the FireWire logo, and Shuffle are trademarks of Apple Inc. iTunes Store is a service mark of Apple Inc., registered in the U.S. and other countries. NIKE is a trademark of NIKE, Inc. and its affiliates and is used under license. Other company and product names mentioned herein may be trademarks of their respective companies.
Mention of third-party products is for informational purposes only and constitutes neither an endorsement nor a recommendation. Apple assumes no responsibility with regard to the performance or use of these products. All understandings, agreements, or warranties, if any, take place directly between the vendors and the prospective users. Every effort has been made to ensure that the information in this manual is accurate. Apple is not responsible for printing or clerical errors.
The product described in this manual incorporates copyright protection technology that is protected by method claims of certain U.S. patents and other intellectual property rights owned by Macrovision Corporation and other rights owners. Use of this copyright protection technology must be authorized by Macrovision Corporation and is intended for home and other limited viewing uses only unless otherwise authorized by Macrovision Corporation. Reverse engineering or disassembly is prohibited. Apparatus Claims of U.S. Patent Nos. 4,631,603, 4,577,216, 4,819,098, and 4,907,093 licensed for limited viewing uses only.
019-1343/2008-09
Safari Web Content Guide

Contents

Developing Web Content for Safari
  At a Glance
  Making It Work
  Enhancing the User Experience
  How to Use This Document
  Prerequisites
  See Also
Introduction
  Who Should Read This Document
  Organization of This Document
  See Also
Creating Compatible Web Content
  Use Standards
  Follow Good Web Design Practices
  Use Security Features
  Avoid Framesets
  Use Columns and Blocks
  Know iOS Resource Limits
  Checking the Size of Webpages
  Use the Select Element
  Use Supported JavaScript Windows and Dialogs
  Use Supported Content Types and iOS Features
  Use Canvas for Vector Graphics and Animation
  Use the HTML5 Audio and Video Elements
  Use Supported iOS Rich Media MIME Types
  Don't Use Unsupported iOS Technologies
Optimizing Web Content
  Using Conditional CSS
  Using the Safari User Agent String
Configuring the Viewport
  Layout and Metrics on iPhone and iPod touch
  What Is the Viewport?
  Safari on the Desktop Viewport
  Safari on iOS Viewport
  Examples of Viewports on iOS
  Default Viewport Settings
  Using the Viewport Meta Tag
  Changing the Viewport Width and Height
  How Safari Infers the Width, Height, and Initial Scale
  Viewport Settings for Web Applications
Customizing Style Sheets
  Leveraging CSS3 Properties
  Adjusting the Text Size
  Highlighting Elements
Designing Forms
  Laying Out Forms
  Customizing Form Controls
  Configuring Automatic Correction and Capitalization
Handling Events
  One-Finger Events
  Two-Finger Events
  Form and Document Events
  Making Elements Clickable
  Handling Multi-Touch Events
  Handling Gesture Events
  Preventing Default Behavior
  Handling Orientation Events
  Supported Events
Promoting Apps with Smart App Banners
  Implementing a Smart App Banner on Your Website
  Providing Navigational Context to Your App
Configuring Web Applications
  Specifying a Webpage Icon for Web Clip
  Specifying a Startup Image
  Hiding Safari User Interface Components
  Changing the Status Bar Appearance
Creating Video
  Sizing Movies Appropriately
  Don't Let the Bit Rate Stall Your Movie
  Using Supported Movie Standards
  Encoding Video for Wi-Fi, 3G, and EDGE
  Creating a Reference Movie
  Creating a Poster Image for Movies
  Configuring Your Server
Storing Data on the Client
  Creating a Manifest File
  Declaring a Manifest File
  Updating the Cache
  Handling Cache Events
Getting Geographic Locations
  Geographic Location Classes
  Getting the Current Location
  Tracking the Current Location
  Handling Location Errors
Debugging
  Enabling Web Inspector on iOS
  Inspecting From Your Mac
  Inspecting Content in a Web View
  Using JavaScript to Interact with Your Device
HTML Basics
  What Is HTML?
  Basic HTML Structure
  Creating Effective HTML Content
  Using Other HTML Features
CSS Basics
  What Is CSS?
  Inline CSS
  Head-Embedded CSS
  External CSS
Document Revision History
Developing Web Content for Safari

Safari is a full-featured Web browser for Mac OS, Windows, and iOS. You don't need to add any Safari-specific tweaks to make your website work with Safari or to make it work on iOS-based devices. If you design your website using W3C standards for HTML, CSS, and JavaScript, and don't rely on third-party plug-ins, users can view and interact with your website using Safari on all supported platforms.

Making websites work with Safari is just a first step, however. It should be your goal to optimize websites to create the best experience for all users, including people using Safari on handheld devices with touch screens. Use CSS to change the layout of your website in portrait or landscape modes, for example (see the orientation sketch at the end of this section); add touch and gesture support; animate changes in CSS properties for Safari users; and so on.

At a Glance
There are three main areas to focus on when creating web content for Safari:
• Make sure your website is compatible with Safari.
• Enhance the user experience in Safari, particularly on mobile devices.
• Make the best use of dynamically changing network bandwidth when delivering audio and video.

Making It Work
Safari has an array of built-in tools for quickly spotting incompatibilities and debugging problems. If you have a website up and running, and are getting complaints that the site doesn't work with Safari, it is usually because of one of the following problems:
• The site uses Internet Explorer extensions that other browsers don't support.
• The site includes media compressed in a format that Safari doesn't support.
• The site relies on plug-ins to handle audio, video, or animation.
Use the Error Console to immediately identify and locate any unsupported HTML, CSS, or JavaScript, making it easy to correct.
There are Safari-compatible media formats and embedding techniques for every job. Safari supports audio media in AAC, MP3, AIFF, and WAVE formats on all platforms. Safari supports video media encoded using H.264 compression, commonly used in MPEG-4 format, on all platforms. Handheld devices support a somewhat more limited set of MPEG-4 profiles than desktop devices.
Safari on the desktop supports plug-ins. There are Safari-compatible versions of all common plug-ins, including QuickTime, Flash, and Silverlight. Safari on iOS does not support plug-ins. To make your website accessible using handheld devices, do not rely on plug-ins to display content. Use the HTML5
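
The contents above list the corresponding section as "Use the HTML5 Audio and Video Elements." As a hedged sketch of that plug-in-free approach, and not markup taken from this guide, the fragment below embeds an H.264 movie in an MPEG-4 container using the HTML5 video element; the file names movie.m4v and poster.jpg are hypothetical placeholders.

<!-- A minimal sketch, assuming a browser with HTML5 video support.
     "movie.m4v" and "poster.jpg" are hypothetical placeholder files;
     H.264 video in MPEG-4 is the format the text above says Safari
     supports on all platforms. -->
<video src="movie.m4v" poster="poster.jpg" controls width="480" height="270">
  <!-- Fallback content, shown only by browsers without video support. -->
  <p>This browser cannot play HTML5 video.</p>
</video>

Because the video element is plain HTML, the same markup works in Safari on the desktop and in Safari on iOS, where plug-ins are not available.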
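
Earlier, this section suggests using CSS to change a page's layout in portrait or landscape modes. The sketch below illustrates one way to do that with CSS media queries alone; it is an illustration under stated assumptions rather than code from this guide, it assumes a browser that supports the CSS orientation media feature, and the class names are hypothetical.

<!-- A minimal sketch, assuming support for the CSS "orientation"
     media feature. The "main" and "sidebar" classes are hypothetical
     examples, not selectors from this guide. -->
<style>
  @media all and (orientation: portrait) {
    /* Stack content in a single column while the device is upright. */
    .sidebar { display: none; }
  }
  @media all and (orientation: landscape) {
    /* Reveal a secondary column when there is horizontal room. */
    .sidebar { display: block; float: right; width: 30%; }
  }
</style>
<div class="main">Main content</div>
<div class="sidebar">Secondary content</div>

The browser reevaluates the media queries when the user rotates the device, so a simple layout change like this requires no JavaScript.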