Sep 12 22:57:37.988241 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 20:38:35 -00 2025 Sep 12 22:57:37.988262 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40 Sep 12 22:57:37.988270 kernel: BIOS-provided physical RAM map: Sep 12 22:57:37.988275 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 12 22:57:37.988279 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Sep 12 22:57:37.988283 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable Sep 12 22:57:37.988289 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved Sep 12 22:57:37.988294 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable Sep 12 22:57:37.988299 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved Sep 12 22:57:37.988303 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Sep 12 22:57:37.988308 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Sep 12 22:57:37.988312 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Sep 12 22:57:37.988316 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Sep 12 22:57:37.988321 kernel: printk: legacy bootconsole [earlyser0] enabled Sep 12 22:57:37.988327 kernel: NX (Execute Disable) protection: active Sep 12 22:57:37.988332 kernel: APIC: Static calls initialized Sep 12 22:57:37.988337 kernel: efi: EFI v2.7 by Microsoft Sep 12 22:57:37.988341 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018 Sep 12 22:57:37.988413 kernel: random: crng init done Sep 12 22:57:37.988423 kernel: secureboot: Secure boot disabled Sep 12 22:57:37.988430 kernel: SMBIOS 3.1.0 present. 
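The BIOS-e820 entries above are the firmware's memory map; the kernel can only place pages in the ranges marked usable. As a rough check of what those ranges add up to, here is a minimal sketch with the start/end addresses copied from the log (ranges are inclusive):

```python
# Sum the "usable" BIOS-e820 ranges quoted in the log above.
# Each range is inclusive, so a region spans (end - start + 1) bytes.
usable = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x00000000044fdfff),
    (0x00000000048fe000, 0x000000003ff1efff),
    (0x000000003ffff000, 0x000000003fffffff),
    (0x0000000100000000, 0x00000002bfffffff),
]

total = sum(end - start + 1 for start, end in usable)
print(f"usable RAM: {total / 2**30:.2f} GiB")
```

The total comes to just under 8 GiB, consistent with the Memory: summary the kernel prints later in this boot.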
Sep 12 22:57:37.988438 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025 Sep 12 22:57:37.988445 kernel: DMI: Memory slots populated: 2/2 Sep 12 22:57:37.988455 kernel: Hypervisor detected: Microsoft Hyper-V Sep 12 22:57:37.988462 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2 Sep 12 22:57:37.988469 kernel: Hyper-V: Nested features: 0x3e0101 Sep 12 22:57:37.988476 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Sep 12 22:57:37.988483 kernel: Hyper-V: Using hypercall for remote TLB flush Sep 12 22:57:37.988491 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 12 22:57:37.988498 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Sep 12 22:57:37.988505 kernel: tsc: Detected 2299.999 MHz processor Sep 12 22:57:37.988512 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 12 22:57:37.988521 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 12 22:57:37.988528 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000 Sep 12 22:57:37.988538 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 12 22:57:37.988546 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 12 22:57:37.988554 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved Sep 12 22:57:37.988562 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000 Sep 12 22:57:37.988569 kernel: Using GB pages for direct mapping Sep 12 22:57:37.988577 kernel: ACPI: Early table checksum verification disabled Sep 12 22:57:37.988588 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Sep 12 22:57:37.988598 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 22:57:37.988606 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 22:57:37.988615 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628) Sep 12 22:57:37.988623 kernel: ACPI: FACS 0x000000003FFFE000 000040 Sep 12 22:57:37.988631 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 22:57:37.988639 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 22:57:37.988649 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 22:57:37.988657 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000) Sep 12 22:57:37.988665 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000) Sep 12 22:57:37.988673 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Sep 12 22:57:37.988682 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Sep 12 22:57:37.988690 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279] Sep 12 22:57:37.988699 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Sep 12 22:57:37.988707 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Sep 12 22:57:37.988715 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Sep 12 22:57:37.988725 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Sep 12 22:57:37.988734 kernel: ACPI: Reserving APIC table memory at [mem 
0x3ffd5000-0x3ffd5051] Sep 12 22:57:37.988742 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f] Sep 12 22:57:37.988750 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Sep 12 22:57:37.988758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Sep 12 22:57:37.988766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] Sep 12 22:57:37.988775 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff] Sep 12 22:57:37.988783 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff] Sep 12 22:57:37.988792 kernel: Zone ranges: Sep 12 22:57:37.988802 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 12 22:57:37.988810 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 12 22:57:37.988818 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Sep 12 22:57:37.988825 kernel: Device empty Sep 12 22:57:37.988833 kernel: Movable zone start for each node Sep 12 22:57:37.988841 kernel: Early memory node ranges Sep 12 22:57:37.988849 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Sep 12 22:57:37.988857 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff] Sep 12 22:57:37.988865 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff] Sep 12 22:57:37.988875 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Sep 12 22:57:37.988882 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Sep 12 22:57:37.988917 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Sep 12 22:57:37.988926 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 12 22:57:37.988934 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 12 22:57:37.988942 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 12 22:57:37.988950 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Sep 12 22:57:37.988959 kernel: ACPI: PM-Timer IO Port: 0x408 Sep 12 22:57:37.988968 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 12 22:57:37.988978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 12 22:57:37.988987 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 12 22:57:37.988995 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Sep 12 22:57:37.989004 kernel: TSC deadline timer available Sep 12 22:57:37.989012 kernel: CPU topo: Max. logical packages: 1 Sep 12 22:57:37.989021 kernel: CPU topo: Max. logical dies: 1 Sep 12 22:57:37.989029 kernel: CPU topo: Max. dies per package: 1 Sep 12 22:57:37.989037 kernel: CPU topo: Max. threads per core: 2 Sep 12 22:57:37.989046 kernel: CPU topo: Num. cores per package: 1 Sep 12 22:57:37.989056 kernel: CPU topo: Num. 
threads per package: 2 Sep 12 22:57:37.989065 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Sep 12 22:57:37.989073 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Sep 12 22:57:37.989082 kernel: Booting paravirtualized kernel on Hyper-V Sep 12 22:57:37.989091 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 12 22:57:37.989100 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Sep 12 22:57:37.989108 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Sep 12 22:57:37.989117 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Sep 12 22:57:37.989126 kernel: pcpu-alloc: [0] 0 1 Sep 12 22:57:37.989136 kernel: Hyper-V: PV spinlocks enabled Sep 12 22:57:37.989145 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 12 22:57:37.989155 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40 Sep 12 22:57:37.989164 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 12 22:57:37.989173 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Sep 12 22:57:37.989182 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 12 22:57:37.989190 kernel: Fallback order for Node 0: 0 Sep 12 22:57:37.989199 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807 Sep 12 22:57:37.989210 kernel: Policy zone: Normal Sep 12 22:57:37.989218 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 12 22:57:37.989227 kernel: software IO TLB: area num 2. Sep 12 22:57:37.989235 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 12 22:57:37.989244 kernel: ftrace: allocating 40125 entries in 157 pages Sep 12 22:57:37.989253 kernel: ftrace: allocated 157 pages with 5 groups Sep 12 22:57:37.989261 kernel: Dynamic Preempt: voluntary Sep 12 22:57:37.989269 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 12 22:57:37.989280 kernel: rcu: RCU event tracing is enabled. Sep 12 22:57:37.989297 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 12 22:57:37.989306 kernel: Trampoline variant of Tasks RCU enabled. Sep 12 22:57:37.989316 kernel: Rude variant of Tasks RCU enabled. Sep 12 22:57:37.989326 kernel: Tracing variant of Tasks RCU enabled. Sep 12 22:57:37.989336 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 12 22:57:37.989360 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 12 22:57:37.989370 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 22:57:37.989380 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Sep 12 22:57:37.989389 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
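The Kernel command line entry above is the same argument string that dracut, Ignition, and Flatcar's first-boot logic re-parse later in this log (flatcar.oem.id=azure, verity.usrhash=..., root=LABEL=ROOT). A minimal sketch of splitting such a line into flags and key=value parameters; the cmdline string below is a shortened copy of the one in the log:

```python
import shlex

cmdline = (
    "rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected "
    "flatcar.oem.id=azure flatcar.autologin"
)

params = {}
for token in shlex.split(cmdline):      # shlex also copes with quoted values
    key, sep, value = token.partition("=")
    params[key] = value if sep else True  # bare flags become True

print(params["flatcar.oem.id"])    # -> azure
print(params["flatcar.autologin"]) # -> True
```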
Sep 12 22:57:37.989399 kernel: Using NULL legacy PIC Sep 12 22:57:37.989410 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Sep 12 22:57:37.989419 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 12 22:57:37.989428 kernel: Console: colour dummy device 80x25 Sep 12 22:57:37.989438 kernel: printk: legacy console [tty1] enabled Sep 12 22:57:37.989447 kernel: printk: legacy console [ttyS0] enabled Sep 12 22:57:37.989457 kernel: printk: legacy bootconsole [earlyser0] disabled Sep 12 22:57:37.989466 kernel: ACPI: Core revision 20240827 Sep 12 22:57:37.989477 kernel: Failed to register legacy timer interrupt Sep 12 22:57:37.989486 kernel: APIC: Switch to symmetric I/O mode setup Sep 12 22:57:37.989495 kernel: x2apic enabled Sep 12 22:57:37.989504 kernel: APIC: Switched APIC routing to: physical x2apic Sep 12 22:57:37.989513 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0 Sep 12 22:57:37.989522 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 12 22:57:37.989532 kernel: Hyper-V: Disabling IBT because of Hyper-V bug Sep 12 22:57:37.989541 kernel: Hyper-V: Using IPI hypercalls Sep 12 22:57:37.989550 kernel: APIC: send_IPI() replaced with hv_send_ipi() Sep 12 22:57:37.989561 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Sep 12 22:57:37.989570 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Sep 12 22:57:37.989579 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Sep 12 22:57:37.989588 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Sep 12 22:57:37.989597 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Sep 12 22:57:37.989606 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns Sep 12 22:57:37.989615 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999) Sep 12 22:57:37.989624 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 12 22:57:37.989633 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 12 22:57:37.989644 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 12 22:57:37.989653 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 12 22:57:37.989662 kernel: Spectre V2 : Mitigation: Retpolines Sep 12 22:57:37.989671 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 12 22:57:37.989680 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
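The Spectre and RETBleed lines above are also queryable after boot through sysfs. A short sketch that reads the same mitigation state back from /sys/devices/system/cpu/vulnerabilities/, a standard interface on recent kernels:

```python
from pathlib import Path

# Each file in this directory holds one line, e.g. "Mitigation: Retpolines"
# or "Vulnerable", matching the dmesg lines above.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name:20} {entry.read_text().strip()}")
```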
Sep 12 22:57:37.989688 kernel: RETBleed: Vulnerable Sep 12 22:57:37.989697 kernel: Speculative Store Bypass: Vulnerable Sep 12 22:57:37.989706 kernel: active return thunk: its_return_thunk Sep 12 22:57:37.989714 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 12 22:57:37.989723 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 12 22:57:37.989732 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 12 22:57:37.989742 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 12 22:57:37.989751 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Sep 12 22:57:37.989760 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Sep 12 22:57:37.989769 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Sep 12 22:57:37.989777 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers' Sep 12 22:57:37.989786 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config' Sep 12 22:57:37.989795 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data' Sep 12 22:57:37.989803 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 12 22:57:37.989812 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Sep 12 22:57:37.989820 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Sep 12 22:57:37.989831 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Sep 12 22:57:37.989839 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16 Sep 12 22:57:37.989848 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64 Sep 12 22:57:37.989857 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192 Sep 12 22:57:37.989865 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format. Sep 12 22:57:37.989874 kernel: Freeing SMP alternatives memory: 32K Sep 12 22:57:37.989883 kernel: pid_max: default: 32768 minimum: 301 Sep 12 22:57:37.989891 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 12 22:57:37.989900 kernel: landlock: Up and running. Sep 12 22:57:37.989909 kernel: SELinux: Initializing. Sep 12 22:57:37.989917 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 22:57:37.989926 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 12 22:57:37.989937 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2) Sep 12 22:57:37.989945 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only. Sep 12 22:57:37.989954 kernel: signal: max sigframe size: 11952 Sep 12 22:57:37.989963 kernel: rcu: Hierarchical SRCU implementation. Sep 12 22:57:37.989972 kernel: rcu: Max phase no-delay instances is 400. Sep 12 22:57:37.989981 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 12 22:57:37.989990 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 12 22:57:37.989999 kernel: smp: Bringing up secondary CPUs ... Sep 12 22:57:37.990008 kernel: smpboot: x86: Booting SMP configuration: Sep 12 22:57:37.990018 kernel: .... 
node #0, CPUs: #1 Sep 12 22:57:37.990026 kernel: smp: Brought up 1 node, 2 CPUs Sep 12 22:57:37.990035 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Sep 12 22:57:37.990045 kernel: Memory: 8077032K/8383228K available (14336K kernel code, 2432K rwdata, 9992K rodata, 54084K init, 2880K bss, 299988K reserved, 0K cma-reserved) Sep 12 22:57:37.990054 kernel: devtmpfs: initialized Sep 12 22:57:37.990064 kernel: x86/mm: Memory block size: 128MB Sep 12 22:57:37.990072 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Sep 12 22:57:37.990082 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 12 22:57:37.990091 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 12 22:57:37.990101 kernel: pinctrl core: initialized pinctrl subsystem Sep 12 22:57:37.990110 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 12 22:57:37.990119 kernel: audit: initializing netlink subsys (disabled) Sep 12 22:57:37.990129 kernel: audit: type=2000 audit(1757717854.029:1): state=initialized audit_enabled=0 res=1 Sep 12 22:57:37.990137 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 12 22:57:37.990146 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 12 22:57:37.990155 kernel: cpuidle: using governor menu Sep 12 22:57:37.990164 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 12 22:57:37.990173 kernel: dca service started, version 1.12.1 Sep 12 22:57:37.990183 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff] Sep 12 22:57:37.990192 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Sep 12 22:57:37.990201 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
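The Memory: summary above gives available versus total kilobytes at this point in boot; the gap is roughly the kernel image plus the regions still reserved (the init sections in that breakdown are freed again later). A quick check of the figures from that line:

```python
# Figures copied from the "Memory: 8077032K/8383228K available" line above.
available_k = 8_077_032
total_k     = 8_383_228

reserved_k = total_k - available_k   # held back at this point in boot
print(f"held back at boot: {reserved_k} KiB "
      f"({100 * reserved_k / total_k:.1f}% of {total_k / 1024 / 1024:.1f} GiB)")
```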
Sep 12 22:57:37.990211 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 12 22:57:37.990220 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 12 22:57:37.990230 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 12 22:57:37.990239 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 12 22:57:37.990248 kernel: ACPI: Added _OSI(Module Device) Sep 12 22:57:37.990257 kernel: ACPI: Added _OSI(Processor Device) Sep 12 22:57:37.990268 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 12 22:57:37.990277 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 12 22:57:37.990286 kernel: ACPI: Interpreter enabled Sep 12 22:57:37.990296 kernel: ACPI: PM: (supports S0 S5) Sep 12 22:57:37.990305 kernel: ACPI: Using IOAPIC for interrupt routing Sep 12 22:57:37.990314 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 12 22:57:37.990324 kernel: PCI: Ignoring E820 reservations for host bridge windows Sep 12 22:57:37.990333 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Sep 12 22:57:37.990342 kernel: iommu: Default domain type: Translated Sep 12 22:57:37.990365 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 12 22:57:37.993387 kernel: efivars: Registered efivars operations Sep 12 22:57:37.993400 kernel: PCI: Using ACPI for IRQ routing Sep 12 22:57:37.993409 kernel: PCI: System does not support PCI Sep 12 22:57:37.993418 kernel: vgaarb: loaded Sep 12 22:57:37.993427 kernel: clocksource: Switched to clocksource tsc-early Sep 12 22:57:37.993436 kernel: VFS: Disk quotas dquot_6.6.0 Sep 12 22:57:37.993445 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 12 22:57:37.993453 kernel: pnp: PnP ACPI init Sep 12 22:57:37.993462 kernel: pnp: PnP ACPI: found 3 devices Sep 12 22:57:37.993468 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 12 22:57:37.993474 kernel: NET: Registered PF_INET protocol family Sep 12 22:57:37.993480 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 12 22:57:37.993485 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Sep 12 22:57:37.993491 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 12 22:57:37.993497 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 12 22:57:37.993502 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 12 22:57:37.993508 kernel: TCP: Hash tables configured (established 65536 bind 65536) Sep 12 22:57:37.993520 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 12 22:57:37.993529 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Sep 12 22:57:37.993536 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 12 22:57:37.993541 kernel: NET: Registered PF_XDP protocol family Sep 12 22:57:37.993547 kernel: PCI: CLS 0 bytes, default 64 Sep 12 22:57:37.993553 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 12 22:57:37.993560 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB) Sep 12 22:57:37.993571 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer Sep 12 22:57:37.993582 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules Sep 12 22:57:37.993592 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, 
max_idle_ns: 440795318347 ns Sep 12 22:57:37.993598 kernel: clocksource: Switched to clocksource tsc Sep 12 22:57:37.993603 kernel: Initialise system trusted keyrings Sep 12 22:57:37.993609 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Sep 12 22:57:37.993617 kernel: Key type asymmetric registered Sep 12 22:57:37.993628 kernel: Asymmetric key parser 'x509' registered Sep 12 22:57:37.993638 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 12 22:57:37.993646 kernel: io scheduler mq-deadline registered Sep 12 22:57:37.993652 kernel: io scheduler kyber registered Sep 12 22:57:37.993659 kernel: io scheduler bfq registered Sep 12 22:57:37.993665 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 22:57:37.993671 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 22:57:37.993681 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 22:57:37.993690 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 12 22:57:37.993699 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 22:57:37.993707 kernel: i8042: PNP: No PS/2 controller found. Sep 12 22:57:37.993833 kernel: rtc_cmos 00:02: registered as rtc0 Sep 12 22:57:37.993919 kernel: rtc_cmos 00:02: setting system clock to 2025-09-12T22:57:37 UTC (1757717857) Sep 12 22:57:37.993983 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Sep 12 22:57:37.993990 kernel: intel_pstate: Intel P-state driver initializing Sep 12 22:57:37.993997 kernel: efifb: probing for efifb Sep 12 22:57:37.994008 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 12 22:57:37.994017 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 12 22:57:37.994024 kernel: efifb: scrolling: redraw Sep 12 22:57:37.994029 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 12 22:57:37.994036 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 22:57:37.994042 kernel: fb0: EFI VGA frame buffer device Sep 12 22:57:37.994052 kernel: pstore: Using crash dump compression: deflate Sep 12 22:57:37.994061 kernel: pstore: Registered efi_pstore as persistent store backend Sep 12 22:57:37.994070 kernel: NET: Registered PF_INET6 protocol family Sep 12 22:57:37.994077 kernel: Segment Routing with IPv6 Sep 12 22:57:37.994082 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 22:57:37.994087 kernel: NET: Registered PF_PACKET protocol family Sep 12 22:57:37.994093 kernel: Key type dns_resolver registered Sep 12 22:57:37.994100 kernel: IPI shorthand broadcast: enabled Sep 12 22:57:37.994111 kernel: sched_clock: Marking stable (3150048490, 100010305)->(3602325919, -352267124) Sep 12 22:57:37.994121 kernel: registered taskstats version 1 Sep 12 22:57:37.994128 kernel: Loading compiled-in X.509 certificates Sep 12 22:57:37.994134 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: c3297a5801573420030c321362a802da1fd49c4e' Sep 12 22:57:37.994140 kernel: Demotion targets for Node 0: null Sep 12 22:57:37.994145 kernel: Key type .fscrypt registered Sep 12 22:57:37.994152 kernel: Key type fscrypt-provisioning registered Sep 12 22:57:37.994162 kernel: ima: No TPM chip found, activating TPM-bypass! 
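The rtc_cmos line above prints the hardware clock both as a date and as raw epoch seconds; converting the epoch value from the log confirms the two match:

```python
from datetime import datetime, timezone

epoch = 1757717857   # value printed by rtc_cmos above
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2025-09-12T22:57:37+00:00, matching the log line
```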
Sep 12 22:57:37.994173 kernel: ima: Allocated hash algorithm: sha1 Sep 12 22:57:37.994179 kernel: ima: No architecture policies found Sep 12 22:57:37.994185 kernel: clk: Disabling unused clocks Sep 12 22:57:37.994190 kernel: Warning: unable to open an initial console. Sep 12 22:57:37.994196 kernel: Freeing unused kernel image (initmem) memory: 54084K Sep 12 22:57:37.994206 kernel: Write protecting the kernel read-only data: 24576k Sep 12 22:57:37.994215 kernel: Freeing unused kernel image (rodata/data gap) memory: 248K Sep 12 22:57:37.994222 kernel: Run /init as init process Sep 12 22:57:37.994227 kernel: with arguments: Sep 12 22:57:37.994234 kernel: /init Sep 12 22:57:37.994242 kernel: with environment: Sep 12 22:57:37.994253 kernel: HOME=/ Sep 12 22:57:37.994261 kernel: TERM=linux Sep 12 22:57:37.994266 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 12 22:57:37.994273 systemd[1]: Successfully made /usr/ read-only. Sep 12 22:57:37.994281 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 22:57:37.994291 systemd[1]: Detected virtualization microsoft. Sep 12 22:57:37.994305 systemd[1]: Detected architecture x86-64. Sep 12 22:57:37.994313 systemd[1]: Running in initrd. Sep 12 22:57:37.994319 systemd[1]: No hostname configured, using default hostname. Sep 12 22:57:37.994325 systemd[1]: Hostname set to . Sep 12 22:57:37.994331 systemd[1]: Initializing machine ID from random generator. Sep 12 22:57:37.994341 systemd[1]: Queued start job for default target initrd.target. Sep 12 22:57:37.994382 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 22:57:37.994394 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 22:57:37.994407 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 12 22:57:37.994414 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 22:57:37.994420 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 12 22:57:37.994426 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 12 22:57:37.994436 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 12 22:57:37.994447 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 12 22:57:37.994457 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 22:57:37.994465 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 22:57:37.994471 systemd[1]: Reached target paths.target - Path Units. Sep 12 22:57:37.994477 systemd[1]: Reached target slices.target - Slice Units. Sep 12 22:57:37.994486 systemd[1]: Reached target swap.target - Swaps. Sep 12 22:57:37.994497 systemd[1]: Reached target timers.target - Timer Units. Sep 12 22:57:37.994507 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 22:57:37.994516 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
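The dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device unit names that systemd starts expecting above come from its path-to-unit-name escaping: '/' becomes '-', and most other non-alphanumeric bytes become \xNN. A simplified sketch of that mapping, enough to reproduce the names in this log but deliberately not the full systemd-escape algorithm (real systemd also leaves ':' unescaped and has extra rules for leading dots):

```python
def path_to_device_unit(path: str) -> str:
    """Rough equivalent of `systemd-escape --path --suffix=device PATH` (simplified)."""
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")                       # path separators become dashes
        elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)                        # kept as-is
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out) + ".device"

print(path_to_device_unit("/dev/disk/by-label/EFI-SYSTEM"))
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```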
Sep 12 22:57:37.994522 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 22:57:37.994529 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 12 22:57:37.994536 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:57:37.994548 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 22:57:37.994559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 22:57:37.994569 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 22:57:37.994578 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 22:57:37.994585 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 22:57:37.994591 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 22:57:37.994597 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 12 22:57:37.994607 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 22:57:37.994617 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 22:57:37.994638 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 22:57:37.994648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:57:37.994669 systemd-journald[205]: Collecting audit messages is disabled. Sep 12 22:57:37.994694 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 22:57:37.994702 systemd-journald[205]: Journal started Sep 12 22:57:37.994725 systemd-journald[205]: Runtime Journal (/run/log/journal/195a8c4f7b9241afa6517b8011af3e3f) is 8M, max 158.9M, 150.9M free. Sep 12 22:57:38.004593 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 22:57:38.006978 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 22:57:38.012252 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 22:57:38.016236 systemd-modules-load[207]: Inserted module 'overlay' Sep 12 22:57:38.017134 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 22:57:38.028459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 22:57:38.030805 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:57:38.041058 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 22:57:38.054640 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 22:57:38.056504 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 12 22:57:38.057603 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 22:57:38.059256 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 12 22:57:38.065070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 22:57:38.074040 kernel: Bridge firewalling registered Sep 12 22:57:38.074097 systemd-modules-load[207]: Inserted module 'br_netfilter' Sep 12 22:57:38.075134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 12 22:57:38.077466 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:57:38.084240 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:57:38.092002 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:57:38.094454 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 22:57:38.109190 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 22:57:38.119461 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 22:57:38.127473 systemd-resolved[235]: Positive Trust Anchors: Sep 12 22:57:38.127487 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 22:57:38.127520 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 22:57:38.130056 systemd-resolved[235]: Defaulting to hostname 'linux'. Sep 12 22:57:38.155582 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8e60d6befc710e967d67e9a1d87ced7416895090c99a765b3a00e66a62f49e40 Sep 12 22:57:38.130858 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 22:57:38.132479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 22:57:38.214370 kernel: SCSI subsystem initialized Sep 12 22:57:38.221362 kernel: Loading iSCSI transport class v2.0-870. Sep 12 22:57:38.231373 kernel: iscsi: registered transport (tcp) Sep 12 22:57:38.248649 kernel: iscsi: registered transport (qla4xxx) Sep 12 22:57:38.248694 kernel: QLogic iSCSI HBA Driver Sep 12 22:57:38.262179 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 22:57:38.277829 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:57:38.283475 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 22:57:38.314556 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 22:57:38.317482 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Sep 12 22:57:38.362367 kernel: raid6: avx512x4 gen() 42558 MB/s Sep 12 22:57:38.380360 kernel: raid6: avx512x2 gen() 42229 MB/s Sep 12 22:57:38.397359 kernel: raid6: avx512x1 gen() 25589 MB/s Sep 12 22:57:38.414359 kernel: raid6: avx2x4 gen() 35693 MB/s Sep 12 22:57:38.432358 kernel: raid6: avx2x2 gen() 36974 MB/s Sep 12 22:57:38.450570 kernel: raid6: avx2x1 gen() 30694 MB/s Sep 12 22:57:38.450586 kernel: raid6: using algorithm avx512x4 gen() 42558 MB/s Sep 12 22:57:38.468985 kernel: raid6: .... xor() 7920 MB/s, rmw enabled Sep 12 22:57:38.469007 kernel: raid6: using avx512x2 recovery algorithm Sep 12 22:57:38.488370 kernel: xor: automatically using best checksumming function avx Sep 12 22:57:38.611372 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 22:57:38.616665 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 22:57:38.620829 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 22:57:38.646712 systemd-udevd[454]: Using default interface naming scheme 'v255'. Sep 12 22:57:38.650998 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:57:38.655801 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 22:57:38.680383 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Sep 12 22:57:38.698781 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 22:57:38.704130 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 22:57:38.735897 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 22:57:38.742697 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 22:57:38.786372 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 22:57:38.794371 kernel: AES CTR mode by8 optimization enabled Sep 12 22:57:38.817372 kernel: hv_vmbus: Vmbus version:5.3 Sep 12 22:57:38.825466 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 22:57:38.825614 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:57:38.836017 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:57:38.848322 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 12 22:57:38.848371 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 12 22:57:38.844764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:57:38.867366 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 12 22:57:38.867656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 22:57:38.868195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:57:38.879272 kernel: hv_vmbus: registering driver hid_hyperv Sep 12 22:57:38.879454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
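The raid6 lines above show the kernel benchmarking each gen() implementation and keeping the fastest one. The same selection, with the throughput figures copied from the log:

```python
# raid6 gen() throughput in MB/s, from the benchmark lines above.
gen_results = {
    "avx512x4": 42558,
    "avx512x2": 42229,
    "avx512x1": 25589,
    "avx2x4":   35693,
    "avx2x2":   36974,
    "avx2x1":   30694,
}

best = max(gen_results, key=gen_results.get)
print(f"using algorithm {best} gen() {gen_results[best]} MB/s")
```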
Sep 12 22:57:38.885430 kernel: PTP clock support registered Sep 12 22:57:38.885454 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 12 22:57:38.891936 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Sep 12 22:57:38.891971 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 12 22:57:38.898108 kernel: hv_utils: Registering HyperV Utility Driver Sep 12 22:57:38.898139 kernel: hv_vmbus: registering driver hv_utils Sep 12 22:57:38.905341 kernel: hv_utils: Shutdown IC version 3.2 Sep 12 22:57:38.905384 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Sep 12 22:57:38.905399 kernel: hv_utils: Heartbeat IC version 3.0 Sep 12 22:57:38.909015 kernel: hv_utils: TimeSync IC version 4.0 Sep 12 22:57:38.633936 systemd-resolved[235]: Clock change detected. Flushing caches. Sep 12 22:57:38.639217 systemd-journald[205]: Time jumped backwards, rotating. Sep 12 22:57:38.645724 kernel: hv_vmbus: registering driver hv_pci Sep 12 22:57:38.649729 kernel: hv_vmbus: registering driver hv_netvsc Sep 12 22:57:38.649640 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:57:38.659757 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Sep 12 22:57:38.663625 kernel: hv_vmbus: registering driver hv_storvsc Sep 12 22:57:38.670683 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Sep 12 22:57:38.670877 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d729901 (unnamed net_device) (uninitialized): VF slot 1 added Sep 12 22:57:38.670972 kernel: scsi host0: storvsc_host_t Sep 12 22:57:38.671066 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Sep 12 22:57:38.673713 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Sep 12 22:57:38.677315 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 22:57:38.678807 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Sep 12 22:57:38.685837 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Sep 12 22:57:38.701308 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 12 22:57:38.701949 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 22:57:38.702084 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 12 22:57:38.702095 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Sep 12 22:57:38.704716 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 12 22:57:38.728725 kernel: nvme nvme0: pci function c05b:00:00.0 Sep 12 22:57:38.728881 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 22:57:38.732401 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Sep 12 22:57:38.750720 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#76 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 22:57:38.885732 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 12 22:57:38.889714 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 22:57:39.129805 kernel: nvme nvme0: using unchecked data buffer Sep 12 22:57:39.307884 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. 
Sep 12 22:57:39.327340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 12 22:57:39.355616 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Sep 12 22:57:39.359003 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 22:57:39.367563 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 12 22:57:39.371762 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Sep 12 22:57:39.375889 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 22:57:39.379309 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 22:57:39.384743 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 22:57:39.387812 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 22:57:39.389795 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 22:57:39.408543 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 22:57:39.410907 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 22:57:39.430720 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 22:57:39.727855 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Sep 12 22:57:39.728094 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Sep 12 22:57:39.730909 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Sep 12 22:57:39.732577 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Sep 12 22:57:39.737772 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Sep 12 22:57:39.741803 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Sep 12 22:57:39.746738 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Sep 12 22:57:39.748897 kernel: pci 7870:00:00.0: enabling Extended Tags Sep 12 22:57:39.764970 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Sep 12 22:57:39.765160 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Sep 12 22:57:39.767790 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Sep 12 22:57:39.773111 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Sep 12 22:57:39.782728 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Sep 12 22:57:39.786003 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d729901 eth0: VF registering: eth1 Sep 12 22:57:39.786170 kernel: mana 7870:00:00.0 eth1: joined to eth0 Sep 12 22:57:39.789721 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Sep 12 22:57:40.436152 disk-uuid[679]: The operation has completed successfully. Sep 12 22:57:40.438892 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 12 22:57:40.495464 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 22:57:40.495550 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 22:57:40.528826 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 22:57:40.547745 sh[713]: Success Sep 12 22:57:40.581916 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
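The boot arguments name the /usr verity partition only by PARTUUID (verity.usr=PARTUUID=7130c94a-...), and the device unit found above is resolved through the udev-maintained /dev/disk/by-partuuid/ symlinks. A tiny sketch of resolving such a symlink to its backing partition; it has to run on the booted VM, and the resulting node is not itself shown in this log:

```python
import os.path

partuuid = "7130c94a-213a-4e5a-8e26-6cce9662f132"   # from verity.usr= on the kernel command line
link = f"/dev/disk/by-partuuid/{partuuid}"

# udev creates this symlink; realpath follows it to the actual partition node.
print(os.path.realpath(link))
```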
Sep 12 22:57:40.581967 kernel: device-mapper: uevent: version 1.0.3 Sep 12 22:57:40.583363 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 22:57:40.591730 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 12 22:57:40.792478 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 22:57:40.797319 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 22:57:40.807081 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 22:57:40.820735 kernel: BTRFS: device fsid 5d2ab445-1154-4e47-9d7e-ff4b81d84474 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (726) Sep 12 22:57:40.822912 kernel: BTRFS info (device dm-0): first mount of filesystem 5d2ab445-1154-4e47-9d7e-ff4b81d84474 Sep 12 22:57:40.823015 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:57:41.114155 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 12 22:57:41.114229 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 22:57:41.115693 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 22:57:41.167097 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 22:57:41.167635 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 22:57:41.172599 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 22:57:41.173384 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 22:57:41.182067 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 12 22:57:41.206506 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (763) Sep 12 22:57:41.206548 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:57:41.208415 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:57:41.230073 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 22:57:41.230107 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 12 22:57:41.231596 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 22:57:41.236722 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:57:41.237446 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 22:57:41.241228 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 22:57:41.270633 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 22:57:41.275112 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 22:57:41.308091 systemd-networkd[895]: lo: Link UP Sep 12 22:57:41.308098 systemd-networkd[895]: lo: Gained carrier Sep 12 22:57:41.310475 systemd-networkd[895]: Enumeration completed Sep 12 22:57:41.315369 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 12 22:57:41.310768 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 22:57:41.322492 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 12 22:57:41.322649 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d729901 eth0: Data path switched to VF: enP30832s1 Sep 12 22:57:41.310928 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:57:41.310932 systemd-networkd[895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 22:57:41.317183 systemd[1]: Reached target network.target - Network. Sep 12 22:57:41.321948 systemd-networkd[895]: enP30832s1: Link UP Sep 12 22:57:41.322015 systemd-networkd[895]: eth0: Link UP Sep 12 22:57:41.322101 systemd-networkd[895]: eth0: Gained carrier Sep 12 22:57:41.322111 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:57:41.335884 systemd-networkd[895]: enP30832s1: Gained carrier Sep 12 22:57:41.353734 systemd-networkd[895]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 12 22:57:42.166157 ignition[848]: Ignition 2.22.0 Sep 12 22:57:42.166170 ignition[848]: Stage: fetch-offline Sep 12 22:57:42.167996 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 22:57:42.166271 ignition[848]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:57:42.170838 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 12 22:57:42.166278 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 22:57:42.166362 ignition[848]: parsed url from cmdline: "" Sep 12 22:57:42.166365 ignition[848]: no config URL provided Sep 12 22:57:42.166370 ignition[848]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 22:57:42.166376 ignition[848]: no config at "/usr/lib/ignition/user.ign" Sep 12 22:57:42.166381 ignition[848]: failed to fetch config: resource requires networking Sep 12 22:57:42.166604 ignition[848]: Ignition finished successfully Sep 12 22:57:42.200199 ignition[904]: Ignition 2.22.0 Sep 12 22:57:42.200209 ignition[904]: Stage: fetch Sep 12 22:57:42.200404 ignition[904]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:57:42.200413 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 22:57:42.201378 ignition[904]: parsed url from cmdline: "" Sep 12 22:57:42.201381 ignition[904]: no config URL provided Sep 12 22:57:42.201386 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 22:57:42.201392 ignition[904]: no config at "/usr/lib/ignition/user.ign" Sep 12 22:57:42.201408 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 12 22:57:42.267177 ignition[904]: GET result: OK Sep 12 22:57:42.267273 ignition[904]: config has been read from IMDS userdata Sep 12 22:57:42.268414 ignition[904]: parsing config with SHA512: 7568b31f49c9eba10879fb8883aa9848b21ec56ee59a4bfd35af71627726ba59e740b2d89acd16472a5919187a465981df666af5fac712b38fbcbde82c594067 Sep 12 22:57:42.272271 unknown[904]: fetched base config from "system" Sep 12 22:57:42.272880 ignition[904]: fetch: fetch complete Sep 12 22:57:42.272294 unknown[904]: fetched base config from "system" Sep 12 22:57:42.272886 ignition[904]: fetch: fetch passed Sep 12 22:57:42.272300 unknown[904]: fetched user config from "azure" Sep 12 22:57:42.272929 ignition[904]: Ignition finished successfully Sep 
12 22:57:42.276269 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 22:57:42.281353 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 12 22:57:42.312334 ignition[911]: Ignition 2.22.0 Sep 12 22:57:42.312345 ignition[911]: Stage: kargs Sep 12 22:57:42.314825 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 22:57:42.312535 ignition[911]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:57:42.320815 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 22:57:42.312543 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 22:57:42.313293 ignition[911]: kargs: kargs passed Sep 12 22:57:42.313327 ignition[911]: Ignition finished successfully Sep 12 22:57:42.350320 ignition[917]: Ignition 2.22.0 Sep 12 22:57:42.350330 ignition[917]: Stage: disks Sep 12 22:57:42.350536 ignition[917]: no configs at "/usr/lib/ignition/base.d" Sep 12 22:57:42.353179 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 22:57:42.350543 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 22:57:42.351329 ignition[917]: disks: disks passed Sep 12 22:57:42.351361 ignition[917]: Ignition finished successfully Sep 12 22:57:42.361786 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 22:57:42.365386 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 22:57:42.368760 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 22:57:42.371474 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 22:57:42.372844 systemd[1]: Reached target basic.target - Basic System. Sep 12 22:57:42.375807 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 22:57:42.439636 systemd-fsck[925]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Sep 12 22:57:42.443833 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 22:57:42.449368 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 12 22:57:42.699731 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d027afc5-396a-49bf-a5be-60ddd42cb089 r/w with ordered data mode. Quota mode: none. Sep 12 22:57:42.700310 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 22:57:42.705126 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 22:57:42.730801 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 22:57:42.736619 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 22:57:42.742835 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 22:57:42.748330 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 22:57:42.748615 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 22:57:42.756996 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (934) Sep 12 22:57:42.757855 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 22:57:42.764806 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:57:42.764827 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:57:42.767822 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
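The fetch stage above pulled the Ignition config from the Azure IMDS endpoint named in the log. A minimal sketch of the same request, assuming it runs inside an Azure VM: IMDS answers only when the Metadata: true header is present, compute/userData comes back base64-encoded, and the digest mirrors the "parsing config with SHA512" line above:

```python
import base64
import hashlib
import urllib.request

# Endpoint copied from the Ignition fetch log line above.
url = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(url, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    userdata_b64 = resp.read()

config = base64.b64decode(userdata_b64)        # IMDS returns the user data base64-encoded
digest = hashlib.sha512(config).hexdigest()
print(f"parsing config with SHA512: {digest}")
```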
Sep 12 22:57:42.776027 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 22:57:42.776046 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 12 22:57:42.776056 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 22:57:42.774171 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 22:57:43.037841 systemd-networkd[895]: eth0: Gained IPv6LL Sep 12 22:57:43.079259 coreos-metadata[936]: Sep 12 22:57:43.079 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 22:57:43.086767 coreos-metadata[936]: Sep 12 22:57:43.086 INFO Fetch successful Sep 12 22:57:43.089781 coreos-metadata[936]: Sep 12 22:57:43.088 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 12 22:57:43.098621 coreos-metadata[936]: Sep 12 22:57:43.098 INFO Fetch successful Sep 12 22:57:43.112755 coreos-metadata[936]: Sep 12 22:57:43.112 INFO wrote hostname ci-4459.0.0-a-49ee7b6560 to /sysroot/etc/hostname Sep 12 22:57:43.115114 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 22:57:43.343784 initrd-setup-root[964]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 22:57:43.364306 initrd-setup-root[971]: cut: /sysroot/etc/group: No such file or directory Sep 12 22:57:43.396206 initrd-setup-root[978]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 22:57:43.427302 initrd-setup-root[985]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 22:57:44.248266 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 22:57:44.254004 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 22:57:44.261827 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 22:57:44.269803 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:57:44.268162 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 22:57:44.297543 ignition[1052]: INFO : Ignition 2.22.0 Sep 12 22:57:44.300806 ignition[1052]: INFO : Stage: mount Sep 12 22:57:44.300806 ignition[1052]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:57:44.300806 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 22:57:44.300806 ignition[1052]: INFO : mount: mount passed Sep 12 22:57:44.300806 ignition[1052]: INFO : Ignition finished successfully Sep 12 22:57:44.298013 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 22:57:44.310389 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 22:57:44.314632 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 22:57:44.325921 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 22:57:44.344717 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1064) Sep 12 22:57:44.346763 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem fd5cdc72-255e-4ed2-8d25-c5e581a08827 Sep 12 22:57:44.346864 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 22:57:44.351164 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 22:57:44.351193 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 12 22:57:44.351262 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 22:57:44.353438 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 22:57:44.379178 ignition[1081]: INFO : Ignition 2.22.0 Sep 12 22:57:44.379178 ignition[1081]: INFO : Stage: files Sep 12 22:57:44.381196 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:57:44.381196 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 22:57:44.384748 ignition[1081]: DEBUG : files: compiled without relabeling support, skipping Sep 12 22:57:44.387775 ignition[1081]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 22:57:44.387775 ignition[1081]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 22:57:44.402820 ignition[1081]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 22:57:44.405121 ignition[1081]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 22:57:44.405121 ignition[1081]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 22:57:44.403108 unknown[1081]: wrote ssh authorized keys file for user: core Sep 12 22:57:44.433060 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 22:57:44.437783 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 12 22:57:44.485937 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 22:57:44.738104 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 12 22:57:44.738104 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 22:57:44.745545 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 22:57:45.004214 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 22:57:45.283229 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 22:57:45.283229 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 22:57:45.291286 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 22:57:45.291286 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 22:57:45.291286 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 22:57:45.291286 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 22:57:45.291286 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 22:57:45.291286 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 22:57:45.291286 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 
22:57:45.314730 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 22:57:45.314730 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 22:57:45.314730 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 22:57:45.314730 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 22:57:45.314730 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 22:57:45.314730 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 22:57:45.603120 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 22:57:46.165627 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 22:57:46.165627 ignition[1081]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 22:57:46.199692 ignition[1081]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 22:57:46.213518 ignition[1081]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 22:57:46.213518 ignition[1081]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 22:57:46.224820 ignition[1081]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 22:57:46.224820 ignition[1081]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 22:57:46.224820 ignition[1081]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 22:57:46.224820 ignition[1081]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 22:57:46.224820 ignition[1081]: INFO : files: files passed Sep 12 22:57:46.224820 ignition[1081]: INFO : Ignition finished successfully Sep 12 22:57:46.218211 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 22:57:46.223165 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 22:57:46.234822 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 22:57:46.249885 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 22:57:46.259918 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 22:57:46.259918 initrd-setup-root-after-ignition[1110]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 22:57:46.249969 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
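The files stage in the preceding entries is driven entirely by the node's Ignition config: each createFiles op corresponds to a storage entry (an inline or remote file, or a symlink), and the prepare-helm.service handling corresponds to a systemd unit with an enable preset. The Python sketch below only illustrates the shape of such a config, reusing paths and URLs visible in the log; the spec version, the inline update.conf contents, and the unit body are assumptions, not the config this node actually received:

    import json

    ignition_config = {
        "ignition": {"version": "3.4.0"},  # assumed spec version
        "storage": {
            "files": [
                {   # corresponds to op(3) in the log
                    "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
                },
                {   # corresponds to op(9); the inline contents here are hypothetical
                    "path": "/etc/flatcar/update.conf",
                    "contents": {"source": "data:,GROUP%3Dstable%0A"},
                },
            ],
            "links": [
                {   # corresponds to op(a)
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                },
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True, "contents": "..."},  # unit body elided
            ],
        },
    }

    print(json.dumps(ignition_config, indent=2))

Ignition then records the outcome in /sysroot/etc/.ignition-result.json, which is the op(f) write shown at the end of the stage.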
Sep 12 22:57:46.275850 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 22:57:46.262661 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 22:57:46.264924 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 22:57:46.272452 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 22:57:46.304902 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 22:57:46.304986 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 22:57:46.310084 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 22:57:46.314770 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 22:57:46.317793 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 22:57:46.319446 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 22:57:46.338683 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 22:57:46.343596 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 22:57:46.360427 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 22:57:46.361143 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 22:57:46.361447 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 22:57:46.367880 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 22:57:46.368005 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 22:57:46.370893 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 22:57:46.373413 systemd[1]: Stopped target basic.target - Basic System. Sep 12 22:57:46.374397 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 22:57:46.374726 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 22:57:46.375080 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 22:57:46.383884 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 22:57:46.390460 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 22:57:46.392831 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 22:57:46.397871 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 22:57:46.399721 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 22:57:46.400013 systemd[1]: Stopped target swap.target - Swaps. Sep 12 22:57:46.400303 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 22:57:46.400420 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 22:57:46.400948 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 22:57:46.401280 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 22:57:46.401836 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 22:57:46.404954 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 22:57:46.412296 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 22:57:46.412392 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 12 22:57:46.427472 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 22:57:46.427602 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 22:57:46.439979 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 22:57:46.441454 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 22:57:46.444769 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 22:57:46.446420 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 22:57:46.451910 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 22:57:46.455058 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 22:57:46.461818 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 22:57:46.462078 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 22:57:46.473688 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 22:57:46.473865 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 22:57:46.486890 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 22:57:46.486973 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 22:57:46.494316 ignition[1134]: INFO : Ignition 2.22.0 Sep 12 22:57:46.494316 ignition[1134]: INFO : Stage: umount Sep 12 22:57:46.499422 ignition[1134]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 22:57:46.499422 ignition[1134]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 22:57:46.499422 ignition[1134]: INFO : umount: umount passed Sep 12 22:57:46.499422 ignition[1134]: INFO : Ignition finished successfully Sep 12 22:57:46.496535 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 22:57:46.496607 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 22:57:46.501111 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 22:57:46.501187 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 22:57:46.504023 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 22:57:46.504067 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 22:57:46.510818 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 22:57:46.510863 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 22:57:46.513171 systemd[1]: Stopped target network.target - Network. Sep 12 22:57:46.514511 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 22:57:46.514552 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 22:57:46.516385 systemd[1]: Stopped target paths.target - Path Units. Sep 12 22:57:46.518827 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 22:57:46.523039 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 22:57:46.527754 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 22:57:46.531957 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 22:57:46.535196 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 22:57:46.535235 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 22:57:46.540308 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 22:57:46.540332 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Sep 12 22:57:46.546043 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 22:57:46.546090 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 22:57:46.561777 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 22:57:46.561822 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 22:57:46.565181 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 22:57:46.571211 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 22:57:46.576763 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 22:57:46.578270 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 22:57:46.578345 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 22:57:46.582355 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 22:57:46.582908 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 22:57:46.590212 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 22:57:46.590257 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:57:46.596296 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 22:57:46.600032 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 22:57:46.601776 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 22:57:46.606856 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 22:57:46.612026 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 22:57:46.612113 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 22:57:46.618767 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 22:57:46.620844 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 22:57:46.622217 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:57:46.628692 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 22:57:46.628780 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 22:57:46.636581 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 22:57:46.641168 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d729901 eth0: Data path switched from VF: enP30832s1 Sep 12 22:57:46.636621 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 22:57:46.646806 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 12 22:57:46.641208 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 22:57:46.641255 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 22:57:46.646876 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 22:57:46.646923 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 22:57:46.654184 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 22:57:46.655630 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 22:57:46.661302 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 22:57:46.663133 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Sep 12 22:57:46.663178 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:57:46.672602 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 22:57:46.674305 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:57:46.677532 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 22:57:46.677564 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 22:57:46.686576 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 22:57:46.686627 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:57:46.691564 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 22:57:46.691598 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 22:57:46.694241 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 22:57:46.694274 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 22:57:46.705946 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 22:57:46.705993 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 22:57:46.711264 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 22:57:46.711296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:57:46.718582 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 22:57:46.718618 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 22:57:46.718640 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 12 22:57:46.718661 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 22:57:46.718683 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 22:57:46.718733 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 22:57:46.719025 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 22:57:46.719109 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 22:57:46.721719 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 22:57:46.721804 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 22:57:46.757174 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 22:57:46.757269 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 22:57:46.760142 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 22:57:46.760224 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 22:57:46.760289 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 22:57:46.761823 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 22:57:46.790918 systemd[1]: Switching root. Sep 12 22:57:46.879271 systemd-journald[205]: Journal stopped Sep 12 22:57:50.613348 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). 
Sep 12 22:57:50.613380 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 22:57:50.613393 kernel: SELinux: policy capability open_perms=1 Sep 12 22:57:50.613402 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 22:57:50.613411 kernel: SELinux: policy capability always_check_network=0 Sep 12 22:57:50.613420 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 22:57:50.613430 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 22:57:50.613441 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 22:57:50.613450 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 22:57:50.613459 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 22:57:50.613468 kernel: audit: type=1403 audit(1757717868.151:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 22:57:50.613480 systemd[1]: Successfully loaded SELinux policy in 129.990ms. Sep 12 22:57:50.613491 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.255ms. Sep 12 22:57:50.613503 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 22:57:50.613516 systemd[1]: Detected virtualization microsoft. Sep 12 22:57:50.613526 systemd[1]: Detected architecture x86-64. Sep 12 22:57:50.613536 systemd[1]: Detected first boot. Sep 12 22:57:50.613547 systemd[1]: Hostname set to . Sep 12 22:57:50.613559 systemd[1]: Initializing machine ID from random generator. Sep 12 22:57:50.613570 zram_generator::config[1178]: No configuration found. Sep 12 22:57:50.613581 kernel: Guest personality initialized and is inactive Sep 12 22:57:50.613590 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Sep 12 22:57:50.613599 kernel: Initialized host personality Sep 12 22:57:50.613609 kernel: NET: Registered PF_VSOCK protocol family Sep 12 22:57:50.613619 systemd[1]: Populated /etc with preset unit settings. Sep 12 22:57:50.613632 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 22:57:50.613642 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 22:57:50.613652 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 22:57:50.613662 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 22:57:50.613673 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 22:57:50.613684 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 22:57:50.613694 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 22:57:50.616283 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 22:57:50.616309 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 22:57:50.616321 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 22:57:50.616331 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 22:57:50.616341 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 22:57:50.616351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 12 22:57:50.616362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 22:57:50.616373 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 22:57:50.616386 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 22:57:50.616399 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 22:57:50.616410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 22:57:50.616420 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 22:57:50.616431 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 22:57:50.616441 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 22:57:50.616452 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 22:57:50.616462 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 22:57:50.616474 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 22:57:50.616484 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 22:57:50.616494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 22:57:50.616505 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 22:57:50.616515 systemd[1]: Reached target slices.target - Slice Units. Sep 12 22:57:50.616525 systemd[1]: Reached target swap.target - Swaps. Sep 12 22:57:50.616535 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 22:57:50.616545 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 22:57:50.616558 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 22:57:50.616568 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 22:57:50.616578 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 22:57:50.616588 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 22:57:50.616599 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 22:57:50.616611 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 22:57:50.616621 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 22:57:50.616631 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 22:57:50.616642 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 22:57:50.616652 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 22:57:50.616662 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 22:57:50.616673 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 22:57:50.616684 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 22:57:50.616694 systemd[1]: Reached target machines.target - Containers. Sep 12 22:57:50.616733 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 12 22:57:50.616744 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 22:57:50.616755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 22:57:50.616765 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 22:57:50.616776 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 22:57:50.616786 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 22:57:50.616797 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 22:57:50.616807 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 22:57:50.616819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 22:57:50.616830 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 22:57:50.616841 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 22:57:50.616851 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 22:57:50.616861 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 22:57:50.616872 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 22:57:50.616883 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 22:57:50.616893 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 22:57:50.616906 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 22:57:50.616916 kernel: fuse: init (API version 7.41) Sep 12 22:57:50.616927 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 22:57:50.616937 kernel: loop: module loaded Sep 12 22:57:50.616947 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 22:57:50.616957 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 22:57:50.616968 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 22:57:50.616978 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 22:57:50.616988 systemd[1]: Stopped verity-setup.service. Sep 12 22:57:50.617001 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 22:57:50.617011 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 22:57:50.617021 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 22:57:50.617032 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 22:57:50.617042 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 22:57:50.617052 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 22:57:50.617063 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 22:57:50.617073 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 22:57:50.617085 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 12 22:57:50.617096 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 22:57:50.617107 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 22:57:50.617141 systemd-journald[1278]: Collecting audit messages is disabled. Sep 12 22:57:50.617167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 22:57:50.617178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 22:57:50.617188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 22:57:50.617199 systemd-journald[1278]: Journal started Sep 12 22:57:50.617222 systemd-journald[1278]: Runtime Journal (/run/log/journal/0abc10e71788464da722f6a5a9bd80b6) is 8M, max 158.9M, 150.9M free. Sep 12 22:57:50.132222 systemd[1]: Queued start job for default target multi-user.target. Sep 12 22:57:50.140275 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 22:57:50.140602 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 22:57:50.619735 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 22:57:50.625719 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 22:57:50.627131 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 22:57:50.627292 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 22:57:50.630231 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 22:57:50.630482 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 22:57:50.635095 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 22:57:50.639000 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 22:57:50.643841 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 22:57:50.647051 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 22:57:50.658680 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 22:57:50.665723 kernel: ACPI: bus type drm_connector registered Sep 12 22:57:50.664792 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 22:57:50.668180 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 22:57:50.670918 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 22:57:50.670948 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 22:57:50.673960 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 22:57:50.681865 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 22:57:50.684605 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 22:57:50.701862 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 22:57:50.705891 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 22:57:50.708256 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 22:57:50.709485 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Sep 12 22:57:50.714882 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 22:57:50.716984 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:57:50.723819 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 22:57:50.727812 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 22:57:50.732515 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 22:57:50.732907 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 22:57:50.736133 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 22:57:50.740104 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 22:57:50.742674 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 22:57:50.747510 systemd-journald[1278]: Time spent on flushing to /var/log/journal/0abc10e71788464da722f6a5a9bd80b6 is 44.374ms for 1000 entries. Sep 12 22:57:50.747510 systemd-journald[1278]: System Journal (/var/log/journal/0abc10e71788464da722f6a5a9bd80b6) is 11.8M, max 2.6G, 2.6G free. Sep 12 22:57:50.934747 systemd-journald[1278]: Received client request to flush runtime journal. Sep 12 22:57:50.934793 systemd-journald[1278]: /var/log/journal/0abc10e71788464da722f6a5a9bd80b6/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Sep 12 22:57:50.934812 systemd-journald[1278]: Rotating system journal. Sep 12 22:57:50.934829 kernel: loop0: detected capacity change from 0 to 110984 Sep 12 22:57:50.750986 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 22:57:50.757286 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 22:57:50.759861 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 22:57:50.802194 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:57:50.865536 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Sep 12 22:57:50.865546 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. Sep 12 22:57:50.867788 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 22:57:50.871616 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 22:57:50.935675 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 22:57:50.948517 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 22:57:50.978599 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 22:57:50.982549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 22:57:51.001186 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Sep 12 22:57:51.001206 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Sep 12 22:57:51.003351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 22:57:51.141794 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Sep 12 22:57:51.176722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 22:57:51.204726 kernel: loop1: detected capacity change from 0 to 27936 Sep 12 22:57:51.526725 kernel: loop2: detected capacity change from 0 to 128016 Sep 12 22:57:51.849293 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 22:57:51.854090 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 22:57:51.858785 kernel: loop3: detected capacity change from 0 to 224512 Sep 12 22:57:51.882389 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Sep 12 22:57:51.886748 kernel: loop4: detected capacity change from 0 to 110984 Sep 12 22:57:51.897730 kernel: loop5: detected capacity change from 0 to 27936 Sep 12 22:57:51.908721 kernel: loop6: detected capacity change from 0 to 128016 Sep 12 22:57:51.917891 kernel: loop7: detected capacity change from 0 to 224512 Sep 12 22:57:51.928433 (sd-merge)[1349]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 12 22:57:51.928834 (sd-merge)[1349]: Merged extensions into '/usr'. Sep 12 22:57:51.932092 systemd[1]: Reload requested from client PID 1318 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 22:57:51.932106 systemd[1]: Reloading... Sep 12 22:57:51.983739 zram_generator::config[1375]: No configuration found. Sep 12 22:57:52.274327 systemd[1]: Reloading finished in 341 ms. Sep 12 22:57:52.276057 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 22:57:52.278740 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 22:57:52.293209 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 22:57:52.297189 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 22:57:52.303741 kernel: hv_vmbus: registering driver hyperv_fb Sep 12 22:57:52.305021 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 22:57:52.312844 systemd[1]: Starting ensure-sysext.service... Sep 12 22:57:52.319749 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 12 22:57:52.320417 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 22:57:52.326804 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 12 22:57:52.328493 kernel: Console: switching to colour dummy device 80x25 Sep 12 22:57:52.330163 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 22:57:52.335350 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 22:57:52.339728 kernel: hv_vmbus: registering driver hv_balloon Sep 12 22:57:52.356740 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 12 22:57:52.374632 systemd[1]: Reload requested from client PID 1484 ('systemctl') (unit ensure-sysext.service)... Sep 12 22:57:52.374644 systemd[1]: Reloading... Sep 12 22:57:52.405903 systemd-tmpfiles[1488]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 22:57:52.406356 systemd-tmpfiles[1488]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 22:57:52.406603 systemd-tmpfiles[1488]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Sep 12 22:57:52.406961 systemd-tmpfiles[1488]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 22:57:52.407644 systemd-tmpfiles[1488]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 22:57:52.409204 systemd-tmpfiles[1488]: ACLs are not supported, ignoring. Sep 12 22:57:52.409298 systemd-tmpfiles[1488]: ACLs are not supported, ignoring. Sep 12 22:57:52.449388 systemd-tmpfiles[1488]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 22:57:52.449402 systemd-tmpfiles[1488]: Skipping /boot Sep 12 22:57:52.460216 systemd-tmpfiles[1488]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 22:57:52.460227 systemd-tmpfiles[1488]: Skipping /boot Sep 12 22:57:52.493746 zram_generator::config[1527]: No configuration found. Sep 12 22:57:52.646749 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 12 22:57:52.822355 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 12 22:57:52.824411 systemd[1]: Reloading finished in 449 ms. Sep 12 22:57:52.843677 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 22:57:52.876051 systemd[1]: Finished ensure-sysext.service. Sep 12 22:57:52.891134 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 22:57:52.891907 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 22:57:52.909408 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 22:57:52.912233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 22:57:52.913081 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 22:57:52.918802 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 22:57:52.923739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 22:57:52.927821 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 22:57:52.930308 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 22:57:52.936648 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 22:57:52.939367 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 22:57:52.941322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 22:57:52.945998 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 22:57:52.948152 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 22:57:52.953499 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 22:57:52.956448 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 22:57:52.958820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 22:57:52.959819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
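The (sd-merge) lines a little earlier in this stretch show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-azure' extension images onto /usr and /opt, followed by the reload that activates them. A small sketch of how the merge state could be confirmed on the running host, assuming only the stock systemd-sysext CLI (the helper name is illustrative and the report layout varies between systemd versions):

    import subprocess

    def sysext_status() -> str:
        # 'systemd-sysext status' lists each hierarchy (/usr, /opt) together
        # with the extension images currently merged into it.
        return subprocess.run(
            ["systemd-sysext", "status"],
            capture_output=True, text=True, check=True,
        ).stdout

    if __name__ == "__main__":
        print(sysext_status())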
Sep 12 22:57:52.960479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 22:57:52.961759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 22:57:52.962044 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 22:57:52.962183 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 22:57:52.962388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 22:57:52.962510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 22:57:52.963047 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 22:57:52.963166 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 22:57:52.968875 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 22:57:52.969609 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 22:57:52.992043 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 22:57:52.998286 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 22:57:53.021355 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 22:57:53.038771 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 22:57:53.041187 augenrules[1644]: No rules Sep 12 22:57:53.047091 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 22:57:53.047296 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 22:57:53.122892 systemd-resolved[1614]: Positive Trust Anchors: Sep 12 22:57:53.122903 systemd-resolved[1614]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 22:57:53.122937 systemd-resolved[1614]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 22:57:53.126045 systemd-resolved[1614]: Using system hostname 'ci-4459.0.0-a-49ee7b6560'. Sep 12 22:57:53.126886 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 22:57:53.127039 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 22:57:53.148966 systemd-networkd[1486]: lo: Link UP Sep 12 22:57:53.148973 systemd-networkd[1486]: lo: Gained carrier Sep 12 22:57:53.150225 systemd-networkd[1486]: Enumeration completed Sep 12 22:57:53.150299 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 22:57:53.150435 systemd[1]: Reached target network.target - Network. Sep 12 22:57:53.153544 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:57:53.153555 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 22:57:53.153641 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 22:57:53.156723 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 22:57:53.160728 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 12 22:57:53.164421 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 12 22:57:53.167201 systemd-networkd[1486]: enP30832s1: Link UP Sep 12 22:57:53.167387 systemd-networkd[1486]: eth0: Link UP Sep 12 22:57:53.167438 systemd-networkd[1486]: eth0: Gained carrier Sep 12 22:57:53.167493 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:57:53.167729 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d729901 eth0: Data path switched to VF: enP30832s1 Sep 12 22:57:53.170931 systemd-networkd[1486]: enP30832s1: Gained carrier Sep 12 22:57:53.176752 systemd-networkd[1486]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 12 22:57:53.200079 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 22:57:53.225770 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 22:57:53.427886 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 22:57:53.431004 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 22:57:54.941850 systemd-networkd[1486]: eth0: Gained IPv6LL Sep 12 22:57:54.943855 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 22:57:54.946963 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 22:57:56.603339 ldconfig[1313]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 22:57:56.615929 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 22:57:56.620896 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 22:57:56.642364 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 22:57:56.644271 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 22:57:56.648004 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 22:57:56.651782 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 22:57:56.654772 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 22:57:56.656508 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 22:57:56.659796 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 22:57:56.662755 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 22:57:56.665736 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 22:57:56.665767 systemd[1]: Reached target paths.target - Path Units. Sep 12 22:57:56.666945 systemd[1]: Reached target timers.target - Timer Units. 
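By this point eth0 has switched its data path to the MANA virtual function (enP30832s1) and obtained 10.200.8.39/24 with gateway 10.200.8.1 from 168.63.129.16, as the networkd entries above record. A hedged sketch of inspecting the resulting link state and lease with the standard networkctl tool (the interface name is a parameter; the exact report layout depends on the systemd version):

    import subprocess

    def link_status(ifname: str = "eth0") -> str:
        # 'networkctl status <link>' reports carrier state, addresses and the
        # DHCP lease that systemd-networkd holds for the interface.
        return subprocess.run(
            ["networkctl", "status", ifname],
            capture_output=True, text=True, check=True,
        ).stdout

    if __name__ == "__main__":
        print(link_status("eth0"))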
Sep 12 22:57:56.670376 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 22:57:56.674682 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 22:57:56.679332 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 22:57:56.686896 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 22:57:56.688732 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 22:57:56.698175 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 22:57:56.701961 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 22:57:56.703822 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 22:57:56.707383 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 22:57:56.708911 systemd[1]: Reached target basic.target - Basic System. Sep 12 22:57:56.711785 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 22:57:56.711808 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 22:57:56.713576 systemd[1]: Starting chronyd.service - NTP client/server... Sep 12 22:57:56.715735 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 22:57:56.721341 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 22:57:56.730721 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 22:57:56.733372 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 22:57:56.738305 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 22:57:56.744812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 22:57:56.747712 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 22:57:56.749817 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 22:57:56.751682 jq[1675]: false Sep 12 22:57:56.755777 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Sep 12 22:57:56.757509 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 12 22:57:56.760034 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 12 22:57:56.761345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:57:56.765886 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 22:57:56.771666 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 22:57:56.775849 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 22:57:56.780627 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 22:57:56.785222 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 22:57:56.799869 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 12 22:57:56.804539 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 22:57:56.805184 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 22:57:56.808839 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 22:57:56.813556 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 22:57:56.818880 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 22:57:56.819078 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 22:57:56.830019 KVP[1678]: KVP starting; pid is:1678 Sep 12 22:57:56.835393 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 22:57:56.838598 KVP[1678]: KVP LIC Version: 3.1 Sep 12 22:57:56.838664 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 22:57:56.838735 kernel: hv_utils: KVP IC version 4.0 Sep 12 22:57:56.842093 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 22:57:56.845348 chronyd[1667]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Sep 12 22:57:56.848965 jq[1691]: true Sep 12 22:57:56.850824 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Refreshing passwd entry cache Sep 12 22:57:56.850810 oslogin_cache_refresh[1677]: Refreshing passwd entry cache Sep 12 22:57:56.852213 extend-filesystems[1676]: Found /dev/nvme0n1p6 Sep 12 22:57:56.866862 jq[1706]: true Sep 12 22:57:56.874870 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Failure getting users, quitting Sep 12 22:57:56.874870 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 22:57:56.874870 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Refreshing group entry cache Sep 12 22:57:56.873754 oslogin_cache_refresh[1677]: Failure getting users, quitting Sep 12 22:57:56.873770 oslogin_cache_refresh[1677]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 22:57:56.873811 oslogin_cache_refresh[1677]: Refreshing group entry cache Sep 12 22:57:56.891100 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Failure getting groups, quitting Sep 12 22:57:56.891100 google_oslogin_nss_cache[1677]: oslogin_cache_refresh[1677]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 22:57:56.890911 oslogin_cache_refresh[1677]: Failure getting groups, quitting Sep 12 22:57:56.890920 oslogin_cache_refresh[1677]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 22:57:56.973015 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 22:57:56.973308 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 22:57:56.980838 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 22:57:56.983905 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 12 22:57:56.984359 (ntainerd)[1729]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 22:57:57.028224 chronyd[1667]: Timezone right/UTC failed leap second check, ignoring Sep 12 22:57:57.028579 chronyd[1667]: Loaded seccomp filter (level 2) Sep 12 22:57:57.029953 systemd[1]: Started chronyd.service - NTP client/server. Sep 12 22:57:57.043320 extend-filesystems[1676]: Found /dev/nvme0n1p9 Sep 12 22:57:57.045787 extend-filesystems[1676]: Checking size of /dev/nvme0n1p9 Sep 12 22:57:57.050453 systemd-logind[1688]: New seat seat0. Sep 12 22:57:57.511273 update_engine[1689]: I20250912 22:57:57.052843 1689 main.cc:92] Flatcar Update Engine starting Sep 12 22:57:57.511510 tar[1696]: linux-amd64/LICENSE Sep 12 22:57:57.511510 tar[1696]: linux-amd64/helm Sep 12 22:57:57.057972 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 22:57:57.506887 systemd-logind[1688]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Sep 12 22:57:57.507110 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 22:57:57.535551 bash[1725]: Updated "/home/core/.ssh/authorized_keys" Sep 12 22:57:57.531663 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 22:57:57.539289 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 22:57:57.543414 extend-filesystems[1676]: Old size kept for /dev/nvme0n1p9 Sep 12 22:57:57.548028 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 22:57:57.548224 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 22:57:57.604064 dbus-daemon[1670]: [system] SELinux support is enabled Sep 12 22:57:57.604963 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 22:57:57.611961 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 22:57:57.612505 dbus-daemon[1670]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 22:57:57.611993 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 22:57:57.614786 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 22:57:57.614804 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 22:57:57.626169 update_engine[1689]: I20250912 22:57:57.620236 1689 update_check_scheduler.cc:74] Next update check in 3m50s Sep 12 22:57:57.620981 systemd[1]: Started update-engine.service - Update Engine. Sep 12 22:57:57.629526 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 22:57:57.865602 sshd_keygen[1697]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 22:57:57.910287 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 22:57:57.914881 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 22:57:57.918987 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 12 22:57:57.946166 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 22:57:57.946777 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Sep 12 22:57:57.952877 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 22:57:57.959251 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 12 22:57:58.029474 coreos-metadata[1669]: Sep 12 22:57:58.027 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 22:57:58.033372 coreos-metadata[1669]: Sep 12 22:57:58.033 INFO Fetch successful Sep 12 22:57:58.034845 coreos-metadata[1669]: Sep 12 22:57:58.034 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 12 22:57:58.038647 coreos-metadata[1669]: Sep 12 22:57:58.038 INFO Fetch successful Sep 12 22:57:58.038647 coreos-metadata[1669]: Sep 12 22:57:58.038 INFO Fetching http://168.63.129.16/machine/09fd903a-5b29-4a60-b8a2-64f0fb19d6bb/5ea11856%2D725d%2D41a8%2D9119%2D1a5ffd404a90.%5Fci%2D4459.0.0%2Da%2D49ee7b6560?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 12 22:57:58.039942 coreos-metadata[1669]: Sep 12 22:57:58.039 INFO Fetch successful Sep 12 22:57:58.039942 coreos-metadata[1669]: Sep 12 22:57:58.039 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 12 22:57:58.050904 coreos-metadata[1669]: Sep 12 22:57:58.050 INFO Fetch successful Sep 12 22:57:58.084147 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 22:57:58.092570 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 22:57:58.098542 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 22:57:58.100919 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 22:57:58.103570 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 22:57:58.107880 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 22:57:58.155406 tar[1696]: linux-amd64/README.md Sep 12 22:57:58.172423 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 22:57:58.372477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:57:58.391941 (kubelet)[1806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:57:58.793817 locksmithd[1759]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 22:57:58.915465 kubelet[1806]: E0912 22:57:58.915420 1806 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:57:58.917321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:57:58.917429 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:57:58.918047 systemd[1]: kubelet.service: Consumed 932ms CPU time, 264.4M memory peak. 
Sep 12 22:57:59.682471 containerd[1729]: time="2025-09-12T22:57:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 22:57:59.683045 containerd[1729]: time="2025-09-12T22:57:59.683016306Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 22:57:59.689290 containerd[1729]: time="2025-09-12T22:57:59.689251003Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.368µs" Sep 12 22:57:59.689290 containerd[1729]: time="2025-09-12T22:57:59.689278734Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 22:57:59.689389 containerd[1729]: time="2025-09-12T22:57:59.689297414Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 22:57:59.689458 containerd[1729]: time="2025-09-12T22:57:59.689435463Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 22:57:59.689458 containerd[1729]: time="2025-09-12T22:57:59.689454395Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 22:57:59.689499 containerd[1729]: time="2025-09-12T22:57:59.689479342Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 22:57:59.689545 containerd[1729]: time="2025-09-12T22:57:59.689528061Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 22:57:59.689545 containerd[1729]: time="2025-09-12T22:57:59.689538829Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 22:57:59.689757 containerd[1729]: time="2025-09-12T22:57:59.689736949Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 22:57:59.689757 containerd[1729]: time="2025-09-12T22:57:59.689750401Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 22:57:59.689803 containerd[1729]: time="2025-09-12T22:57:59.689760991Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 22:57:59.689803 containerd[1729]: time="2025-09-12T22:57:59.689769042Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 22:57:59.689847 containerd[1729]: time="2025-09-12T22:57:59.689832653Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 22:57:59.690032 containerd[1729]: time="2025-09-12T22:57:59.690013164Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 22:57:59.690058 containerd[1729]: time="2025-09-12T22:57:59.690037069Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 12 22:57:59.690058 containerd[1729]: time="2025-09-12T22:57:59.690047389Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 22:57:59.690100 containerd[1729]: time="2025-09-12T22:57:59.690080889Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 22:57:59.690320 containerd[1729]: time="2025-09-12T22:57:59.690302875Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 22:57:59.690365 containerd[1729]: time="2025-09-12T22:57:59.690349892Z" level=info msg="metadata content store policy set" policy=shared Sep 12 22:57:59.717524 containerd[1729]: time="2025-09-12T22:57:59.717490364Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 22:57:59.717524 containerd[1729]: time="2025-09-12T22:57:59.717535389Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 22:57:59.717637 containerd[1729]: time="2025-09-12T22:57:59.717549592Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 22:57:59.717637 containerd[1729]: time="2025-09-12T22:57:59.717560992Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 22:57:59.717637 containerd[1729]: time="2025-09-12T22:57:59.717582814Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 22:57:59.717637 containerd[1729]: time="2025-09-12T22:57:59.717595336Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 22:57:59.717637 containerd[1729]: time="2025-09-12T22:57:59.717609225Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 22:57:59.717637 containerd[1729]: time="2025-09-12T22:57:59.717621116Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 22:57:59.717637 containerd[1729]: time="2025-09-12T22:57:59.717633744Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 22:57:59.717795 containerd[1729]: time="2025-09-12T22:57:59.717644207Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 22:57:59.717795 containerd[1729]: time="2025-09-12T22:57:59.717655135Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 22:57:59.717795 containerd[1729]: time="2025-09-12T22:57:59.717671941Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 22:57:59.717852 containerd[1729]: time="2025-09-12T22:57:59.717795656Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 22:57:59.717852 containerd[1729]: time="2025-09-12T22:57:59.717813500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 22:57:59.717852 containerd[1729]: time="2025-09-12T22:57:59.717830799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 22:57:59.717852 containerd[1729]: time="2025-09-12T22:57:59.717841420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Sep 12 22:57:59.717927 containerd[1729]: time="2025-09-12T22:57:59.717851851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 22:57:59.717927 containerd[1729]: time="2025-09-12T22:57:59.717862366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 22:57:59.717927 containerd[1729]: time="2025-09-12T22:57:59.717873530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 22:57:59.717927 containerd[1729]: time="2025-09-12T22:57:59.717883926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 22:57:59.717927 containerd[1729]: time="2025-09-12T22:57:59.717895000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 22:57:59.717927 containerd[1729]: time="2025-09-12T22:57:59.717905473Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 22:57:59.717927 containerd[1729]: time="2025-09-12T22:57:59.717916087Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 22:57:59.718062 containerd[1729]: time="2025-09-12T22:57:59.717976640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 22:57:59.718062 containerd[1729]: time="2025-09-12T22:57:59.717989200Z" level=info msg="Start snapshots syncer" Sep 12 22:57:59.718062 containerd[1729]: time="2025-09-12T22:57:59.718005200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 22:57:59.718269 containerd[1729]: time="2025-09-12T22:57:59.718240165Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 22:57:59.718390 containerd[1729]: time="2025-09-12T22:57:59.718286887Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 22:57:59.718390 containerd[1729]: time="2025-09-12T22:57:59.718349262Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 22:57:59.718447 containerd[1729]: time="2025-09-12T22:57:59.718433602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 22:57:59.718478 containerd[1729]: time="2025-09-12T22:57:59.718456978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 22:57:59.718478 containerd[1729]: time="2025-09-12T22:57:59.718468271Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 22:57:59.718534 containerd[1729]: time="2025-09-12T22:57:59.718480922Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 22:57:59.718534 containerd[1729]: time="2025-09-12T22:57:59.718493330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 22:57:59.718534 containerd[1729]: time="2025-09-12T22:57:59.718503215Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 22:57:59.718534 containerd[1729]: time="2025-09-12T22:57:59.718514566Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 22:57:59.718629 containerd[1729]: time="2025-09-12T22:57:59.718541844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 22:57:59.718629 containerd[1729]: 
time="2025-09-12T22:57:59.718554761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 22:57:59.718629 containerd[1729]: time="2025-09-12T22:57:59.718569836Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 22:57:59.718629 containerd[1729]: time="2025-09-12T22:57:59.718599796Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 22:57:59.718629 containerd[1729]: time="2025-09-12T22:57:59.718613656Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 22:57:59.718629 containerd[1729]: time="2025-09-12T22:57:59.718622827Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 22:57:59.718800 containerd[1729]: time="2025-09-12T22:57:59.718632568Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 22:57:59.718800 containerd[1729]: time="2025-09-12T22:57:59.718640460Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 22:57:59.718800 containerd[1729]: time="2025-09-12T22:57:59.718649989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 22:57:59.718800 containerd[1729]: time="2025-09-12T22:57:59.718659986Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 22:57:59.718800 containerd[1729]: time="2025-09-12T22:57:59.718674832Z" level=info msg="runtime interface created" Sep 12 22:57:59.718800 containerd[1729]: time="2025-09-12T22:57:59.718679945Z" level=info msg="created NRI interface" Sep 12 22:57:59.718800 containerd[1729]: time="2025-09-12T22:57:59.718688111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 22:57:59.718800 containerd[1729]: time="2025-09-12T22:57:59.718699309Z" level=info msg="Connect containerd service" Sep 12 22:57:59.718800 containerd[1729]: time="2025-09-12T22:57:59.718750943Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 22:57:59.720561 containerd[1729]: time="2025-09-12T22:57:59.720261391Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 22:58:00.938543 containerd[1729]: time="2025-09-12T22:58:00.938432223Z" level=info msg="Start subscribing containerd event" Sep 12 22:58:00.938543 containerd[1729]: time="2025-09-12T22:58:00.938497853Z" level=info msg="Start recovering state" Sep 12 22:58:00.938928 containerd[1729]: time="2025-09-12T22:58:00.938584726Z" level=info msg="Start event monitor" Sep 12 22:58:00.938928 containerd[1729]: time="2025-09-12T22:58:00.938598808Z" level=info msg="Start cni network conf syncer for default" Sep 12 22:58:00.938928 containerd[1729]: time="2025-09-12T22:58:00.938606821Z" level=info msg="Start streaming server" Sep 12 22:58:00.938928 containerd[1729]: time="2025-09-12T22:58:00.938616542Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 22:58:00.938928 containerd[1729]: 
time="2025-09-12T22:58:00.938624594Z" level=info msg="runtime interface starting up..." Sep 12 22:58:00.938928 containerd[1729]: time="2025-09-12T22:58:00.938631041Z" level=info msg="starting plugins..." Sep 12 22:58:00.938928 containerd[1729]: time="2025-09-12T22:58:00.938643776Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 22:58:00.939229 containerd[1729]: time="2025-09-12T22:58:00.939121186Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 22:58:00.939229 containerd[1729]: time="2025-09-12T22:58:00.939166629Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 22:58:00.939229 containerd[1729]: time="2025-09-12T22:58:00.939213410Z" level=info msg="containerd successfully booted in 1.257011s" Sep 12 22:58:00.939352 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 22:58:00.941880 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 22:58:00.945082 systemd[1]: Startup finished in 3.276s (kernel) + 10.584s (initrd) + 12.921s (userspace) = 26.782s. Sep 12 22:58:01.526067 login[1795]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 22:58:01.526067 login[1796]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 22:58:01.539207 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 22:58:01.539897 systemd-logind[1688]: New session 2 of user core. Sep 12 22:58:01.542844 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 22:58:01.547605 systemd-logind[1688]: New session 1 of user core. Sep 12 22:58:01.560136 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 22:58:01.563934 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 22:58:01.577404 (systemd)[1842]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 22:58:01.578987 systemd-logind[1688]: New session c1 of user core. Sep 12 22:58:01.694951 systemd[1842]: Queued start job for default target default.target. Sep 12 22:58:01.705371 systemd[1842]: Created slice app.slice - User Application Slice. Sep 12 22:58:01.705401 systemd[1842]: Reached target paths.target - Paths. Sep 12 22:58:01.705429 systemd[1842]: Reached target timers.target - Timers. Sep 12 22:58:01.706291 systemd[1842]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 22:58:01.714082 systemd[1842]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 22:58:01.714131 systemd[1842]: Reached target sockets.target - Sockets. Sep 12 22:58:01.714174 systemd[1842]: Reached target basic.target - Basic System. Sep 12 22:58:01.714237 systemd[1842]: Reached target default.target - Main User Target. Sep 12 22:58:01.714261 systemd[1842]: Startup finished in 130ms. Sep 12 22:58:01.714278 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 22:58:01.715765 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 22:58:01.716389 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 12 22:58:05.421124 waagent[1788]: 2025-09-12T22:58:05.421034Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 12 22:58:05.422857 waagent[1788]: 2025-09-12T22:58:05.422764Z INFO Daemon Daemon OS: flatcar 4459.0.0 Sep 12 22:58:05.424153 waagent[1788]: 2025-09-12T22:58:05.424068Z INFO Daemon Daemon Python: 3.11.13 Sep 12 22:58:05.425457 waagent[1788]: 2025-09-12T22:58:05.425391Z INFO Daemon Daemon Run daemon Sep 12 22:58:05.426594 waagent[1788]: 2025-09-12T22:58:05.426562Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.0.0' Sep 12 22:58:05.428820 waagent[1788]: 2025-09-12T22:58:05.428790Z INFO Daemon Daemon Using waagent for provisioning Sep 12 22:58:05.430487 waagent[1788]: 2025-09-12T22:58:05.430297Z INFO Daemon Daemon Activate resource disk Sep 12 22:58:05.431698 waagent[1788]: 2025-09-12T22:58:05.431614Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 12 22:58:05.434882 waagent[1788]: 2025-09-12T22:58:05.434843Z INFO Daemon Daemon Found device: None Sep 12 22:58:05.435899 waagent[1788]: 2025-09-12T22:58:05.435047Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 12 22:58:05.437881 waagent[1788]: 2025-09-12T22:58:05.436371Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 12 22:58:05.440776 waagent[1788]: 2025-09-12T22:58:05.440726Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 22:58:05.442435 waagent[1788]: 2025-09-12T22:58:05.442083Z INFO Daemon Daemon Running default provisioning handler Sep 12 22:58:05.449203 waagent[1788]: 2025-09-12T22:58:05.449070Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 12 22:58:05.450868 waagent[1788]: 2025-09-12T22:58:05.450331Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 12 22:58:05.450868 waagent[1788]: 2025-09-12T22:58:05.450626Z INFO Daemon Daemon cloud-init is enabled: False Sep 12 22:58:05.451111 waagent[1788]: 2025-09-12T22:58:05.451089Z INFO Daemon Daemon Copying ovf-env.xml Sep 12 22:58:06.333449 waagent[1788]: 2025-09-12T22:58:06.333368Z INFO Daemon Daemon Successfully mounted dvd Sep 12 22:58:06.344995 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 12 22:58:06.347344 waagent[1788]: 2025-09-12T22:58:06.347291Z INFO Daemon Daemon Detect protocol endpoint Sep 12 22:58:06.349759 waagent[1788]: 2025-09-12T22:58:06.347490Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 22:58:06.349759 waagent[1788]: 2025-09-12T22:58:06.347911Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Sep 12 22:58:06.349759 waagent[1788]: 2025-09-12T22:58:06.348155Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 12 22:58:06.349759 waagent[1788]: 2025-09-12T22:58:06.348310Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 12 22:58:06.349759 waagent[1788]: 2025-09-12T22:58:06.348493Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 12 22:58:06.363105 waagent[1788]: 2025-09-12T22:58:06.363071Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 12 22:58:06.364825 waagent[1788]: 2025-09-12T22:58:06.364769Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 12 22:58:06.365833 waagent[1788]: 2025-09-12T22:58:06.364880Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 12 22:58:06.573785 waagent[1788]: 2025-09-12T22:58:06.573686Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 12 22:58:06.574277 waagent[1788]: 2025-09-12T22:58:06.573949Z INFO Daemon Daemon Forcing an update of the goal state. Sep 12 22:58:06.578949 waagent[1788]: 2025-09-12T22:58:06.578908Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 22:58:06.591601 waagent[1788]: 2025-09-12T22:58:06.591533Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 12 22:58:06.595157 waagent[1788]: 2025-09-12T22:58:06.592040Z INFO Daemon Sep 12 22:58:06.595157 waagent[1788]: 2025-09-12T22:58:06.592583Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: dd5c4ede-29b6-4082-a3b1-5a7111ccc6f7 eTag: 16407660066583325074 source: Fabric] Sep 12 22:58:06.595157 waagent[1788]: 2025-09-12T22:58:06.592858Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 12 22:58:06.595157 waagent[1788]: 2025-09-12T22:58:06.593180Z INFO Daemon Sep 12 22:58:06.595157 waagent[1788]: 2025-09-12T22:58:06.593424Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 12 22:58:06.603909 waagent[1788]: 2025-09-12T22:58:06.597089Z INFO Daemon Daemon Downloading artifacts profile blob Sep 12 22:58:06.679378 waagent[1788]: 2025-09-12T22:58:06.679325Z INFO Daemon Downloaded certificate {'thumbprint': 'E5569CF17E751781D78CE1D92D7989683C86F68C', 'hasPrivateKey': True} Sep 12 22:58:06.681970 waagent[1788]: 2025-09-12T22:58:06.681937Z INFO Daemon Fetch goal state completed Sep 12 22:58:06.690237 waagent[1788]: 2025-09-12T22:58:06.690203Z INFO Daemon Daemon Starting provisioning Sep 12 22:58:06.690871 waagent[1788]: 2025-09-12T22:58:06.690551Z INFO Daemon Daemon Handle ovf-env.xml. Sep 12 22:58:06.690871 waagent[1788]: 2025-09-12T22:58:06.690895Z INFO Daemon Daemon Set hostname [ci-4459.0.0-a-49ee7b6560] Sep 12 22:58:06.712106 waagent[1788]: 2025-09-12T22:58:06.712068Z INFO Daemon Daemon Publish hostname [ci-4459.0.0-a-49ee7b6560] Sep 12 22:58:06.718923 waagent[1788]: 2025-09-12T22:58:06.712348Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 12 22:58:06.718923 waagent[1788]: 2025-09-12T22:58:06.712884Z INFO Daemon Daemon Primary interface is [eth0] Sep 12 22:58:06.720620 systemd-networkd[1486]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 22:58:06.720627 systemd-networkd[1486]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 12 22:58:06.720649 systemd-networkd[1486]: eth0: DHCP lease lost Sep 12 22:58:06.721425 waagent[1788]: 2025-09-12T22:58:06.721375Z INFO Daemon Daemon Create user account if not exists Sep 12 22:58:06.722971 waagent[1788]: 2025-09-12T22:58:06.721579Z INFO Daemon Daemon User core already exists, skip useradd Sep 12 22:58:06.722971 waagent[1788]: 2025-09-12T22:58:06.721832Z INFO Daemon Daemon Configure sudoer Sep 12 22:58:06.725493 waagent[1788]: 2025-09-12T22:58:06.725452Z INFO Daemon Daemon Configure sshd Sep 12 22:58:06.743748 systemd-networkd[1486]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 12 22:58:06.911981 waagent[1788]: 2025-09-12T22:58:06.911888Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 12 22:58:06.912231 waagent[1788]: 2025-09-12T22:58:06.912100Z INFO Daemon Daemon Deploy ssh public key. Sep 12 22:58:07.980096 waagent[1788]: 2025-09-12T22:58:07.980052Z INFO Daemon Daemon Provisioning complete Sep 12 22:58:07.988671 waagent[1788]: 2025-09-12T22:58:07.988637Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 12 22:58:07.993044 waagent[1788]: 2025-09-12T22:58:07.988857Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 12 22:58:07.993044 waagent[1788]: 2025-09-12T22:58:07.989119Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 12 22:58:08.094722 waagent[1893]: 2025-09-12T22:58:08.094650Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 12 22:58:08.094985 waagent[1893]: 2025-09-12T22:58:08.094761Z INFO ExtHandler ExtHandler OS: flatcar 4459.0.0 Sep 12 22:58:08.094985 waagent[1893]: 2025-09-12T22:58:08.094805Z INFO ExtHandler ExtHandler Python: 3.11.13 Sep 12 22:58:08.094985 waagent[1893]: 2025-09-12T22:58:08.094848Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 12 22:58:08.102290 waagent[1893]: 2025-09-12T22:58:08.102242Z INFO ExtHandler ExtHandler Distro: flatcar-4459.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 12 22:58:08.102426 waagent[1893]: 2025-09-12T22:58:08.102401Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 22:58:08.102488 waagent[1893]: 2025-09-12T22:58:08.102453Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 22:58:08.111198 waagent[1893]: 2025-09-12T22:58:08.111146Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 22:58:08.116291 waagent[1893]: 2025-09-12T22:58:08.116254Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 12 22:58:08.116618 waagent[1893]: 2025-09-12T22:58:08.116589Z INFO ExtHandler Sep 12 22:58:08.116661 waagent[1893]: 2025-09-12T22:58:08.116641Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2579bb10-1306-433a-ac4e-a2f37cf30fae eTag: 16407660066583325074 source: Fabric] Sep 12 22:58:08.116898 waagent[1893]: 2025-09-12T22:58:08.116869Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 12 22:58:08.117249 waagent[1893]: 2025-09-12T22:58:08.117218Z INFO ExtHandler Sep 12 22:58:08.117288 waagent[1893]: 2025-09-12T22:58:08.117263Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 12 22:58:08.119780 waagent[1893]: 2025-09-12T22:58:08.119750Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 12 22:58:08.972389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 22:58:08.973999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:58:10.719943 waagent[1893]: 2025-09-12T22:58:10.718688Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E5569CF17E751781D78CE1D92D7989683C86F68C', 'hasPrivateKey': True} Sep 12 22:58:10.719943 waagent[1893]: 2025-09-12T22:58:10.719376Z INFO ExtHandler Fetch goal state completed Sep 12 22:58:10.734799 waagent[1893]: 2025-09-12T22:58:10.734740Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.2 1 Jul 2025 (Library: OpenSSL 3.4.2 1 Jul 2025) Sep 12 22:58:10.739256 waagent[1893]: 2025-09-12T22:58:10.739207Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1893 Sep 12 22:58:10.739365 waagent[1893]: 2025-09-12T22:58:10.739341Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 12 22:58:10.739604 waagent[1893]: 2025-09-12T22:58:10.739583Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 12 22:58:10.740698 waagent[1893]: 2025-09-12T22:58:10.740663Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.0.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 12 22:58:10.741038 waagent[1893]: 2025-09-12T22:58:10.741007Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 12 22:58:10.741150 waagent[1893]: 2025-09-12T22:58:10.741125Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 12 22:58:10.741559 waagent[1893]: 2025-09-12T22:58:10.741531Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 12 22:58:10.874753 waagent[1893]: 2025-09-12T22:58:10.873868Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 12 22:58:10.874753 waagent[1893]: 2025-09-12T22:58:10.874041Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 12 22:58:10.880110 waagent[1893]: 2025-09-12T22:58:10.879992Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 12 22:58:10.885121 systemd[1]: Reload requested from client PID 1913 ('systemctl') (unit waagent.service)... Sep 12 22:58:10.885133 systemd[1]: Reloading... Sep 12 22:58:10.953751 zram_generator::config[1953]: No configuration found. Sep 12 22:58:11.145636 systemd[1]: Reloading finished in 260 ms. Sep 12 22:58:11.168730 waagent[1893]: 2025-09-12T22:58:11.167407Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 12 22:58:11.168730 waagent[1893]: 2025-09-12T22:58:11.167535Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 12 22:58:11.412679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 22:58:11.415924 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:58:11.452174 kubelet[2018]: E0912 22:58:11.452124 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:58:11.455051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:58:11.455170 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:58:11.455450 systemd[1]: kubelet.service: Consumed 145ms CPU time, 110M memory peak. Sep 12 22:58:11.547261 waagent[1893]: 2025-09-12T22:58:11.547197Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 12 22:58:11.547519 waagent[1893]: 2025-09-12T22:58:11.547494Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 12 22:58:11.548232 waagent[1893]: 2025-09-12T22:58:11.548199Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 12 22:58:11.548404 waagent[1893]: 2025-09-12T22:58:11.548364Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 22:58:11.548463 waagent[1893]: 2025-09-12T22:58:11.548444Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 22:58:11.548656 waagent[1893]: 2025-09-12T22:58:11.548636Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 12 22:58:11.548931 waagent[1893]: 2025-09-12T22:58:11.548908Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Sep 12 22:58:11.549060 waagent[1893]: 2025-09-12T22:58:11.549038Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 12 22:58:11.549060 waagent[1893]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 12 22:58:11.549060 waagent[1893]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 12 22:58:11.549060 waagent[1893]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 12 22:58:11.549060 waagent[1893]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 12 22:58:11.549060 waagent[1893]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 22:58:11.549060 waagent[1893]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 22:58:11.549549 waagent[1893]: 2025-09-12T22:58:11.549314Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 22:58:11.549549 waagent[1893]: 2025-09-12T22:58:11.549375Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 22:58:11.549549 waagent[1893]: 2025-09-12T22:58:11.549496Z INFO EnvHandler ExtHandler Configure routes Sep 12 22:58:11.549763 waagent[1893]: 2025-09-12T22:58:11.549722Z INFO EnvHandler ExtHandler Gateway:None Sep 12 22:58:11.549859 waagent[1893]: 2025-09-12T22:58:11.549799Z INFO EnvHandler ExtHandler Routes:None Sep 12 22:58:11.550204 waagent[1893]: 2025-09-12T22:58:11.550179Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 12 22:58:11.550311 waagent[1893]: 2025-09-12T22:58:11.550283Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 12 22:58:11.550548 waagent[1893]: 2025-09-12T22:58:11.550498Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 12 22:58:11.550578 waagent[1893]: 2025-09-12T22:58:11.550556Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 12 22:58:11.550777 waagent[1893]: 2025-09-12T22:58:11.550748Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 12 22:58:11.559129 waagent[1893]: 2025-09-12T22:58:11.559101Z INFO ExtHandler ExtHandler Sep 12 22:58:11.559189 waagent[1893]: 2025-09-12T22:58:11.559159Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8daaf23b-6823-4e58-9c3a-a9cfcde0d568 correlation e01dcda7-f1f7-4b15-9f96-7f9f85875988 created: 2025-09-12T22:57:05.680229Z] Sep 12 22:58:11.559440 waagent[1893]: 2025-09-12T22:58:11.559415Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Sep 12 22:58:11.559845 waagent[1893]: 2025-09-12T22:58:11.559822Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Sep 12 22:58:11.584782 waagent[1893]: 2025-09-12T22:58:11.584737Z INFO MonitorHandler ExtHandler Network interfaces: Sep 12 22:58:11.584782 waagent[1893]: Executing ['ip', '-a', '-o', 'link']: Sep 12 22:58:11.584782 waagent[1893]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 12 22:58:11.584782 waagent[1893]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:72:99:01 brd ff:ff:ff:ff:ff:ff\ alias Network Device Sep 12 22:58:11.584782 waagent[1893]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:72:99:01 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Sep 12 22:58:11.584782 waagent[1893]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 12 22:58:11.584782 waagent[1893]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 12 22:58:11.584782 waagent[1893]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 12 22:58:11.584782 waagent[1893]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 12 22:58:11.584782 waagent[1893]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 12 22:58:11.584782 waagent[1893]: 2: eth0 inet6 fe80::7eed:8dff:fe72:9901/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 12 22:58:11.596879 waagent[1893]: 2025-09-12T22:58:11.596833Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 12 22:58:11.596879 waagent[1893]: Try `iptables -h' or 'iptables --help' for more information.) 
Sep 12 22:58:11.597212 waagent[1893]: 2025-09-12T22:58:11.597188Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 7F30A323-8612-4B30-BF67-3704EA41C03D;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 12 22:58:11.643931 waagent[1893]: 2025-09-12T22:58:11.643885Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 12 22:58:11.643931 waagent[1893]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 22:58:11.643931 waagent[1893]: pkts bytes target prot opt in out source destination Sep 12 22:58:11.643931 waagent[1893]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 22:58:11.643931 waagent[1893]: pkts bytes target prot opt in out source destination Sep 12 22:58:11.643931 waagent[1893]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 22:58:11.643931 waagent[1893]: pkts bytes target prot opt in out source destination Sep 12 22:58:11.643931 waagent[1893]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 22:58:11.643931 waagent[1893]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 22:58:11.643931 waagent[1893]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 22:58:11.646744 waagent[1893]: 2025-09-12T22:58:11.646658Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 12 22:58:11.646744 waagent[1893]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 22:58:11.646744 waagent[1893]: pkts bytes target prot opt in out source destination Sep 12 22:58:11.646744 waagent[1893]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 22:58:11.646744 waagent[1893]: pkts bytes target prot opt in out source destination Sep 12 22:58:11.646744 waagent[1893]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 22:58:11.646744 waagent[1893]: pkts bytes target prot opt in out source destination Sep 12 22:58:11.646744 waagent[1893]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 22:58:11.646744 waagent[1893]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 22:58:11.646744 waagent[1893]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 22:58:20.819261 chronyd[1667]: Selected source PHC0 Sep 12 22:58:21.575906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 22:58:21.577276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:58:22.328305 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 22:58:22.329452 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:47926.service - OpenSSH per-connection server daemon (10.200.16.10:47926). Sep 12 22:58:23.385799 sshd[2059]: Accepted publickey for core from 10.200.16.10 port 47926 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 22:58:23.386978 sshd-session[2059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:58:23.392165 systemd-logind[1688]: New session 3 of user core. Sep 12 22:58:23.397889 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 22:58:23.932348 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:47934.service - OpenSSH per-connection server daemon (10.200.16.10:47934). 
Sep 12 22:58:24.566409 sshd[2065]: Accepted publickey for core from 10.200.16.10 port 47934 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 22:58:24.567495 sshd-session[2065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:58:24.571642 systemd-logind[1688]: New session 4 of user core. Sep 12 22:58:24.580848 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 22:58:25.007344 sshd[2068]: Connection closed by 10.200.16.10 port 47934 Sep 12 22:58:25.007831 sshd-session[2065]: pam_unix(sshd:session): session closed for user core Sep 12 22:58:25.010313 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:47934.service: Deactivated successfully. Sep 12 22:58:25.011889 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 22:58:25.013450 systemd-logind[1688]: Session 4 logged out. Waiting for processes to exit. Sep 12 22:58:25.014199 systemd-logind[1688]: Removed session 4. Sep 12 22:58:25.123214 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:47948.service - OpenSSH per-connection server daemon (10.200.16.10:47948). Sep 12 22:58:25.749624 sshd[2074]: Accepted publickey for core from 10.200.16.10 port 47948 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 22:58:25.750744 sshd-session[2074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:58:25.755035 systemd-logind[1688]: New session 5 of user core. Sep 12 22:58:25.760857 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 22:58:26.188488 sshd[2077]: Connection closed by 10.200.16.10 port 47948 Sep 12 22:58:26.189045 sshd-session[2074]: pam_unix(sshd:session): session closed for user core Sep 12 22:58:26.192359 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:47948.service: Deactivated successfully. Sep 12 22:58:26.193778 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 22:58:26.194459 systemd-logind[1688]: Session 5 logged out. Waiting for processes to exit. Sep 12 22:58:26.195834 systemd-logind[1688]: Removed session 5. Sep 12 22:58:26.307353 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:47962.service - OpenSSH per-connection server daemon (10.200.16.10:47962). Sep 12 22:58:26.388685 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:58:26.391780 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:58:26.430191 kubelet[2091]: E0912 22:58:26.430150 2091 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:58:26.431869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:58:26.431993 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:58:26.432326 systemd[1]: kubelet.service: Consumed 126ms CPU time, 109.3M memory peak. Sep 12 22:58:26.935019 sshd[2083]: Accepted publickey for core from 10.200.16.10 port 47962 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 22:58:26.936131 sshd-session[2083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:58:26.940367 systemd-logind[1688]: New session 6 of user core. 
Sep 12 22:58:26.946843 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 22:58:27.374998 sshd[2098]: Connection closed by 10.200.16.10 port 47962 Sep 12 22:58:27.375479 sshd-session[2083]: pam_unix(sshd:session): session closed for user core Sep 12 22:58:27.378122 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:47962.service: Deactivated successfully. Sep 12 22:58:27.379531 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 22:58:27.380850 systemd-logind[1688]: Session 6 logged out. Waiting for processes to exit. Sep 12 22:58:27.382162 systemd-logind[1688]: Removed session 6. Sep 12 22:58:27.485149 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:47964.service - OpenSSH per-connection server daemon (10.200.16.10:47964). Sep 12 22:58:28.110234 sshd[2104]: Accepted publickey for core from 10.200.16.10 port 47964 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 22:58:28.111327 sshd-session[2104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:58:28.115446 systemd-logind[1688]: New session 7 of user core. Sep 12 22:58:28.121839 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 22:58:28.556362 sudo[2108]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 22:58:28.556580 sudo[2108]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:58:28.572424 sudo[2108]: pam_unix(sudo:session): session closed for user root Sep 12 22:58:28.672370 sshd[2107]: Connection closed by 10.200.16.10 port 47964 Sep 12 22:58:28.672982 sshd-session[2104]: pam_unix(sshd:session): session closed for user core Sep 12 22:58:28.675934 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:47964.service: Deactivated successfully. Sep 12 22:58:28.677318 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 22:58:28.679168 systemd-logind[1688]: Session 7 logged out. Waiting for processes to exit. Sep 12 22:58:28.679962 systemd-logind[1688]: Removed session 7. Sep 12 22:58:28.794489 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:47974.service - OpenSSH per-connection server daemon (10.200.16.10:47974). Sep 12 22:58:29.419594 sshd[2114]: Accepted publickey for core from 10.200.16.10 port 47974 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 22:58:29.420752 sshd-session[2114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:58:29.425207 systemd-logind[1688]: New session 8 of user core. Sep 12 22:58:29.430851 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 22:58:29.761324 sudo[2119]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 22:58:29.761550 sudo[2119]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:58:29.767257 sudo[2119]: pam_unix(sudo:session): session closed for user root Sep 12 22:58:29.771157 sudo[2118]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 22:58:29.771376 sudo[2118]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:58:29.779006 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 22:58:29.807572 augenrules[2141]: No rules Sep 12 22:58:29.808513 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 22:58:29.808672 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Sep 12 22:58:29.809528 sudo[2118]: pam_unix(sudo:session): session closed for user root Sep 12 22:58:29.909430 sshd[2117]: Connection closed by 10.200.16.10 port 47974 Sep 12 22:58:29.909811 sshd-session[2114]: pam_unix(sshd:session): session closed for user core Sep 12 22:58:29.912268 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:47974.service: Deactivated successfully. Sep 12 22:58:29.913637 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 22:58:29.914834 systemd-logind[1688]: Session 8 logged out. Waiting for processes to exit. Sep 12 22:58:29.915838 systemd-logind[1688]: Removed session 8. Sep 12 22:58:30.022069 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:57434.service - OpenSSH per-connection server daemon (10.200.16.10:57434). Sep 12 22:58:30.647013 sshd[2150]: Accepted publickey for core from 10.200.16.10 port 57434 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 22:58:30.648066 sshd-session[2150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 22:58:30.652508 systemd-logind[1688]: New session 9 of user core. Sep 12 22:58:30.658846 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 22:58:30.987206 sudo[2154]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 22:58:30.987440 sudo[2154]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 22:58:33.552014 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 22:58:33.567996 (dockerd)[2172]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 22:58:35.341695 dockerd[2172]: time="2025-09-12T22:58:35.341635195Z" level=info msg="Starting up" Sep 12 22:58:35.342675 dockerd[2172]: time="2025-09-12T22:58:35.342647425Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 22:58:35.351541 dockerd[2172]: time="2025-09-12T22:58:35.351497715Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 22:58:35.379777 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport479584845-merged.mount: Deactivated successfully. Sep 12 22:58:36.575998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 22:58:36.577595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:58:37.807380 dockerd[2172]: time="2025-09-12T22:58:37.807323984Z" level=info msg="Loading containers: start." Sep 12 22:58:37.871795 kernel: Initializing XFRM netlink socket Sep 12 22:58:37.945449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:58:37.949378 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:58:37.988285 kubelet[2253]: E0912 22:58:37.988224 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:58:37.989865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:58:37.989964 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 22:58:37.990790 systemd[1]: kubelet.service: Consumed 134ms CPU time, 110.1M memory peak. Sep 12 22:58:38.086244 systemd-networkd[1486]: docker0: Link UP Sep 12 22:58:38.099915 dockerd[2172]: time="2025-09-12T22:58:38.099858657Z" level=info msg="Loading containers: done." Sep 12 22:58:38.127034 dockerd[2172]: time="2025-09-12T22:58:38.127004733Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 22:58:38.127149 dockerd[2172]: time="2025-09-12T22:58:38.127083938Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 22:58:38.127182 dockerd[2172]: time="2025-09-12T22:58:38.127151838Z" level=info msg="Initializing buildkit" Sep 12 22:58:38.178459 dockerd[2172]: time="2025-09-12T22:58:38.178413744Z" level=info msg="Completed buildkit initialization" Sep 12 22:58:38.181413 dockerd[2172]: time="2025-09-12T22:58:38.181381855Z" level=info msg="Daemon has completed initialization" Sep 12 22:58:38.181921 dockerd[2172]: time="2025-09-12T22:58:38.181840989Z" level=info msg="API listen on /run/docker.sock" Sep 12 22:58:38.181569 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 22:58:39.762737 containerd[1729]: time="2025-09-12T22:58:39.762674086Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 22:58:40.494725 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Sep 12 22:58:40.626461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975119755.mount: Deactivated successfully. Sep 12 22:58:42.864237 containerd[1729]: time="2025-09-12T22:58:42.864183759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:58:42.866479 containerd[1729]: time="2025-09-12T22:58:42.866357248Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924" Sep 12 22:58:42.869233 containerd[1729]: time="2025-09-12T22:58:42.869207525Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:58:42.873405 containerd[1729]: time="2025-09-12T22:58:42.873359919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:58:42.874258 containerd[1729]: time="2025-09-12T22:58:42.874078738Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.111343837s" Sep 12 22:58:42.874258 containerd[1729]: time="2025-09-12T22:58:42.874112223Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 12 22:58:42.874778 containerd[1729]: time="2025-09-12T22:58:42.874746570Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 22:58:43.165917 
update_engine[1689]: I20250912 22:58:43.165766 1689 update_attempter.cc:509] Updating boot flags... Sep 12 22:58:46.342943 containerd[1729]: time="2025-09-12T22:58:46.342895135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:58:46.346675 containerd[1729]: time="2025-09-12T22:58:46.346639908Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035" Sep 12 22:58:46.349309 containerd[1729]: time="2025-09-12T22:58:46.349269109Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:58:46.353848 containerd[1729]: time="2025-09-12T22:58:46.353128960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:58:46.353848 containerd[1729]: time="2025-09-12T22:58:46.353725962Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 3.478853335s" Sep 12 22:58:46.353848 containerd[1729]: time="2025-09-12T22:58:46.353753485Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 12 22:58:46.354408 containerd[1729]: time="2025-09-12T22:58:46.354375405Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 22:58:48.075834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 12 22:58:48.077302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:58:53.677773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:58:53.685950 (kubelet)[2489]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:58:53.724913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:58:53.859617 kubelet[2489]: E0912 22:58:53.723930 2489 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:58:53.725006 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:58:53.725271 systemd[1]: kubelet.service: Consumed 134ms CPU time, 110.7M memory peak. 
Sep 12 22:58:58.587150 containerd[1729]: time="2025-09-12T22:58:58.587097711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:58:58.590720 containerd[1729]: time="2025-09-12T22:58:58.590633546Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297" Sep 12 22:58:58.593599 containerd[1729]: time="2025-09-12T22:58:58.593560877Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:58:58.598434 containerd[1729]: time="2025-09-12T22:58:58.598388192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:58:58.599167 containerd[1729]: time="2025-09-12T22:58:58.599048751Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 12.244643168s" Sep 12 22:58:58.599167 containerd[1729]: time="2025-09-12T22:58:58.599080490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 12 22:58:58.599813 containerd[1729]: time="2025-09-12T22:58:58.599789529Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 22:59:00.380449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899655963.mount: Deactivated successfully. 
Sep 12 22:59:00.749066 containerd[1729]: time="2025-09-12T22:59:00.748965902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:00.751909 containerd[1729]: time="2025-09-12T22:59:00.751868413Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214" Sep 12 22:59:00.754317 containerd[1729]: time="2025-09-12T22:59:00.754277513Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:00.757667 containerd[1729]: time="2025-09-12T22:59:00.757112328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:00.757667 containerd[1729]: time="2025-09-12T22:59:00.757459669Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.157644467s" Sep 12 22:59:00.757667 containerd[1729]: time="2025-09-12T22:59:00.757484130Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 12 22:59:00.758308 containerd[1729]: time="2025-09-12T22:59:00.758272312Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 22:59:01.296636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3276399907.mount: Deactivated successfully. 
Sep 12 22:59:02.245583 containerd[1729]: time="2025-09-12T22:59:02.245534403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:02.247759 containerd[1729]: time="2025-09-12T22:59:02.247733760Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 12 22:59:02.250565 containerd[1729]: time="2025-09-12T22:59:02.250538331Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:02.256727 containerd[1729]: time="2025-09-12T22:59:02.256385482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:02.257556 containerd[1729]: time="2025-09-12T22:59:02.257066300Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.49876101s" Sep 12 22:59:02.257556 containerd[1729]: time="2025-09-12T22:59:02.257097459Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 22:59:02.257880 containerd[1729]: time="2025-09-12T22:59:02.257858270Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 22:59:02.848755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1306415153.mount: Deactivated successfully. 
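containerd reports a wall-clock duration for each completed pull in the entries above (for example "... in 3.111343837s" for the kube-apiserver image), and the remaining images below are logged the same way. When auditing how much of node bring-up is spent pulling images, those figures can be tabulated straight from a saved journal dump. The sketch below is one illustrative way to do that with the Python standard library; the input path is a placeholder, and the regular expression assumes the dump preserves containerd's escaped quotes exactly as they appear in this log.

    #!/usr/bin/env python3
    """Summarize containerd image-pull durations from a saved journal dump."""
    import re
    import sys

    # containerd logs each completed pull as, for example:
    #   msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" ... in 2.823758035s"
    # Durations show up in seconds or milliseconds.
    PULL_RE = re.compile(r'Pulled image \\"([^"\\]+)\\".*? in ([0-9.]+)(ms|s)\b')

    def main(path: str) -> None:
        pulls = []
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                for image, value, unit in PULL_RE.findall(line):
                    seconds = float(value) / 1000.0 if unit == "ms" else float(value)
                    pulls.append((seconds, image))
        # Longest pulls first, then a total for the whole bring-up.
        for seconds, image in sorted(pulls, reverse=True):
            print(f"{seconds:10.3f}s  {image}")
        if pulls:
            print(f"{sum(s for s, _ in pulls):10.3f}s  total across {len(pulls)} pulls")

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "/dev/stdin")

Run against this excerpt it would rank the kube-scheduler pull (about 12.2 s) well above the pause image (about 0.6 s), which is the kind of spread worth knowing about when tuning image pre-loading.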
Sep 12 22:59:02.865550 containerd[1729]: time="2025-09-12T22:59:02.865508968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:59:02.868406 containerd[1729]: time="2025-09-12T22:59:02.868374521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 12 22:59:02.873728 containerd[1729]: time="2025-09-12T22:59:02.873670307Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:59:02.879887 containerd[1729]: time="2025-09-12T22:59:02.879844042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 22:59:02.880386 containerd[1729]: time="2025-09-12T22:59:02.880263647Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 622.378098ms" Sep 12 22:59:02.880386 containerd[1729]: time="2025-09-12T22:59:02.880293346Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 22:59:02.880828 containerd[1729]: time="2025-09-12T22:59:02.880807255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 22:59:03.467275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921736599.mount: Deactivated successfully. Sep 12 22:59:03.825840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Sep 12 22:59:03.827664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:59:04.685733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:59:04.688835 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 22:59:04.722354 kubelet[2600]: E0912 22:59:04.722320 2600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 22:59:04.723892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 22:59:04.724024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 22:59:04.724355 systemd[1]: kubelet.service: Consumed 135ms CPU time, 108.3M memory peak. 
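All three kubelet start attempts above (22:58:37, 22:58:53 and 22:59:04) fail identically: run.go exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet, and systemd schedules another restart. Nothing is wrong with the kubelet binary; the unit is simply racing whatever provisioning step writes that file (on this node presumably the install.sh invoked at 22:58:30). A quick check along the lines of the sketch below, which uses only the path named in the error and is otherwise illustrative, is enough to tell this benign loop apart from a real startup failure.

    #!/usr/bin/env python3
    """Triage helper: is a kubelet restart loop just the missing config file?"""
    from pathlib import Path

    # Path taken from the kubelet error above; everything else is illustrative.
    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    def main() -> None:
        if not KUBELET_CONFIG.exists():
            print(f"{KUBELET_CONFIG} is missing; kubelet will keep exiting with status=1 "
                  "until provisioning writes it, and systemd will keep restarting the unit.")
        elif KUBELET_CONFIG.stat().st_size == 0:
            print(f"{KUBELET_CONFIG} exists but is empty, so loading it will still fail.")
        else:
            print(f"{KUBELET_CONFIG} is present ({KUBELET_CONFIG.stat().st_size} bytes); "
                  "look past the config-load message for the real failure.")

    if __name__ == "__main__":
        main()

In this log the loop resolves on its own: after the reload at 22:59:07 the next kubelet start gets past config loading, which indicates the file had been written by then.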
Sep 12 22:59:05.694823 containerd[1729]: time="2025-09-12T22:59:05.694778909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:05.697195 containerd[1729]: time="2025-09-12T22:59:05.697048317Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Sep 12 22:59:05.699864 containerd[1729]: time="2025-09-12T22:59:05.699840300Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:05.703690 containerd[1729]: time="2025-09-12T22:59:05.703661269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:05.704616 containerd[1729]: time="2025-09-12T22:59:05.704587552Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.823758035s" Sep 12 22:59:05.704662 containerd[1729]: time="2025-09-12T22:59:05.704616913Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 12 22:59:07.884206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:59:07.884367 systemd[1]: kubelet.service: Consumed 135ms CPU time, 108.3M memory peak. Sep 12 22:59:07.886487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:59:07.907421 systemd[1]: Reload requested from client PID 2660 ('systemctl') (unit session-9.scope)... Sep 12 22:59:07.907434 systemd[1]: Reloading... Sep 12 22:59:08.003725 zram_generator::config[2706]: No configuration found. Sep 12 22:59:08.246511 systemd[1]: Reloading finished in 338 ms. Sep 12 22:59:08.283000 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 22:59:08.283091 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 22:59:08.283403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:59:08.283457 systemd[1]: kubelet.service: Consumed 82ms CPU time, 83.3M memory peak. Sep 12 22:59:08.284966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:59:08.791781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:59:08.795832 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:59:08.832546 kubelet[2773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:59:08.832765 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 22:59:08.832765 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:59:08.832955 kubelet[2773]: I0912 22:59:08.832930 2773 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:59:09.069515 kubelet[2773]: I0912 22:59:09.069424 2773 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 22:59:09.069515 kubelet[2773]: I0912 22:59:09.069447 2773 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:59:09.069942 kubelet[2773]: I0912 22:59:09.069921 2773 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 22:59:09.133522 kubelet[2773]: E0912 22:59:09.133481 2773 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:59:09.134198 kubelet[2773]: I0912 22:59:09.134102 2773 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:59:09.145158 kubelet[2773]: I0912 22:59:09.145140 2773 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:59:09.147898 kubelet[2773]: I0912 22:59:09.147877 2773 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 22:59:09.148120 kubelet[2773]: I0912 22:59:09.148090 2773 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:59:09.148267 kubelet[2773]: I0912 22:59:09.148119 2773 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-a-49ee7b6560","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 22:59:09.148391 kubelet[2773]: I0912 
22:59:09.148273 2773 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:59:09.148391 kubelet[2773]: I0912 22:59:09.148282 2773 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 22:59:09.148391 kubelet[2773]: I0912 22:59:09.148381 2773 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:59:09.151216 kubelet[2773]: I0912 22:59:09.151200 2773 kubelet.go:446] "Attempting to sync node with API server" Sep 12 22:59:09.151284 kubelet[2773]: I0912 22:59:09.151226 2773 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:59:09.151284 kubelet[2773]: I0912 22:59:09.151250 2773 kubelet.go:352] "Adding apiserver pod source" Sep 12 22:59:09.151284 kubelet[2773]: I0912 22:59:09.151261 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:59:09.163960 kubelet[2773]: W0912 22:59:09.163923 2773 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-a-49ee7b6560&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Sep 12 22:59:09.164032 kubelet[2773]: E0912 22:59:09.163980 2773 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.0.0-a-49ee7b6560&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:59:09.164086 kubelet[2773]: W0912 22:59:09.164059 2773 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Sep 12 22:59:09.164115 kubelet[2773]: E0912 22:59:09.164095 2773 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:59:09.164340 kubelet[2773]: I0912 22:59:09.164328 2773 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:59:09.164821 kubelet[2773]: I0912 22:59:09.164811 2773 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 22:59:09.164911 kubelet[2773]: W0912 22:59:09.164905 2773 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
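The container_manager_linux entry above dumps the kubelet's effective NodeConfig as one long JSON blob, which makes details such as the hard eviction thresholds hard to scan. Because the blob is plain JSON appended after nodeConfig=, it can be pulled out of a saved journal line and printed in a readable form; the sketch below is an illustrative stdlib-only way to list the thresholds, assuming the journal entry sits on a single line that ends right after the closing brace, as it would in the raw journal.

    #!/usr/bin/env python3
    """Pretty-print the HardEvictionThresholds from a kubelet NodeConfig log line."""
    import json
    import sys

    def thresholds(journal_line: str) -> None:
        # The kubelet logs the whole NodeConfig struct as JSON after "nodeConfig=".
        blob = journal_line.split("nodeConfig=", 1)[1].strip()
        cfg = json.loads(blob)
        for t in cfg["HardEvictionThresholds"]:
            value = t["Value"]
            # Each threshold carries either an absolute Quantity ("100Mi") or a Percentage (0.1).
            amount = value.get("Quantity") or f'{value["Percentage"]:.0%}'
            print(f'{t["Signal"]:<22} {t["Operator"]:<9} {amount}')

    if __name__ == "__main__":
        thresholds(sys.stdin.readline())

Fed the nodeConfig line above, this prints memory.available LessThan 100Mi, nodefs.available LessThan 10%, nodefs.inodesFree LessThan 5%, imagefs.available LessThan 15% and imagefs.inodesFree LessThan 5%, which are the defaults the eviction manager later starts its control loop with.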
Sep 12 22:59:09.167466 kubelet[2773]: I0912 22:59:09.167444 2773 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 22:59:09.167542 kubelet[2773]: I0912 22:59:09.167479 2773 server.go:1287] "Started kubelet" Sep 12 22:59:09.167631 kubelet[2773]: I0912 22:59:09.167596 2773 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 22:59:09.168544 kubelet[2773]: I0912 22:59:09.168464 2773 server.go:479] "Adding debug handlers to kubelet server" Sep 12 22:59:09.171878 kubelet[2773]: I0912 22:59:09.171043 2773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:59:09.171878 kubelet[2773]: I0912 22:59:09.171184 2773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:59:09.171878 kubelet[2773]: I0912 22:59:09.171366 2773 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:59:09.173459 kubelet[2773]: E0912 22:59:09.172268 2773 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.0.0-a-49ee7b6560.1864ab29777459fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.0.0-a-49ee7b6560,UID:ci-4459.0.0-a-49ee7b6560,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.0.0-a-49ee7b6560,},FirstTimestamp:2025-09-12 22:59:09.167458814 +0000 UTC m=+0.367790128,LastTimestamp:2025-09-12 22:59:09.167458814 +0000 UTC m=+0.367790128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.0.0-a-49ee7b6560,}" Sep 12 22:59:09.176142 kubelet[2773]: E0912 22:59:09.176125 2773 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 22:59:09.176213 kubelet[2773]: I0912 22:59:09.176194 2773 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 22:59:09.176352 kubelet[2773]: I0912 22:59:09.176341 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:59:09.176776 kubelet[2773]: E0912 22:59:09.176762 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-a-49ee7b6560\" not found" Sep 12 22:59:09.177780 kubelet[2773]: I0912 22:59:09.177760 2773 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 22:59:09.177848 kubelet[2773]: I0912 22:59:09.177810 2773 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:59:09.179730 kubelet[2773]: W0912 22:59:09.179382 2773 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Sep 12 22:59:09.179730 kubelet[2773]: E0912 22:59:09.179445 2773 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:59:09.179730 kubelet[2773]: E0912 22:59:09.179520 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-a-49ee7b6560?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Sep 12 22:59:09.179911 kubelet[2773]: I0912 22:59:09.179896 2773 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:59:09.181275 kubelet[2773]: I0912 22:59:09.181262 2773 factory.go:221] Registration of the containerd container factory successfully Sep 12 22:59:09.181353 kubelet[2773]: I0912 22:59:09.181347 2773 factory.go:221] Registration of the systemd container factory successfully Sep 12 22:59:09.202220 kubelet[2773]: I0912 22:59:09.202201 2773 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 22:59:09.202220 kubelet[2773]: I0912 22:59:09.202215 2773 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 22:59:09.202220 kubelet[2773]: I0912 22:59:09.202230 2773 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:59:09.213386 kubelet[2773]: I0912 22:59:09.213266 2773 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 22:59:09.214774 kubelet[2773]: I0912 22:59:09.214759 2773 policy_none.go:49] "None policy: Start" Sep 12 22:59:09.215038 kubelet[2773]: I0912 22:59:09.214850 2773 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 22:59:09.215038 kubelet[2773]: I0912 22:59:09.214863 2773 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:59:09.216948 kubelet[2773]: I0912 22:59:09.216932 2773 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 22:59:09.217016 kubelet[2773]: I0912 22:59:09.217011 2773 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 22:59:09.217053 kubelet[2773]: I0912 22:59:09.217048 2773 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 22:59:09.217082 kubelet[2773]: I0912 22:59:09.217078 2773 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 22:59:09.217135 kubelet[2773]: E0912 22:59:09.217127 2773 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:59:09.219977 kubelet[2773]: W0912 22:59:09.219954 2773 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Sep 12 22:59:09.223054 kubelet[2773]: E0912 22:59:09.219987 2773 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" Sep 12 22:59:09.229950 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 22:59:09.239746 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 22:59:09.242636 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 22:59:09.260356 kubelet[2773]: I0912 22:59:09.260321 2773 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 22:59:09.260469 kubelet[2773]: I0912 22:59:09.260459 2773 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:59:09.260591 kubelet[2773]: I0912 22:59:09.260473 2773 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:59:09.260637 kubelet[2773]: I0912 22:59:09.260627 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:59:09.262426 kubelet[2773]: E0912 22:59:09.262354 2773 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 22:59:09.262426 kubelet[2773]: E0912 22:59:09.262396 2773 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.0.0-a-49ee7b6560\" not found" Sep 12 22:59:09.327199 systemd[1]: Created slice kubepods-burstable-pod9c91909bf2c0bdbaa7b510848b0f60f7.slice - libcontainer container kubepods-burstable-pod9c91909bf2c0bdbaa7b510848b0f60f7.slice. Sep 12 22:59:09.337303 kubelet[2773]: E0912 22:59:09.337284 2773 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-a-49ee7b6560\" not found" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.340098 systemd[1]: Created slice kubepods-burstable-pod16b96d1f0f2b4a6152f3e3c61621a46f.slice - libcontainer container kubepods-burstable-pod16b96d1f0f2b4a6152f3e3c61621a46f.slice. 
Sep 12 22:59:09.342052 kubelet[2773]: E0912 22:59:09.342033 2773 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-a-49ee7b6560\" not found" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.343903 systemd[1]: Created slice kubepods-burstable-pod2ebf390b4738ffd9e2138b2542439fc6.slice - libcontainer container kubepods-burstable-pod2ebf390b4738ffd9e2138b2542439fc6.slice. Sep 12 22:59:09.345306 kubelet[2773]: E0912 22:59:09.345283 2773 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-a-49ee7b6560\" not found" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.362261 kubelet[2773]: I0912 22:59:09.362232 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.362520 kubelet[2773]: E0912 22:59:09.362495 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.380106 kubelet[2773]: E0912 22:59:09.380072 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-a-49ee7b6560?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Sep 12 22:59:09.479656 kubelet[2773]: I0912 22:59:09.479617 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ebf390b4738ffd9e2138b2542439fc6-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" (UID: \"2ebf390b4738ffd9e2138b2542439fc6\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.479656 kubelet[2773]: I0912 22:59:09.479661 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ebf390b4738ffd9e2138b2542439fc6-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" (UID: \"2ebf390b4738ffd9e2138b2542439fc6\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.479839 kubelet[2773]: I0912 22:59:09.479682 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ebf390b4738ffd9e2138b2542439fc6-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" (UID: \"2ebf390b4738ffd9e2138b2542439fc6\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.479839 kubelet[2773]: I0912 22:59:09.479697 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c91909bf2c0bdbaa7b510848b0f60f7-k8s-certs\") pod \"kube-apiserver-ci-4459.0.0-a-49ee7b6560\" (UID: \"9c91909bf2c0bdbaa7b510848b0f60f7\") " pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.479839 kubelet[2773]: I0912 22:59:09.479728 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c91909bf2c0bdbaa7b510848b0f60f7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-a-49ee7b6560\" (UID: \"9c91909bf2c0bdbaa7b510848b0f60f7\") " 
pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.479839 kubelet[2773]: I0912 22:59:09.479745 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2ebf390b4738ffd9e2138b2542439fc6-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" (UID: \"2ebf390b4738ffd9e2138b2542439fc6\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.479839 kubelet[2773]: I0912 22:59:09.479763 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ebf390b4738ffd9e2138b2542439fc6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" (UID: \"2ebf390b4738ffd9e2138b2542439fc6\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.479952 kubelet[2773]: I0912 22:59:09.479783 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16b96d1f0f2b4a6152f3e3c61621a46f-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-a-49ee7b6560\" (UID: \"16b96d1f0f2b4a6152f3e3c61621a46f\") " pod="kube-system/kube-scheduler-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.479952 kubelet[2773]: I0912 22:59:09.479807 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c91909bf2c0bdbaa7b510848b0f60f7-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-a-49ee7b6560\" (UID: \"9c91909bf2c0bdbaa7b510848b0f60f7\") " pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.563845 kubelet[2773]: I0912 22:59:09.563821 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.564129 kubelet[2773]: E0912 22:59:09.564110 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.638399 containerd[1729]: time="2025-09-12T22:59:09.638331218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-a-49ee7b6560,Uid:9c91909bf2c0bdbaa7b510848b0f60f7,Namespace:kube-system,Attempt:0,}" Sep 12 22:59:09.642771 containerd[1729]: time="2025-09-12T22:59:09.642741154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-a-49ee7b6560,Uid:16b96d1f0f2b4a6152f3e3c61621a46f,Namespace:kube-system,Attempt:0,}" Sep 12 22:59:09.646449 containerd[1729]: time="2025-09-12T22:59:09.646420671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-a-49ee7b6560,Uid:2ebf390b4738ffd9e2138b2542439fc6,Namespace:kube-system,Attempt:0,}" Sep 12 22:59:09.727276 containerd[1729]: time="2025-09-12T22:59:09.727205839Z" level=info msg="connecting to shim c100cdfe9c945152e73fc3f29bfcd5c671ec081c909919e372360e278163ee41" address="unix:///run/containerd/s/ffebf847291f3f200363ad9e12a0d0de807ec59f485a34487a5cee9294f99b8a" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:59:09.743263 containerd[1729]: time="2025-09-12T22:59:09.742845805Z" level=info msg="connecting to shim 41b98b20febb9edb4ae06ab83a88b3875b66c7e37f40f30511d74c46615d4c06" 
address="unix:///run/containerd/s/52c1679ba65555cd6b98330f7dd6cac65cb15d708e3c046b6251e8cf321233f5" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:59:09.747135 containerd[1729]: time="2025-09-12T22:59:09.747100185Z" level=info msg="connecting to shim e8ffefe43514bd28d98b9fd5ce9059a00c86e5f1e9e47e94372a5ef43ab9b54e" address="unix:///run/containerd/s/e86b3b9ee79163be2d8639a466fa4409b099f2bb0f56f711bcc48edb0fe7e887" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:59:09.768635 systemd[1]: Started cri-containerd-c100cdfe9c945152e73fc3f29bfcd5c671ec081c909919e372360e278163ee41.scope - libcontainer container c100cdfe9c945152e73fc3f29bfcd5c671ec081c909919e372360e278163ee41. Sep 12 22:59:09.780477 kubelet[2773]: E0912 22:59:09.780449 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.0.0-a-49ee7b6560?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Sep 12 22:59:09.781830 systemd[1]: Started cri-containerd-e8ffefe43514bd28d98b9fd5ce9059a00c86e5f1e9e47e94372a5ef43ab9b54e.scope - libcontainer container e8ffefe43514bd28d98b9fd5ce9059a00c86e5f1e9e47e94372a5ef43ab9b54e. Sep 12 22:59:09.786137 systemd[1]: Started cri-containerd-41b98b20febb9edb4ae06ab83a88b3875b66c7e37f40f30511d74c46615d4c06.scope - libcontainer container 41b98b20febb9edb4ae06ab83a88b3875b66c7e37f40f30511d74c46615d4c06. Sep 12 22:59:09.847148 containerd[1729]: time="2025-09-12T22:59:09.847116297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.0.0-a-49ee7b6560,Uid:9c91909bf2c0bdbaa7b510848b0f60f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"c100cdfe9c945152e73fc3f29bfcd5c671ec081c909919e372360e278163ee41\"" Sep 12 22:59:09.851476 containerd[1729]: time="2025-09-12T22:59:09.851448262Z" level=info msg="CreateContainer within sandbox \"c100cdfe9c945152e73fc3f29bfcd5c671ec081c909919e372360e278163ee41\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 22:59:09.852871 containerd[1729]: time="2025-09-12T22:59:09.852849930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.0.0-a-49ee7b6560,Uid:2ebf390b4738ffd9e2138b2542439fc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8ffefe43514bd28d98b9fd5ce9059a00c86e5f1e9e47e94372a5ef43ab9b54e\"" Sep 12 22:59:09.867587 containerd[1729]: time="2025-09-12T22:59:09.867554879Z" level=info msg="CreateContainer within sandbox \"e8ffefe43514bd28d98b9fd5ce9059a00c86e5f1e9e47e94372a5ef43ab9b54e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 22:59:09.870416 containerd[1729]: time="2025-09-12T22:59:09.870390443Z" level=info msg="Container 890899f3d3b2c81dcb4eb3a987c1d34ab9fb5db9ac9e5e782f9e5555c3df26c6: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:09.872634 containerd[1729]: time="2025-09-12T22:59:09.872610893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.0.0-a-49ee7b6560,Uid:16b96d1f0f2b4a6152f3e3c61621a46f,Namespace:kube-system,Attempt:0,} returns sandbox id \"41b98b20febb9edb4ae06ab83a88b3875b66c7e37f40f30511d74c46615d4c06\"" Sep 12 22:59:09.874097 containerd[1729]: time="2025-09-12T22:59:09.874058598Z" level=info msg="CreateContainer within sandbox \"41b98b20febb9edb4ae06ab83a88b3875b66c7e37f40f30511d74c46615d4c06\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 22:59:09.889586 containerd[1729]: 
time="2025-09-12T22:59:09.889145740Z" level=info msg="CreateContainer within sandbox \"c100cdfe9c945152e73fc3f29bfcd5c671ec081c909919e372360e278163ee41\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"890899f3d3b2c81dcb4eb3a987c1d34ab9fb5db9ac9e5e782f9e5555c3df26c6\"" Sep 12 22:59:09.890721 containerd[1729]: time="2025-09-12T22:59:09.889851443Z" level=info msg="StartContainer for \"890899f3d3b2c81dcb4eb3a987c1d34ab9fb5db9ac9e5e782f9e5555c3df26c6\"" Sep 12 22:59:09.892203 containerd[1729]: time="2025-09-12T22:59:09.892166511Z" level=info msg="connecting to shim 890899f3d3b2c81dcb4eb3a987c1d34ab9fb5db9ac9e5e782f9e5555c3df26c6" address="unix:///run/containerd/s/ffebf847291f3f200363ad9e12a0d0de807ec59f485a34487a5cee9294f99b8a" protocol=ttrpc version=3 Sep 12 22:59:09.907991 containerd[1729]: time="2025-09-12T22:59:09.907967850Z" level=info msg="Container 76f38723980535b978b556b32a381500db0d783050d1579f3585195883a93615: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:09.909961 systemd[1]: Started cri-containerd-890899f3d3b2c81dcb4eb3a987c1d34ab9fb5db9ac9e5e782f9e5555c3df26c6.scope - libcontainer container 890899f3d3b2c81dcb4eb3a987c1d34ab9fb5db9ac9e5e782f9e5555c3df26c6. Sep 12 22:59:09.911236 containerd[1729]: time="2025-09-12T22:59:09.911174696Z" level=info msg="Container 73d2cf7885bf993b0c6591e103ebca88b145e9d6bb02e6463f820a7025901875: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:09.922575 containerd[1729]: time="2025-09-12T22:59:09.922556563Z" level=info msg="CreateContainer within sandbox \"e8ffefe43514bd28d98b9fd5ce9059a00c86e5f1e9e47e94372a5ef43ab9b54e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"76f38723980535b978b556b32a381500db0d783050d1579f3585195883a93615\"" Sep 12 22:59:09.923096 containerd[1729]: time="2025-09-12T22:59:09.923063778Z" level=info msg="StartContainer for \"76f38723980535b978b556b32a381500db0d783050d1579f3585195883a93615\"" Sep 12 22:59:09.924291 containerd[1729]: time="2025-09-12T22:59:09.924249229Z" level=info msg="connecting to shim 76f38723980535b978b556b32a381500db0d783050d1579f3585195883a93615" address="unix:///run/containerd/s/e86b3b9ee79163be2d8639a466fa4409b099f2bb0f56f711bcc48edb0fe7e887" protocol=ttrpc version=3 Sep 12 22:59:09.938480 containerd[1729]: time="2025-09-12T22:59:09.938428833Z" level=info msg="CreateContainer within sandbox \"41b98b20febb9edb4ae06ab83a88b3875b66c7e37f40f30511d74c46615d4c06\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"73d2cf7885bf993b0c6591e103ebca88b145e9d6bb02e6463f820a7025901875\"" Sep 12 22:59:09.938951 containerd[1729]: time="2025-09-12T22:59:09.938857258Z" level=info msg="StartContainer for \"73d2cf7885bf993b0c6591e103ebca88b145e9d6bb02e6463f820a7025901875\"" Sep 12 22:59:09.940014 containerd[1729]: time="2025-09-12T22:59:09.939987950Z" level=info msg="connecting to shim 73d2cf7885bf993b0c6591e103ebca88b145e9d6bb02e6463f820a7025901875" address="unix:///run/containerd/s/52c1679ba65555cd6b98330f7dd6cac65cb15d708e3c046b6251e8cf321233f5" protocol=ttrpc version=3 Sep 12 22:59:09.942836 systemd[1]: Started cri-containerd-76f38723980535b978b556b32a381500db0d783050d1579f3585195883a93615.scope - libcontainer container 76f38723980535b978b556b32a381500db0d783050d1579f3585195883a93615. Sep 12 22:59:09.965043 systemd[1]: Started cri-containerd-73d2cf7885bf993b0c6591e103ebca88b145e9d6bb02e6463f820a7025901875.scope - libcontainer container 73d2cf7885bf993b0c6591e103ebca88b145e9d6bb02e6463f820a7025901875. 
Sep 12 22:59:09.965910 kubelet[2773]: I0912 22:59:09.965856 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.966147 kubelet[2773]: E0912 22:59:09.966124 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:09.974592 containerd[1729]: time="2025-09-12T22:59:09.974564014Z" level=info msg="StartContainer for \"890899f3d3b2c81dcb4eb3a987c1d34ab9fb5db9ac9e5e782f9e5555c3df26c6\" returns successfully" Sep 12 22:59:10.038390 containerd[1729]: time="2025-09-12T22:59:10.038356154Z" level=info msg="StartContainer for \"76f38723980535b978b556b32a381500db0d783050d1579f3585195883a93615\" returns successfully" Sep 12 22:59:10.053065 containerd[1729]: time="2025-09-12T22:59:10.053043696Z" level=info msg="StartContainer for \"73d2cf7885bf993b0c6591e103ebca88b145e9d6bb02e6463f820a7025901875\" returns successfully" Sep 12 22:59:10.237668 kubelet[2773]: E0912 22:59:10.237110 2773 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-a-49ee7b6560\" not found" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:10.237668 kubelet[2773]: E0912 22:59:10.237479 2773 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-a-49ee7b6560\" not found" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:10.245180 kubelet[2773]: E0912 22:59:10.245163 2773 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-a-49ee7b6560\" not found" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:10.768325 kubelet[2773]: I0912 22:59:10.767782 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:11.249901 kubelet[2773]: E0912 22:59:11.247983 2773 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-a-49ee7b6560\" not found" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:11.249901 kubelet[2773]: E0912 22:59:11.248320 2773 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.0.0-a-49ee7b6560\" not found" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:11.935167 kubelet[2773]: E0912 22:59:11.935134 2773 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.0.0-a-49ee7b6560\" not found" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:11.990977 kubelet[2773]: I0912 22:59:11.990944 2773 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:11.991108 kubelet[2773]: E0912 22:59:11.990991 2773 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459.0.0-a-49ee7b6560\": node \"ci-4459.0.0-a-49ee7b6560\" not found" Sep 12 22:59:12.077620 kubelet[2773]: I0912 22:59:12.077584 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:12.081187 kubelet[2773]: E0912 22:59:12.081158 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 
12 22:59:12.081187 kubelet[2773]: I0912 22:59:12.081182 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:12.082470 kubelet[2773]: E0912 22:59:12.082431 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-a-49ee7b6560\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:12.082470 kubelet[2773]: I0912 22:59:12.082466 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:12.084135 kubelet[2773]: E0912 22:59:12.084108 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-a-49ee7b6560\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:12.155291 kubelet[2773]: I0912 22:59:12.155267 2773 apiserver.go:52] "Watching apiserver" Sep 12 22:59:12.178156 kubelet[2773]: I0912 22:59:12.178110 2773 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 22:59:12.245517 kubelet[2773]: I0912 22:59:12.245085 2773 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:12.246791 kubelet[2773]: E0912 22:59:12.246767 2773 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-a-49ee7b6560\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:14.245423 systemd[1]: Reload requested from client PID 3046 ('systemctl') (unit session-9.scope)... Sep 12 22:59:14.245437 systemd[1]: Reloading... Sep 12 22:59:14.328750 zram_generator::config[3093]: No configuration found. Sep 12 22:59:14.539431 systemd[1]: Reloading finished in 293 ms. Sep 12 22:59:14.571919 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:59:14.585475 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 22:59:14.585735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:59:14.585781 systemd[1]: kubelet.service: Consumed 633ms CPU time, 131.4M memory peak. Sep 12 22:59:14.588026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 22:59:15.000913 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 22:59:15.010979 (kubelet)[3160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 22:59:15.050210 kubelet[3160]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 22:59:15.050597 kubelet[3160]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 22:59:15.050597 kubelet[3160]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 22:59:15.050597 kubelet[3160]: I0912 22:59:15.050521 3160 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 22:59:15.059884 kubelet[3160]: I0912 22:59:15.059858 3160 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 22:59:15.059884 kubelet[3160]: I0912 22:59:15.059877 3160 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 22:59:15.101255 kubelet[3160]: I0912 22:59:15.060053 3160 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 22:59:15.101963 kubelet[3160]: I0912 22:59:15.101943 3160 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 22:59:15.103994 kubelet[3160]: I0912 22:59:15.103891 3160 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 22:59:15.107478 kubelet[3160]: I0912 22:59:15.107435 3160 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 22:59:15.109808 kubelet[3160]: I0912 22:59:15.109789 3160 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 22:59:15.110008 kubelet[3160]: I0912 22:59:15.109980 3160 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 22:59:15.110197 kubelet[3160]: I0912 22:59:15.110006 3160 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.0.0-a-49ee7b6560","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 22:59:15.110309 kubelet[3160]: I0912 22:59:15.110202 3160 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 22:59:15.110309 kubelet[3160]: I0912 22:59:15.110213 3160 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 22:59:15.110309 kubelet[3160]: I0912 22:59:15.110255 3160 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:59:15.110395 
kubelet[3160]: I0912 22:59:15.110388 3160 kubelet.go:446] "Attempting to sync node with API server" Sep 12 22:59:15.110417 kubelet[3160]: I0912 22:59:15.110411 3160 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 22:59:15.110723 kubelet[3160]: I0912 22:59:15.110430 3160 kubelet.go:352] "Adding apiserver pod source" Sep 12 22:59:15.110723 kubelet[3160]: I0912 22:59:15.110465 3160 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 22:59:15.112348 kubelet[3160]: I0912 22:59:15.112314 3160 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 22:59:15.113164 kubelet[3160]: I0912 22:59:15.113154 3160 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 22:59:15.122444 kubelet[3160]: I0912 22:59:15.122426 3160 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 22:59:15.122605 kubelet[3160]: I0912 22:59:15.122457 3160 server.go:1287] "Started kubelet" Sep 12 22:59:15.125729 kubelet[3160]: I0912 22:59:15.124178 3160 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 22:59:15.128917 kubelet[3160]: I0912 22:59:15.128884 3160 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 22:59:15.139982 kubelet[3160]: I0912 22:59:15.139968 3160 server.go:479] "Adding debug handlers to kubelet server" Sep 12 22:59:15.141534 kubelet[3160]: I0912 22:59:15.129286 3160 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 22:59:15.141611 kubelet[3160]: I0912 22:59:15.129031 3160 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 22:59:15.141851 kubelet[3160]: I0912 22:59:15.141771 3160 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 22:59:15.141851 kubelet[3160]: E0912 22:59:15.130101 3160 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459.0.0-a-49ee7b6560\" not found" Sep 12 22:59:15.141851 kubelet[3160]: I0912 22:59:15.135683 3160 factory.go:221] Registration of the systemd container factory successfully Sep 12 22:59:15.141944 kubelet[3160]: I0912 22:59:15.141883 3160 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 22:59:15.142066 kubelet[3160]: I0912 22:59:15.129976 3160 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 22:59:15.142192 kubelet[3160]: I0912 22:59:15.129988 3160 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 22:59:15.142376 kubelet[3160]: I0912 22:59:15.142360 3160 reconciler.go:26] "Reconciler: start to sync state" Sep 12 22:59:15.143894 kubelet[3160]: I0912 22:59:15.143497 3160 factory.go:221] Registration of the containerd container factory successfully Sep 12 22:59:15.145040 kubelet[3160]: I0912 22:59:15.145006 3160 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 22:59:15.146112 kubelet[3160]: I0912 22:59:15.146095 3160 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 22:59:15.146201 kubelet[3160]: I0912 22:59:15.146194 3160 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 22:59:15.146249 kubelet[3160]: I0912 22:59:15.146243 3160 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 22:59:15.146285 kubelet[3160]: I0912 22:59:15.146281 3160 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 22:59:15.146350 kubelet[3160]: E0912 22:59:15.146339 3160 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 22:59:15.192856 kubelet[3160]: I0912 22:59:15.192840 3160 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 22:59:15.192856 kubelet[3160]: I0912 22:59:15.192851 3160 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 22:59:15.192945 kubelet[3160]: I0912 22:59:15.192866 3160 state_mem.go:36] "Initialized new in-memory state store" Sep 12 22:59:15.192999 kubelet[3160]: I0912 22:59:15.192987 3160 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 22:59:15.193030 kubelet[3160]: I0912 22:59:15.192998 3160 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 22:59:15.193030 kubelet[3160]: I0912 22:59:15.193013 3160 policy_none.go:49] "None policy: Start" Sep 12 22:59:15.193030 kubelet[3160]: I0912 22:59:15.193022 3160 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 22:59:15.193091 kubelet[3160]: I0912 22:59:15.193030 3160 state_mem.go:35] "Initializing new in-memory state store" Sep 12 22:59:15.193144 kubelet[3160]: I0912 22:59:15.193135 3160 state_mem.go:75] "Updated machine memory state" Sep 12 22:59:15.195907 kubelet[3160]: I0912 22:59:15.195887 3160 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 22:59:15.196014 kubelet[3160]: I0912 22:59:15.196002 3160 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 22:59:15.196050 kubelet[3160]: I0912 22:59:15.196014 3160 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 22:59:15.196416 kubelet[3160]: I0912 22:59:15.196399 3160 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 22:59:15.197722 kubelet[3160]: E0912 22:59:15.197550 3160 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 22:59:15.246778 kubelet[3160]: I0912 22:59:15.246757 3160 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.247086 kubelet[3160]: I0912 22:59:15.246866 3160 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.247086 kubelet[3160]: I0912 22:59:15.247024 3160 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.255421 kubelet[3160]: W0912 22:59:15.254364 3160 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 22:59:15.259748 kubelet[3160]: W0912 22:59:15.259498 3160 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 22:59:15.259987 kubelet[3160]: W0912 22:59:15.259823 3160 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 22:59:15.261185 sudo[3191]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 22:59:15.261420 sudo[3191]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 22:59:15.298983 kubelet[3160]: I0912 22:59:15.298930 3160 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.309929 kubelet[3160]: I0912 22:59:15.309842 3160 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.309929 kubelet[3160]: I0912 22:59:15.309894 3160 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.344229 kubelet[3160]: I0912 22:59:15.344206 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ebf390b4738ffd9e2138b2542439fc6-ca-certs\") pod \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" (UID: \"2ebf390b4738ffd9e2138b2542439fc6\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.344292 kubelet[3160]: I0912 22:59:15.344243 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ebf390b4738ffd9e2138b2542439fc6-k8s-certs\") pod \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" (UID: \"2ebf390b4738ffd9e2138b2542439fc6\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.344292 kubelet[3160]: I0912 22:59:15.344265 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ebf390b4738ffd9e2138b2542439fc6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" (UID: \"2ebf390b4738ffd9e2138b2542439fc6\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.344357 kubelet[3160]: I0912 22:59:15.344304 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c91909bf2c0bdbaa7b510848b0f60f7-k8s-certs\") 
pod \"kube-apiserver-ci-4459.0.0-a-49ee7b6560\" (UID: \"9c91909bf2c0bdbaa7b510848b0f60f7\") " pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.344357 kubelet[3160]: I0912 22:59:15.344321 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2ebf390b4738ffd9e2138b2542439fc6-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" (UID: \"2ebf390b4738ffd9e2138b2542439fc6\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.344404 kubelet[3160]: I0912 22:59:15.344370 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2ebf390b4738ffd9e2138b2542439fc6-kubeconfig\") pod \"kube-controller-manager-ci-4459.0.0-a-49ee7b6560\" (UID: \"2ebf390b4738ffd9e2138b2542439fc6\") " pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.344404 kubelet[3160]: I0912 22:59:15.344388 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16b96d1f0f2b4a6152f3e3c61621a46f-kubeconfig\") pod \"kube-scheduler-ci-4459.0.0-a-49ee7b6560\" (UID: \"16b96d1f0f2b4a6152f3e3c61621a46f\") " pod="kube-system/kube-scheduler-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.344456 kubelet[3160]: I0912 22:59:15.344406 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c91909bf2c0bdbaa7b510848b0f60f7-ca-certs\") pod \"kube-apiserver-ci-4459.0.0-a-49ee7b6560\" (UID: \"9c91909bf2c0bdbaa7b510848b0f60f7\") " pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.344483 kubelet[3160]: I0912 22:59:15.344453 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c91909bf2c0bdbaa7b510848b0f60f7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.0.0-a-49ee7b6560\" (UID: \"9c91909bf2c0bdbaa7b510848b0f60f7\") " pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:15.605443 sudo[3191]: pam_unix(sudo:session): session closed for user root Sep 12 22:59:16.117649 kubelet[3160]: I0912 22:59:16.117615 3160 apiserver.go:52] "Watching apiserver" Sep 12 22:59:16.142291 kubelet[3160]: I0912 22:59:16.142246 3160 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 22:59:16.168589 kubelet[3160]: I0912 22:59:16.168567 3160 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:16.168879 kubelet[3160]: I0912 22:59:16.168866 3160 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:16.183225 kubelet[3160]: W0912 22:59:16.183157 3160 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 22:59:16.183732 kubelet[3160]: E0912 22:59:16.183656 3160 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.0.0-a-49ee7b6560\" already exists" pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:16.184083 kubelet[3160]: W0912 22:59:16.184037 3160 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 22:59:16.184295 kubelet[3160]: E0912 22:59:16.184179 3160 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.0.0-a-49ee7b6560\" already exists" pod="kube-system/kube-scheduler-ci-4459.0.0-a-49ee7b6560" Sep 12 22:59:16.204226 kubelet[3160]: I0912 22:59:16.204155 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.0.0-a-49ee7b6560" podStartSLOduration=1.204141627 podStartE2EDuration="1.204141627s" podCreationTimestamp="2025-09-12 22:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:59:16.193731167 +0000 UTC m=+1.178695604" watchObservedRunningTime="2025-09-12 22:59:16.204141627 +0000 UTC m=+1.189106064" Sep 12 22:59:16.204639 kubelet[3160]: I0912 22:59:16.204561 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.0.0-a-49ee7b6560" podStartSLOduration=1.204551331 podStartE2EDuration="1.204551331s" podCreationTimestamp="2025-09-12 22:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:59:16.203928499 +0000 UTC m=+1.188892939" watchObservedRunningTime="2025-09-12 22:59:16.204551331 +0000 UTC m=+1.189515835" Sep 12 22:59:16.226800 kubelet[3160]: I0912 22:59:16.226766 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.0.0-a-49ee7b6560" podStartSLOduration=1.226752779 podStartE2EDuration="1.226752779s" podCreationTimestamp="2025-09-12 22:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:59:16.218100203 +0000 UTC m=+1.203064640" watchObservedRunningTime="2025-09-12 22:59:16.226752779 +0000 UTC m=+1.211717215" Sep 12 22:59:16.943256 sudo[2154]: pam_unix(sudo:session): session closed for user root Sep 12 22:59:17.042343 sshd[2153]: Connection closed by 10.200.16.10 port 57434 Sep 12 22:59:17.042809 sshd-session[2150]: pam_unix(sshd:session): session closed for user core Sep 12 22:59:17.045544 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:57434.service: Deactivated successfully. Sep 12 22:59:17.047475 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 22:59:17.047666 systemd[1]: session-9.scope: Consumed 3.324s CPU time, 268.5M memory peak. Sep 12 22:59:17.050139 systemd-logind[1688]: Session 9 logged out. Waiting for processes to exit. Sep 12 22:59:17.051019 systemd-logind[1688]: Removed session 9. Sep 12 22:59:18.598363 kubelet[3160]: I0912 22:59:18.598317 3160 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 22:59:18.598741 containerd[1729]: time="2025-09-12T22:59:18.598593854Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 22:59:18.599215 kubelet[3160]: I0912 22:59:18.598767 3160 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 22:59:19.609894 systemd[1]: Created slice kubepods-burstable-pod6da88806_732d_45b2_b144_3ee5c53f33c7.slice - libcontainer container kubepods-burstable-pod6da88806_732d_45b2_b144_3ee5c53f33c7.slice. 
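The "Created slice kubepods-burstable-pod6da88806_732d_45b2_b144_3ee5c53f33c7.slice" entry above (and the besteffort slice created next for kube-proxy) follows from the systemd cgroup driver reported at kubelet startup: each pod's cgroup slice is named from its QoS class and UID, with dashes mapped to underscores. A small illustrative helper reproducing that naming, not kubelet source code:

def pod_slice_name(pod_uid: str, qos_class: str = "burstable") -> str:
    """kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice; Guaranteed pods
    omit the QoS segment and sit directly under kubepods.slice."""
    uid = pod_uid.replace("-", "_")
    prefix = "kubepods" if not qos_class else f"kubepods-{qos_class}"
    return f"{prefix}-pod{uid}.slice"

# Matches the slice logged for the cilium-c7w7b pod:
assert pod_slice_name("6da88806-732d-45b2-b144-3ee5c53f33c7") == \
    "kubepods-burstable-pod6da88806_732d_45b2_b144_3ee5c53f33c7.slice"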
Sep 12 22:59:19.618257 systemd[1]: Created slice kubepods-besteffort-podde4c6ea0_5e0b_4a1f_ab5b_22290cd91c47.slice - libcontainer container kubepods-besteffort-podde4c6ea0_5e0b_4a1f_ab5b_22290cd91c47.slice. Sep 12 22:59:19.674289 kubelet[3160]: I0912 22:59:19.674254 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cni-path\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674569 kubelet[3160]: I0912 22:59:19.674295 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-xtables-lock\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674569 kubelet[3160]: I0912 22:59:19.674313 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6da88806-732d-45b2-b144-3ee5c53f33c7-clustermesh-secrets\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674569 kubelet[3160]: I0912 22:59:19.674330 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de4c6ea0-5e0b-4a1f-ab5b-22290cd91c47-kube-proxy\") pod \"kube-proxy-f9xbl\" (UID: \"de4c6ea0-5e0b-4a1f-ab5b-22290cd91c47\") " pod="kube-system/kube-proxy-f9xbl" Sep 12 22:59:19.674569 kubelet[3160]: I0912 22:59:19.674349 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-bpf-maps\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674569 kubelet[3160]: I0912 22:59:19.674367 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de4c6ea0-5e0b-4a1f-ab5b-22290cd91c47-xtables-lock\") pod \"kube-proxy-f9xbl\" (UID: \"de4c6ea0-5e0b-4a1f-ab5b-22290cd91c47\") " pod="kube-system/kube-proxy-f9xbl" Sep 12 22:59:19.674569 kubelet[3160]: I0912 22:59:19.674385 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-hostproc\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674741 kubelet[3160]: I0912 22:59:19.674414 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-cgroup\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674741 kubelet[3160]: I0912 22:59:19.674433 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-host-proc-sys-kernel\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " 
pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674741 kubelet[3160]: I0912 22:59:19.674450 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpjmt\" (UniqueName: \"kubernetes.io/projected/6da88806-732d-45b2-b144-3ee5c53f33c7-kube-api-access-bpjmt\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674741 kubelet[3160]: I0912 22:59:19.674468 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz8jn\" (UniqueName: \"kubernetes.io/projected/de4c6ea0-5e0b-4a1f-ab5b-22290cd91c47-kube-api-access-jz8jn\") pod \"kube-proxy-f9xbl\" (UID: \"de4c6ea0-5e0b-4a1f-ab5b-22290cd91c47\") " pod="kube-system/kube-proxy-f9xbl" Sep 12 22:59:19.674741 kubelet[3160]: I0912 22:59:19.674485 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-etc-cni-netd\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674856 kubelet[3160]: I0912 22:59:19.674501 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-host-proc-sys-net\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674856 kubelet[3160]: I0912 22:59:19.674521 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6da88806-732d-45b2-b144-3ee5c53f33c7-hubble-tls\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674856 kubelet[3160]: I0912 22:59:19.674536 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de4c6ea0-5e0b-4a1f-ab5b-22290cd91c47-lib-modules\") pod \"kube-proxy-f9xbl\" (UID: \"de4c6ea0-5e0b-4a1f-ab5b-22290cd91c47\") " pod="kube-system/kube-proxy-f9xbl" Sep 12 22:59:19.674856 kubelet[3160]: I0912 22:59:19.674555 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-run\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674856 kubelet[3160]: I0912 22:59:19.674570 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-config-path\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.674856 kubelet[3160]: I0912 22:59:19.674589 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-lib-modules\") pod \"cilium-c7w7b\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " pod="kube-system/cilium-c7w7b" Sep 12 22:59:19.688739 systemd[1]: Created slice kubepods-besteffort-pod7ba6f6cf_5902_4bae_acd8_754abe3dcec5.slice - 
libcontainer container kubepods-besteffort-pod7ba6f6cf_5902_4bae_acd8_754abe3dcec5.slice. Sep 12 22:59:19.775975 kubelet[3160]: I0912 22:59:19.775467 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngglv\" (UniqueName: \"kubernetes.io/projected/7ba6f6cf-5902-4bae-acd8-754abe3dcec5-kube-api-access-ngglv\") pod \"cilium-operator-6c4d7847fc-pvzfr\" (UID: \"7ba6f6cf-5902-4bae-acd8-754abe3dcec5\") " pod="kube-system/cilium-operator-6c4d7847fc-pvzfr" Sep 12 22:59:19.775975 kubelet[3160]: I0912 22:59:19.775571 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ba6f6cf-5902-4bae-acd8-754abe3dcec5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pvzfr\" (UID: \"7ba6f6cf-5902-4bae-acd8-754abe3dcec5\") " pod="kube-system/cilium-operator-6c4d7847fc-pvzfr" Sep 12 22:59:19.916880 containerd[1729]: time="2025-09-12T22:59:19.916442406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7w7b,Uid:6da88806-732d-45b2-b144-3ee5c53f33c7,Namespace:kube-system,Attempt:0,}" Sep 12 22:59:19.925321 containerd[1729]: time="2025-09-12T22:59:19.925294428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9xbl,Uid:de4c6ea0-5e0b-4a1f-ab5b-22290cd91c47,Namespace:kube-system,Attempt:0,}" Sep 12 22:59:19.968962 containerd[1729]: time="2025-09-12T22:59:19.968922142Z" level=info msg="connecting to shim 85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6" address="unix:///run/containerd/s/54fac6e4bb414542ebedcdcd40be5fc89b296f2c1510c53ed76a17ffe95418eb" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:59:19.986730 containerd[1729]: time="2025-09-12T22:59:19.986478090Z" level=info msg="connecting to shim f4e304241b5dc5ebd46a9d155c8f1b3aa99706b16240571143cf830e96365a1d" address="unix:///run/containerd/s/5e620b570249f0235f14674414cfddbf4c4f852a166922da0334efda17107f64" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:59:19.989919 systemd[1]: Started cri-containerd-85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6.scope - libcontainer container 85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6. Sep 12 22:59:19.993259 containerd[1729]: time="2025-09-12T22:59:19.993230463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pvzfr,Uid:7ba6f6cf-5902-4bae-acd8-754abe3dcec5,Namespace:kube-system,Attempt:0,}" Sep 12 22:59:20.015987 systemd[1]: Started cri-containerd-f4e304241b5dc5ebd46a9d155c8f1b3aa99706b16240571143cf830e96365a1d.scope - libcontainer container f4e304241b5dc5ebd46a9d155c8f1b3aa99706b16240571143cf830e96365a1d. 
Sep 12 22:59:20.030041 containerd[1729]: time="2025-09-12T22:59:20.030000605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c7w7b,Uid:6da88806-732d-45b2-b144-3ee5c53f33c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\"" Sep 12 22:59:20.033653 containerd[1729]: time="2025-09-12T22:59:20.033454394Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 22:59:20.039478 containerd[1729]: time="2025-09-12T22:59:20.039428079Z" level=info msg="connecting to shim f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5" address="unix:///run/containerd/s/93e1ced617bb50ad2e56e50f250676801855a8faacaeaa17e09e82a724e5b396" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:59:20.046369 containerd[1729]: time="2025-09-12T22:59:20.046338112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f9xbl,Uid:de4c6ea0-5e0b-4a1f-ab5b-22290cd91c47,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4e304241b5dc5ebd46a9d155c8f1b3aa99706b16240571143cf830e96365a1d\"" Sep 12 22:59:20.051696 containerd[1729]: time="2025-09-12T22:59:20.049540945Z" level=info msg="CreateContainer within sandbox \"f4e304241b5dc5ebd46a9d155c8f1b3aa99706b16240571143cf830e96365a1d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 22:59:20.064909 systemd[1]: Started cri-containerd-f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5.scope - libcontainer container f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5. Sep 12 22:59:20.071809 containerd[1729]: time="2025-09-12T22:59:20.071785648Z" level=info msg="Container 796fceca0710a8b7a98e7eef2e914a43d0a388032e6fb489c026b2fb7f6379c2: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:20.087413 containerd[1729]: time="2025-09-12T22:59:20.087386732Z" level=info msg="CreateContainer within sandbox \"f4e304241b5dc5ebd46a9d155c8f1b3aa99706b16240571143cf830e96365a1d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"796fceca0710a8b7a98e7eef2e914a43d0a388032e6fb489c026b2fb7f6379c2\"" Sep 12 22:59:20.087849 containerd[1729]: time="2025-09-12T22:59:20.087826012Z" level=info msg="StartContainer for \"796fceca0710a8b7a98e7eef2e914a43d0a388032e6fb489c026b2fb7f6379c2\"" Sep 12 22:59:20.090210 containerd[1729]: time="2025-09-12T22:59:20.090187456Z" level=info msg="connecting to shim 796fceca0710a8b7a98e7eef2e914a43d0a388032e6fb489c026b2fb7f6379c2" address="unix:///run/containerd/s/5e620b570249f0235f14674414cfddbf4c4f852a166922da0334efda17107f64" protocol=ttrpc version=3 Sep 12 22:59:20.108844 systemd[1]: Started cri-containerd-796fceca0710a8b7a98e7eef2e914a43d0a388032e6fb489c026b2fb7f6379c2.scope - libcontainer container 796fceca0710a8b7a98e7eef2e914a43d0a388032e6fb489c026b2fb7f6379c2. 
Sep 12 22:59:20.117352 containerd[1729]: time="2025-09-12T22:59:20.117316217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pvzfr,Uid:7ba6f6cf-5902-4bae-acd8-754abe3dcec5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5\"" Sep 12 22:59:20.144215 containerd[1729]: time="2025-09-12T22:59:20.144143653Z" level=info msg="StartContainer for \"796fceca0710a8b7a98e7eef2e914a43d0a388032e6fb489c026b2fb7f6379c2\" returns successfully" Sep 12 22:59:20.190105 kubelet[3160]: I0912 22:59:20.190011 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f9xbl" podStartSLOduration=1.189993982 podStartE2EDuration="1.189993982s" podCreationTimestamp="2025-09-12 22:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:59:20.187963654 +0000 UTC m=+5.172928113" watchObservedRunningTime="2025-09-12 22:59:20.189993982 +0000 UTC m=+5.174958419" Sep 12 22:59:24.033922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645197089.mount: Deactivated successfully. Sep 12 22:59:25.629186 containerd[1729]: time="2025-09-12T22:59:25.629144867Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:25.631692 containerd[1729]: time="2025-09-12T22:59:25.631593557Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 22:59:25.634315 containerd[1729]: time="2025-09-12T22:59:25.634291160Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:25.635484 containerd[1729]: time="2025-09-12T22:59:25.635384009Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.601898475s" Sep 12 22:59:25.635484 containerd[1729]: time="2025-09-12T22:59:25.635417965Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 22:59:25.636679 containerd[1729]: time="2025-09-12T22:59:25.636402835Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 22:59:25.637823 containerd[1729]: time="2025-09-12T22:59:25.637796440Z" level=info msg="CreateContainer within sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 22:59:25.672446 containerd[1729]: time="2025-09-12T22:59:25.672415467Z" level=info msg="Container f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:25.675968 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1367041618.mount: Deactivated successfully. Sep 12 22:59:25.686478 containerd[1729]: time="2025-09-12T22:59:25.686446407Z" level=info msg="CreateContainer within sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\"" Sep 12 22:59:25.687009 containerd[1729]: time="2025-09-12T22:59:25.686899165Z" level=info msg="StartContainer for \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\"" Sep 12 22:59:25.688422 containerd[1729]: time="2025-09-12T22:59:25.688393409Z" level=info msg="connecting to shim f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575" address="unix:///run/containerd/s/54fac6e4bb414542ebedcdcd40be5fc89b296f2c1510c53ed76a17ffe95418eb" protocol=ttrpc version=3 Sep 12 22:59:25.711880 systemd[1]: Started cri-containerd-f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575.scope - libcontainer container f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575. Sep 12 22:59:25.740639 containerd[1729]: time="2025-09-12T22:59:25.740607495Z" level=info msg="StartContainer for \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\" returns successfully" Sep 12 22:59:25.749006 systemd[1]: cri-containerd-f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575.scope: Deactivated successfully. Sep 12 22:59:25.752367 containerd[1729]: time="2025-09-12T22:59:25.752338346Z" level=info msg="received exit event container_id:\"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\" id:\"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\" pid:3569 exited_at:{seconds:1757717965 nanos:751965068}" Sep 12 22:59:25.752533 containerd[1729]: time="2025-09-12T22:59:25.752441823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\" id:\"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\" pid:3569 exited_at:{seconds:1757717965 nanos:751965068}" Sep 12 22:59:25.768077 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575-rootfs.mount: Deactivated successfully. Sep 12 22:59:29.691172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42877917.mount: Deactivated successfully. 
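The mount units named in the entries above (for example "var-lib-containerd-tmpmounts-containerd\x2dmount42877917.mount") use systemd's unit-name escaping of the underlying path: the leading "/" is dropped, remaining "/" become "-", and characters such as a literal "-" are hex-escaped as "\x2d". A rough illustrative reimplementation of that path escaping, not the systemd source:

def systemd_escape_path(path: str) -> str:
    """Approximate systemd path escaping as seen in the mount unit names above."""
    body = path.strip("/")
    out = []
    for i, ch in enumerate(body):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

# Reproduces the containerd tmpmount unit name logged above (".mount" suffix added by systemd):
assert systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount42877917") == \
    "var-lib-containerd-tmpmounts-containerd\\x2dmount42877917"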
Sep 12 22:59:30.140423 containerd[1729]: time="2025-09-12T22:59:30.140372637Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:30.142974 containerd[1729]: time="2025-09-12T22:59:30.142876700Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 22:59:30.148545 containerd[1729]: time="2025-09-12T22:59:30.148520628Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 22:59:30.149654 containerd[1729]: time="2025-09-12T22:59:30.149479919Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.513048097s" Sep 12 22:59:30.149654 containerd[1729]: time="2025-09-12T22:59:30.149511629Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 22:59:30.152017 containerd[1729]: time="2025-09-12T22:59:30.151984905Z" level=info msg="CreateContainer within sandbox \"f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 22:59:30.184549 containerd[1729]: time="2025-09-12T22:59:30.184102014Z" level=info msg="Container 3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:30.197415 containerd[1729]: time="2025-09-12T22:59:30.197387427Z" level=info msg="CreateContainer within sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 22:59:30.213143 containerd[1729]: time="2025-09-12T22:59:30.213118544Z" level=info msg="CreateContainer within sandbox \"f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\"" Sep 12 22:59:30.213467 containerd[1729]: time="2025-09-12T22:59:30.213441863Z" level=info msg="StartContainer for \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\"" Sep 12 22:59:30.214473 containerd[1729]: time="2025-09-12T22:59:30.214407595Z" level=info msg="connecting to shim 3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072" address="unix:///run/containerd/s/93e1ced617bb50ad2e56e50f250676801855a8faacaeaa17e09e82a724e5b396" protocol=ttrpc version=3 Sep 12 22:59:30.232476 containerd[1729]: time="2025-09-12T22:59:30.232324023Z" level=info msg="Container 0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:30.239828 systemd[1]: Started cri-containerd-3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072.scope - libcontainer container 
3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072. Sep 12 22:59:30.245517 containerd[1729]: time="2025-09-12T22:59:30.245361054Z" level=info msg="CreateContainer within sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\"" Sep 12 22:59:30.246104 containerd[1729]: time="2025-09-12T22:59:30.246084929Z" level=info msg="StartContainer for \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\"" Sep 12 22:59:30.248280 containerd[1729]: time="2025-09-12T22:59:30.248215618Z" level=info msg="connecting to shim 0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944" address="unix:///run/containerd/s/54fac6e4bb414542ebedcdcd40be5fc89b296f2c1510c53ed76a17ffe95418eb" protocol=ttrpc version=3 Sep 12 22:59:30.268977 systemd[1]: Started cri-containerd-0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944.scope - libcontainer container 0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944. Sep 12 22:59:30.285465 containerd[1729]: time="2025-09-12T22:59:30.284902759Z" level=info msg="StartContainer for \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" returns successfully" Sep 12 22:59:30.304304 containerd[1729]: time="2025-09-12T22:59:30.304234056Z" level=info msg="StartContainer for \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\" returns successfully" Sep 12 22:59:30.318392 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 22:59:30.318612 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 22:59:30.319074 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:59:30.320283 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 22:59:30.324536 systemd[1]: cri-containerd-0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944.scope: Deactivated successfully. Sep 12 22:59:30.326025 containerd[1729]: time="2025-09-12T22:59:30.326003875Z" level=info msg="received exit event container_id:\"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\" id:\"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\" pid:3658 exited_at:{seconds:1757717970 nanos:325631052}" Sep 12 22:59:30.326353 containerd[1729]: time="2025-09-12T22:59:30.326156309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\" id:\"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\" pid:3658 exited_at:{seconds:1757717970 nanos:325631052}" Sep 12 22:59:30.343608 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 12 22:59:31.203854 containerd[1729]: time="2025-09-12T22:59:31.203795818Z" level=info msg="CreateContainer within sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 22:59:31.216231 kubelet[3160]: I0912 22:59:31.216088 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pvzfr" podStartSLOduration=2.184000593 podStartE2EDuration="12.216069621s" podCreationTimestamp="2025-09-12 22:59:19 +0000 UTC" firstStartedPulling="2025-09-12 22:59:20.118143523 +0000 UTC m=+5.103107965" lastFinishedPulling="2025-09-12 22:59:30.150212553 +0000 UTC m=+15.135176993" observedRunningTime="2025-09-12 22:59:31.212952164 +0000 UTC m=+16.197916604" watchObservedRunningTime="2025-09-12 22:59:31.216069621 +0000 UTC m=+16.201034059" Sep 12 22:59:31.228873 containerd[1729]: time="2025-09-12T22:59:31.228839986Z" level=info msg="Container ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:31.233766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933965466.mount: Deactivated successfully. Sep 12 22:59:31.251384 containerd[1729]: time="2025-09-12T22:59:31.251357422Z" level=info msg="CreateContainer within sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\"" Sep 12 22:59:31.251853 containerd[1729]: time="2025-09-12T22:59:31.251820578Z" level=info msg="StartContainer for \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\"" Sep 12 22:59:31.252870 containerd[1729]: time="2025-09-12T22:59:31.252845294Z" level=info msg="connecting to shim ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1" address="unix:///run/containerd/s/54fac6e4bb414542ebedcdcd40be5fc89b296f2c1510c53ed76a17ffe95418eb" protocol=ttrpc version=3 Sep 12 22:59:31.278845 systemd[1]: Started cri-containerd-ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1.scope - libcontainer container ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1. Sep 12 22:59:31.304559 systemd[1]: cri-containerd-ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1.scope: Deactivated successfully. Sep 12 22:59:31.305964 containerd[1729]: time="2025-09-12T22:59:31.305937973Z" level=info msg="received exit event container_id:\"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\" id:\"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\" pid:3712 exited_at:{seconds:1757717971 nanos:305796052}" Sep 12 22:59:31.306136 containerd[1729]: time="2025-09-12T22:59:31.306107746Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\" id:\"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\" pid:3712 exited_at:{seconds:1757717971 nanos:305796052}" Sep 12 22:59:31.309478 containerd[1729]: time="2025-09-12T22:59:31.309452207Z" level=info msg="StartContainer for \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\" returns successfully" Sep 12 22:59:31.324898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1-rootfs.mount: Deactivated successfully. 
Sep 12 22:59:32.207544 containerd[1729]: time="2025-09-12T22:59:32.207480835Z" level=info msg="CreateContainer within sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 22:59:32.229892 containerd[1729]: time="2025-09-12T22:59:32.229245725Z" level=info msg="Container 930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:32.245580 containerd[1729]: time="2025-09-12T22:59:32.245550997Z" level=info msg="CreateContainer within sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\"" Sep 12 22:59:32.248677 containerd[1729]: time="2025-09-12T22:59:32.248647435Z" level=info msg="StartContainer for \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\"" Sep 12 22:59:32.249773 containerd[1729]: time="2025-09-12T22:59:32.249736454Z" level=info msg="connecting to shim 930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560" address="unix:///run/containerd/s/54fac6e4bb414542ebedcdcd40be5fc89b296f2c1510c53ed76a17ffe95418eb" protocol=ttrpc version=3 Sep 12 22:59:32.271849 systemd[1]: Started cri-containerd-930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560.scope - libcontainer container 930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560. Sep 12 22:59:32.290938 systemd[1]: cri-containerd-930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560.scope: Deactivated successfully. Sep 12 22:59:32.292821 containerd[1729]: time="2025-09-12T22:59:32.292737145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\" id:\"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\" pid:3752 exited_at:{seconds:1757717972 nanos:292255962}" Sep 12 22:59:32.299733 containerd[1729]: time="2025-09-12T22:59:32.299648267Z" level=info msg="received exit event container_id:\"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\" id:\"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\" pid:3752 exited_at:{seconds:1757717972 nanos:292255962}" Sep 12 22:59:32.301147 containerd[1729]: time="2025-09-12T22:59:32.301123387Z" level=info msg="StartContainer for \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\" returns successfully" Sep 12 22:59:32.315907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560-rootfs.mount: Deactivated successfully. 
Sep 12 22:59:33.211484 containerd[1729]: time="2025-09-12T22:59:33.211440677Z" level=info msg="CreateContainer within sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 22:59:33.238719 containerd[1729]: time="2025-09-12T22:59:33.237946888Z" level=info msg="Container 6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:33.252805 containerd[1729]: time="2025-09-12T22:59:33.252777079Z" level=info msg="CreateContainer within sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\"" Sep 12 22:59:33.253183 containerd[1729]: time="2025-09-12T22:59:33.253121836Z" level=info msg="StartContainer for \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\"" Sep 12 22:59:33.254027 containerd[1729]: time="2025-09-12T22:59:33.253995780Z" level=info msg="connecting to shim 6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf" address="unix:///run/containerd/s/54fac6e4bb414542ebedcdcd40be5fc89b296f2c1510c53ed76a17ffe95418eb" protocol=ttrpc version=3 Sep 12 22:59:33.273857 systemd[1]: Started cri-containerd-6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf.scope - libcontainer container 6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf. Sep 12 22:59:33.307174 containerd[1729]: time="2025-09-12T22:59:33.307147209Z" level=info msg="StartContainer for \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" returns successfully" Sep 12 22:59:33.359298 containerd[1729]: time="2025-09-12T22:59:33.359267234Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" id:\"8fca8ee9af21c1247d26571d9fd6292e50a604500019bd006788e9a0819d85e4\" pid:3822 exited_at:{seconds:1757717973 nanos:358897564}" Sep 12 22:59:33.387147 kubelet[3160]: I0912 22:59:33.387076 3160 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 22:59:33.426394 systemd[1]: Created slice kubepods-burstable-pod7282f672_ac34_42d0_987d_ba0c1831029e.slice - libcontainer container kubepods-burstable-pod7282f672_ac34_42d0_987d_ba0c1831029e.slice. Sep 12 22:59:33.436128 systemd[1]: Created slice kubepods-burstable-pod9eccb961_cbcf_4ef8_995a_6c1fcf1d6314.slice - libcontainer container kubepods-burstable-pod9eccb961_cbcf_4ef8_995a_6c1fcf1d6314.slice. 
Sep 12 22:59:33.463375 kubelet[3160]: I0912 22:59:33.463296 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7282f672-ac34-42d0-987d-ba0c1831029e-config-volume\") pod \"coredns-668d6bf9bc-crszb\" (UID: \"7282f672-ac34-42d0-987d-ba0c1831029e\") " pod="kube-system/coredns-668d6bf9bc-crszb" Sep 12 22:59:33.463375 kubelet[3160]: I0912 22:59:33.463371 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shdhs\" (UniqueName: \"kubernetes.io/projected/9eccb961-cbcf-4ef8-995a-6c1fcf1d6314-kube-api-access-shdhs\") pod \"coredns-668d6bf9bc-tthb6\" (UID: \"9eccb961-cbcf-4ef8-995a-6c1fcf1d6314\") " pod="kube-system/coredns-668d6bf9bc-tthb6" Sep 12 22:59:33.463474 kubelet[3160]: I0912 22:59:33.463393 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9eccb961-cbcf-4ef8-995a-6c1fcf1d6314-config-volume\") pod \"coredns-668d6bf9bc-tthb6\" (UID: \"9eccb961-cbcf-4ef8-995a-6c1fcf1d6314\") " pod="kube-system/coredns-668d6bf9bc-tthb6" Sep 12 22:59:33.463474 kubelet[3160]: I0912 22:59:33.463415 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srt7p\" (UniqueName: \"kubernetes.io/projected/7282f672-ac34-42d0-987d-ba0c1831029e-kube-api-access-srt7p\") pod \"coredns-668d6bf9bc-crszb\" (UID: \"7282f672-ac34-42d0-987d-ba0c1831029e\") " pod="kube-system/coredns-668d6bf9bc-crszb" Sep 12 22:59:33.735821 containerd[1729]: time="2025-09-12T22:59:33.735575730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-crszb,Uid:7282f672-ac34-42d0-987d-ba0c1831029e,Namespace:kube-system,Attempt:0,}" Sep 12 22:59:33.743308 containerd[1729]: time="2025-09-12T22:59:33.743076665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tthb6,Uid:9eccb961-cbcf-4ef8-995a-6c1fcf1d6314,Namespace:kube-system,Attempt:0,}" Sep 12 22:59:35.376972 systemd-networkd[1486]: cilium_host: Link UP Sep 12 22:59:35.378304 systemd-networkd[1486]: cilium_net: Link UP Sep 12 22:59:35.380084 systemd-networkd[1486]: cilium_net: Gained carrier Sep 12 22:59:35.380545 systemd-networkd[1486]: cilium_host: Gained carrier Sep 12 22:59:35.514369 systemd-networkd[1486]: cilium_vxlan: Link UP Sep 12 22:59:35.514511 systemd-networkd[1486]: cilium_vxlan: Gained carrier Sep 12 22:59:35.701820 systemd-networkd[1486]: cilium_host: Gained IPv6LL Sep 12 22:59:35.779743 kernel: NET: Registered PF_ALG protocol family Sep 12 22:59:36.253865 systemd-networkd[1486]: cilium_net: Gained IPv6LL Sep 12 22:59:36.415068 systemd-networkd[1486]: lxc_health: Link UP Sep 12 22:59:36.421052 systemd-networkd[1486]: lxc_health: Gained carrier Sep 12 22:59:36.766847 systemd-networkd[1486]: cilium_vxlan: Gained IPv6LL Sep 12 22:59:36.783726 kernel: eth0: renamed from tmp0ce83 Sep 12 22:59:36.786544 systemd-networkd[1486]: lxc5a774e54cd84: Link UP Sep 12 22:59:36.788043 systemd-networkd[1486]: lxc5a774e54cd84: Gained carrier Sep 12 22:59:36.806446 kernel: eth0: renamed from tmp0d492 Sep 12 22:59:36.808652 systemd-networkd[1486]: lxc3cb8b0a5fb64: Link UP Sep 12 22:59:36.811130 systemd-networkd[1486]: lxc3cb8b0a5fb64: Gained carrier Sep 12 22:59:37.853847 systemd-networkd[1486]: lxc_health: Gained IPv6LL Sep 12 22:59:37.944224 kubelet[3160]: I0912 22:59:37.943748 3160 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/cilium-c7w7b" podStartSLOduration=13.339480229 podStartE2EDuration="18.943730106s" podCreationTimestamp="2025-09-12 22:59:19 +0000 UTC" firstStartedPulling="2025-09-12 22:59:20.032051652 +0000 UTC m=+5.017016092" lastFinishedPulling="2025-09-12 22:59:25.636301531 +0000 UTC m=+10.621265969" observedRunningTime="2025-09-12 22:59:34.230545919 +0000 UTC m=+19.215510384" watchObservedRunningTime="2025-09-12 22:59:37.943730106 +0000 UTC m=+22.928694550" Sep 12 22:59:38.237847 systemd-networkd[1486]: lxc3cb8b0a5fb64: Gained IPv6LL Sep 12 22:59:38.493833 systemd-networkd[1486]: lxc5a774e54cd84: Gained IPv6LL Sep 12 22:59:39.843760 containerd[1729]: time="2025-09-12T22:59:39.841942696Z" level=info msg="connecting to shim 0d492f6db664f86263b2da4a9fde9691cce17c0d243a99d0dd2e07a586e7b6af" address="unix:///run/containerd/s/f04663ed56554ed87a7b57ac4e038e45b0fd3b718525e5fd30233aa0c5a49126" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:59:39.851777 containerd[1729]: time="2025-09-12T22:59:39.851737370Z" level=info msg="connecting to shim 0ce83e449ed23c68587b1c4e5d1009607d3f00c188504218c5bf59ff86a134f7" address="unix:///run/containerd/s/89745e8b3aa4175f83ae43868863370d63be3c0343b1a12a63aab298adbdd6d1" namespace=k8s.io protocol=ttrpc version=3 Sep 12 22:59:39.880952 systemd[1]: Started cri-containerd-0d492f6db664f86263b2da4a9fde9691cce17c0d243a99d0dd2e07a586e7b6af.scope - libcontainer container 0d492f6db664f86263b2da4a9fde9691cce17c0d243a99d0dd2e07a586e7b6af. Sep 12 22:59:39.886839 systemd[1]: Started cri-containerd-0ce83e449ed23c68587b1c4e5d1009607d3f00c188504218c5bf59ff86a134f7.scope - libcontainer container 0ce83e449ed23c68587b1c4e5d1009607d3f00c188504218c5bf59ff86a134f7. Sep 12 22:59:39.940291 containerd[1729]: time="2025-09-12T22:59:39.940259474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tthb6,Uid:9eccb961-cbcf-4ef8-995a-6c1fcf1d6314,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d492f6db664f86263b2da4a9fde9691cce17c0d243a99d0dd2e07a586e7b6af\"" Sep 12 22:59:39.943499 containerd[1729]: time="2025-09-12T22:59:39.943472676Z" level=info msg="CreateContainer within sandbox \"0d492f6db664f86263b2da4a9fde9691cce17c0d243a99d0dd2e07a586e7b6af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:59:39.946355 containerd[1729]: time="2025-09-12T22:59:39.946303659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-crszb,Uid:7282f672-ac34-42d0-987d-ba0c1831029e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ce83e449ed23c68587b1c4e5d1009607d3f00c188504218c5bf59ff86a134f7\"" Sep 12 22:59:39.948058 containerd[1729]: time="2025-09-12T22:59:39.948004408Z" level=info msg="CreateContainer within sandbox \"0ce83e449ed23c68587b1c4e5d1009607d3f00c188504218c5bf59ff86a134f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 22:59:39.966344 containerd[1729]: time="2025-09-12T22:59:39.965855626Z" level=info msg="Container 7c21a276a12716f1c10741e0db3b4f623d9685a6973f6bd1abf16ddf1cd18369: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:39.969230 containerd[1729]: time="2025-09-12T22:59:39.969204001Z" level=info msg="Container 17a2b21b581e981a69aedc2db3c03393c3454b3aaf349d1fe51e45e5e33037c7: CDI devices from CRI Config.CDIDevices: []" Sep 12 22:59:39.994249 containerd[1729]: time="2025-09-12T22:59:39.994225036Z" level=info msg="CreateContainer within sandbox \"0d492f6db664f86263b2da4a9fde9691cce17c0d243a99d0dd2e07a586e7b6af\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c21a276a12716f1c10741e0db3b4f623d9685a6973f6bd1abf16ddf1cd18369\"" Sep 12 22:59:39.994735 containerd[1729]: time="2025-09-12T22:59:39.994673836Z" level=info msg="StartContainer for \"7c21a276a12716f1c10741e0db3b4f623d9685a6973f6bd1abf16ddf1cd18369\"" Sep 12 22:59:39.996738 containerd[1729]: time="2025-09-12T22:59:39.996696674Z" level=info msg="connecting to shim 7c21a276a12716f1c10741e0db3b4f623d9685a6973f6bd1abf16ddf1cd18369" address="unix:///run/containerd/s/f04663ed56554ed87a7b57ac4e038e45b0fd3b718525e5fd30233aa0c5a49126" protocol=ttrpc version=3 Sep 12 22:59:39.998851 containerd[1729]: time="2025-09-12T22:59:39.998794140Z" level=info msg="CreateContainer within sandbox \"0ce83e449ed23c68587b1c4e5d1009607d3f00c188504218c5bf59ff86a134f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17a2b21b581e981a69aedc2db3c03393c3454b3aaf349d1fe51e45e5e33037c7\"" Sep 12 22:59:40.000870 containerd[1729]: time="2025-09-12T22:59:39.999371821Z" level=info msg="StartContainer for \"17a2b21b581e981a69aedc2db3c03393c3454b3aaf349d1fe51e45e5e33037c7\"" Sep 12 22:59:40.000870 containerd[1729]: time="2025-09-12T22:59:40.000222602Z" level=info msg="connecting to shim 17a2b21b581e981a69aedc2db3c03393c3454b3aaf349d1fe51e45e5e33037c7" address="unix:///run/containerd/s/89745e8b3aa4175f83ae43868863370d63be3c0343b1a12a63aab298adbdd6d1" protocol=ttrpc version=3 Sep 12 22:59:40.018844 systemd[1]: Started cri-containerd-7c21a276a12716f1c10741e0db3b4f623d9685a6973f6bd1abf16ddf1cd18369.scope - libcontainer container 7c21a276a12716f1c10741e0db3b4f623d9685a6973f6bd1abf16ddf1cd18369. Sep 12 22:59:40.021811 systemd[1]: Started cri-containerd-17a2b21b581e981a69aedc2db3c03393c3454b3aaf349d1fe51e45e5e33037c7.scope - libcontainer container 17a2b21b581e981a69aedc2db3c03393c3454b3aaf349d1fe51e45e5e33037c7. 
Sep 12 22:59:40.058416 containerd[1729]: time="2025-09-12T22:59:40.058328282Z" level=info msg="StartContainer for \"7c21a276a12716f1c10741e0db3b4f623d9685a6973f6bd1abf16ddf1cd18369\" returns successfully" Sep 12 22:59:40.063339 containerd[1729]: time="2025-09-12T22:59:40.063217745Z" level=info msg="StartContainer for \"17a2b21b581e981a69aedc2db3c03393c3454b3aaf349d1fe51e45e5e33037c7\" returns successfully" Sep 12 22:59:40.242512 kubelet[3160]: I0912 22:59:40.242385 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tthb6" podStartSLOduration=21.242368385 podStartE2EDuration="21.242368385s" podCreationTimestamp="2025-09-12 22:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:59:40.241071825 +0000 UTC m=+25.226036266" watchObservedRunningTime="2025-09-12 22:59:40.242368385 +0000 UTC m=+25.227332825" Sep 12 22:59:40.277322 kubelet[3160]: I0912 22:59:40.277163 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-crszb" podStartSLOduration=21.277145986 podStartE2EDuration="21.277145986s" podCreationTimestamp="2025-09-12 22:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 22:59:40.259651926 +0000 UTC m=+25.244616364" watchObservedRunningTime="2025-09-12 22:59:40.277145986 +0000 UTC m=+25.262110506" Sep 12 23:00:51.764873 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:48482.service - OpenSSH per-connection server daemon (10.200.16.10:48482). Sep 12 23:00:52.390375 sshd[4478]: Accepted publickey for core from 10.200.16.10 port 48482 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:00:52.391521 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:00:52.395983 systemd-logind[1688]: New session 10 of user core. Sep 12 23:00:52.405845 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 23:00:52.910955 sshd[4481]: Connection closed by 10.200.16.10 port 48482 Sep 12 23:00:52.911433 sshd-session[4478]: pam_unix(sshd:session): session closed for user core Sep 12 23:00:52.914616 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:48482.service: Deactivated successfully. Sep 12 23:00:52.916309 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 23:00:52.917916 systemd-logind[1688]: Session 10 logged out. Waiting for processes to exit. Sep 12 23:00:52.919568 systemd-logind[1688]: Removed session 10. Sep 12 23:00:58.027810 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:48498.service - OpenSSH per-connection server daemon (10.200.16.10:48498). Sep 12 23:00:58.651222 sshd[4494]: Accepted publickey for core from 10.200.16.10 port 48498 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:00:58.652365 sshd-session[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:00:58.657045 systemd-logind[1688]: New session 11 of user core. Sep 12 23:00:58.661896 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 23:00:59.139755 sshd[4497]: Connection closed by 10.200.16.10 port 48498 Sep 12 23:00:59.140295 sshd-session[4494]: pam_unix(sshd:session): session closed for user core Sep 12 23:00:59.143633 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:48498.service: Deactivated successfully. 
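The pod_startup_latency_tracker entries above are internally consistent: for cilium-c7w7b the logged podStartE2EDuration (~18.94s) matches the gap between podCreationTimestamp and watchObservedRunningTime, and podStartSLOduration (~13.34s) is that gap minus the image-pull window between firstStartedPulling and lastFinishedPulling (~5.60s); for the two coredns pods both pull timestamps are zero, so the SLO and E2E durations coincide at ~21.24s. A minimal sketch of that arithmetic, using the wall-clock timestamps as logged (the trailing "m=+..." monotonic readings are dropped here; the relationship is inferred from the numbers, not from kubelet source):

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the cilium-c7w7b entry above.
	created := parse("2025-09-12 22:59:19 +0000 UTC")                  // podCreationTimestamp
	observed := parse("2025-09-12 22:59:37.943730106 +0000 UTC")       // watchObservedRunningTime
	pullStart := parse("2025-09-12 22:59:20.032051652 +0000 UTC")      // firstStartedPulling
	pullEnd := parse("2025-09-12 22:59:25.636301531 +0000 UTC")        // lastFinishedPulling

	e2e := observed.Sub(created)   // ~18.943730106s, matches podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // ~5.604249879s image-pull window
	slo := e2e - pull              // ~13.33948s, matches podStartSLOduration
	fmt.Println(e2e, pull, slo)
}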
Sep 12 23:00:59.145497 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 23:00:59.146772 systemd-logind[1688]: Session 11 logged out. Waiting for processes to exit. Sep 12 23:00:59.147906 systemd-logind[1688]: Removed session 11. Sep 12 23:01:04.254865 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:48610.service - OpenSSH per-connection server daemon (10.200.16.10:48610). Sep 12 23:01:04.883293 sshd[4510]: Accepted publickey for core from 10.200.16.10 port 48610 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:04.884435 sshd-session[4510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:04.888891 systemd-logind[1688]: New session 12 of user core. Sep 12 23:01:04.893032 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 23:01:05.375173 sshd[4513]: Connection closed by 10.200.16.10 port 48610 Sep 12 23:01:05.375907 sshd-session[4510]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:05.380359 systemd-logind[1688]: Session 12 logged out. Waiting for processes to exit. Sep 12 23:01:05.381143 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:48610.service: Deactivated successfully. Sep 12 23:01:05.384535 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 23:01:05.389196 systemd-logind[1688]: Removed session 12. Sep 12 23:01:10.487167 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:54804.service - OpenSSH per-connection server daemon (10.200.16.10:54804). Sep 12 23:01:11.114533 sshd[4527]: Accepted publickey for core from 10.200.16.10 port 54804 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:11.115634 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:11.120085 systemd-logind[1688]: New session 13 of user core. Sep 12 23:01:11.123853 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 23:01:11.604174 sshd[4530]: Connection closed by 10.200.16.10 port 54804 Sep 12 23:01:11.604645 sshd-session[4527]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:11.607772 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:54804.service: Deactivated successfully. Sep 12 23:01:11.609375 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 23:01:11.610177 systemd-logind[1688]: Session 13 logged out. Waiting for processes to exit. Sep 12 23:01:11.611302 systemd-logind[1688]: Removed session 13. Sep 12 23:01:11.722004 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:54818.service - OpenSSH per-connection server daemon (10.200.16.10:54818). Sep 12 23:01:12.357481 sshd[4543]: Accepted publickey for core from 10.200.16.10 port 54818 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:12.357923 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:12.361576 systemd-logind[1688]: New session 14 of user core. Sep 12 23:01:12.368826 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 23:01:12.881150 sshd[4546]: Connection closed by 10.200.16.10 port 54818 Sep 12 23:01:12.881653 sshd-session[4543]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:12.885216 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:54818.service: Deactivated successfully. Sep 12 23:01:12.887121 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 23:01:12.888340 systemd-logind[1688]: Session 14 logged out. Waiting for processes to exit. 
Sep 12 23:01:12.889559 systemd-logind[1688]: Removed session 14. Sep 12 23:01:12.991404 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:54832.service - OpenSSH per-connection server daemon (10.200.16.10:54832). Sep 12 23:01:13.615807 sshd[4556]: Accepted publickey for core from 10.200.16.10 port 54832 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:13.616934 sshd-session[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:13.621100 systemd-logind[1688]: New session 15 of user core. Sep 12 23:01:13.630847 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 23:01:14.105554 sshd[4559]: Connection closed by 10.200.16.10 port 54832 Sep 12 23:01:14.106971 sshd-session[4556]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:14.110219 systemd-logind[1688]: Session 15 logged out. Waiting for processes to exit. Sep 12 23:01:14.110607 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:54832.service: Deactivated successfully. Sep 12 23:01:14.112441 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 23:01:14.114129 systemd-logind[1688]: Removed session 15. Sep 12 23:01:19.221557 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:54840.service - OpenSSH per-connection server daemon (10.200.16.10:54840). Sep 12 23:01:19.847792 sshd[4573]: Accepted publickey for core from 10.200.16.10 port 54840 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:19.848928 sshd-session[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:19.853224 systemd-logind[1688]: New session 16 of user core. Sep 12 23:01:19.855857 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 23:01:20.340977 sshd[4576]: Connection closed by 10.200.16.10 port 54840 Sep 12 23:01:20.341479 sshd-session[4573]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:20.344263 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:54840.service: Deactivated successfully. Sep 12 23:01:20.345933 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 23:01:20.348158 systemd-logind[1688]: Session 16 logged out. Waiting for processes to exit. Sep 12 23:01:20.349034 systemd-logind[1688]: Removed session 16. Sep 12 23:01:20.456454 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:46486.service - OpenSSH per-connection server daemon (10.200.16.10:46486). Sep 12 23:01:21.088385 sshd[4590]: Accepted publickey for core from 10.200.16.10 port 46486 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:21.089421 sshd-session[4590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:21.093785 systemd-logind[1688]: New session 17 of user core. Sep 12 23:01:21.099844 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 23:01:21.628339 sshd[4593]: Connection closed by 10.200.16.10 port 46486 Sep 12 23:01:21.628825 sshd-session[4590]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:21.632035 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:46486.service: Deactivated successfully. Sep 12 23:01:21.633856 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 23:01:21.634577 systemd-logind[1688]: Session 17 logged out. Waiting for processes to exit. Sep 12 23:01:21.635882 systemd-logind[1688]: Removed session 17. 
Sep 12 23:01:21.745445 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:46490.service - OpenSSH per-connection server daemon (10.200.16.10:46490). Sep 12 23:01:22.371741 sshd[4602]: Accepted publickey for core from 10.200.16.10 port 46490 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:22.372924 sshd-session[4602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:22.376796 systemd-logind[1688]: New session 18 of user core. Sep 12 23:01:22.384825 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 23:01:23.323929 sshd[4605]: Connection closed by 10.200.16.10 port 46490 Sep 12 23:01:23.324453 sshd-session[4602]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:23.327917 systemd-logind[1688]: Session 18 logged out. Waiting for processes to exit. Sep 12 23:01:23.328046 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:46490.service: Deactivated successfully. Sep 12 23:01:23.329894 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 23:01:23.331636 systemd-logind[1688]: Removed session 18. Sep 12 23:01:23.434655 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:46500.service - OpenSSH per-connection server daemon (10.200.16.10:46500). Sep 12 23:01:24.066782 sshd[4622]: Accepted publickey for core from 10.200.16.10 port 46500 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:24.067931 sshd-session[4622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:24.072266 systemd-logind[1688]: New session 19 of user core. Sep 12 23:01:24.078815 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 23:01:24.637922 sshd[4625]: Connection closed by 10.200.16.10 port 46500 Sep 12 23:01:24.638404 sshd-session[4622]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:24.641601 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:46500.service: Deactivated successfully. Sep 12 23:01:24.643444 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 23:01:24.644224 systemd-logind[1688]: Session 19 logged out. Waiting for processes to exit. Sep 12 23:01:24.645492 systemd-logind[1688]: Removed session 19. Sep 12 23:01:24.751552 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:46512.service - OpenSSH per-connection server daemon (10.200.16.10:46512). Sep 12 23:01:25.378335 sshd[4635]: Accepted publickey for core from 10.200.16.10 port 46512 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:25.379412 sshd-session[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:25.383687 systemd-logind[1688]: New session 20 of user core. Sep 12 23:01:25.389836 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 23:01:25.862760 sshd[4638]: Connection closed by 10.200.16.10 port 46512 Sep 12 23:01:25.863237 sshd-session[4635]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:25.866557 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:46512.service: Deactivated successfully. Sep 12 23:01:25.868379 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 23:01:25.869223 systemd-logind[1688]: Session 20 logged out. Waiting for processes to exit. Sep 12 23:01:25.870665 systemd-logind[1688]: Removed session 20. Sep 12 23:01:30.981868 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:36478.service - OpenSSH per-connection server daemon (10.200.16.10:36478). 
Sep 12 23:01:31.611299 sshd[4652]: Accepted publickey for core from 10.200.16.10 port 36478 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:31.612498 sshd-session[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:31.616751 systemd-logind[1688]: New session 21 of user core. Sep 12 23:01:31.620849 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 23:01:32.100350 sshd[4655]: Connection closed by 10.200.16.10 port 36478 Sep 12 23:01:32.100853 sshd-session[4652]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:32.103498 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:36478.service: Deactivated successfully. Sep 12 23:01:32.105134 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 23:01:32.107339 systemd-logind[1688]: Session 21 logged out. Waiting for processes to exit. Sep 12 23:01:32.108479 systemd-logind[1688]: Removed session 21. Sep 12 23:01:37.212906 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:36490.service - OpenSSH per-connection server daemon (10.200.16.10:36490). Sep 12 23:01:37.843189 sshd[4667]: Accepted publickey for core from 10.200.16.10 port 36490 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:37.844347 sshd-session[4667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:37.848584 systemd-logind[1688]: New session 22 of user core. Sep 12 23:01:37.853865 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 23:01:38.327692 sshd[4670]: Connection closed by 10.200.16.10 port 36490 Sep 12 23:01:38.328228 sshd-session[4667]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:38.331754 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:36490.service: Deactivated successfully. Sep 12 23:01:38.333403 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 23:01:38.334191 systemd-logind[1688]: Session 22 logged out. Waiting for processes to exit. Sep 12 23:01:38.335540 systemd-logind[1688]: Removed session 22. Sep 12 23:01:43.440752 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:55198.service - OpenSSH per-connection server daemon (10.200.16.10:55198). Sep 12 23:01:44.070229 sshd[4682]: Accepted publickey for core from 10.200.16.10 port 55198 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:44.071380 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:44.075632 systemd-logind[1688]: New session 23 of user core. Sep 12 23:01:44.085850 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 23:01:44.557430 sshd[4685]: Connection closed by 10.200.16.10 port 55198 Sep 12 23:01:44.557973 sshd-session[4682]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:44.561223 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:55198.service: Deactivated successfully. Sep 12 23:01:44.562941 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 23:01:44.563854 systemd-logind[1688]: Session 23 logged out. Waiting for processes to exit. Sep 12 23:01:44.565529 systemd-logind[1688]: Removed session 23. Sep 12 23:01:44.672961 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:55210.service - OpenSSH per-connection server daemon (10.200.16.10:55210). 
Sep 12 23:01:45.293423 sshd[4697]: Accepted publickey for core from 10.200.16.10 port 55210 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:45.294553 sshd-session[4697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:45.298890 systemd-logind[1688]: New session 24 of user core. Sep 12 23:01:45.306829 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 23:01:46.920888 containerd[1729]: time="2025-09-12T23:01:46.920842496Z" level=info msg="StopContainer for \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" with timeout 30 (s)" Sep 12 23:01:46.922158 containerd[1729]: time="2025-09-12T23:01:46.921971000Z" level=info msg="Stop container \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" with signal terminated" Sep 12 23:01:46.934462 containerd[1729]: time="2025-09-12T23:01:46.934405807Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:01:46.939322 containerd[1729]: time="2025-09-12T23:01:46.939270793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" id:\"f4428e68e864bb05e01b6e857d419d649a8ac24b9a789cac5fce3d152eef48be\" pid:4720 exited_at:{seconds:1757718106 nanos:938794009}" Sep 12 23:01:46.940279 systemd[1]: cri-containerd-3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072.scope: Deactivated successfully. Sep 12 23:01:46.941973 containerd[1729]: time="2025-09-12T23:01:46.941947846Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" id:\"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" pid:3631 exited_at:{seconds:1757718106 nanos:941641267}" Sep 12 23:01:46.943591 containerd[1729]: time="2025-09-12T23:01:46.943531113Z" level=info msg="received exit event container_id:\"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" id:\"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" pid:3631 exited_at:{seconds:1757718106 nanos:941641267}" Sep 12 23:01:46.943779 containerd[1729]: time="2025-09-12T23:01:46.943760269Z" level=info msg="StopContainer for \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" with timeout 2 (s)" Sep 12 23:01:46.944051 containerd[1729]: time="2025-09-12T23:01:46.944036193Z" level=info msg="Stop container \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" with signal terminated" Sep 12 23:01:46.952539 systemd-networkd[1486]: lxc_health: Link DOWN Sep 12 23:01:46.952548 systemd-networkd[1486]: lxc_health: Lost carrier Sep 12 23:01:46.970310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072-rootfs.mount: Deactivated successfully. Sep 12 23:01:46.972156 systemd[1]: cri-containerd-6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf.scope: Deactivated successfully. Sep 12 23:01:46.972430 systemd[1]: cri-containerd-6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf.scope: Consumed 5.366s CPU time, 124.4M memory peak, 144K read from disk, 13.3M written to disk. 
Sep 12 23:01:46.974798 containerd[1729]: time="2025-09-12T23:01:46.974767283Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" id:\"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" pid:3791 exited_at:{seconds:1757718106 nanos:974129471}" Sep 12 23:01:46.975185 containerd[1729]: time="2025-09-12T23:01:46.975161080Z" level=info msg="received exit event container_id:\"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" id:\"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" pid:3791 exited_at:{seconds:1757718106 nanos:974129471}" Sep 12 23:01:46.989822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf-rootfs.mount: Deactivated successfully. Sep 12 23:01:47.060312 containerd[1729]: time="2025-09-12T23:01:47.060287582Z" level=info msg="StopContainer for \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" returns successfully" Sep 12 23:01:47.061067 containerd[1729]: time="2025-09-12T23:01:47.060850260Z" level=info msg="StopPodSandbox for \"f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5\"" Sep 12 23:01:47.061067 containerd[1729]: time="2025-09-12T23:01:47.060908123Z" level=info msg="Container to stop \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:01:47.066998 containerd[1729]: time="2025-09-12T23:01:47.066973518Z" level=info msg="StopContainer for \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" returns successfully" Sep 12 23:01:47.068033 containerd[1729]: time="2025-09-12T23:01:47.067753214Z" level=info msg="StopPodSandbox for \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\"" Sep 12 23:01:47.068123 containerd[1729]: time="2025-09-12T23:01:47.068107922Z" level=info msg="Container to stop \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:01:47.068175 containerd[1729]: time="2025-09-12T23:01:47.068125013Z" level=info msg="Container to stop \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:01:47.068175 containerd[1729]: time="2025-09-12T23:01:47.068135926Z" level=info msg="Container to stop \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:01:47.068175 containerd[1729]: time="2025-09-12T23:01:47.068145069Z" level=info msg="Container to stop \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:01:47.068175 containerd[1729]: time="2025-09-12T23:01:47.068154701Z" level=info msg="Container to stop \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 23:01:47.073376 systemd[1]: cri-containerd-f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5.scope: Deactivated successfully. 
Sep 12 23:01:47.074772 containerd[1729]: time="2025-09-12T23:01:47.074735539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5\" id:\"f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5\" pid:3357 exit_status:137 exited_at:{seconds:1757718107 nanos:74140259}" Sep 12 23:01:47.077778 systemd[1]: cri-containerd-85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6.scope: Deactivated successfully. Sep 12 23:01:47.105006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6-rootfs.mount: Deactivated successfully. Sep 12 23:01:47.109388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5-rootfs.mount: Deactivated successfully. Sep 12 23:01:47.144212 containerd[1729]: time="2025-09-12T23:01:47.143922297Z" level=info msg="TearDown network for sandbox \"f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5\" successfully" Sep 12 23:01:47.144212 containerd[1729]: time="2025-09-12T23:01:47.143946401Z" level=info msg="StopPodSandbox for \"f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5\" returns successfully" Sep 12 23:01:47.144212 containerd[1729]: time="2025-09-12T23:01:47.143984949Z" level=info msg="received exit event sandbox_id:\"f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5\" exit_status:137 exited_at:{seconds:1757718107 nanos:74140259}" Sep 12 23:01:47.144212 containerd[1729]: time="2025-09-12T23:01:47.144071901Z" level=info msg="shim disconnected" id=85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6 namespace=k8s.io Sep 12 23:01:47.144212 containerd[1729]: time="2025-09-12T23:01:47.144085401Z" level=warning msg="cleaning up after shim disconnected" id=85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6 namespace=k8s.io Sep 12 23:01:47.144212 containerd[1729]: time="2025-09-12T23:01:47.144092592Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:01:47.145263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5-shm.mount: Deactivated successfully. 
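The TaskExit events for both sandboxes report exit_status:137. Read with the usual shell convention, statuses above 128 encode termination by a signal (status minus 128), so 137 corresponds to SIGKILL (9), consistent with the sandbox tasks being torn down during StopPodSandbox rather than exiting on their own. A small decoding sketch of that convention (a general rule of thumb, not a containerd-specific API):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	status := 137 // exit_status reported in the TaskExit events above
	if status > 128 {
		sig := syscall.Signal(status - 128)
		fmt.Printf("exit_status %d => terminated by signal %d (%v)\n", status, int(sig), sig)
	}
}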
Sep 12 23:01:47.147730 containerd[1729]: time="2025-09-12T23:01:47.146227196Z" level=info msg="shim disconnected" id=f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5 namespace=k8s.io Sep 12 23:01:47.147730 containerd[1729]: time="2025-09-12T23:01:47.146295611Z" level=warning msg="cleaning up after shim disconnected" id=f545252b770642ca436b4fafa4c16df76d1185395948b11481016cff5a5a8ed5 namespace=k8s.io Sep 12 23:01:47.147730 containerd[1729]: time="2025-09-12T23:01:47.146305113Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:01:47.165765 containerd[1729]: time="2025-09-12T23:01:47.165742708Z" level=info msg="received exit event sandbox_id:\"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" exit_status:137 exited_at:{seconds:1757718107 nanos:79351615}" Sep 12 23:01:47.166301 containerd[1729]: time="2025-09-12T23:01:47.165941678Z" level=info msg="TearDown network for sandbox \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" successfully" Sep 12 23:01:47.166379 containerd[1729]: time="2025-09-12T23:01:47.166368820Z" level=info msg="StopPodSandbox for \"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" returns successfully" Sep 12 23:01:47.168339 containerd[1729]: time="2025-09-12T23:01:47.168287661Z" level=info msg="TaskExit event in podsandbox handler container_id:\"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" id:\"85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6\" pid:3283 exit_status:137 exited_at:{seconds:1757718107 nanos:79351615}" Sep 12 23:01:47.245746 kubelet[3160]: I0912 23:01:47.245664 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-host-proc-sys-net\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.246665 kubelet[3160]: I0912 23:01:47.245729 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:01:47.246799 kubelet[3160]: I0912 23:01:47.246739 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-run\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.246799 kubelet[3160]: I0912 23:01:47.246742 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:01:47.246799 kubelet[3160]: I0912 23:01:47.246769 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-xtables-lock\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.246799 kubelet[3160]: I0912 23:01:47.246784 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:01:47.247913 kubelet[3160]: I0912 23:01:47.246798 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-cgroup\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.247913 kubelet[3160]: I0912 23:01:47.246817 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-lib-modules\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.247913 kubelet[3160]: I0912 23:01:47.246834 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cni-path\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.247913 kubelet[3160]: I0912 23:01:47.246867 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:01:47.247913 kubelet[3160]: I0912 23:01:47.246869 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ba6f6cf-5902-4bae-acd8-754abe3dcec5-cilium-config-path\") pod \"7ba6f6cf-5902-4bae-acd8-754abe3dcec5\" (UID: \"7ba6f6cf-5902-4bae-acd8-754abe3dcec5\") " Sep 12 23:01:47.247913 kubelet[3160]: I0912 23:01:47.246881 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:01:47.248071 kubelet[3160]: I0912 23:01:47.246893 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngglv\" (UniqueName: \"kubernetes.io/projected/7ba6f6cf-5902-4bae-acd8-754abe3dcec5-kube-api-access-ngglv\") pod \"7ba6f6cf-5902-4bae-acd8-754abe3dcec5\" (UID: \"7ba6f6cf-5902-4bae-acd8-754abe3dcec5\") " Sep 12 23:01:47.248071 kubelet[3160]: I0912 23:01:47.246895 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:01:47.248071 kubelet[3160]: I0912 23:01:47.246912 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-bpf-maps\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.248071 kubelet[3160]: I0912 23:01:47.246944 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-config-path\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.248071 kubelet[3160]: I0912 23:01:47.246963 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6da88806-732d-45b2-b144-3ee5c53f33c7-clustermesh-secrets\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.248071 kubelet[3160]: I0912 23:01:47.246980 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpjmt\" (UniqueName: \"kubernetes.io/projected/6da88806-732d-45b2-b144-3ee5c53f33c7-kube-api-access-bpjmt\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.248217 kubelet[3160]: I0912 23:01:47.247012 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-etc-cni-netd\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.248217 kubelet[3160]: I0912 23:01:47.247031 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-host-proc-sys-kernel\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.248217 kubelet[3160]: I0912 23:01:47.247049 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-hostproc\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.248217 kubelet[3160]: I0912 23:01:47.247069 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/6da88806-732d-45b2-b144-3ee5c53f33c7-hubble-tls\") pod \"6da88806-732d-45b2-b144-3ee5c53f33c7\" (UID: \"6da88806-732d-45b2-b144-3ee5c53f33c7\") " Sep 12 23:01:47.248217 kubelet[3160]: I0912 23:01:47.247119 3160 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-host-proc-sys-net\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.248217 kubelet[3160]: I0912 23:01:47.247133 3160 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-run\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.248217 kubelet[3160]: I0912 23:01:47.247145 3160 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-xtables-lock\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.248387 kubelet[3160]: I0912 23:01:47.247166 3160 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-cgroup\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.248387 kubelet[3160]: I0912 23:01:47.247177 3160 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-lib-modules\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.248387 kubelet[3160]: I0912 23:01:47.247187 3160 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-cni-path\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.249721 kubelet[3160]: I0912 23:01:47.249680 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ba6f6cf-5902-4bae-acd8-754abe3dcec5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7ba6f6cf-5902-4bae-acd8-754abe3dcec5" (UID: "7ba6f6cf-5902-4bae-acd8-754abe3dcec5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 23:01:47.251162 kubelet[3160]: I0912 23:01:47.251133 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:01:47.251235 kubelet[3160]: I0912 23:01:47.251183 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:01:47.251235 kubelet[3160]: I0912 23:01:47.251203 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:01:47.251289 kubelet[3160]: I0912 23:01:47.251263 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da88806-732d-45b2-b144-3ee5c53f33c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:01:47.252530 kubelet[3160]: I0912 23:01:47.252505 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 23:01:47.252699 kubelet[3160]: I0912 23:01:47.252682 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da88806-732d-45b2-b144-3ee5c53f33c7-kube-api-access-bpjmt" (OuterVolumeSpecName: "kube-api-access-bpjmt") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "kube-api-access-bpjmt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:01:47.253132 kubelet[3160]: I0912 23:01:47.253116 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ba6f6cf-5902-4bae-acd8-754abe3dcec5-kube-api-access-ngglv" (OuterVolumeSpecName: "kube-api-access-ngglv") pod "7ba6f6cf-5902-4bae-acd8-754abe3dcec5" (UID: "7ba6f6cf-5902-4bae-acd8-754abe3dcec5"). InnerVolumeSpecName "kube-api-access-ngglv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:01:47.254212 kubelet[3160]: I0912 23:01:47.254172 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 23:01:47.254842 kubelet[3160]: I0912 23:01:47.254817 3160 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da88806-732d-45b2-b144-3ee5c53f33c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6da88806-732d-45b2-b144-3ee5c53f33c7" (UID: "6da88806-732d-45b2-b144-3ee5c53f33c7"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 23:01:47.348288 kubelet[3160]: I0912 23:01:47.348247 3160 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ba6f6cf-5902-4bae-acd8-754abe3dcec5-cilium-config-path\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.348288 kubelet[3160]: I0912 23:01:47.348284 3160 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ngglv\" (UniqueName: \"kubernetes.io/projected/7ba6f6cf-5902-4bae-acd8-754abe3dcec5-kube-api-access-ngglv\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.348441 kubelet[3160]: I0912 23:01:47.348294 3160 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-bpf-maps\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.348441 kubelet[3160]: I0912 23:01:47.348306 3160 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6da88806-732d-45b2-b144-3ee5c53f33c7-cilium-config-path\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.348441 kubelet[3160]: I0912 23:01:47.348318 3160 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6da88806-732d-45b2-b144-3ee5c53f33c7-clustermesh-secrets\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.348441 kubelet[3160]: I0912 23:01:47.348327 3160 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bpjmt\" (UniqueName: \"kubernetes.io/projected/6da88806-732d-45b2-b144-3ee5c53f33c7-kube-api-access-bpjmt\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.348441 kubelet[3160]: I0912 23:01:47.348336 3160 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-etc-cni-netd\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.348441 kubelet[3160]: I0912 23:01:47.348346 3160 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-host-proc-sys-kernel\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.348441 kubelet[3160]: I0912 23:01:47.348355 3160 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6da88806-732d-45b2-b144-3ee5c53f33c7-hostproc\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.348441 kubelet[3160]: I0912 23:01:47.348364 3160 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6da88806-732d-45b2-b144-3ee5c53f33c7-hubble-tls\") on node \"ci-4459.0.0-a-49ee7b6560\" DevicePath \"\"" Sep 12 23:01:47.434305 kubelet[3160]: I0912 23:01:47.434279 3160 scope.go:117] "RemoveContainer" containerID="3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072" Sep 12 23:01:47.438029 containerd[1729]: time="2025-09-12T23:01:47.437693233Z" level=info msg="RemoveContainer for \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\"" Sep 12 23:01:47.440791 systemd[1]: Removed slice kubepods-besteffort-pod7ba6f6cf_5902_4bae_acd8_754abe3dcec5.slice - libcontainer container kubepods-besteffort-pod7ba6f6cf_5902_4bae_acd8_754abe3dcec5.slice. 
Sep 12 23:01:47.449547 systemd[1]: Removed slice kubepods-burstable-pod6da88806_732d_45b2_b144_3ee5c53f33c7.slice - libcontainer container kubepods-burstable-pod6da88806_732d_45b2_b144_3ee5c53f33c7.slice. Sep 12 23:01:47.449797 systemd[1]: kubepods-burstable-pod6da88806_732d_45b2_b144_3ee5c53f33c7.slice: Consumed 5.434s CPU time, 124.9M memory peak, 144K read from disk, 13.3M written to disk. Sep 12 23:01:47.462512 containerd[1729]: time="2025-09-12T23:01:47.462394202Z" level=info msg="RemoveContainer for \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" returns successfully" Sep 12 23:01:47.462774 kubelet[3160]: I0912 23:01:47.462727 3160 scope.go:117] "RemoveContainer" containerID="3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072" Sep 12 23:01:47.463246 containerd[1729]: time="2025-09-12T23:01:47.463145224Z" level=error msg="ContainerStatus for \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\": not found" Sep 12 23:01:47.463371 kubelet[3160]: E0912 23:01:47.463350 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\": not found" containerID="3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072" Sep 12 23:01:47.463543 kubelet[3160]: I0912 23:01:47.463388 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072"} err="failed to get container status \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\": rpc error: code = NotFound desc = an error occurred when try to find container \"3530ee3956f896704d3f77f11cf1786c03440f494633a2171e2a0147ad6f7072\": not found" Sep 12 23:01:47.463543 kubelet[3160]: I0912 23:01:47.463471 3160 scope.go:117] "RemoveContainer" containerID="6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf" Sep 12 23:01:47.465080 containerd[1729]: time="2025-09-12T23:01:47.465032016Z" level=info msg="RemoveContainer for \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\"" Sep 12 23:01:47.474273 containerd[1729]: time="2025-09-12T23:01:47.474235897Z" level=info msg="RemoveContainer for \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" returns successfully" Sep 12 23:01:47.474433 kubelet[3160]: I0912 23:01:47.474414 3160 scope.go:117] "RemoveContainer" containerID="930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560" Sep 12 23:01:47.476154 containerd[1729]: time="2025-09-12T23:01:47.475698422Z" level=info msg="RemoveContainer for \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\"" Sep 12 23:01:47.482460 containerd[1729]: time="2025-09-12T23:01:47.482423724Z" level=info msg="RemoveContainer for \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\" returns successfully" Sep 12 23:01:47.482587 kubelet[3160]: I0912 23:01:47.482569 3160 scope.go:117] "RemoveContainer" containerID="ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1" Sep 12 23:01:47.484719 containerd[1729]: time="2025-09-12T23:01:47.484549816Z" level=info msg="RemoveContainer for \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\"" Sep 12 23:01:47.492016 containerd[1729]: 
time="2025-09-12T23:01:47.491992509Z" level=info msg="RemoveContainer for \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\" returns successfully" Sep 12 23:01:47.492172 kubelet[3160]: I0912 23:01:47.492129 3160 scope.go:117] "RemoveContainer" containerID="0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944" Sep 12 23:01:47.493497 containerd[1729]: time="2025-09-12T23:01:47.493472246Z" level=info msg="RemoveContainer for \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\"" Sep 12 23:01:47.504721 containerd[1729]: time="2025-09-12T23:01:47.504604247Z" level=info msg="RemoveContainer for \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\" returns successfully" Sep 12 23:01:47.506290 kubelet[3160]: I0912 23:01:47.504785 3160 scope.go:117] "RemoveContainer" containerID="f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575" Sep 12 23:01:47.507345 containerd[1729]: time="2025-09-12T23:01:47.507309214Z" level=info msg="RemoveContainer for \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\"" Sep 12 23:01:47.522053 containerd[1729]: time="2025-09-12T23:01:47.521984191Z" level=info msg="RemoveContainer for \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\" returns successfully" Sep 12 23:01:47.522441 kubelet[3160]: I0912 23:01:47.522351 3160 scope.go:117] "RemoveContainer" containerID="6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf" Sep 12 23:01:47.522803 containerd[1729]: time="2025-09-12T23:01:47.522697756Z" level=error msg="ContainerStatus for \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\": not found" Sep 12 23:01:47.523814 kubelet[3160]: E0912 23:01:47.523774 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\": not found" containerID="6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf" Sep 12 23:01:47.523978 kubelet[3160]: I0912 23:01:47.523821 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf"} err="failed to get container status \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d3b874c0fab6c43ae5d5da32b31b14b3d84877ecb98de9b15aff8fcd9b40aaf\": not found" Sep 12 23:01:47.523978 kubelet[3160]: I0912 23:01:47.523840 3160 scope.go:117] "RemoveContainer" containerID="930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560" Sep 12 23:01:47.524954 containerd[1729]: time="2025-09-12T23:01:47.524668154Z" level=error msg="ContainerStatus for \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\": not found" Sep 12 23:01:47.525024 kubelet[3160]: E0912 23:01:47.524847 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\": not found" 
containerID="930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560" Sep 12 23:01:47.525024 kubelet[3160]: I0912 23:01:47.524867 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560"} err="failed to get container status \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\": rpc error: code = NotFound desc = an error occurred when try to find container \"930f61f7da41ff17847a813fa35b9f6f1d2deb1586d0596cab23fe4257eb8560\": not found" Sep 12 23:01:47.525024 kubelet[3160]: I0912 23:01:47.524884 3160 scope.go:117] "RemoveContainer" containerID="ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1" Sep 12 23:01:47.525254 containerd[1729]: time="2025-09-12T23:01:47.525232198Z" level=error msg="ContainerStatus for \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\": not found" Sep 12 23:01:47.525543 kubelet[3160]: E0912 23:01:47.525478 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\": not found" containerID="ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1" Sep 12 23:01:47.525543 kubelet[3160]: I0912 23:01:47.525497 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1"} err="failed to get container status \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec55ebff6482ad773500bc644257c7ffad68006d6e089bf6deafeedf715c73a1\": not found" Sep 12 23:01:47.525543 kubelet[3160]: I0912 23:01:47.525514 3160 scope.go:117] "RemoveContainer" containerID="0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944" Sep 12 23:01:47.526078 containerd[1729]: time="2025-09-12T23:01:47.526022145Z" level=error msg="ContainerStatus for \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\": not found" Sep 12 23:01:47.526425 kubelet[3160]: E0912 23:01:47.526193 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\": not found" containerID="0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944" Sep 12 23:01:47.526565 kubelet[3160]: I0912 23:01:47.526480 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944"} err="failed to get container status \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e6db29217097c0cbd4cf82d50a5ef03c0774f24ccd4871b7a87656e7a990944\": not found" Sep 12 23:01:47.526565 kubelet[3160]: I0912 23:01:47.526498 3160 scope.go:117] "RemoveContainer" 
containerID="f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575" Sep 12 23:01:47.526807 containerd[1729]: time="2025-09-12T23:01:47.526782178Z" level=error msg="ContainerStatus for \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\": not found" Sep 12 23:01:47.526900 kubelet[3160]: E0912 23:01:47.526886 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\": not found" containerID="f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575" Sep 12 23:01:47.526949 kubelet[3160]: I0912 23:01:47.526907 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575"} err="failed to get container status \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\": rpc error: code = NotFound desc = an error occurred when try to find container \"f889332b5149faba176479976af4abb6e3dc78f0277857c48eaacf4715ba6575\": not found" Sep 12 23:01:47.970544 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-85be06ee1ab54f8a9adac2bafbb820ef507983e61d996df65bf6ea90e23d7ac6-shm.mount: Deactivated successfully. Sep 12 23:01:47.970955 systemd[1]: var-lib-kubelet-pods-7ba6f6cf\x2d5902\x2d4bae\x2dacd8\x2d754abe3dcec5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dngglv.mount: Deactivated successfully. Sep 12 23:01:47.971027 systemd[1]: var-lib-kubelet-pods-6da88806\x2d732d\x2d45b2\x2db144\x2d3ee5c53f33c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbpjmt.mount: Deactivated successfully. Sep 12 23:01:47.971098 systemd[1]: var-lib-kubelet-pods-6da88806\x2d732d\x2d45b2\x2db144\x2d3ee5c53f33c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 23:01:47.971161 systemd[1]: var-lib-kubelet-pods-6da88806\x2d732d\x2d45b2\x2db144\x2d3ee5c53f33c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 23:01:48.163886 update_engine[1689]: I20250912 23:01:48.163836 1689 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 12 23:01:48.163886 update_engine[1689]: I20250912 23:01:48.163882 1689 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 12 23:01:48.164254 update_engine[1689]: I20250912 23:01:48.164032 1689 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 12 23:01:48.164412 update_engine[1689]: I20250912 23:01:48.164374 1689 omaha_request_params.cc:62] Current group set to alpha Sep 12 23:01:48.164680 update_engine[1689]: I20250912 23:01:48.164513 1689 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 12 23:01:48.164680 update_engine[1689]: I20250912 23:01:48.164525 1689 update_attempter.cc:643] Scheduling an action processor start. 
Sep 12 23:01:48.164680 update_engine[1689]: I20250912 23:01:48.164544 1689 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 23:01:48.164680 update_engine[1689]: I20250912 23:01:48.164579 1689 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 12 23:01:48.164680 update_engine[1689]: I20250912 23:01:48.164633 1689 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 23:01:48.164680 update_engine[1689]: I20250912 23:01:48.164637 1689 omaha_request_action.cc:272] Request: Sep 12 23:01:48.164680 update_engine[1689]: I20250912 23:01:48.164645 1689 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 23:01:48.165659 locksmithd[1759]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 12 23:01:48.165855 update_engine[1689]: I20250912 23:01:48.165757 1689 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 23:01:48.166296 update_engine[1689]: I20250912 23:01:48.166270 1689 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 23:01:48.187522 update_engine[1689]: E20250912 23:01:48.187465 1689 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 23:01:48.187602 update_engine[1689]: I20250912 23:01:48.187561 1689 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 12 23:01:48.966409 sshd[4700]: Connection closed by 10.200.16.10 port 55210 Sep 12 23:01:48.967097 sshd-session[4697]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:48.970843 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:55210.service: Deactivated successfully. Sep 12 23:01:48.972606 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 23:01:48.973485 systemd-logind[1688]: Session 24 logged out. Waiting for processes to exit. Sep 12 23:01:48.975202 systemd-logind[1688]: Removed session 24. Sep 12 23:01:49.080822 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:55226.service - OpenSSH per-connection server daemon (10.200.16.10:55226). Sep 12 23:01:49.149070 kubelet[3160]: I0912 23:01:49.149040 3160 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da88806-732d-45b2-b144-3ee5c53f33c7" path="/var/lib/kubelet/pods/6da88806-732d-45b2-b144-3ee5c53f33c7/volumes" Sep 12 23:01:49.149521 kubelet[3160]: I0912 23:01:49.149500 3160 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ba6f6cf-5902-4bae-acd8-754abe3dcec5" path="/var/lib/kubelet/pods/7ba6f6cf-5902-4bae-acd8-754abe3dcec5/volumes" Sep 12 23:01:49.707767 sshd[4853]: Accepted publickey for core from 10.200.16.10 port 55226 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:49.708937 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:49.713366 systemd-logind[1688]: New session 25 of user core. Sep 12 23:01:49.717978 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 23:01:50.228999 kubelet[3160]: E0912 23:01:50.228963 3160 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 23:01:50.523062 kubelet[3160]: I0912 23:01:50.522955 3160 memory_manager.go:355] "RemoveStaleState removing state" podUID="6da88806-732d-45b2-b144-3ee5c53f33c7" containerName="cilium-agent" Sep 12 23:01:50.523062 kubelet[3160]: I0912 23:01:50.522994 3160 memory_manager.go:355] "RemoveStaleState removing state" podUID="7ba6f6cf-5902-4bae-acd8-754abe3dcec5" containerName="cilium-operator" Sep 12 23:01:50.534485 systemd[1]: Created slice kubepods-burstable-podd3ed578d_b67f_43ad_b34a_f77c59139eae.slice - libcontainer container kubepods-burstable-podd3ed578d_b67f_43ad_b34a_f77c59139eae.slice. Sep 12 23:01:50.566423 kubelet[3160]: I0912 23:01:50.566392 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3ed578d-b67f-43ad-b34a-f77c59139eae-cilium-cgroup\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.566539 kubelet[3160]: I0912 23:01:50.566429 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3ed578d-b67f-43ad-b34a-f77c59139eae-cni-path\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.566539 kubelet[3160]: I0912 23:01:50.566449 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3ed578d-b67f-43ad-b34a-f77c59139eae-etc-cni-netd\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.566539 kubelet[3160]: I0912 23:01:50.566466 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3ed578d-b67f-43ad-b34a-f77c59139eae-xtables-lock\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.566539 kubelet[3160]: I0912 23:01:50.566485 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d3ed578d-b67f-43ad-b34a-f77c59139eae-cilium-ipsec-secrets\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.566539 kubelet[3160]: I0912 23:01:50.566501 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3ed578d-b67f-43ad-b34a-f77c59139eae-host-proc-sys-kernel\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.566539 kubelet[3160]: I0912 23:01:50.566517 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3ed578d-b67f-43ad-b34a-f77c59139eae-cilium-run\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.567905 kubelet[3160]: I0912 23:01:50.566532 3160 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3ed578d-b67f-43ad-b34a-f77c59139eae-hostproc\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.567905 kubelet[3160]: I0912 23:01:50.566548 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3ed578d-b67f-43ad-b34a-f77c59139eae-clustermesh-secrets\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.567905 kubelet[3160]: I0912 23:01:50.566566 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3ed578d-b67f-43ad-b34a-f77c59139eae-cilium-config-path\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.567905 kubelet[3160]: I0912 23:01:50.566584 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3ed578d-b67f-43ad-b34a-f77c59139eae-hubble-tls\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.567905 kubelet[3160]: I0912 23:01:50.566599 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwphl\" (UniqueName: \"kubernetes.io/projected/d3ed578d-b67f-43ad-b34a-f77c59139eae-kube-api-access-rwphl\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.567905 kubelet[3160]: I0912 23:01:50.566617 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3ed578d-b67f-43ad-b34a-f77c59139eae-lib-modules\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.568103 kubelet[3160]: I0912 23:01:50.566636 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3ed578d-b67f-43ad-b34a-f77c59139eae-bpf-maps\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.568103 kubelet[3160]: I0912 23:01:50.566654 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3ed578d-b67f-43ad-b34a-f77c59139eae-host-proc-sys-net\") pod \"cilium-jgdhn\" (UID: \"d3ed578d-b67f-43ad-b34a-f77c59139eae\") " pod="kube-system/cilium-jgdhn" Sep 12 23:01:50.633187 sshd[4856]: Connection closed by 10.200.16.10 port 55226 Sep 12 23:01:50.633624 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:50.636202 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:55226.service: Deactivated successfully. Sep 12 23:01:50.638365 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 23:01:50.639622 systemd-logind[1688]: Session 25 logged out. Waiting for processes to exit. Sep 12 23:01:50.640307 systemd-logind[1688]: Removed session 25. 
Sep 12 23:01:50.747733 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:34438.service - OpenSSH per-connection server daemon (10.200.16.10:34438). Sep 12 23:01:50.839860 containerd[1729]: time="2025-09-12T23:01:50.839814868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgdhn,Uid:d3ed578d-b67f-43ad-b34a-f77c59139eae,Namespace:kube-system,Attempt:0,}" Sep 12 23:01:50.875179 containerd[1729]: time="2025-09-12T23:01:50.875094793Z" level=info msg="connecting to shim 092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177" address="unix:///run/containerd/s/fc57ce2486dbb447383fb2faf5cae592190f0cbd688503d31698564fc56a1a07" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:01:50.897834 systemd[1]: Started cri-containerd-092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177.scope - libcontainer container 092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177. Sep 12 23:01:50.920340 containerd[1729]: time="2025-09-12T23:01:50.920314231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jgdhn,Uid:d3ed578d-b67f-43ad-b34a-f77c59139eae,Namespace:kube-system,Attempt:0,} returns sandbox id \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\"" Sep 12 23:01:50.922460 containerd[1729]: time="2025-09-12T23:01:50.922432957Z" level=info msg="CreateContainer within sandbox \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 23:01:50.935192 containerd[1729]: time="2025-09-12T23:01:50.935167976Z" level=info msg="Container 30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:01:50.950698 containerd[1729]: time="2025-09-12T23:01:50.950671258Z" level=info msg="CreateContainer within sandbox \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7\"" Sep 12 23:01:50.951266 containerd[1729]: time="2025-09-12T23:01:50.951056938Z" level=info msg="StartContainer for \"30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7\"" Sep 12 23:01:50.952340 containerd[1729]: time="2025-09-12T23:01:50.952313359Z" level=info msg="connecting to shim 30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7" address="unix:///run/containerd/s/fc57ce2486dbb447383fb2faf5cae592190f0cbd688503d31698564fc56a1a07" protocol=ttrpc version=3 Sep 12 23:01:50.968943 systemd[1]: Started cri-containerd-30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7.scope - libcontainer container 30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7. Sep 12 23:01:50.994898 containerd[1729]: time="2025-09-12T23:01:50.994863953Z" level=info msg="StartContainer for \"30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7\" returns successfully" Sep 12 23:01:50.999158 systemd[1]: cri-containerd-30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7.scope: Deactivated successfully. 
Sep 12 23:01:51.000876 containerd[1729]: time="2025-09-12T23:01:51.000823270Z" level=info msg="received exit event container_id:\"30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7\" id:\"30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7\" pid:4939 exited_at:{seconds:1757718111 nanos:604087}" Sep 12 23:01:51.001101 containerd[1729]: time="2025-09-12T23:01:51.001077089Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7\" id:\"30a161aa48bcd71662f3a65b8472addb3a8d3ca6cbd2eebf04fb757f0477d9f7\" pid:4939 exited_at:{seconds:1757718111 nanos:604087}" Sep 12 23:01:51.377655 sshd[4874]: Accepted publickey for core from 10.200.16.10 port 34438 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:51.378866 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:51.383415 systemd-logind[1688]: New session 26 of user core. Sep 12 23:01:51.396821 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 23:01:51.459111 containerd[1729]: time="2025-09-12T23:01:51.459077038Z" level=info msg="CreateContainer within sandbox \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 23:01:51.479569 containerd[1729]: time="2025-09-12T23:01:51.479541304Z" level=info msg="Container fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:01:51.500401 containerd[1729]: time="2025-09-12T23:01:51.500367268Z" level=info msg="CreateContainer within sandbox \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0\"" Sep 12 23:01:51.500914 containerd[1729]: time="2025-09-12T23:01:51.500893358Z" level=info msg="StartContainer for \"fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0\"" Sep 12 23:01:51.501761 containerd[1729]: time="2025-09-12T23:01:51.501733300Z" level=info msg="connecting to shim fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0" address="unix:///run/containerd/s/fc57ce2486dbb447383fb2faf5cae592190f0cbd688503d31698564fc56a1a07" protocol=ttrpc version=3 Sep 12 23:01:51.519862 systemd[1]: Started cri-containerd-fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0.scope - libcontainer container fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0. Sep 12 23:01:51.546767 systemd[1]: cri-containerd-fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0.scope: Deactivated successfully. 
Sep 12 23:01:51.549082 containerd[1729]: time="2025-09-12T23:01:51.548942842Z" level=info msg="received exit event container_id:\"fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0\" id:\"fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0\" pid:4987 exited_at:{seconds:1757718111 nanos:548030291}" Sep 12 23:01:51.549082 containerd[1729]: time="2025-09-12T23:01:51.549034235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0\" id:\"fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0\" pid:4987 exited_at:{seconds:1757718111 nanos:548030291}" Sep 12 23:01:51.551152 containerd[1729]: time="2025-09-12T23:01:51.550763819Z" level=info msg="StartContainer for \"fd32f92d29bab857350152037058bdc17485308408d9b989cd37f293023a83b0\" returns successfully" Sep 12 23:01:51.817951 sshd[4973]: Connection closed by 10.200.16.10 port 34438 Sep 12 23:01:51.818583 sshd-session[4874]: pam_unix(sshd:session): session closed for user core Sep 12 23:01:51.821297 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:34438.service: Deactivated successfully. Sep 12 23:01:51.823137 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 23:01:51.824381 systemd-logind[1688]: Session 26 logged out. Waiting for processes to exit. Sep 12 23:01:51.825912 systemd-logind[1688]: Removed session 26. Sep 12 23:01:51.928524 systemd[1]: Started sshd@24-10.200.8.39:22-10.200.16.10:34452.service - OpenSSH per-connection server daemon (10.200.16.10:34452). Sep 12 23:01:52.464934 containerd[1729]: time="2025-09-12T23:01:52.464717680Z" level=info msg="CreateContainer within sandbox \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 23:01:52.492698 containerd[1729]: time="2025-09-12T23:01:52.491114919Z" level=info msg="Container df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:01:52.510825 containerd[1729]: time="2025-09-12T23:01:52.510799642Z" level=info msg="CreateContainer within sandbox \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5\"" Sep 12 23:01:52.511419 containerd[1729]: time="2025-09-12T23:01:52.511368464Z" level=info msg="StartContainer for \"df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5\"" Sep 12 23:01:52.513874 containerd[1729]: time="2025-09-12T23:01:52.513844964Z" level=info msg="connecting to shim df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5" address="unix:///run/containerd/s/fc57ce2486dbb447383fb2faf5cae592190f0cbd688503d31698564fc56a1a07" protocol=ttrpc version=3 Sep 12 23:01:52.547842 systemd[1]: Started cri-containerd-df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5.scope - libcontainer container df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5. Sep 12 23:01:52.565279 sshd[5025]: Accepted publickey for core from 10.200.16.10 port 34452 ssh2: RSA SHA256:iICXlNsXx8loOwZfJIFtboVHRCxNi5XgLL8FA7cZeOk Sep 12 23:01:52.566926 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:01:52.572645 systemd-logind[1688]: New session 27 of user core. Sep 12 23:01:52.580725 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 12 23:01:52.601206 systemd[1]: cri-containerd-df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5.scope: Deactivated successfully. Sep 12 23:01:52.604351 containerd[1729]: time="2025-09-12T23:01:52.604328600Z" level=info msg="received exit event container_id:\"df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5\" id:\"df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5\" pid:5041 exited_at:{seconds:1757718112 nanos:603667837}" Sep 12 23:01:52.604498 containerd[1729]: time="2025-09-12T23:01:52.604478560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5\" id:\"df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5\" pid:5041 exited_at:{seconds:1757718112 nanos:603667837}" Sep 12 23:01:52.604802 containerd[1729]: time="2025-09-12T23:01:52.604736085Z" level=info msg="StartContainer for \"df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5\" returns successfully" Sep 12 23:01:52.620068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df339d244efec8c340ce136d5f2081d75b6bb7695feb07402380ebfbd23d26c5-rootfs.mount: Deactivated successfully. Sep 12 23:01:53.473012 containerd[1729]: time="2025-09-12T23:01:53.472936757Z" level=info msg="CreateContainer within sandbox \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 23:01:53.500946 containerd[1729]: time="2025-09-12T23:01:53.500336071Z" level=info msg="Container ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:01:53.507351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3826749620.mount: Deactivated successfully. Sep 12 23:01:53.518131 containerd[1729]: time="2025-09-12T23:01:53.518105058Z" level=info msg="CreateContainer within sandbox \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb\"" Sep 12 23:01:53.518687 containerd[1729]: time="2025-09-12T23:01:53.518664441Z" level=info msg="StartContainer for \"ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb\"" Sep 12 23:01:53.519995 containerd[1729]: time="2025-09-12T23:01:53.519968500Z" level=info msg="connecting to shim ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb" address="unix:///run/containerd/s/fc57ce2486dbb447383fb2faf5cae592190f0cbd688503d31698564fc56a1a07" protocol=ttrpc version=3 Sep 12 23:01:53.540842 systemd[1]: Started cri-containerd-ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb.scope - libcontainer container ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb. Sep 12 23:01:53.562861 systemd[1]: cri-containerd-ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb.scope: Deactivated successfully. 
Sep 12 23:01:53.565028 containerd[1729]: time="2025-09-12T23:01:53.565001940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb\" id:\"ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb\" pid:5087 exited_at:{seconds:1757718113 nanos:563282914}" Sep 12 23:01:53.582721 containerd[1729]: time="2025-09-12T23:01:53.582093886Z" level=info msg="received exit event container_id:\"ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb\" id:\"ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb\" pid:5087 exited_at:{seconds:1757718113 nanos:563282914}" Sep 12 23:01:53.589834 containerd[1729]: time="2025-09-12T23:01:53.589795470Z" level=info msg="StartContainer for \"ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb\" returns successfully" Sep 12 23:01:53.600193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac4cce99c456d2bb2047c579b24838f55d33deedd1056d126eca0e7b4aec41cb-rootfs.mount: Deactivated successfully. Sep 12 23:01:54.481094 containerd[1729]: time="2025-09-12T23:01:54.479868873Z" level=info msg="CreateContainer within sandbox \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 23:01:54.500622 containerd[1729]: time="2025-09-12T23:01:54.500584329Z" level=info msg="Container b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:01:54.520530 containerd[1729]: time="2025-09-12T23:01:54.520497546Z" level=info msg="CreateContainer within sandbox \"092a379166cedabdca0d9f2d83e774a62db7e7e9f640e92e1a9da4f826156177\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854\"" Sep 12 23:01:54.521164 containerd[1729]: time="2025-09-12T23:01:54.521118477Z" level=info msg="StartContainer for \"b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854\"" Sep 12 23:01:54.522145 containerd[1729]: time="2025-09-12T23:01:54.522101731Z" level=info msg="connecting to shim b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854" address="unix:///run/containerd/s/fc57ce2486dbb447383fb2faf5cae592190f0cbd688503d31698564fc56a1a07" protocol=ttrpc version=3 Sep 12 23:01:54.541865 systemd[1]: Started cri-containerd-b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854.scope - libcontainer container b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854. 
Sep 12 23:01:54.572711 containerd[1729]: time="2025-09-12T23:01:54.572654634Z" level=info msg="StartContainer for \"b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854\" returns successfully" Sep 12 23:01:54.644736 containerd[1729]: time="2025-09-12T23:01:54.644690448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854\" id:\"3848d7ec4ffce7d9320028f51f9150ca41397b7085220142afe236c64ed5733e\" pid:5155 exited_at:{seconds:1757718114 nanos:644472963}" Sep 12 23:01:54.874740 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Sep 12 23:01:57.095081 containerd[1729]: time="2025-09-12T23:01:57.094942325Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854\" id:\"849c7e9588d4b5bcd30f4b190c973db02099de2ad0d13c63e91e71029314ac12\" pid:5567 exit_status:1 exited_at:{seconds:1757718117 nanos:93597979}" Sep 12 23:01:57.348502 systemd-networkd[1486]: lxc_health: Link UP Sep 12 23:01:57.348687 systemd-networkd[1486]: lxc_health: Gained carrier Sep 12 23:01:58.166733 update_engine[1689]: I20250912 23:01:58.165917 1689 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 23:01:58.166733 update_engine[1689]: I20250912 23:01:58.166019 1689 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 23:01:58.166733 update_engine[1689]: I20250912 23:01:58.166381 1689 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 23:01:58.240728 update_engine[1689]: E20250912 23:01:58.240568 1689 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 23:01:58.240728 update_engine[1689]: I20250912 23:01:58.240674 1689 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 12 23:01:58.863635 kubelet[3160]: I0912 23:01:58.863567 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jgdhn" podStartSLOduration=8.86353841 podStartE2EDuration="8.86353841s" podCreationTimestamp="2025-09-12 23:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:01:55.495713764 +0000 UTC m=+160.480678210" watchObservedRunningTime="2025-09-12 23:01:58.86353841 +0000 UTC m=+163.848502841" Sep 12 23:01:58.973910 systemd-networkd[1486]: lxc_health: Gained IPv6LL Sep 12 23:01:59.247842 containerd[1729]: time="2025-09-12T23:01:59.247663328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854\" id:\"c3e61010494610abb8e2e0c47e2f2fde2b6a1944a7a94e1beb606a51df3fd272\" pid:5691 exited_at:{seconds:1757718119 nanos:247395294}" Sep 12 23:02:01.341383 containerd[1729]: time="2025-09-12T23:02:01.341314541Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854\" id:\"783a84ff1554052b5390f0b559dd20d4a8292e16d87e20b18261becdd7fcbe8c\" pid:5724 exited_at:{seconds:1757718121 nanos:340948463}" Sep 12 23:02:03.431166 containerd[1729]: time="2025-09-12T23:02:03.431070527Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b504f6f6647405a2ae7a753fc4367d4071db4be37d06de5d776750bba7675854\" id:\"639eb198d99cffea7ccaa1de4251adf06613eb473f3e184d5bc63b13ec22a873\" pid:5748 exited_at:{seconds:1757718123 nanos:430813225}" Sep 12 23:02:03.533325 sshd[5048]: 
Connection closed by 10.200.16.10 port 34452 Sep 12 23:02:03.533956 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Sep 12 23:02:03.537570 systemd[1]: sshd@24-10.200.8.39:22-10.200.16.10:34452.service: Deactivated successfully. Sep 12 23:02:03.539376 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 23:02:03.540473 systemd-logind[1688]: Session 27 logged out. Waiting for processes to exit. Sep 12 23:02:03.541699 systemd-logind[1688]: Removed session 27. Sep 12 23:02:08.165790 update_engine[1689]: I20250912 23:02:08.165730 1689 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 23:02:08.166148 update_engine[1689]: I20250912 23:02:08.165814 1689 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 23:02:08.166183 update_engine[1689]: I20250912 23:02:08.166169 1689 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 23:02:08.193542 update_engine[1689]: E20250912 23:02:08.193392 1689 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 23:02:08.193542 update_engine[1689]: I20250912 23:02:08.193495 1689 libcurl_http_fetcher.cc:283] No HTTP response, retry 3