Jul 10 00:23:18.957196 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 22:15:30 -00 2025
Jul 10 00:23:18.957224 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:23:18.957235 kernel: BIOS-provided physical RAM map:
Jul 10 00:23:18.957243 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 10 00:23:18.957250 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 10 00:23:18.957257 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jul 10 00:23:18.957266 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jul 10 00:23:18.957274 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jul 10 00:23:18.957282 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jul 10 00:23:18.957289 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 10 00:23:18.957296 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 10 00:23:18.957304 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 10 00:23:18.957311 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 10 00:23:18.957319 kernel: printk: legacy bootconsole [earlyser0] enabled
Jul 10 00:23:18.957329 kernel: NX (Execute Disable) protection: active
Jul 10 00:23:18.957337 kernel: APIC: Static calls initialized
Jul 10 00:23:18.957345 kernel: efi: EFI v2.7 by Microsoft
Jul 10 00:23:18.957353 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3e9da718 RNG=0x3ffd2018
Jul 10 00:23:18.957361 kernel: random: crng init done
Jul 10 00:23:18.957369 kernel: secureboot: Secure boot disabled
Jul 10 00:23:18.957377 kernel: SMBIOS 3.1.0 present.
Jul 10 00:23:18.957385 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jul 10 00:23:18.957393 kernel: DMI: Memory slots populated: 2/2
Jul 10 00:23:18.957401 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 10 00:23:18.957410 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jul 10 00:23:18.957417 kernel: Hyper-V: Nested features: 0x3e0101
Jul 10 00:23:18.957425 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 10 00:23:18.957433 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 10 00:23:18.957441 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 10 00:23:18.957449 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 10 00:23:18.957457 kernel: tsc: Detected 2299.999 MHz processor
Jul 10 00:23:18.957465 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 10 00:23:18.957473 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 10 00:23:18.957483 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jul 10 00:23:18.957491 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 10 00:23:18.957499 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 10 00:23:18.957507 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jul 10 00:23:18.957515 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jul 10 00:23:18.957523 kernel: Using GB pages for direct mapping
Jul 10 00:23:18.957531 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:23:18.957542 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 10 00:23:18.957552 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:18.957560 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:18.957569 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 10 00:23:18.957577 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 10 00:23:18.957586 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:18.957595 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:18.957604 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:18.957612 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 10 00:23:18.957621 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 10 00:23:18.957630 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 10 00:23:18.957638 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 10 00:23:18.957647 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jul 10 00:23:18.957655 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 10 00:23:18.957663 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 10 00:23:18.957672 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 10 00:23:18.957681 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 10 00:23:18.957689 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jul 10 00:23:18.957698 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jul 10 00:23:18.957706 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 10 00:23:18.957715 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 10 00:23:18.957723 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jul 10 00:23:18.957732 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jul 10 00:23:18.957740 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jul 10 00:23:18.957749 kernel: Zone ranges:
Jul 10 00:23:18.957758 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 10 00:23:18.957766 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 10 00:23:18.957775 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 10 00:23:18.957782 kernel: Device empty
Jul 10 00:23:18.957791 kernel: Movable zone start for each node
Jul 10 00:23:18.957799 kernel: Early memory node ranges
Jul 10 00:23:18.957807 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 10 00:23:18.957816 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jul 10 00:23:18.957824 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jul 10 00:23:18.957834 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 10 00:23:18.957842 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 10 00:23:18.957850 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 10 00:23:18.957859 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 10 00:23:18.957867 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 10 00:23:18.957875 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 10 00:23:18.957884 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jul 10 00:23:18.957892 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 10 00:23:18.957900 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 10 00:23:18.957909 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 10 00:23:18.957917 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 10 00:23:18.957924 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 10 00:23:18.957931 kernel: TSC deadline timer available
Jul 10 00:23:18.957938 kernel: CPU topo: Max. logical packages: 1
Jul 10 00:23:18.957946 kernel: CPU topo: Max. logical dies: 1
Jul 10 00:23:18.957952 kernel: CPU topo: Max. dies per package: 1
Jul 10 00:23:18.957960 kernel: CPU topo: Max. threads per core: 2
Jul 10 00:23:18.957967 kernel: CPU topo: Num. cores per package: 1
Jul 10 00:23:18.957975 kernel: CPU topo: Num. threads per package: 2
Jul 10 00:23:18.957982 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 10 00:23:18.957989 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 10 00:23:18.957996 kernel: Booting paravirtualized kernel on Hyper-V
Jul 10 00:23:18.958003 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 10 00:23:18.958010 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 10 00:23:18.958018 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 10 00:23:18.958025 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 10 00:23:18.958032 kernel: pcpu-alloc: [0] 0 1
Jul 10 00:23:18.958040 kernel: Hyper-V: PV spinlocks enabled
Jul 10 00:23:18.958047 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 10 00:23:18.958056 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea
Jul 10 00:23:18.958064 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:23:18.958071 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 10 00:23:18.958078 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:23:18.958102 kernel: Fallback order for Node 0: 0
Jul 10 00:23:18.958111 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jul 10 00:23:18.958120 kernel: Policy zone: Normal
Jul 10 00:23:18.958126 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:23:18.958133 kernel: software IO TLB: area num 2.
Jul 10 00:23:18.958139 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 10 00:23:18.958146 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 10 00:23:18.958152 kernel: ftrace: allocated 157 pages with 5 groups
Jul 10 00:23:18.958158 kernel: Dynamic Preempt: voluntary
Jul 10 00:23:18.958165 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:23:18.958172 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:23:18.958186 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 10 00:23:18.958193 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:23:18.958200 kernel: Rude variant of Tasks RCU enabled.
Jul 10 00:23:18.958209 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:23:18.958216 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:23:18.958223 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 10 00:23:18.958230 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:23:18.958237 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:23:18.958272 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 10 00:23:18.958280 kernel: Using NULL legacy PIC
Jul 10 00:23:18.958290 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 10 00:23:18.958298 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:23:18.958306 kernel: Console: colour dummy device 80x25
Jul 10 00:23:18.958314 kernel: printk: legacy console [tty1] enabled
Jul 10 00:23:18.958321 kernel: printk: legacy console [ttyS0] enabled
Jul 10 00:23:18.958329 kernel: printk: legacy bootconsole [earlyser0] disabled
Jul 10 00:23:18.958337 kernel: ACPI: Core revision 20240827
Jul 10 00:23:18.958346 kernel: Failed to register legacy timer interrupt
Jul 10 00:23:18.958354 kernel: APIC: Switch to symmetric I/O mode setup
Jul 10 00:23:18.958361 kernel: x2apic enabled
Jul 10 00:23:18.958369 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 10 00:23:18.958377 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 10 00:23:18.958385 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 10 00:23:18.958392 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jul 10 00:23:18.958400 kernel: Hyper-V: Using IPI hypercalls
Jul 10 00:23:18.958408 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 10 00:23:18.958417 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 10 00:23:18.958425 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 10 00:23:18.958433 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 10 00:23:18.958440 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 10 00:23:18.958448 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 10 00:23:18.958456 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Jul 10 00:23:18.958464 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Jul 10 00:23:18.958472 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 10 00:23:18.958481 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 10 00:23:18.958489 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 10 00:23:18.958497 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 10 00:23:18.958504 kernel: Spectre V2 : Mitigation: Retpolines
Jul 10 00:23:18.958512 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 10 00:23:18.958519 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 10 00:23:18.958527 kernel: RETBleed: Vulnerable
Jul 10 00:23:18.958535 kernel: Speculative Store Bypass: Vulnerable
Jul 10 00:23:18.958542 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 10 00:23:18.958550 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 10 00:23:18.958557 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 10 00:23:18.958566 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 10 00:23:18.958573 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 10 00:23:18.958581 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 10 00:23:18.958589 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 10 00:23:18.958596 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jul 10 00:23:18.958604 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jul 10 00:23:18.958612 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jul 10 00:23:18.958619 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 10 00:23:18.958627 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 10 00:23:18.958635 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 10 00:23:18.958642 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 10 00:23:18.958651 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jul 10 00:23:18.958658 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jul 10 00:23:18.958666 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jul 10 00:23:18.958673 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jul 10 00:23:18.958681 kernel: Freeing SMP alternatives memory: 32K
Jul 10 00:23:18.958688 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:23:18.958696 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 00:23:18.958703 kernel: landlock: Up and running.
Jul 10 00:23:18.958711 kernel: SELinux: Initializing.
Jul 10 00:23:18.958718 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 10 00:23:18.958726 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 10 00:23:18.958734 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jul 10 00:23:18.958743 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jul 10 00:23:18.958751 kernel: signal: max sigframe size: 11952
Jul 10 00:23:18.958758 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:23:18.958767 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:23:18.958775 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 00:23:18.958783 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 10 00:23:18.958790 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:23:18.958798 kernel: smpboot: x86: Booting SMP configuration:
Jul 10 00:23:18.958806 kernel: .... node #0, CPUs: #1
Jul 10 00:23:18.958815 kernel: smp: Brought up 1 node, 2 CPUs
Jul 10 00:23:18.958822 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jul 10 00:23:18.958830 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54420K init, 2548K bss, 299988K reserved, 0K cma-reserved)
Jul 10 00:23:18.958838 kernel: devtmpfs: initialized
Jul 10 00:23:18.958846 kernel: x86/mm: Memory block size: 128MB
Jul 10 00:23:18.958853 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 10 00:23:18.958862 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:23:18.958869 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 10 00:23:18.958877 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:23:18.958886 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:23:18.958894 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:23:18.958902 kernel: audit: type=2000 audit(1752106996.053:1): state=initialized audit_enabled=0 res=1
Jul 10 00:23:18.958909 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:23:18.958917 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 10 00:23:18.958925 kernel: cpuidle: using governor menu
Jul 10 00:23:18.958933 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:23:18.958940 kernel: dca service started, version 1.12.1
Jul 10 00:23:18.958948 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jul 10 00:23:18.958957 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jul 10 00:23:18.958964 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 10 00:23:18.958972 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:23:18.958980 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:23:18.958987 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:23:18.958995 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:23:18.959003 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:23:18.959011 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:23:18.959020 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:23:18.959027 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:23:18.959035 kernel: ACPI: Interpreter enabled
Jul 10 00:23:18.959043 kernel: ACPI: PM: (supports S0 S5)
Jul 10 00:23:18.959050 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 10 00:23:18.959058 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 10 00:23:18.959067 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 10 00:23:18.959074 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 10 00:23:18.959097 kernel: iommu: Default domain type: Translated
Jul 10 00:23:18.959105 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 10 00:23:18.959114 kernel: efivars: Registered efivars operations
Jul 10 00:23:18.959122 kernel: PCI: Using ACPI for IRQ routing
Jul 10 00:23:18.959130 kernel: PCI: System does not support PCI
Jul 10 00:23:18.959137 kernel: vgaarb: loaded
Jul 10 00:23:18.959145 kernel: clocksource: Switched to clocksource tsc-early
Jul 10 00:23:18.959153 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:23:18.959160 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:23:18.959168 kernel: pnp: PnP ACPI init
Jul 10 00:23:18.959176 kernel: pnp: PnP ACPI: found 3 devices
Jul 10 00:23:18.959185 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 10 00:23:18.959193 kernel: NET: Registered PF_INET protocol family
Jul 10 00:23:18.959201 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 10 00:23:18.959209 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 10 00:23:18.959217 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:23:18.959225 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:23:18.959233 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 10 00:23:18.959240 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 10 00:23:18.959249 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 10 00:23:18.959257 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 10 00:23:18.959265 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:23:18.959273 kernel: NET: Registered PF_XDP protocol family
Jul 10 00:23:18.959280 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:23:18.959288 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 10 00:23:18.959296 kernel: software IO TLB: mapped [mem 0x000000003a9da000-0x000000003e9da000] (64MB)
Jul 10 00:23:18.959304 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jul 10 00:23:18.959311 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jul 10 00:23:18.959320 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Jul 10 00:23:18.959328 kernel: clocksource: Switched to clocksource tsc
Jul 10 00:23:18.959336 kernel: Initialise system trusted keyrings
Jul 10 00:23:18.959343 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 10 00:23:18.959351 kernel: Key type asymmetric registered
Jul 10 00:23:18.959358 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:23:18.959366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:23:18.959374 kernel: io scheduler mq-deadline registered
Jul 10 00:23:18.959382 kernel: io scheduler kyber registered
Jul 10 00:23:18.959391 kernel: io scheduler bfq registered
Jul 10 00:23:18.959399 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 10 00:23:18.959406 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:23:18.959414 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:23:18.959422 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 10 00:23:18.959430 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jul 10 00:23:18.959437 kernel: i8042: PNP: No PS/2 controller found.
Jul 10 00:23:18.959554 kernel: rtc_cmos 00:02: registered as rtc0
Jul 10 00:23:18.959624 kernel: rtc_cmos 00:02: setting system clock to 2025-07-10T00:23:18 UTC (1752106998)
Jul 10 00:23:18.959686 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 10 00:23:18.959695 kernel: intel_pstate: Intel P-state driver initializing
Jul 10 00:23:18.959703 kernel: efifb: probing for efifb
Jul 10 00:23:18.959711 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 10 00:23:18.959719 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 10 00:23:18.959726 kernel: efifb: scrolling: redraw
Jul 10 00:23:18.959734 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 10 00:23:18.959742 kernel: Console: switching to colour frame buffer device 128x48
Jul 10 00:23:18.959751 kernel: fb0: EFI VGA frame buffer device
Jul 10 00:23:18.959758 kernel: pstore: Using crash dump compression: deflate
Jul 10 00:23:18.959766 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 10 00:23:18.959774 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:23:18.959782 kernel: Segment Routing with IPv6
Jul 10 00:23:18.959789 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:23:18.959797 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:23:18.959805 kernel: Key type dns_resolver registered
Jul 10 00:23:18.959812 kernel: IPI shorthand broadcast: enabled
Jul 10 00:23:18.959821 kernel: sched_clock: Marking stable (2805003587, 86509247)->(3182525512, -291012678)
Jul 10 00:23:18.959829 kernel: registered taskstats version 1
Jul 10 00:23:18.959836 kernel: Loading compiled-in X.509 certificates
Jul 10 00:23:18.959844 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: f515550de55d4e43b2ea11ae212aa0cb3a4e55cf'
Jul 10 00:23:18.959852 kernel: Demotion targets for Node 0: null
Jul 10 00:23:18.959860 kernel: Key type .fscrypt registered
Jul 10 00:23:18.959867 kernel: Key type fscrypt-provisioning registered
Jul 10 00:23:18.959875 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:23:18.959883 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:23:18.959891 kernel: ima: No architecture policies found
Jul 10 00:23:18.959899 kernel: clk: Disabling unused clocks
Jul 10 00:23:18.959907 kernel: Warning: unable to open an initial console.
Jul 10 00:23:18.959914 kernel: Freeing unused kernel image (initmem) memory: 54420K
Jul 10 00:23:18.959922 kernel: Write protecting the kernel read-only data: 24576k
Jul 10 00:23:18.959930 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 10 00:23:18.959938 kernel: Run /init as init process
Jul 10 00:23:18.959945 kernel: with arguments:
Jul 10 00:23:18.959953 kernel: /init
Jul 10 00:23:18.959961 kernel: with environment:
Jul 10 00:23:18.959969 kernel: HOME=/
Jul 10 00:23:18.959976 kernel: TERM=linux
Jul 10 00:23:18.959984 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:23:18.959993 systemd[1]: Successfully made /usr/ read-only.
Jul 10 00:23:18.960004 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:23:18.960013 systemd[1]: Detected virtualization microsoft.
Jul 10 00:23:18.960022 systemd[1]: Detected architecture x86-64.
Jul 10 00:23:18.960030 systemd[1]: Running in initrd.
Jul 10 00:23:18.960038 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:23:18.960047 systemd[1]: Hostname set to .
Jul 10 00:23:18.960055 systemd[1]: Initializing machine ID from random generator.
Jul 10 00:23:18.960063 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:23:18.960071 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:23:18.960079 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:23:18.962609 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:23:18.962624 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:23:18.962633 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:23:18.962643 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:23:18.962653 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:23:18.962663 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:23:18.962674 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:23:18.962686 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:23:18.962696 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:23:18.962705 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:23:18.962714 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:23:18.962723 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:23:18.962731 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:23:18.962741 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:23:18.962750 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:23:18.962758 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 00:23:18.962770 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:23:18.962779 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:23:18.962788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:23:18.962797 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:23:18.962806 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:23:18.962815 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:23:18.962823 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:23:18.962833 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 10 00:23:18.962844 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:23:18.962853 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:23:18.962862 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:23:18.962880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:23:18.962911 systemd-journald[205]: Collecting audit messages is disabled.
Jul 10 00:23:18.962936 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:23:18.962946 systemd-journald[205]: Journal started
Jul 10 00:23:18.962971 systemd-journald[205]: Runtime Journal (/run/log/journal/77e356d079754d27b1411705f05a7190) is 8M, max 158.9M, 150.9M free.
Jul 10 00:23:18.953191 systemd-modules-load[206]: Inserted module 'overlay'
Jul 10 00:23:18.966916 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:23:18.972713 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:23:18.973216 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:23:18.978199 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:23:18.986202 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:23:18.992299 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:23:18.992320 kernel: Bridge firewalling registered
Jul 10 00:23:18.992291 systemd-modules-load[206]: Inserted module 'br_netfilter'
Jul 10 00:23:18.993103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:23:18.993704 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:23:19.001275 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:23:19.004185 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:23:19.009862 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 10 00:23:19.015826 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:23:19.020590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:23:19.020765 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:23:19.024191 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:23:19.024825 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:23:19.042004 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:23:19.051582 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:23:19.057161 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 10 00:23:19.069845 systemd-resolved[230]: Positive Trust Anchors: Jul 10 00:23:19.071275 systemd-resolved[230]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:23:19.075792 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=844005237fb9709f65a093d5533c4229fb6c54e8e257736d9c3d041b6d3080ea Jul 10 00:23:19.071310 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:23:19.083470 systemd-resolved[230]: Defaulting to hostname 'linux'. Jul 10 00:23:19.084213 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:23:19.088914 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:23:19.139102 kernel: SCSI subsystem initialized Jul 10 00:23:19.145095 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:23:19.154100 kernel: iscsi: registered transport (tcp) Jul 10 00:23:19.171098 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:23:19.171136 kernel: QLogic iSCSI HBA Driver Jul 10 00:23:19.183412 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jul 10 00:23:19.198788 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:23:19.200887 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:23:19.229200 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 10 00:23:19.232192 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 10 00:23:19.273104 kernel: raid6: avx512x4 gen() 44837 MB/s Jul 10 00:23:19.290093 kernel: raid6: avx512x2 gen() 43860 MB/s Jul 10 00:23:19.308091 kernel: raid6: avx512x1 gen() 29762 MB/s Jul 10 00:23:19.326093 kernel: raid6: avx2x4 gen() 41617 MB/s Jul 10 00:23:19.343091 kernel: raid6: avx2x2 gen() 42717 MB/s Jul 10 00:23:19.360496 kernel: raid6: avx2x1 gen() 31155 MB/s Jul 10 00:23:19.360515 kernel: raid6: using algorithm avx512x4 gen() 44837 MB/s Jul 10 00:23:19.379104 kernel: raid6: .... xor() 7960 MB/s, rmw enabled Jul 10 00:23:19.379183 kernel: raid6: using avx512x2 recovery algorithm Jul 10 00:23:19.395099 kernel: xor: automatically using best checksumming function avx Jul 10 00:23:19.498097 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 00:23:19.502612 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:23:19.505198 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:23:19.523268 systemd-udevd[454]: Using default interface naming scheme 'v255'. Jul 10 00:23:19.527217 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:23:19.533673 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 00:23:19.557149 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Jul 10 00:23:19.572522 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:23:19.575191 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Jul 10 00:23:19.607321 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:23:19.613927 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 00:23:19.651105 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:23:19.663127 kernel: AES CTR mode by8 optimization enabled Jul 10 00:23:19.678496 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:23:19.680688 kernel: hv_vmbus: Vmbus version:5.3 Jul 10 00:23:19.678635 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:19.688026 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:19.695196 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 10 00:23:19.702419 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:19.707723 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:23:19.707791 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:19.713050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:19.723622 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 00:23:19.723692 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 00:23:19.726097 kernel: hv_vmbus: registering driver hv_netvsc Jul 10 00:23:19.732098 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 10 00:23:19.734184 kernel: hv_vmbus: registering driver hv_pci Jul 10 00:23:19.739108 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 00:23:19.743806 kernel: hv_vmbus: registering driver hid_hyperv Jul 10 00:23:19.743840 kernel: PTP clock support registered Jul 10 00:23:19.745867 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 10 00:23:19.748098 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 10 00:23:19.751099 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jul 10 00:23:19.757688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:19.770238 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddd658c (unnamed net_device) (uninitialized): VF slot 1 added Jul 10 00:23:19.770415 kernel: hv_utils: Registering HyperV Utility Driver Jul 10 00:23:19.772736 kernel: hv_vmbus: registering driver hv_utils Jul 10 00:23:19.776093 kernel: hv_vmbus: registering driver hv_storvsc Jul 10 00:23:19.781040 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jul 10 00:23:19.781228 kernel: hv_utils: Shutdown IC version 3.2 Jul 10 00:23:19.781247 kernel: hv_utils: Heartbeat IC version 3.0 Jul 10 00:23:19.783579 kernel: scsi host0: storvsc_host_t Jul 10 00:23:19.783734 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jul 10 00:23:19.786420 kernel: hv_utils: TimeSync IC version 4.0 Jul 10 00:23:20.128345 systemd-resolved[230]: Clock change detected. Flushing caches. 
Jul 10 00:23:20.131395 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jul 10 00:23:20.131523 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 10 00:23:20.136971 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jul 10 00:23:20.139944 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jul 10 00:23:20.149263 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 10 00:23:20.149403 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 00:23:20.151926 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 10 00:23:20.152051 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jul 10 00:23:20.158287 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jul 10 00:23:20.158402 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jul 10 00:23:20.168983 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#294 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 10 00:23:20.179479 kernel: nvme nvme0: pci function c05b:00:00.0 Jul 10 00:23:20.179624 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jul 10 00:23:20.188196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#268 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 10 00:23:20.429926 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 10 00:23:20.436095 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:20.648992 kernel: nvme nvme0: using unchecked data buffer Jul 10 00:23:20.813628 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jul 10 00:23:20.840328 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 10 00:23:20.848315 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. 
Jul 10 00:23:20.867673 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 10 00:23:20.872978 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 10 00:23:20.873627 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 00:23:20.878710 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:23:20.882784 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:23:20.886162 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:23:20.890688 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 00:23:20.905019 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 00:23:20.915934 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:20.923171 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 10 00:23:21.127978 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jul 10 00:23:21.128142 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jul 10 00:23:21.130492 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jul 10 00:23:21.131957 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jul 10 00:23:21.136064 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jul 10 00:23:21.140020 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jul 10 00:23:21.145043 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jul 10 00:23:21.145062 kernel: pci 7870:00:00.0: enabling Extended Tags Jul 10 00:23:21.162925 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jul 10 00:23:21.163112 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jul 10 00:23:21.170157 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jul 10 00:23:21.176410 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Jul 10 00:23:21.187932 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jul 10 00:23:21.191357 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddd658c eth0: VF registering: eth1 Jul 10 00:23:21.191514 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jul 10 00:23:21.194933 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jul 10 00:23:21.926150 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 10 00:23:21.926722 disk-uuid[668]: The operation has completed successfully. Jul 10 00:23:21.972961 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:23:21.973043 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 00:23:22.002276 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jul 10 00:23:22.019793 sh[711]: Success Jul 10 00:23:22.047102 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:23:22.047152 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:23:22.048491 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 00:23:22.056935 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 10 00:23:22.246851 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 00:23:22.250695 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 10 00:23:22.262602 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 00:23:22.273942 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 00:23:22.273974 kernel: BTRFS: device fsid c4cb30b0-bb74-4f98-aab6-7a1c6f47edee devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (724) Jul 10 00:23:22.276981 kernel: BTRFS info (device dm-0): first mount of filesystem c4cb30b0-bb74-4f98-aab6-7a1c6f47edee Jul 10 00:23:22.278392 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:22.279273 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 00:23:22.487246 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 00:23:22.489207 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 00:23:22.489342 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 00:23:22.491016 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 00:23:22.491639 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 10 00:23:22.523050 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (757) Jul 10 00:23:22.523093 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:22.525307 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:22.526643 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:23:22.551959 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:22.551950 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 00:23:22.559041 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 10 00:23:22.572785 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:23:22.576251 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:23:22.608848 systemd-networkd[893]: lo: Link UP Jul 10 00:23:22.608856 systemd-networkd[893]: lo: Gained carrier Jul 10 00:23:22.610360 systemd-networkd[893]: Enumeration completed Jul 10 00:23:22.616201 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 10 00:23:22.610684 systemd-networkd[893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:22.610687 systemd-networkd[893]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:23:22.611172 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:23:22.618015 systemd[1]: Reached target network.target - Network. 
Jul 10 00:23:22.637783 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 10 00:23:22.638001 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddd658c eth0: Data path switched to VF: enP30832s1 Jul 10 00:23:22.639001 systemd-networkd[893]: enP30832s1: Link UP Jul 10 00:23:22.639517 systemd-networkd[893]: eth0: Link UP Jul 10 00:23:22.639595 systemd-networkd[893]: eth0: Gained carrier Jul 10 00:23:22.639604 systemd-networkd[893]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:22.650085 systemd-networkd[893]: enP30832s1: Gained carrier Jul 10 00:23:22.660956 systemd-networkd[893]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:23:23.254332 ignition[875]: Ignition 2.21.0 Jul 10 00:23:23.254343 ignition[875]: Stage: fetch-offline Jul 10 00:23:23.256466 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:23:23.254429 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:23.259676 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 10 00:23:23.254436 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:23.254529 ignition[875]: parsed url from cmdline: "" Jul 10 00:23:23.254531 ignition[875]: no config URL provided Jul 10 00:23:23.254535 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:23:23.254540 ignition[875]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:23:23.254544 ignition[875]: failed to fetch config: resource requires networking Jul 10 00:23:23.255452 ignition[875]: Ignition finished successfully Jul 10 00:23:23.285983 ignition[903]: Ignition 2.21.0 Jul 10 00:23:23.285993 ignition[903]: Stage: fetch Jul 10 00:23:23.286180 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:23.286187 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:23.286259 ignition[903]: parsed url from cmdline: "" Jul 10 00:23:23.286263 ignition[903]: no config URL provided Jul 10 00:23:23.286267 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:23:23.286272 ignition[903]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:23:23.286302 ignition[903]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 10 00:23:23.336965 ignition[903]: GET result: OK Jul 10 00:23:23.337052 ignition[903]: config has been read from IMDS userdata Jul 10 00:23:23.337073 ignition[903]: parsing config with SHA512: 87b4146f15c7dc2b90c2bf297fc82d19f60f38c7b59c41aa1e3e2c46e3286b3df7ccdc4e821a7c499b85384444eaeb4be5983fd199bafd2dc987125d099e993e Jul 10 00:23:23.342409 unknown[903]: fetched base config from "system" Jul 10 00:23:23.342418 unknown[903]: fetched base config from "system" Jul 10 00:23:23.342711 ignition[903]: fetch: fetch complete Jul 10 00:23:23.342422 unknown[903]: fetched user config from "azure" Jul 10 00:23:23.342715 ignition[903]: fetch: fetch passed Jul 10 00:23:23.344535 systemd[1]: Finished 
ignition-fetch.service - Ignition (fetch). Jul 10 00:23:23.342746 ignition[903]: Ignition finished successfully Jul 10 00:23:23.350355 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 10 00:23:23.366494 ignition[909]: Ignition 2.21.0 Jul 10 00:23:23.366499 ignition[909]: Stage: kargs Jul 10 00:23:23.368478 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 00:23:23.366652 ignition[909]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:23.366659 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:23.367303 ignition[909]: kargs: kargs passed Jul 10 00:23:23.382398 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 00:23:23.367334 ignition[909]: Ignition finished successfully Jul 10 00:23:23.398989 ignition[915]: Ignition 2.21.0 Jul 10 00:23:23.398997 ignition[915]: Stage: disks Jul 10 00:23:23.399174 ignition[915]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:23.400889 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 00:23:23.399181 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:23.404278 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 00:23:23.400117 ignition[915]: disks: disks passed Jul 10 00:23:23.407970 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:23:23.400154 ignition[915]: Ignition finished successfully Jul 10 00:23:23.410190 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:23:23.411182 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:23:23.413297 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:23:23.416020 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 10 00:23:23.470304 systemd-fsck[924]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 10 00:23:23.473589 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 00:23:23.478670 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 10 00:23:23.723740 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 00:23:23.726994 kernel: EXT4-fs (nvme0n1p9): mounted filesystem a310c019-7915-47f5-9fce-db4a09ac26c2 r/w with ordered data mode. Quota mode: none. Jul 10 00:23:23.724432 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 00:23:23.744894 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:23:23.753994 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 00:23:23.758642 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 10 00:23:23.764976 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:23:23.765005 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:23:23.771460 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 00:23:23.781997 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (933) Jul 10 00:23:23.782014 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:23.782024 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:23.782032 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:23:23.779047 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 00:23:23.789116 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 00:23:24.098026 systemd-networkd[893]: eth0: Gained IPv6LL Jul 10 00:23:24.151394 coreos-metadata[935]: Jul 10 00:23:24.151 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 10 00:23:24.154992 coreos-metadata[935]: Jul 10 00:23:24.154 INFO Fetch successful Jul 10 00:23:24.154992 coreos-metadata[935]: Jul 10 00:23:24.154 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 10 00:23:24.162011 systemd-networkd[893]: enP30832s1: Gained IPv6LL Jul 10 00:23:24.163845 coreos-metadata[935]: Jul 10 00:23:24.163 INFO Fetch successful Jul 10 00:23:24.174594 coreos-metadata[935]: Jul 10 00:23:24.174 INFO wrote hostname ci-4344.1.1-n-4c73015a89 to /sysroot/etc/hostname Jul 10 00:23:24.176674 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 10 00:23:24.246680 initrd-setup-root[963]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:23:24.275277 initrd-setup-root[970]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:23:24.295463 initrd-setup-root[977]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:23:24.299645 initrd-setup-root[984]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:23:25.072743 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 00:23:25.075118 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 00:23:25.089024 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 00:23:25.095755 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 10 00:23:25.097701 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:25.114369 ignition[1051]: INFO : Ignition 2.21.0 Jul 10 00:23:25.114369 ignition[1051]: INFO : Stage: mount Jul 10 00:23:25.120495 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:25.120495 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:25.120495 ignition[1051]: INFO : mount: mount passed Jul 10 00:23:25.120495 ignition[1051]: INFO : Ignition finished successfully Jul 10 00:23:25.120552 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 00:23:25.125248 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 00:23:25.129407 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 00:23:25.142453 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 00:23:25.159931 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:12) scanned by mount (1064) Jul 10 00:23:25.161972 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 66535909-6865-4f30-ad42-a3000fffd5f6 Jul 10 00:23:25.162122 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 10 00:23:25.163145 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 10 00:23:25.167895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 00:23:25.189579 ignition[1080]: INFO : Ignition 2.21.0 Jul 10 00:23:25.189579 ignition[1080]: INFO : Stage: files Jul 10 00:23:25.193950 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:23:25.193950 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 10 00:23:25.193950 ignition[1080]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:23:25.205109 ignition[1080]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:23:25.205109 ignition[1080]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:23:25.230124 ignition[1080]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:23:25.232485 ignition[1080]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:23:25.232485 ignition[1080]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:23:25.230597 unknown[1080]: wrote ssh authorized keys file for user: core Jul 10 00:23:25.271621 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:23:25.274963 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 10 00:23:25.331000 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 00:23:25.594901 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 00:23:25.594901 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 00:23:25.602969 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 10 00:23:26.180126 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:23:26.459200 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 10 00:23:26.463003 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:23:26.463003 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:23:26.463003 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:23:26.463003 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:23:26.463003 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:23:26.463003 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:23:26.463003 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:23:26.463003 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:23:26.488017 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:23:26.488017 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:23:26.488017 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:23:26.488017 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:23:26.488017 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:23:26.488017 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 10 00:23:27.360260 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 00:23:28.333302 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 10 00:23:28.333302 ignition[1080]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 10 00:23:28.346361 ignition[1080]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:23:28.355201 ignition[1080]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:23:28.355201 ignition[1080]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 10 00:23:28.355201 ignition[1080]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:23:28.364714 ignition[1080]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:23:28.364714 ignition[1080]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:23:28.364714 ignition[1080]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:23:28.364714 ignition[1080]: INFO : files: files passed
Jul 10 00:23:28.364714 ignition[1080]: INFO : Ignition finished successfully
Jul 10 00:23:28.356924 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:23:28.366044 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:23:28.370033 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:23:28.386272 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:23:28.386348 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:23:28.391459 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:23:28.391459 initrd-setup-root-after-ignition[1111]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:23:28.396957 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:23:28.394832 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:23:28.400047 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:23:28.403867 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:23:28.428072 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:23:28.428149 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:23:28.428596 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:23:28.428882 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:23:28.434730 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:23:28.440024 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:23:28.463123 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:23:28.466021 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:23:28.484667 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:23:28.486623 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:23:28.491031 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:23:28.492980 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:23:28.493090 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:23:28.498995 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:23:28.500254 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:23:28.504063 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:23:28.508057 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:23:28.512046 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:23:28.516048 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 10 00:23:28.520061 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:23:28.524034 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:23:28.528081 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:23:28.532061 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:23:28.536070 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:23:28.540016 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:23:28.540129 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:23:28.540494 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:23:28.540778 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:23:28.545208 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 10 00:23:28.547224 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:23:28.549394 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 10 00:23:28.549486 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:23:28.561141 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 10 00:23:28.561278 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:23:28.565080 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 10 00:23:28.565194 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 10 00:23:28.569064 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 10 00:23:28.569198 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 10 00:23:28.575090 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 10 00:23:28.577420 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 10 00:23:28.577967 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:23:28.586072 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 10 00:23:28.589248 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 10 00:23:28.589378 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:23:28.592931 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 10 00:23:28.603896 ignition[1135]: INFO : Ignition 2.21.0
Jul 10 00:23:28.603896 ignition[1135]: INFO : Stage: umount
Jul 10 00:23:28.603896 ignition[1135]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:23:28.603896 ignition[1135]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 10 00:23:28.603896 ignition[1135]: INFO : umount: umount passed
Jul 10 00:23:28.603896 ignition[1135]: INFO : Ignition finished successfully
Jul 10 00:23:28.593035 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:23:28.614517 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 10 00:23:28.615998 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 10 00:23:28.618246 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 10 00:23:28.618379 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 10 00:23:28.621863 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 10 00:23:28.621903 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 10 00:23:28.623560 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 10 00:23:28.623602 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 10 00:23:28.629034 systemd[1]: Stopped target network.target - Network.
Jul 10 00:23:28.630848 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 10 00:23:28.630894 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:23:28.634983 systemd[1]: Stopped target paths.target - Path Units.
Jul 10 00:23:28.638946 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 10 00:23:28.642988 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:23:28.645973 systemd[1]: Stopped target slices.target - Slice Units.
Jul 10 00:23:28.647875 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 10 00:23:28.649046 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 10 00:23:28.649096 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:23:28.651178 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 10 00:23:28.651207 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:23:28.651949 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 10 00:23:28.651990 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 10 00:23:28.652223 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 10 00:23:28.652253 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 10 00:23:28.652564 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 10 00:23:28.652837 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 10 00:23:28.662734 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 10 00:23:28.663269 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 10 00:23:28.663339 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 10 00:23:28.669547 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 10 00:23:28.669737 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 10 00:23:28.669823 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 10 00:23:28.672467 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 10 00:23:28.672647 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 10 00:23:28.672711 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 10 00:23:28.678549 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 10 00:23:28.731013 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddd658c eth0: Data path switched from VF: enP30832s1
Jul 10 00:23:28.680836 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 10 00:23:28.680867 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:23:28.688386 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 00:23:28.741191 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 10 00:23:28.691957 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 00:23:28.692011 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:23:28.696029 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 00:23:28.696072 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:23:28.698227 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 00:23:28.698262 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:23:28.701983 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 10 00:23:28.702022 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:23:28.706128 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:23:28.715532 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 00:23:28.715574 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:23:28.719277 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 00:23:28.719397 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:23:28.724948 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 00:23:28.724978 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:23:28.730292 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 00:23:28.730327 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:23:28.738984 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 00:23:28.739029 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:23:28.745949 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 00:23:28.745993 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:23:28.750209 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:23:28.750245 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:23:28.752967 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 00:23:28.758954 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 00:23:28.759006 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:23:28.762307 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 00:23:28.762346 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:23:28.764820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:23:28.764855 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:23:28.773215 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 10 00:23:28.773266 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 10 00:23:28.773298 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 10 00:23:28.773554 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 00:23:28.773636 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 00:23:28.777223 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 00:23:28.777295 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 00:23:28.818371 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 10 00:23:28.818444 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 10 00:23:28.820582 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 00:23:28.836685 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 10 00:23:28.836740 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 10 00:23:28.840782 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 00:23:28.922475 systemd[1]: Switching root.
Jul 10 00:23:28.992072 systemd-journald[205]: Journal stopped
Jul 10 00:23:32.120500 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jul 10 00:23:32.120532 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 00:23:32.120543 kernel: SELinux: policy capability open_perms=1
Jul 10 00:23:32.120550 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 00:23:32.120559 kernel: SELinux: policy capability always_check_network=0
Jul 10 00:23:32.120567 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 00:23:32.120577 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 00:23:32.120586 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 00:23:32.120594 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 00:23:32.120602 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 00:23:32.120721 kernel: audit: type=1403 audit(1752107010.128:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 00:23:32.120795 systemd[1]: Successfully loaded SELinux policy in 126.682ms.
Jul 10 00:23:32.120807 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.991ms.
Jul 10 00:23:32.120819 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 00:23:32.120830 systemd[1]: Detected virtualization microsoft.
Jul 10 00:23:32.120839 systemd[1]: Detected architecture x86-64.
Jul 10 00:23:32.120907 systemd[1]: Detected first boot.
Jul 10 00:23:32.120937 systemd[1]: Hostname set to .
Jul 10 00:23:32.121005 systemd[1]: Initializing machine ID from random generator.
Jul 10 00:23:32.121015 zram_generator::config[1179]: No configuration found.
Jul 10 00:23:32.121026 kernel: Guest personality initialized and is inactive
Jul 10 00:23:32.121035 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jul 10 00:23:32.121043 kernel: Initialized host personality
Jul 10 00:23:32.121051 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 00:23:32.121135 systemd[1]: Populated /etc with preset unit settings.
Jul 10 00:23:32.121207 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 00:23:32.121225 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 00:23:32.121237 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 00:23:32.121249 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:23:32.121261 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 00:23:32.121274 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 00:23:32.122346 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 00:23:32.122366 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 00:23:32.122379 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 00:23:32.122399 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 00:23:32.122411 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 00:23:32.123969 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 00:23:32.123991 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:23:32.124001 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:23:32.124011 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 00:23:32.124025 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 00:23:32.124036 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 00:23:32.124047 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:23:32.124056 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 10 00:23:32.124066 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:23:32.124076 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:23:32.124085 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 00:23:32.124094 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 00:23:32.124105 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:23:32.124114 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 00:23:32.124123 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:23:32.124132 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:23:32.124140 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:23:32.124149 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:23:32.124159 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 00:23:32.124167 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 00:23:32.124178 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 00:23:32.124187 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:23:32.124196 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:23:32.124204 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:23:32.124213 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 00:23:32.124224 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 00:23:32.124233 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 00:23:32.124242 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 00:23:32.124251 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:23:32.124260 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 00:23:32.124269 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 00:23:32.124279 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 00:23:32.124289 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 00:23:32.124298 systemd[1]: Reached target machines.target - Containers.
Jul 10 00:23:32.124308 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 00:23:32.124317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 00:23:32.124326 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:23:32.124336 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 00:23:32.124345 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 00:23:32.124354 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 00:23:32.124363 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 00:23:32.124372 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 00:23:32.124382 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 00:23:32.124392 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 00:23:32.124401 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 00:23:32.124411 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 00:23:32.124420 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 00:23:32.124429 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 00:23:32.124439 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 00:23:32.124448 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:23:32.124459 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:23:32.124468 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 00:23:32.124477 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 00:23:32.124486 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 00:23:32.124495 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:23:32.124504 kernel: loop: module loaded
Jul 10 00:23:32.124513 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 00:23:32.124523 systemd[1]: Stopped verity-setup.service.
Jul 10 00:23:32.124534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 10 00:23:32.124542 kernel: fuse: init (API version 7.41)
Jul 10 00:23:32.124551 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 00:23:32.124560 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 00:23:32.124569 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 00:23:32.124600 systemd-journald[1262]: Collecting audit messages is disabled.
Jul 10 00:23:32.124623 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 00:23:32.124633 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 00:23:32.124642 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 00:23:32.124652 systemd-journald[1262]: Journal started
Jul 10 00:23:32.124674 systemd-journald[1262]: Runtime Journal (/run/log/journal/0afd0242443040d7b61a5726e5d25bea) is 8M, max 158.9M, 150.9M free.
Jul 10 00:23:31.752257 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 00:23:31.763406 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 10 00:23:31.763765 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 00:23:32.128386 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:23:32.128371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:23:32.132260 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 00:23:32.132408 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 00:23:32.134618 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 00:23:32.134752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 00:23:32.136593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 00:23:32.137105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 00:23:32.140280 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 00:23:32.141110 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 00:23:32.142924 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 00:23:32.143174 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 00:23:32.145238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:23:32.147108 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 00:23:32.151248 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 00:23:32.169793 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 00:23:32.174059 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 00:23:32.179997 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 00:23:32.181984 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 00:23:32.182011 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:23:32.185329 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 00:23:32.193044 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 00:23:32.206574 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 00:23:32.206928 kernel: ACPI: bus type drm_connector registered
Jul 10 00:23:32.209516 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 00:23:32.214065 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 00:23:32.215670 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 00:23:32.221661 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 00:23:32.223458 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 00:23:32.226044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:23:32.227694 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 00:23:32.231015 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 00:23:32.233113 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 00:23:32.233244 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 00:23:32.240086 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 00:23:32.242408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:23:32.246183 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 00:23:32.250784 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 00:23:32.257830 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 00:23:32.265093 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 00:23:32.267424 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 00:23:32.273118 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 00:23:32.275929 kernel: loop0: detected capacity change from 0 to 28496
Jul 10 00:23:32.292274 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:23:32.295261 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 00:23:32.330599 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 00:23:32.335033 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:23:32.363042 systemd-journald[1262]: Time spent on flushing to /var/log/journal/0afd0242443040d7b61a5726e5d25bea is 11.785ms for 997 entries.
Jul 10 00:23:32.363042 systemd-journald[1262]: System Journal (/var/log/journal/0afd0242443040d7b61a5726e5d25bea) is 8M, max 2.6G, 2.6G free.
Jul 10 00:23:32.397409 systemd-journald[1262]: Received client request to flush runtime journal.
Jul 10 00:23:32.398230 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 00:23:32.404011 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Jul 10 00:23:32.404026 systemd-tmpfiles[1333]: ACLs are not supported, ignoring.
Jul 10 00:23:32.406929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:23:32.543934 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 00:23:32.577931 kernel: loop1: detected capacity change from 0 to 221472
Jul 10 00:23:32.641928 kernel: loop2: detected capacity change from 0 to 146240
Jul 10 00:23:32.764800 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 00:23:32.944932 kernel: loop3: detected capacity change from 0 to 113872
Jul 10 00:23:32.986630 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 00:23:32.989041 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:23:33.016841 systemd-udevd[1343]: Using default interface naming scheme 'v255'.
Jul 10 00:23:33.140597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:23:33.148024 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:23:33.201658 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 00:23:33.223298 kernel: loop4: detected capacity change from 0 to 28496
Jul 10 00:23:33.238716 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 10 00:23:33.242987 kernel: loop5: detected capacity change from 0 to 221472
Jul 10 00:23:33.280333 kernel: loop6: detected capacity change from 0 to 146240
Jul 10 00:23:33.293931 kernel: hv_vmbus: registering driver hyperv_fb
Jul 10 00:23:33.296932 kernel: hv_vmbus: registering driver hv_balloon
Jul 10 00:23:33.302936 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 10 00:23:33.304582 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 00:23:33.309537 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 10 00:23:33.309584 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 10 00:23:33.313563 kernel: Console: switching to colour dummy device 80x25
Jul 10 00:23:33.313617 kernel: loop7: detected capacity change from 0 to 113872
Jul 10 00:23:33.318101 kernel: Console: switching to colour frame buffer device 128x48
Jul 10 00:23:33.329190 (sd-merge)[1376]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 10 00:23:33.329512 (sd-merge)[1376]: Merged extensions into '/usr'.
Jul 10 00:23:33.337434 systemd[1]: Reload requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 00:23:33.337445 systemd[1]: Reloading...
Jul 10 00:23:33.341130 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#56 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 10 00:23:33.348963 kernel: mousedev: PS/2 mouse device common for all mice
Jul 10 00:23:33.461930 zram_generator::config[1444]: No configuration found.
Jul 10 00:23:33.519889 systemd-networkd[1354]: lo: Link UP Jul 10 00:23:33.521428 systemd-networkd[1354]: lo: Gained carrier Jul 10 00:23:33.524660 systemd-networkd[1354]: Enumeration completed Jul 10 00:23:33.526046 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:33.526053 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:23:33.530964 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 10 00:23:33.542947 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 10 00:23:33.548950 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddd658c eth0: Data path switched to VF: enP30832s1 Jul 10 00:23:33.550659 systemd-networkd[1354]: enP30832s1: Link UP Jul 10 00:23:33.550720 systemd-networkd[1354]: eth0: Link UP Jul 10 00:23:33.550722 systemd-networkd[1354]: eth0: Gained carrier Jul 10 00:23:33.550736 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:33.557539 systemd-networkd[1354]: enP30832s1: Gained carrier Jul 10 00:23:33.574982 systemd-networkd[1354]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:23:33.658906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:23:33.728927 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jul 10 00:23:33.769231 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 10 00:23:33.770850 systemd[1]: Reloading finished in 433 ms. Jul 10 00:23:33.789508 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 10 00:23:33.792237 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:23:33.835672 systemd[1]: Starting ensure-sysext.service... Jul 10 00:23:33.839026 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:23:33.843061 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 10 00:23:33.845564 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:23:33.850025 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:23:33.854688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:23:33.867019 systemd[1]: Reload requested from client PID 1514 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:23:33.867034 systemd[1]: Reloading... Jul 10 00:23:33.875762 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 10 00:23:33.876330 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 10 00:23:33.876648 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:23:33.877009 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:23:33.877671 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:23:33.877943 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Jul 10 00:23:33.878033 systemd-tmpfiles[1518]: ACLs are not supported, ignoring. Jul 10 00:23:33.885858 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 10 00:23:33.885957 systemd-tmpfiles[1518]: Skipping /boot Jul 10 00:23:33.893580 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:23:33.893591 systemd-tmpfiles[1518]: Skipping /boot Jul 10 00:23:33.930936 zram_generator::config[1551]: No configuration found. Jul 10 00:23:34.003988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:23:34.084434 systemd[1]: Reloading finished in 217 ms. Jul 10 00:23:34.119744 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:23:34.120479 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 10 00:23:34.120700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:23:34.127509 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 00:23:34.132255 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 00:23:34.135026 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:23:34.140260 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:23:34.142079 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:23:34.147860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:34.148568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:23:34.154193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 10 00:23:34.156382 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:23:34.158856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:23:34.159367 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:23:34.159458 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:23:34.159533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:34.164549 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:34.164692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:23:34.164829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:23:34.164929 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:23:34.165010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:34.171347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:34.171560 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jul 10 00:23:34.173746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:23:34.174294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:23:34.174381 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 10 00:23:34.174516 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:23:34.174596 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 00:23:34.175260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:23:34.175380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:23:34.186106 systemd[1]: Finished ensure-sysext.service. Jul 10 00:23:34.190668 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:23:34.190983 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:23:34.193737 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:23:34.193876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:23:34.195266 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:23:34.195393 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:23:34.198722 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:23:34.203271 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 10 00:23:34.203311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:23:34.205493 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:23:34.267689 systemd-resolved[1620]: Positive Trust Anchors: Jul 10 00:23:34.267698 systemd-resolved[1620]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:23:34.267727 systemd-resolved[1620]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:23:34.270669 systemd-resolved[1620]: Using system hostname 'ci-4344.1.1-n-4c73015a89'. Jul 10 00:23:34.272178 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:23:34.272655 systemd[1]: Reached target network.target - Network. Jul 10 00:23:34.272924 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:23:34.530623 augenrules[1653]: No rules Jul 10 00:23:34.531341 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:23:34.531514 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 00:23:34.885876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:23:35.042021 systemd-networkd[1354]: enP30832s1: Gained IPv6LL Jul 10 00:23:35.479384 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jul 10 00:23:35.483154 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:23:35.554117 systemd-networkd[1354]: eth0: Gained IPv6LL Jul 10 00:23:35.556507 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:23:35.558415 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:23:39.399998 ldconfig[1312]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:23:39.408475 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:23:39.411136 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:23:39.430727 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:23:39.433136 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:23:39.434363 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:23:39.436975 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:23:39.439978 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 10 00:23:39.441402 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:23:39.444006 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:23:39.445234 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:23:39.447955 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:23:39.447985 systemd[1]: Reached target paths.target - Path Units. 
Jul 10 00:23:39.448827 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:23:39.468752 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:23:39.471069 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:23:39.474656 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 10 00:23:39.476099 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 10 00:23:39.479055 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 10 00:23:39.481551 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:23:39.483393 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 10 00:23:39.485186 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:23:39.487037 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:23:39.489985 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:23:39.493006 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:23:39.493030 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:23:39.494952 systemd[1]: Starting chronyd.service - NTP client/server... Jul 10 00:23:39.498742 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:23:39.503691 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 10 00:23:39.510053 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:23:39.513497 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:23:39.517072 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:23:39.520068 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 10 00:23:39.522999 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:23:39.528771 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 10 00:23:39.531998 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jul 10 00:23:39.535096 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jul 10 00:23:39.536628 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jul 10 00:23:39.539171 jq[1678]: false Jul 10 00:23:39.540038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:23:39.544793 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:23:39.550023 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:23:39.551737 KVP[1681]: KVP starting; pid is:1681 Jul 10 00:23:39.554649 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:23:39.560353 KVP[1681]: KVP LIC Version: 3.1 Jul 10 00:23:39.561015 kernel: hv_utils: KVP IC version 4.0 Jul 10 00:23:39.560586 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:23:39.563415 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:23:39.569220 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:23:39.572536 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:23:39.575165 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jul 10 00:23:39.576697 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:23:39.582995 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:23:39.590344 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:23:39.590687 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:23:39.597494 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:23:39.599041 (chronyd)[1670]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 10 00:23:39.602116 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:23:39.605689 chronyd[1703]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 10 00:23:39.609252 chronyd[1703]: Timezone right/UTC failed leap second check, ignoring Jul 10 00:23:39.609390 chronyd[1703]: Loaded seccomp filter (level 2) Jul 10 00:23:39.610399 jq[1696]: true Jul 10 00:23:39.611003 systemd[1]: Started chronyd.service - NTP client/server. Jul 10 00:23:39.627475 jq[1706]: true Jul 10 00:23:39.737296 extend-filesystems[1679]: Found /dev/nvme0n1p6 Jul 10 00:23:39.737318 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:23:39.748031 extend-filesystems[1679]: Found /dev/nvme0n1p9 Jul 10 00:23:39.750431 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:23:39.752348 extend-filesystems[1679]: Checking size of /dev/nvme0n1p9 Jul 10 00:23:39.751905 (ntainerd)[1733]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:23:39.752041 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 10 00:23:39.836132 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:23:39.839812 update_engine[1694]: I20250710 00:23:39.838869 1694 main.cc:92] Flatcar Update Engine starting Jul 10 00:23:39.841619 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Refreshing passwd entry cache Jul 10 00:23:39.841032 oslogin_cache_refresh[1680]: Refreshing passwd entry cache Jul 10 00:23:39.858162 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Failure getting users, quitting Jul 10 00:23:39.858162 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:23:39.858162 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Refreshing group entry cache Jul 10 00:23:39.857758 oslogin_cache_refresh[1680]: Failure getting users, quitting Jul 10 00:23:39.857774 oslogin_cache_refresh[1680]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 10 00:23:39.857810 oslogin_cache_refresh[1680]: Refreshing group entry cache Jul 10 00:23:39.865925 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Failure getting groups, quitting Jul 10 00:23:39.865906 oslogin_cache_refresh[1680]: Failure getting groups, quitting Jul 10 00:23:39.866024 google_oslogin_nss_cache[1680]: oslogin_cache_refresh[1680]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:23:39.865936 oslogin_cache_refresh[1680]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 10 00:23:39.867170 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 10 00:23:39.867396 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 10 00:23:39.893202 extend-filesystems[1679]: Old size kept for /dev/nvme0n1p9 Jul 10 00:23:39.896184 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 10 00:23:39.896377 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:23:40.040880 systemd-logind[1691]: New seat seat0. Jul 10 00:23:40.043450 systemd-logind[1691]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 00:23:40.043588 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:23:40.227011 dbus-daemon[1673]: [system] SELinux support is enabled Jul 10 00:23:40.324669 update_engine[1694]: I20250710 00:23:40.230871 1694 update_check_scheduler.cc:74] Next update check in 8m58s Jul 10 00:23:40.324719 tar[1700]: linux-amd64/helm Jul 10 00:23:40.233984 dbus-daemon[1673]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 10 00:23:40.227117 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:23:40.231101 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:23:40.231123 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:23:40.233381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:23:40.233398 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:23:40.237094 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:23:40.241126 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 10 00:23:40.338200 coreos-metadata[1672]: Jul 10 00:23:40.337 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 10 00:23:40.343336 coreos-metadata[1672]: Jul 10 00:23:40.342 INFO Fetch successful Jul 10 00:23:40.343336 coreos-metadata[1672]: Jul 10 00:23:40.343 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 10 00:23:40.346414 coreos-metadata[1672]: Jul 10 00:23:40.346 INFO Fetch successful Jul 10 00:23:40.346746 coreos-metadata[1672]: Jul 10 00:23:40.346 INFO Fetching http://168.63.129.16/machine/fe5bdd84-998b-4b22-a602-6ebd4e300a5b/d2d9307e%2D3a46%2D4d21%2Db9f8%2Dbfd8723e4973.%5Fci%2D4344.1.1%2Dn%2D4c73015a89?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 10 00:23:40.347795 coreos-metadata[1672]: Jul 10 00:23:40.347 INFO Fetch successful Jul 10 00:23:40.347982 coreos-metadata[1672]: Jul 10 00:23:40.347 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 10 00:23:40.357143 coreos-metadata[1672]: Jul 10 00:23:40.357 INFO Fetch successful Jul 10 00:23:40.387063 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 10 00:23:40.389575 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:23:40.852825 tar[1700]: linux-amd64/LICENSE Jul 10 00:23:40.852992 tar[1700]: linux-amd64/README.md Jul 10 00:23:40.867443 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:23:41.013701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:23:41.026168 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:23:41.033367 locksmithd[1777]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:23:41.521891 sshd_keygen[1741]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:23:41.535523 bash[1727]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:23:41.539369 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:23:41.543613 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:23:41.546290 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:23:41.550907 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:23:41.555345 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 10 00:23:41.567846 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:23:41.568208 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:23:41.575130 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:23:41.591702 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 10 00:23:41.594828 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:23:41.603168 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:23:41.606117 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 10 00:23:41.609602 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 10 00:23:41.639581 kubelet[1793]: E0710 00:23:41.639558 1793 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:23:41.641099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:23:41.641251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:23:41.641504 systemd[1]: kubelet.service: Consumed 831ms CPU time, 261.9M memory peak. Jul 10 00:23:42.921317 containerd[1733]: time="2025-07-10T00:23:42Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 10 00:23:42.922160 containerd[1733]: time="2025-07-10T00:23:42.922125247Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 10 00:23:42.928254 containerd[1733]: time="2025-07-10T00:23:42.928221432Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.088µs" Jul 10 00:23:42.928254 containerd[1733]: time="2025-07-10T00:23:42.928244567Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 10 00:23:42.928254 containerd[1733]: time="2025-07-10T00:23:42.928260728Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 10 00:23:42.928386 containerd[1733]: time="2025-07-10T00:23:42.928381388Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 10 00:23:42.928405 containerd[1733]: time="2025-07-10T00:23:42.928391476Z" level=info msg="loading plugin" 
id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 10 00:23:42.928420 containerd[1733]: time="2025-07-10T00:23:42.928411186Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:23:42.928480 containerd[1733]: time="2025-07-10T00:23:42.928464688Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 10 00:23:42.928480 containerd[1733]: time="2025-07-10T00:23:42.928474669Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:23:42.928664 containerd[1733]: time="2025-07-10T00:23:42.928648600Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 10 00:23:42.928664 containerd[1733]: time="2025-07-10T00:23:42.928658980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:23:42.928707 containerd[1733]: time="2025-07-10T00:23:42.928667989Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 10 00:23:42.928707 containerd[1733]: time="2025-07-10T00:23:42.928674543Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 10 00:23:42.928748 containerd[1733]: time="2025-07-10T00:23:42.928721293Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 10 00:23:42.928902 containerd[1733]: time="2025-07-10T00:23:42.928887533Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 
00:23:42.928948 containerd[1733]: time="2025-07-10T00:23:42.928907960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 10 00:23:42.928948 containerd[1733]: time="2025-07-10T00:23:42.928934149Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 10 00:23:42.928986 containerd[1733]: time="2025-07-10T00:23:42.928955250Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 10 00:23:42.929150 containerd[1733]: time="2025-07-10T00:23:42.929137721Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 10 00:23:42.929201 containerd[1733]: time="2025-07-10T00:23:42.929178554Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:23:43.386609 containerd[1733]: time="2025-07-10T00:23:43.386549236Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 10 00:23:43.386752 containerd[1733]: time="2025-07-10T00:23:43.386639198Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 10 00:23:43.386752 containerd[1733]: time="2025-07-10T00:23:43.386656785Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 10 00:23:43.386752 containerd[1733]: time="2025-07-10T00:23:43.386675045Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 10 00:23:43.386752 containerd[1733]: time="2025-07-10T00:23:43.386687972Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 10 00:23:43.386752 containerd[1733]: time="2025-07-10T00:23:43.386698703Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 10 00:23:43.386752 containerd[1733]: time="2025-07-10T00:23:43.386709359Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 10 00:23:43.386752 containerd[1733]: time="2025-07-10T00:23:43.386721031Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 10 00:23:43.386752 containerd[1733]: time="2025-07-10T00:23:43.386732122Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 10 00:23:43.386752 containerd[1733]: time="2025-07-10T00:23:43.386743498Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 10 00:23:43.386752 containerd[1733]: time="2025-07-10T00:23:43.386752921Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 10 00:23:43.386951 containerd[1733]: time="2025-07-10T00:23:43.386766138Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 10 00:23:43.386951 containerd[1733]: time="2025-07-10T00:23:43.386900569Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 10 00:23:43.386951 containerd[1733]: time="2025-07-10T00:23:43.386936555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 10 00:23:43.387014 containerd[1733]: time="2025-07-10T00:23:43.386953369Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 10 00:23:43.387014 containerd[1733]: time="2025-07-10T00:23:43.386963688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 10 00:23:43.387014 containerd[1733]: time="2025-07-10T00:23:43.386974366Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 10 00:23:43.387014 containerd[1733]: time="2025-07-10T00:23:43.386985375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 10 00:23:43.387014 containerd[1733]: time="2025-07-10T00:23:43.386996783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 10 00:23:43.387014 containerd[1733]: time="2025-07-10T00:23:43.387007468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 10 00:23:43.387122 containerd[1733]: time="2025-07-10T00:23:43.387019255Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 10 00:23:43.387122 containerd[1733]: time="2025-07-10T00:23:43.387043313Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 10 00:23:43.387122 containerd[1733]: time="2025-07-10T00:23:43.387055271Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 10 00:23:43.387175 containerd[1733]: time="2025-07-10T00:23:43.387126189Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 10 00:23:43.387175 containerd[1733]: time="2025-07-10T00:23:43.387140985Z" level=info msg="Start snapshots syncer" Jul 10 00:23:43.387175 containerd[1733]: time="2025-07-10T00:23:43.387160093Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 10 00:23:43.388415 containerd[1733]: time="2025-07-10T00:23:43.387402351Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 10 00:23:43.388415 containerd[1733]: time="2025-07-10T00:23:43.387454474Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387535513Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387635722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387654535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387671626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387682496Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387694751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387705542Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387718208Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387739339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387749456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387760069Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387788454Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387800540Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 10 00:23:43.388594 containerd[1733]: time="2025-07-10T00:23:43.387809232Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:23:43.388836 containerd[1733]: time="2025-07-10T00:23:43.387818771Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 10 00:23:43.388836 containerd[1733]: time="2025-07-10T00:23:43.387826011Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 10 00:23:43.388836 containerd[1733]: time="2025-07-10T00:23:43.387835257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 10 00:23:43.388836 containerd[1733]: time="2025-07-10T00:23:43.387845803Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 10 00:23:43.388836 containerd[1733]: time="2025-07-10T00:23:43.387862336Z" level=info msg="runtime interface created" Jul 10 00:23:43.388836 containerd[1733]: time="2025-07-10T00:23:43.387867711Z" level=info msg="created NRI interface" Jul 10 00:23:43.388836 containerd[1733]: time="2025-07-10T00:23:43.387879680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 10 00:23:43.388836 containerd[1733]: time="2025-07-10T00:23:43.387890417Z" level=info msg="Connect containerd service" Jul 10 00:23:43.388836 containerd[1733]: time="2025-07-10T00:23:43.387984893Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:23:43.390331 
containerd[1733]: time="2025-07-10T00:23:43.390089290Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:23:45.256290 waagent[1824]: 2025-07-10T00:23:45.256217Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jul 10 00:23:45.259009 waagent[1824]: 2025-07-10T00:23:45.258967Z INFO Daemon Daemon OS: flatcar 4344.1.1 Jul 10 00:23:45.261006 waagent[1824]: 2025-07-10T00:23:45.260970Z INFO Daemon Daemon Python: 3.11.12 Jul 10 00:23:45.263134 waagent[1824]: 2025-07-10T00:23:45.263081Z INFO Daemon Daemon Run daemon Jul 10 00:23:45.263991 waagent[1824]: 2025-07-10T00:23:45.263962Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4344.1.1' Jul 10 00:23:45.265736 waagent[1824]: 2025-07-10T00:23:45.265697Z INFO Daemon Daemon Using waagent for provisioning Jul 10 00:23:45.268100 waagent[1824]: 2025-07-10T00:23:45.268062Z INFO Daemon Daemon Activate resource disk Jul 10 00:23:45.269207 waagent[1824]: 2025-07-10T00:23:45.269171Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 10 00:23:45.273059 waagent[1824]: 2025-07-10T00:23:45.273015Z INFO Daemon Daemon Found device: None Jul 10 00:23:45.274027 waagent[1824]: 2025-07-10T00:23:45.273944Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 10 00:23:45.277007 waagent[1824]: 2025-07-10T00:23:45.276970Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 10 00:23:45.281370 waagent[1824]: 2025-07-10T00:23:45.281322Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 10 00:23:45.282606 waagent[1824]: 2025-07-10T00:23:45.282573Z INFO Daemon Daemon Running default 
provisioning handler Jul 10 00:23:45.290271 waagent[1824]: 2025-07-10T00:23:45.290191Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jul 10 00:23:45.294794 waagent[1824]: 2025-07-10T00:23:45.294357Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 10 00:23:45.298048 waagent[1824]: 2025-07-10T00:23:45.298002Z INFO Daemon Daemon cloud-init is enabled: False Jul 10 00:23:45.299575 waagent[1824]: 2025-07-10T00:23:45.299381Z INFO Daemon Daemon Copying ovf-env.xml Jul 10 00:23:45.305386 containerd[1733]: time="2025-07-10T00:23:45.305352316Z" level=info msg="Start subscribing containerd event" Jul 10 00:23:45.305924 containerd[1733]: time="2025-07-10T00:23:45.305811873Z" level=info msg="Start recovering state" Jul 10 00:23:45.305990 containerd[1733]: time="2025-07-10T00:23:45.305781043Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:23:45.306035 containerd[1733]: time="2025-07-10T00:23:45.306025684Z" level=info msg="Start event monitor" Jul 10 00:23:45.306142 containerd[1733]: time="2025-07-10T00:23:45.306072617Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:23:45.306142 containerd[1733]: time="2025-07-10T00:23:45.306033969Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:23:45.306142 containerd[1733]: time="2025-07-10T00:23:45.306080586Z" level=info msg="Start streaming server" Jul 10 00:23:45.306142 containerd[1733]: time="2025-07-10T00:23:45.306104066Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 10 00:23:45.306142 containerd[1733]: time="2025-07-10T00:23:45.306110573Z" level=info msg="runtime interface starting up..." Jul 10 00:23:45.306142 containerd[1733]: time="2025-07-10T00:23:45.306116147Z" level=info msg="starting plugins..." 
Jul 10 00:23:45.306142 containerd[1733]: time="2025-07-10T00:23:45.306127335Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 10 00:23:45.306258 containerd[1733]: time="2025-07-10T00:23:45.306216539Z" level=info msg="containerd successfully booted in 2.385251s" Jul 10 00:23:45.306362 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:23:45.308452 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:23:45.310298 systemd[1]: Startup finished in 2.934s (kernel) + 10.958s (initrd) + 15.306s (userspace) = 29.199s. Jul 10 00:23:45.487507 waagent[1824]: 2025-07-10T00:23:45.487466Z INFO Daemon Daemon Successfully mounted dvd Jul 10 00:23:45.641146 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 10 00:23:45.647431 waagent[1824]: 2025-07-10T00:23:45.641486Z INFO Daemon Daemon Detect protocol endpoint Jul 10 00:23:45.647431 waagent[1824]: 2025-07-10T00:23:45.642030Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 10 00:23:45.647431 waagent[1824]: 2025-07-10T00:23:45.642293Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 10 00:23:45.647431 waagent[1824]: 2025-07-10T00:23:45.642529Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 10 00:23:45.647431 waagent[1824]: 2025-07-10T00:23:45.642653Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 10 00:23:45.647431 waagent[1824]: 2025-07-10T00:23:45.643056Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 10 00:23:45.652562 waagent[1824]: 2025-07-10T00:23:45.652530Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 10 00:23:45.653628 waagent[1824]: 2025-07-10T00:23:45.653115Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 10 00:23:45.653628 waagent[1824]: 2025-07-10T00:23:45.653340Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 10 00:23:46.166857 waagent[1824]: 2025-07-10T00:23:46.166800Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 10 00:23:46.167837 waagent[1824]: 2025-07-10T00:23:46.167135Z INFO Daemon Daemon Forcing an update of the goal state. Jul 10 00:23:46.171328 waagent[1824]: 2025-07-10T00:23:46.171299Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 10 00:23:46.184107 waagent[1824]: 2025-07-10T00:23:46.184081Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jul 10 00:23:46.186677 waagent[1824]: 2025-07-10T00:23:46.184879Z INFO Daemon Jul 10 00:23:46.186677 waagent[1824]: 2025-07-10T00:23:46.185348Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: bafc03a4-e16e-41f4-8cab-830a7b06bade eTag: 5217842107798009411 source: Fabric] Jul 10 00:23:46.186677 waagent[1824]: 2025-07-10T00:23:46.185590Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jul 10 00:23:46.186677 waagent[1824]: 2025-07-10T00:23:46.185862Z INFO Daemon Jul 10 00:23:46.186677 waagent[1824]: 2025-07-10T00:23:46.186060Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 10 00:23:46.192987 waagent[1824]: 2025-07-10T00:23:46.192962Z INFO Daemon Daemon Downloading artifacts profile blob Jul 10 00:23:46.259376 waagent[1824]: 2025-07-10T00:23:46.259331Z INFO Daemon Downloaded certificate {'thumbprint': '448BCAAF02C943AD71FA039339BBECCAE40953F6', 'hasPrivateKey': True} Jul 10 00:23:46.262510 waagent[1824]: 2025-07-10T00:23:46.260095Z INFO Daemon Fetch goal state completed Jul 10 00:23:46.268648 waagent[1824]: 2025-07-10T00:23:46.268620Z INFO Daemon Daemon Starting provisioning Jul 10 00:23:46.269275 waagent[1824]: 2025-07-10T00:23:46.268977Z INFO Daemon Daemon Handle ovf-env.xml. Jul 10 00:23:46.269275 waagent[1824]: 2025-07-10T00:23:46.269066Z INFO Daemon Daemon Set hostname [ci-4344.1.1-n-4c73015a89] Jul 10 00:23:46.328693 waagent[1824]: 2025-07-10T00:23:46.328654Z INFO Daemon Daemon Publish hostname [ci-4344.1.1-n-4c73015a89] Jul 10 00:23:46.333995 waagent[1824]: 2025-07-10T00:23:46.329342Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 10 00:23:46.333995 waagent[1824]: 2025-07-10T00:23:46.329642Z INFO Daemon Daemon Primary interface is [eth0] Jul 10 00:23:46.336337 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:23:46.336342 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 10 00:23:46.336362 systemd-networkd[1354]: eth0: DHCP lease lost Jul 10 00:23:46.337126 waagent[1824]: 2025-07-10T00:23:46.337083Z INFO Daemon Daemon Create user account if not exists Jul 10 00:23:46.338602 waagent[1824]: 2025-07-10T00:23:46.338222Z INFO Daemon Daemon User core already exists, skip useradd Jul 10 00:23:46.338602 waagent[1824]: 2025-07-10T00:23:46.338364Z INFO Daemon Daemon Configure sudoer Jul 10 00:23:46.356936 systemd-networkd[1354]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 10 00:23:47.520292 waagent[1824]: 2025-07-10T00:23:47.520196Z INFO Daemon Daemon Configure sshd Jul 10 00:23:47.523142 login[1827]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jul 10 00:23:48.521031 login[1826]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 00:23:48.525262 login[1827]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 00:23:48.531262 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:23:48.532415 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:23:48.534400 systemd-logind[1691]: New session 2 of user core. Jul 10 00:23:48.537629 systemd-logind[1691]: New session 1 of user core. Jul 10 00:23:48.567109 waagent[1824]: 2025-07-10T00:23:48.567046Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 10 00:23:48.572045 waagent[1824]: 2025-07-10T00:23:48.567412Z INFO Daemon Daemon Deploy ssh public key. Jul 10 00:23:48.583792 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:23:48.585515 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 10 00:23:48.594516 (systemd)[1875]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:23:48.596071 systemd-logind[1691]: New session c1 of user core. Jul 10 00:23:48.701184 systemd[1875]: Queued start job for default target default.target. Jul 10 00:23:48.712518 systemd[1875]: Created slice app.slice - User Application Slice. Jul 10 00:23:48.712546 systemd[1875]: Reached target paths.target - Paths. Jul 10 00:23:48.712573 systemd[1875]: Reached target timers.target - Timers. Jul 10 00:23:48.713357 systemd[1875]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:23:48.720182 systemd[1875]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:23:48.720304 systemd[1875]: Reached target sockets.target - Sockets. Jul 10 00:23:48.720379 systemd[1875]: Reached target basic.target - Basic System. Jul 10 00:23:48.720475 systemd[1875]: Reached target default.target - Main User Target. Jul 10 00:23:48.720502 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:23:48.720568 systemd[1875]: Startup finished in 120ms. Jul 10 00:23:48.734044 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:23:48.734654 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:23:48.784455 waagent[1824]: 2025-07-10T00:23:48.784395Z INFO Daemon Daemon Provisioning complete Jul 10 00:23:48.800076 waagent[1824]: 2025-07-10T00:23:48.800047Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 10 00:23:48.801339 waagent[1824]: 2025-07-10T00:23:48.801312Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jul 10 00:23:48.802644 waagent[1824]: 2025-07-10T00:23:48.802616Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jul 10 00:23:48.892573 waagent[1886]: 2025-07-10T00:23:48.892520Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jul 10 00:23:48.892758 waagent[1886]: 2025-07-10T00:23:48.892606Z INFO ExtHandler ExtHandler OS: flatcar 4344.1.1 Jul 10 00:23:48.892758 waagent[1886]: 2025-07-10T00:23:48.892644Z INFO ExtHandler ExtHandler Python: 3.11.12 Jul 10 00:23:48.892758 waagent[1886]: 2025-07-10T00:23:48.892680Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jul 10 00:23:49.578983 waagent[1886]: 2025-07-10T00:23:49.578889Z INFO ExtHandler ExtHandler Distro: flatcar-4344.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jul 10 00:23:49.579140 waagent[1886]: 2025-07-10T00:23:49.579114Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 00:23:49.579203 waagent[1886]: 2025-07-10T00:23:49.579172Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 00:23:49.586331 waagent[1886]: 2025-07-10T00:23:49.586284Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 10 00:23:49.591480 waagent[1886]: 2025-07-10T00:23:49.591455Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jul 10 00:23:49.591763 waagent[1886]: 2025-07-10T00:23:49.591740Z INFO ExtHandler Jul 10 00:23:49.591803 waagent[1886]: 2025-07-10T00:23:49.591785Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f67ad4e2-2680-4362-9873-cd7633d32788 eTag: 5217842107798009411 source: Fabric] Jul 10 00:23:49.592009 waagent[1886]: 2025-07-10T00:23:49.591991Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jul 10 00:23:49.592311 waagent[1886]: 2025-07-10T00:23:49.592291Z INFO ExtHandler Jul 10 00:23:49.592341 waagent[1886]: 2025-07-10T00:23:49.592327Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 10 00:23:49.598802 waagent[1886]: 2025-07-10T00:23:49.598781Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 10 00:23:49.704216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jul 10 00:23:49.920562 waagent[1886]: 2025-07-10T00:23:49.920505Z INFO ExtHandler Downloaded certificate {'thumbprint': '448BCAAF02C943AD71FA039339BBECCAE40953F6', 'hasPrivateKey': True} Jul 10 00:23:49.957992 waagent[1886]: 2025-07-10T00:23:49.920941Z INFO ExtHandler Fetch goal state completed Jul 10 00:23:49.957992 waagent[1886]: 2025-07-10T00:23:49.931374Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) Jul 10 00:23:49.957992 waagent[1886]: 2025-07-10T00:23:49.935176Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1886 Jul 10 00:23:49.957992 waagent[1886]: 2025-07-10T00:23:49.935261Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 10 00:23:49.957992 waagent[1886]: 2025-07-10T00:23:49.935424Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jul 10 00:23:49.957992 waagent[1886]: 2025-07-10T00:23:49.936242Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] Jul 10 00:23:49.957992 waagent[1886]: 2025-07-10T00:23:49.936447Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4344.1.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jul 10 00:23:49.957992 waagent[1886]: 2025-07-10T00:23:49.936515Z INFO ExtHandler 
ExtHandler [CGI] Agent cgroups enabled: False Jul 10 00:23:49.957992 waagent[1886]: 2025-07-10T00:23:49.936784Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 10 00:23:50.021617 waagent[1886]: 2025-07-10T00:23:50.021589Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 10 00:23:50.021769 waagent[1886]: 2025-07-10T00:23:50.021744Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 10 00:23:50.026659 waagent[1886]: 2025-07-10T00:23:50.026496Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 10 00:23:50.031233 systemd[1]: Reload requested from client PID 1924 ('systemctl') (unit waagent.service)... Jul 10 00:23:50.031246 systemd[1]: Reloading... Jul 10 00:23:50.099941 zram_generator::config[1962]: No configuration found. Jul 10 00:23:50.170870 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:23:50.250207 systemd[1]: Reloading finished in 218 ms. Jul 10 00:23:50.267937 waagent[1886]: 2025-07-10T00:23:50.267835Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 10 00:23:50.267995 waagent[1886]: 2025-07-10T00:23:50.267940Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 10 00:23:50.922080 waagent[1886]: 2025-07-10T00:23:50.922026Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 10 00:23:50.922332 waagent[1886]: 2025-07-10T00:23:50.922312Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jul 10 00:23:50.922992 waagent[1886]: 2025-07-10T00:23:50.922941Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 10 00:23:50.923157 waagent[1886]: 2025-07-10T00:23:50.923121Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 00:23:50.923207 waagent[1886]: 2025-07-10T00:23:50.923189Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 00:23:50.923384 waagent[1886]: 2025-07-10T00:23:50.923365Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 10 00:23:50.923621 waagent[1886]: 2025-07-10T00:23:50.923577Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 10 00:23:50.923880 waagent[1886]: 2025-07-10T00:23:50.923855Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 10 00:23:50.924001 waagent[1886]: 2025-07-10T00:23:50.923976Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 10 00:23:50.924001 waagent[1886]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 10 00:23:50.924001 waagent[1886]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 10 00:23:50.924001 waagent[1886]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 10 00:23:50.924001 waagent[1886]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 10 00:23:50.924001 waagent[1886]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 10 00:23:50.924001 waagent[1886]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 10 00:23:50.924305 waagent[1886]: 2025-07-10T00:23:50.924277Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 10 00:23:50.924338 waagent[1886]: 2025-07-10T00:23:50.924317Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 10 00:23:50.924390 waagent[1886]: 2025-07-10T00:23:50.924360Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 10 00:23:50.924500 waagent[1886]: 2025-07-10T00:23:50.924482Z INFO EnvHandler ExtHandler Configure routes Jul 10 00:23:50.924552 waagent[1886]: 2025-07-10T00:23:50.924523Z INFO EnvHandler ExtHandler Gateway:None Jul 10 00:23:50.924585 waagent[1886]: 2025-07-10T00:23:50.924571Z INFO EnvHandler ExtHandler Routes:None Jul 10 00:23:50.925048 waagent[1886]: 2025-07-10T00:23:50.925017Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 10 00:23:50.925251 waagent[1886]: 2025-07-10T00:23:50.925052Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 10 00:23:50.925251 waagent[1886]: 2025-07-10T00:23:50.925125Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 10 00:23:50.950832 waagent[1886]: 2025-07-10T00:23:50.950800Z INFO ExtHandler ExtHandler Jul 10 00:23:50.950895 waagent[1886]: 2025-07-10T00:23:50.950852Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 948f8151-0147-4963-a922-08085371d496 correlation 76aa48c4-4224-4495-bc0f-20de969293f3 created: 2025-07-10T00:22:51.217743Z] Jul 10 00:23:50.951129 waagent[1886]: 2025-07-10T00:23:50.951106Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jul 10 00:23:50.951453 waagent[1886]: 2025-07-10T00:23:50.951430Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jul 10 00:23:50.991545 waagent[1886]: 2025-07-10T00:23:50.991134Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jul 10 00:23:50.991545 waagent[1886]: Try `iptables -h' or 'iptables --help' for more information.)
Jul 10 00:23:50.991545 waagent[1886]: 2025-07-10T00:23:50.991471Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D53A7276-7603-4CAE-A163-9D638A37A17E;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jul 10 00:23:51.034190 waagent[1886]: 2025-07-10T00:23:51.034148Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 10 00:23:51.034190 waagent[1886]: Executing ['ip', '-a', '-o', 'link']:
Jul 10 00:23:51.034190 waagent[1886]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 10 00:23:51.034190 waagent[1886]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:65:8c brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jul 10 00:23:51.034190 waagent[1886]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:dd:65:8c brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jul 10 00:23:51.034190 waagent[1886]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 10 00:23:51.034190 waagent[1886]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 10 00:23:51.034190 waagent[1886]: 2: eth0 inet 10.200.8.43/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 10 00:23:51.034190 waagent[1886]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 10 00:23:51.034190 waagent[1886]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 10 00:23:51.034190 waagent[1886]: 2: eth0 inet6 fe80::6245:bdff:fedd:658c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 10 00:23:51.034190 waagent[1886]: 3: enP30832s1 inet6 fe80::6245:bdff:fedd:658c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 10 00:23:51.092535 waagent[1886]: 2025-07-10T00:23:51.092493Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jul 10 00:23:51.092535 waagent[1886]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:23:51.092535 waagent[1886]: pkts bytes target prot opt in out source destination
Jul 10 00:23:51.092535 waagent[1886]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:23:51.092535 waagent[1886]: pkts bytes target prot opt in out source destination
Jul 10 00:23:51.092535 waagent[1886]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:23:51.092535 waagent[1886]: pkts bytes target prot opt in out source destination
Jul 10 00:23:51.092535 waagent[1886]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 10 00:23:51.092535 waagent[1886]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 10 00:23:51.092535 waagent[1886]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 10 00:23:51.094788 waagent[1886]: 2025-07-10T00:23:51.094748Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 10 00:23:51.094788 waagent[1886]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:23:51.094788 waagent[1886]: pkts bytes target prot opt in out source destination
Jul 10 00:23:51.094788 waagent[1886]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:23:51.094788 waagent[1886]: pkts bytes target prot opt in out source destination
Jul 10 00:23:51.094788 waagent[1886]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 10 00:23:51.094788 waagent[1886]: pkts bytes target prot opt in out source destination
Jul 10 00:23:51.094788 waagent[1886]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 10 00:23:51.094788 waagent[1886]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 10 00:23:51.094788 waagent[1886]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 10 00:23:51.863027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 10 00:23:51.864758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:23:52.995727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:23:53.007181 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:23:53.041215 kubelet[2060]: E0710 00:23:53.041182 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:23:53.043592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:23:53.043714 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:23:53.044041 systemd[1]: kubelet.service: Consumed 128ms CPU time, 109M memory peak.
Jul 10 00:24:03.113070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 10 00:24:03.114855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:24:03.391045 chronyd[1703]: Selected source PHC0
Jul 10 00:24:08.013991 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 10 00:24:08.015176 systemd[1]: Started sshd@0-10.200.8.43:22-10.200.16.10:60406.service - OpenSSH per-connection server daemon (10.200.16.10:60406).
Jul 10 00:24:09.207852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:24:09.217203 (kubelet)[2078]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:24:09.247502 kubelet[2078]: E0710 00:24:09.247460 2078 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:24:09.248853 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:24:09.249023 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:24:09.249328 systemd[1]: kubelet.service: Consumed 120ms CPU time, 108.5M memory peak.
Jul 10 00:24:09.347899 sshd[2071]: Accepted publickey for core from 10.200.16.10 port 60406 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:09.348999 sshd-session[2071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:09.352978 systemd-logind[1691]: New session 3 of user core.
Jul 10 00:24:09.362034 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 10 00:24:09.903034 systemd[1]: Started sshd@1-10.200.8.43:22-10.200.16.10:49450.service - OpenSSH per-connection server daemon (10.200.16.10:49450).
Jul 10 00:24:10.530579 sshd[2088]: Accepted publickey for core from 10.200.16.10 port 49450 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:10.531823 sshd-session[2088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:10.536010 systemd-logind[1691]: New session 4 of user core.
Jul 10 00:24:10.542030 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 10 00:24:10.972715 sshd[2090]: Connection closed by 10.200.16.10 port 49450
Jul 10 00:24:10.973230 sshd-session[2088]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:10.976022 systemd[1]: sshd@1-10.200.8.43:22-10.200.16.10:49450.service: Deactivated successfully.
Jul 10 00:24:10.977489 systemd[1]: session-4.scope: Deactivated successfully.
Jul 10 00:24:10.978977 systemd-logind[1691]: Session 4 logged out. Waiting for processes to exit.
Jul 10 00:24:10.979664 systemd-logind[1691]: Removed session 4.
Jul 10 00:24:11.088759 systemd[1]: Started sshd@2-10.200.8.43:22-10.200.16.10:49466.service - OpenSSH per-connection server daemon (10.200.16.10:49466).
Jul 10 00:24:11.716301 sshd[2096]: Accepted publickey for core from 10.200.16.10 port 49466 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:11.717596 sshd-session[2096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:11.721576 systemd-logind[1691]: New session 5 of user core.
Jul 10 00:24:11.726055 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 10 00:24:12.161424 sshd[2098]: Connection closed by 10.200.16.10 port 49466
Jul 10 00:24:12.161954 sshd-session[2096]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:12.164762 systemd[1]: sshd@2-10.200.8.43:22-10.200.16.10:49466.service: Deactivated successfully.
Jul 10 00:24:12.166227 systemd[1]: session-5.scope: Deactivated successfully.
Jul 10 00:24:12.167807 systemd-logind[1691]: Session 5 logged out. Waiting for processes to exit.
Jul 10 00:24:12.168494 systemd-logind[1691]: Removed session 5.
Jul 10 00:24:12.276203 systemd[1]: Started sshd@3-10.200.8.43:22-10.200.16.10:49470.service - OpenSSH per-connection server daemon (10.200.16.10:49470).
Jul 10 00:24:12.903280 sshd[2104]: Accepted publickey for core from 10.200.16.10 port 49470 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:12.904532 sshd-session[2104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:12.908751 systemd-logind[1691]: New session 6 of user core.
Jul 10 00:24:12.915062 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 10 00:24:13.345979 sshd[2106]: Connection closed by 10.200.16.10 port 49470
Jul 10 00:24:13.346691 sshd-session[2104]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:13.349387 systemd[1]: sshd@3-10.200.8.43:22-10.200.16.10:49470.service: Deactivated successfully.
Jul 10 00:24:13.350862 systemd[1]: session-6.scope: Deactivated successfully.
Jul 10 00:24:13.351958 systemd-logind[1691]: Session 6 logged out. Waiting for processes to exit.
Jul 10 00:24:13.352993 systemd-logind[1691]: Removed session 6.
Jul 10 00:24:13.459665 systemd[1]: Started sshd@4-10.200.8.43:22-10.200.16.10:49480.service - OpenSSH per-connection server daemon (10.200.16.10:49480).
Jul 10 00:24:14.086896 sshd[2112]: Accepted publickey for core from 10.200.16.10 port 49480 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:14.088228 sshd-session[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:14.092673 systemd-logind[1691]: New session 7 of user core.
Jul 10 00:24:14.099063 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 10 00:24:14.511510 sudo[2115]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 10 00:24:14.511711 sudo[2115]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:24:14.526730 sudo[2115]: pam_unix(sudo:session): session closed for user root
Jul 10 00:24:14.627395 sshd[2114]: Connection closed by 10.200.16.10 port 49480
Jul 10 00:24:14.628082 sshd-session[2112]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:14.631245 systemd[1]: sshd@4-10.200.8.43:22-10.200.16.10:49480.service: Deactivated successfully.
Jul 10 00:24:14.632604 systemd[1]: session-7.scope: Deactivated successfully.
Jul 10 00:24:14.634093 systemd-logind[1691]: Session 7 logged out. Waiting for processes to exit.
Jul 10 00:24:14.634794 systemd-logind[1691]: Removed session 7.
Jul 10 00:24:14.741739 systemd[1]: Started sshd@5-10.200.8.43:22-10.200.16.10:49488.service - OpenSSH per-connection server daemon (10.200.16.10:49488).
Jul 10 00:24:15.373593 sshd[2121]: Accepted publickey for core from 10.200.16.10 port 49488 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:15.374850 sshd-session[2121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:15.379038 systemd-logind[1691]: New session 8 of user core.
Jul 10 00:24:15.387048 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 10 00:24:15.715871 sudo[2125]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 10 00:24:15.716082 sudo[2125]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:24:15.721597 sudo[2125]: pam_unix(sudo:session): session closed for user root
Jul 10 00:24:15.725007 sudo[2124]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 10 00:24:15.725194 sudo[2124]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:24:15.731806 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 00:24:15.758667 augenrules[2147]: No rules
Jul 10 00:24:15.759600 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 00:24:15.759771 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 00:24:15.760466 sudo[2124]: pam_unix(sudo:session): session closed for user root
Jul 10 00:24:15.862524 sshd[2123]: Connection closed by 10.200.16.10 port 49488
Jul 10 00:24:15.862963 sshd-session[2121]: pam_unix(sshd:session): session closed for user core
Jul 10 00:24:15.865406 systemd[1]: sshd@5-10.200.8.43:22-10.200.16.10:49488.service: Deactivated successfully.
Jul 10 00:24:15.866732 systemd[1]: session-8.scope: Deactivated successfully.
Jul 10 00:24:15.868142 systemd-logind[1691]: Session 8 logged out. Waiting for processes to exit.
Jul 10 00:24:15.868786 systemd-logind[1691]: Removed session 8.
Jul 10 00:24:15.981025 systemd[1]: Started sshd@6-10.200.8.43:22-10.200.16.10:49494.service - OpenSSH per-connection server daemon (10.200.16.10:49494).
Jul 10 00:24:16.609329 sshd[2156]: Accepted publickey for core from 10.200.16.10 port 49494 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU
Jul 10 00:24:16.610616 sshd-session[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:24:16.614790 systemd-logind[1691]: New session 9 of user core.
Jul 10 00:24:16.622024 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 10 00:24:16.952099 sudo[2159]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 10 00:24:16.952294 sudo[2159]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 10 00:24:17.938600 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 10 00:24:17.947255 (dockerd)[2177]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 10 00:24:18.393990 dockerd[2177]: time="2025-07-10T00:24:18.393504808Z" level=info msg="Starting up"
Jul 10 00:24:18.394673 dockerd[2177]: time="2025-07-10T00:24:18.394648162Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 10 00:24:18.500830 dockerd[2177]: time="2025-07-10T00:24:18.500791007Z" level=info msg="Loading containers: start."
Jul 10 00:24:18.522932 kernel: Initializing XFRM netlink socket
Jul 10 00:24:18.727310 systemd-networkd[1354]: docker0: Link UP
Jul 10 00:24:18.738148 dockerd[2177]: time="2025-07-10T00:24:18.738126865Z" level=info msg="Loading containers: done."
Jul 10 00:24:18.757263 dockerd[2177]: time="2025-07-10T00:24:18.757235376Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 10 00:24:18.757354 dockerd[2177]: time="2025-07-10T00:24:18.757286459Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 10 00:24:18.757380 dockerd[2177]: time="2025-07-10T00:24:18.757353215Z" level=info msg="Initializing buildkit"
Jul 10 00:24:18.802656 dockerd[2177]: time="2025-07-10T00:24:18.802619123Z" level=info msg="Completed buildkit initialization"
Jul 10 00:24:18.807898 dockerd[2177]: time="2025-07-10T00:24:18.807850430Z" level=info msg="Daemon has completed initialization"
Jul 10 00:24:18.808031 dockerd[2177]: time="2025-07-10T00:24:18.807941238Z" level=info msg="API listen on /run/docker.sock"
Jul 10 00:24:18.808129 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 10 00:24:19.362906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 10 00:24:19.364508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:24:19.805754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:24:19.812097 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:24:19.844131 kubelet[2383]: E0710 00:24:19.844104 2383 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:24:19.845522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:24:19.845632 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:24:19.845969 systemd[1]: kubelet.service: Consumed 116ms CPU time, 110.7M memory peak.
Jul 10 00:24:20.063643 containerd[1733]: time="2025-07-10T00:24:20.063387174Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 10 00:24:20.844469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177961109.mount: Deactivated successfully.
Jul 10 00:24:21.426712 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jul 10 00:24:22.004684 containerd[1733]: time="2025-07-10T00:24:22.004643306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:22.006709 containerd[1733]: time="2025-07-10T00:24:22.006676829Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752"
Jul 10 00:24:22.008901 containerd[1733]: time="2025-07-10T00:24:22.008864024Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:22.012295 containerd[1733]: time="2025-07-10T00:24:22.012247137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:22.012932 containerd[1733]: time="2025-07-10T00:24:22.012763637Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.949332034s"
Jul 10 00:24:22.012932 containerd[1733]: time="2025-07-10T00:24:22.012795262Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 10 00:24:22.013295 containerd[1733]: time="2025-07-10T00:24:22.013278095Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 10 00:24:23.539052 containerd[1733]: time="2025-07-10T00:24:23.539009540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:23.541042 containerd[1733]: time="2025-07-10T00:24:23.541010602Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302"
Jul 10 00:24:23.543265 containerd[1733]: time="2025-07-10T00:24:23.543235094Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:23.546506 containerd[1733]: time="2025-07-10T00:24:23.546472647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:23.547012 containerd[1733]: time="2025-07-10T00:24:23.546991079Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.533685495s"
Jul 10 00:24:23.547062 containerd[1733]: time="2025-07-10T00:24:23.547021403Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jul 10 00:24:23.547556 containerd[1733]: time="2025-07-10T00:24:23.547532398Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 10 00:24:24.843422 containerd[1733]: time="2025-07-10T00:24:24.843379672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:24.845503 containerd[1733]: time="2025-07-10T00:24:24.845468545Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679"
Jul 10 00:24:24.849518 containerd[1733]: time="2025-07-10T00:24:24.849379563Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:24.853597 containerd[1733]: time="2025-07-10T00:24:24.853573007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:24.854230 containerd[1733]: time="2025-07-10T00:24:24.854123383Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.30650543s"
Jul 10 00:24:24.854230 containerd[1733]: time="2025-07-10T00:24:24.854151492Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jul 10 00:24:24.854691 containerd[1733]: time="2025-07-10T00:24:24.854677913Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 10 00:24:25.116977 update_engine[1694]: I20250710 00:24:25.116938 1694 update_attempter.cc:509] Updating boot flags...
Jul 10 00:24:25.736234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689739640.mount: Deactivated successfully.
Jul 10 00:24:26.034934 containerd[1733]: time="2025-07-10T00:24:26.034851077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:26.037165 containerd[1733]: time="2025-07-10T00:24:26.037134247Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951"
Jul 10 00:24:26.039481 containerd[1733]: time="2025-07-10T00:24:26.039447739Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:26.044552 containerd[1733]: time="2025-07-10T00:24:26.043892663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:26.044552 containerd[1733]: time="2025-07-10T00:24:26.044450849Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.18973105s"
Jul 10 00:24:26.044552 containerd[1733]: time="2025-07-10T00:24:26.044474550Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\""
Jul 10 00:24:26.044954 containerd[1733]: time="2025-07-10T00:24:26.044935897Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 10 00:24:26.542782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893325570.mount: Deactivated successfully.
Jul 10 00:24:27.340359 containerd[1733]: time="2025-07-10T00:24:27.340318971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:27.350422 containerd[1733]: time="2025-07-10T00:24:27.350396862Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 10 00:24:27.353295 containerd[1733]: time="2025-07-10T00:24:27.353261179Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:27.356753 containerd[1733]: time="2025-07-10T00:24:27.356715539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:27.357703 containerd[1733]: time="2025-07-10T00:24:27.357583500Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.312622828s"
Jul 10 00:24:27.357703 containerd[1733]: time="2025-07-10T00:24:27.357614709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 10 00:24:27.358246 containerd[1733]: time="2025-07-10T00:24:27.358059594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 10 00:24:27.895579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820323304.mount: Deactivated successfully.
Jul 10 00:24:27.910309 containerd[1733]: time="2025-07-10T00:24:27.910278938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:24:27.912573 containerd[1733]: time="2025-07-10T00:24:27.912545583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 10 00:24:27.915315 containerd[1733]: time="2025-07-10T00:24:27.915282762Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:24:27.918386 containerd[1733]: time="2025-07-10T00:24:27.918341881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:24:27.918940 containerd[1733]: time="2025-07-10T00:24:27.918677939Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 560.594582ms"
Jul 10 00:24:27.918940 containerd[1733]: time="2025-07-10T00:24:27.918700943Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 10 00:24:27.919235 containerd[1733]: time="2025-07-10T00:24:27.919202633Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 10 00:24:28.501249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983246708.mount: Deactivated successfully.
Jul 10 00:24:29.863121 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 10 00:24:29.865764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:24:30.302943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:24:30.311150 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 00:24:30.343003 kubelet[2610]: E0710 00:24:30.342961 2610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 00:24:30.344219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 00:24:30.344347 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 00:24:30.344660 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.5M memory peak.
Jul 10 00:24:30.881358 containerd[1733]: time="2025-07-10T00:24:30.881318990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:30.883356 containerd[1733]: time="2025-07-10T00:24:30.883327588Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
Jul 10 00:24:30.885677 containerd[1733]: time="2025-07-10T00:24:30.885622917Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:30.889678 containerd[1733]: time="2025-07-10T00:24:30.889410828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:24:30.889924 containerd[1733]: time="2025-07-10T00:24:30.889891706Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.970653217s"
Jul 10 00:24:30.889957 containerd[1733]: time="2025-07-10T00:24:30.889933376Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 10 00:24:33.090481 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:24:33.090611 systemd[1]: kubelet.service: Consumed 126ms CPU time, 110.5M memory peak.
Jul 10 00:24:33.092531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:24:33.114563 systemd[1]: Reload requested from client PID 2646 ('systemctl') (unit session-9.scope)...
Jul 10 00:24:33.114573 systemd[1]: Reloading...
Jul 10 00:24:33.192945 zram_generator::config[2695]: No configuration found.
Jul 10 00:24:33.266539 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:24:33.355728 systemd[1]: Reloading finished in 240 ms.
Jul 10 00:24:33.415339 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 10 00:24:33.415404 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 10 00:24:33.415611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:24:33.417304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:24:33.940716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:24:33.947167 (kubelet)[2759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:24:33.977460 kubelet[2759]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:24:33.977460 kubelet[2759]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:24:33.977460 kubelet[2759]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:24:33.977688 kubelet[2759]: I0710 00:24:33.977505 2759 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:24:34.131949 kubelet[2759]: I0710 00:24:34.131925 2759 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:24:34.131949 kubelet[2759]: I0710 00:24:34.131946 2759 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:24:34.132130 kubelet[2759]: I0710 00:24:34.132120 2759 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:24:34.155963 kubelet[2759]: E0710 00:24:34.155942 2759 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:24:34.158864 kubelet[2759]: I0710 00:24:34.158737 2759 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:24:34.164879 kubelet[2759]: I0710 00:24:34.164862 2759 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:24:34.168079 kubelet[2759]: I0710 00:24:34.168063 2759 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:24:34.168661 kubelet[2759]: I0710 00:24:34.168642 2759 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:24:34.168766 kubelet[2759]: I0710 00:24:34.168740 2759 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:24:34.168905 kubelet[2759]: I0710 00:24:34.168781 2759 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-4c73015a89","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:24:34.169016 kubelet[2759]: I0710 00:24:34.168929 2759 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:24:34.169016 kubelet[2759]: I0710 00:24:34.168937 2759 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:24:34.169057 kubelet[2759]: I0710 00:24:34.169019 2759 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:24:34.171610 kubelet[2759]: I0710 00:24:34.171442 2759 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:24:34.171610 kubelet[2759]: I0710 00:24:34.171463 2759 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:24:34.171610 kubelet[2759]: I0710 00:24:34.171492 2759 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:24:34.171610 kubelet[2759]: I0710 00:24:34.171506 2759 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:24:34.175648 kubelet[2759]: W0710 00:24:34.175353 2759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-4c73015a89&limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jul 10 00:24:34.175648 kubelet[2759]: E0710 00:24:34.175396 2759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344.1.1-n-4c73015a89&limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:24:34.176357 kubelet[2759]: W0710 00:24:34.176260 2759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: 
connection refused Jul 10 00:24:34.176357 kubelet[2759]: E0710 00:24:34.176303 2759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:24:34.176437 kubelet[2759]: I0710 00:24:34.176367 2759 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:24:34.176706 kubelet[2759]: I0710 00:24:34.176693 2759 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:24:34.177197 kubelet[2759]: W0710 00:24:34.177184 2759 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:24:34.179351 kubelet[2759]: I0710 00:24:34.179316 2759 server.go:1274] "Started kubelet" Jul 10 00:24:34.179943 kubelet[2759]: I0710 00:24:34.179554 2759 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:24:34.179943 kubelet[2759]: I0710 00:24:34.179812 2759 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:24:34.181274 kubelet[2759]: I0710 00:24:34.181251 2759 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:24:34.186424 kubelet[2759]: E0710 00:24:34.184726 2759 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344.1.1-n-4c73015a89.1850bc19d4e7e59c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344.1.1-n-4c73015a89,UID:ci-4344.1.1-n-4c73015a89,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344.1.1-n-4c73015a89,},FirstTimestamp:2025-07-10 00:24:34.17929462 +0000 UTC m=+0.229464471,LastTimestamp:2025-07-10 00:24:34.17929462 +0000 UTC m=+0.229464471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344.1.1-n-4c73015a89,}" Jul 10 00:24:34.186525 kubelet[2759]: I0710 00:24:34.186510 2759 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:24:34.188420 kubelet[2759]: I0710 00:24:34.188264 2759 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:24:34.190459 kubelet[2759]: E0710 00:24:34.189523 2759 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-4c73015a89\" not found" Jul 10 00:24:34.190459 kubelet[2759]: I0710 00:24:34.189562 2759 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:24:34.190459 kubelet[2759]: I0710 00:24:34.189676 2759 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:24:34.190459 kubelet[2759]: I0710 00:24:34.189729 2759 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:24:34.190459 kubelet[2759]: I0710 00:24:34.190460 2759 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:24:34.190679 kubelet[2759]: W0710 00:24:34.190651 2759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jul 10 00:24:34.190736 kubelet[2759]: E0710 00:24:34.190725 2759 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:24:34.190837 kubelet[2759]: E0710 00:24:34.190822 2759 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-4c73015a89?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="200ms" Jul 10 00:24:34.191062 kubelet[2759]: I0710 00:24:34.191054 2759 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:24:34.191148 kubelet[2759]: I0710 00:24:34.191139 2759 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:24:34.192836 kubelet[2759]: E0710 00:24:34.192780 2759 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:24:34.194207 kubelet[2759]: I0710 00:24:34.194195 2759 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:24:34.204112 kubelet[2759]: I0710 00:24:34.202950 2759 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:24:34.205312 kubelet[2759]: I0710 00:24:34.205293 2759 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:24:34.205381 kubelet[2759]: I0710 00:24:34.205323 2759 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:24:34.206119 kubelet[2759]: I0710 00:24:34.205337 2759 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:24:34.206235 kubelet[2759]: E0710 00:24:34.206221 2759 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:24:34.207056 kubelet[2759]: W0710 00:24:34.207023 2759 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jul 10 00:24:34.207123 kubelet[2759]: E0710 00:24:34.207066 2759 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.43:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:24:34.208694 kubelet[2759]: I0710 00:24:34.208675 2759 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:24:34.208765 kubelet[2759]: I0710 00:24:34.208712 2759 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:24:34.208765 kubelet[2759]: I0710 00:24:34.208726 2759 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:24:34.214128 kubelet[2759]: I0710 00:24:34.214116 2759 policy_none.go:49] "None policy: Start" Jul 10 00:24:34.215004 kubelet[2759]: I0710 00:24:34.214647 2759 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:24:34.215065 kubelet[2759]: I0710 00:24:34.215033 2759 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:24:34.222209 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Jul 10 00:24:34.231710 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:24:34.234038 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 00:24:34.244380 kubelet[2759]: I0710 00:24:34.244361 2759 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:24:34.244497 kubelet[2759]: I0710 00:24:34.244485 2759 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:24:34.244524 kubelet[2759]: I0710 00:24:34.244496 2759 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:24:34.244958 kubelet[2759]: I0710 00:24:34.244894 2759 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:24:34.246350 kubelet[2759]: E0710 00:24:34.246285 2759 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344.1.1-n-4c73015a89\" not found" Jul 10 00:24:34.313219 systemd[1]: Created slice kubepods-burstable-podc9077d6344a5ea6c9a832a6ddee87f8e.slice - libcontainer container kubepods-burstable-podc9077d6344a5ea6c9a832a6ddee87f8e.slice. Jul 10 00:24:34.324729 systemd[1]: Created slice kubepods-burstable-podac38f38dff4cbf42e89fab4d16c78fe6.slice - libcontainer container kubepods-burstable-podac38f38dff4cbf42e89fab4d16c78fe6.slice. Jul 10 00:24:34.338672 systemd[1]: Created slice kubepods-burstable-pod6fb0d2ee3b39a3b91ba943d50acddb31.slice - libcontainer container kubepods-burstable-pod6fb0d2ee3b39a3b91ba943d50acddb31.slice. 
Jul 10 00:24:34.346166 kubelet[2759]: I0710 00:24:34.346143 2759 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.346401 kubelet[2759]: E0710 00:24:34.346373 2759 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.391769 kubelet[2759]: E0710 00:24:34.391747 2759 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-4c73015a89?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="400ms" Jul 10 00:24:34.491042 kubelet[2759]: I0710 00:24:34.490906 2759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9077d6344a5ea6c9a832a6ddee87f8e-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4c73015a89\" (UID: \"c9077d6344a5ea6c9a832a6ddee87f8e\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.491042 kubelet[2759]: I0710 00:24:34.490951 2759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac38f38dff4cbf42e89fab4d16c78fe6-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4c73015a89\" (UID: \"ac38f38dff4cbf42e89fab4d16c78fe6\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.491042 kubelet[2759]: I0710 00:24:34.490969 2759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac38f38dff4cbf42e89fab4d16c78fe6-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-4c73015a89\" (UID: \"ac38f38dff4cbf42e89fab4d16c78fe6\") " 
pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.491042 kubelet[2759]: I0710 00:24:34.490996 2759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac38f38dff4cbf42e89fab4d16c78fe6-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-4c73015a89\" (UID: \"ac38f38dff4cbf42e89fab4d16c78fe6\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.491042 kubelet[2759]: I0710 00:24:34.491013 2759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9077d6344a5ea6c9a832a6ddee87f8e-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4c73015a89\" (UID: \"c9077d6344a5ea6c9a832a6ddee87f8e\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.491286 kubelet[2759]: I0710 00:24:34.491031 2759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9077d6344a5ea6c9a832a6ddee87f8e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-4c73015a89\" (UID: \"c9077d6344a5ea6c9a832a6ddee87f8e\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.491286 kubelet[2759]: I0710 00:24:34.491048 2759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac38f38dff4cbf42e89fab4d16c78fe6-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4c73015a89\" (UID: \"ac38f38dff4cbf42e89fab4d16c78fe6\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.491286 kubelet[2759]: I0710 00:24:34.491064 2759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ac38f38dff4cbf42e89fab4d16c78fe6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-4c73015a89\" (UID: \"ac38f38dff4cbf42e89fab4d16c78fe6\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.491286 kubelet[2759]: I0710 00:24:34.491079 2759 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fb0d2ee3b39a3b91ba943d50acddb31-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-4c73015a89\" (UID: \"6fb0d2ee3b39a3b91ba943d50acddb31\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.548287 kubelet[2759]: I0710 00:24:34.548240 2759 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.548578 kubelet[2759]: E0710 00:24:34.548550 2759 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.623588 containerd[1733]: time="2025-07-10T00:24:34.623548486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-4c73015a89,Uid:c9077d6344a5ea6c9a832a6ddee87f8e,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:34.638048 containerd[1733]: time="2025-07-10T00:24:34.638022035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-4c73015a89,Uid:ac38f38dff4cbf42e89fab4d16c78fe6,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:34.641632 containerd[1733]: time="2025-07-10T00:24:34.641610491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-4c73015a89,Uid:6fb0d2ee3b39a3b91ba943d50acddb31,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:34.706377 containerd[1733]: time="2025-07-10T00:24:34.706347672Z" level=info msg="connecting to shim 
a5324f360d92791926b6dc4373088589ba911d1b30c21603a4b8d3e1b63682a0" address="unix:///run/containerd/s/4ce35702813f1d527a3cac6d3a0c7f93c2c13a438278ae8abb90d4af09b28f29" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:34.729231 systemd[1]: Started cri-containerd-a5324f360d92791926b6dc4373088589ba911d1b30c21603a4b8d3e1b63682a0.scope - libcontainer container a5324f360d92791926b6dc4373088589ba911d1b30c21603a4b8d3e1b63682a0. Jul 10 00:24:34.735747 containerd[1733]: time="2025-07-10T00:24:34.735677014Z" level=info msg="connecting to shim 334832848e4ec8c84db6787c6c95de950c34e287c836b819043bac1aa304aa9b" address="unix:///run/containerd/s/88d1b88deb1ce73657670ac35428550cfe97c55c417a4dca96ba16857f3f48ea" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:34.740315 containerd[1733]: time="2025-07-10T00:24:34.740292783Z" level=info msg="connecting to shim 33b31dabd15f57956cd4caf41a3a31fff3a7095f83e5d9c7e295e9a5201c7a7c" address="unix:///run/containerd/s/8bb464aac30a49d6154ef52ff48003561c2b537fb4080d4bb07598bdbc90e315" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:34.769138 systemd[1]: Started cri-containerd-334832848e4ec8c84db6787c6c95de950c34e287c836b819043bac1aa304aa9b.scope - libcontainer container 334832848e4ec8c84db6787c6c95de950c34e287c836b819043bac1aa304aa9b. Jul 10 00:24:34.771001 systemd[1]: Started cri-containerd-33b31dabd15f57956cd4caf41a3a31fff3a7095f83e5d9c7e295e9a5201c7a7c.scope - libcontainer container 33b31dabd15f57956cd4caf41a3a31fff3a7095f83e5d9c7e295e9a5201c7a7c. 
Jul 10 00:24:34.792557 kubelet[2759]: E0710 00:24:34.792502 2759 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344.1.1-n-4c73015a89?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="800ms" Jul 10 00:24:34.802440 containerd[1733]: time="2025-07-10T00:24:34.802415301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344.1.1-n-4c73015a89,Uid:c9077d6344a5ea6c9a832a6ddee87f8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5324f360d92791926b6dc4373088589ba911d1b30c21603a4b8d3e1b63682a0\"" Jul 10 00:24:34.805832 containerd[1733]: time="2025-07-10T00:24:34.805808214Z" level=info msg="CreateContainer within sandbox \"a5324f360d92791926b6dc4373088589ba911d1b30c21603a4b8d3e1b63682a0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:24:34.824093 containerd[1733]: time="2025-07-10T00:24:34.823901045Z" level=info msg="Container d9eb6f80c351f47d142f91defb19b34b000ea6151e6ef81606c8d872c33c15e4: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:34.836661 containerd[1733]: time="2025-07-10T00:24:34.836644434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344.1.1-n-4c73015a89,Uid:6fb0d2ee3b39a3b91ba943d50acddb31,Namespace:kube-system,Attempt:0,} returns sandbox id \"33b31dabd15f57956cd4caf41a3a31fff3a7095f83e5d9c7e295e9a5201c7a7c\"" Jul 10 00:24:34.838030 containerd[1733]: time="2025-07-10T00:24:34.838013501Z" level=info msg="CreateContainer within sandbox \"33b31dabd15f57956cd4caf41a3a31fff3a7095f83e5d9c7e295e9a5201c7a7c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:24:34.839011 containerd[1733]: time="2025-07-10T00:24:34.838982731Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4344.1.1-n-4c73015a89,Uid:ac38f38dff4cbf42e89fab4d16c78fe6,Namespace:kube-system,Attempt:0,} returns sandbox id \"334832848e4ec8c84db6787c6c95de950c34e287c836b819043bac1aa304aa9b\"" Jul 10 00:24:34.840471 containerd[1733]: time="2025-07-10T00:24:34.840418050Z" level=info msg="CreateContainer within sandbox \"334832848e4ec8c84db6787c6c95de950c34e287c836b819043bac1aa304aa9b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:24:34.850934 containerd[1733]: time="2025-07-10T00:24:34.849599751Z" level=info msg="CreateContainer within sandbox \"a5324f360d92791926b6dc4373088589ba911d1b30c21603a4b8d3e1b63682a0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d9eb6f80c351f47d142f91defb19b34b000ea6151e6ef81606c8d872c33c15e4\"" Jul 10 00:24:34.850934 containerd[1733]: time="2025-07-10T00:24:34.850387207Z" level=info msg="StartContainer for \"d9eb6f80c351f47d142f91defb19b34b000ea6151e6ef81606c8d872c33c15e4\"" Jul 10 00:24:34.851815 containerd[1733]: time="2025-07-10T00:24:34.851791146Z" level=info msg="connecting to shim d9eb6f80c351f47d142f91defb19b34b000ea6151e6ef81606c8d872c33c15e4" address="unix:///run/containerd/s/4ce35702813f1d527a3cac6d3a0c7f93c2c13a438278ae8abb90d4af09b28f29" protocol=ttrpc version=3 Jul 10 00:24:34.863447 containerd[1733]: time="2025-07-10T00:24:34.863429492Z" level=info msg="Container 612050aa4cf6b7fbd33a04b8ccbeb8aa255c235dab7f5430cd039bc043c3c534: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:34.866049 systemd[1]: Started cri-containerd-d9eb6f80c351f47d142f91defb19b34b000ea6151e6ef81606c8d872c33c15e4.scope - libcontainer container d9eb6f80c351f47d142f91defb19b34b000ea6151e6ef81606c8d872c33c15e4. 
Jul 10 00:24:34.869877 containerd[1733]: time="2025-07-10T00:24:34.869766216Z" level=info msg="Container 71283fe783848de36fe28edd813812e2406b5bbadc471159761606204c6dfa5c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:34.885983 containerd[1733]: time="2025-07-10T00:24:34.885957212Z" level=info msg="CreateContainer within sandbox \"334832848e4ec8c84db6787c6c95de950c34e287c836b819043bac1aa304aa9b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"612050aa4cf6b7fbd33a04b8ccbeb8aa255c235dab7f5430cd039bc043c3c534\"" Jul 10 00:24:34.886322 containerd[1733]: time="2025-07-10T00:24:34.886306640Z" level=info msg="StartContainer for \"612050aa4cf6b7fbd33a04b8ccbeb8aa255c235dab7f5430cd039bc043c3c534\"" Jul 10 00:24:34.887175 containerd[1733]: time="2025-07-10T00:24:34.887099867Z" level=info msg="connecting to shim 612050aa4cf6b7fbd33a04b8ccbeb8aa255c235dab7f5430cd039bc043c3c534" address="unix:///run/containerd/s/88d1b88deb1ce73657670ac35428550cfe97c55c417a4dca96ba16857f3f48ea" protocol=ttrpc version=3 Jul 10 00:24:34.891574 containerd[1733]: time="2025-07-10T00:24:34.891444563Z" level=info msg="CreateContainer within sandbox \"33b31dabd15f57956cd4caf41a3a31fff3a7095f83e5d9c7e295e9a5201c7a7c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"71283fe783848de36fe28edd813812e2406b5bbadc471159761606204c6dfa5c\"" Jul 10 00:24:34.892537 containerd[1733]: time="2025-07-10T00:24:34.892472764Z" level=info msg="StartContainer for \"71283fe783848de36fe28edd813812e2406b5bbadc471159761606204c6dfa5c\"" Jul 10 00:24:34.893716 containerd[1733]: time="2025-07-10T00:24:34.893622153Z" level=info msg="connecting to shim 71283fe783848de36fe28edd813812e2406b5bbadc471159761606204c6dfa5c" address="unix:///run/containerd/s/8bb464aac30a49d6154ef52ff48003561c2b537fb4080d4bb07598bdbc90e315" protocol=ttrpc version=3 Jul 10 00:24:34.914281 systemd[1]: Started 
cri-containerd-612050aa4cf6b7fbd33a04b8ccbeb8aa255c235dab7f5430cd039bc043c3c534.scope - libcontainer container 612050aa4cf6b7fbd33a04b8ccbeb8aa255c235dab7f5430cd039bc043c3c534. Jul 10 00:24:34.916084 containerd[1733]: time="2025-07-10T00:24:34.916036050Z" level=info msg="StartContainer for \"d9eb6f80c351f47d142f91defb19b34b000ea6151e6ef81606c8d872c33c15e4\" returns successfully" Jul 10 00:24:34.918131 systemd[1]: Started cri-containerd-71283fe783848de36fe28edd813812e2406b5bbadc471159761606204c6dfa5c.scope - libcontainer container 71283fe783848de36fe28edd813812e2406b5bbadc471159761606204c6dfa5c. Jul 10 00:24:34.950426 kubelet[2759]: I0710 00:24:34.950344 2759 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.950947 kubelet[2759]: E0710 00:24:34.950901 2759 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:34.973141 containerd[1733]: time="2025-07-10T00:24:34.973084780Z" level=info msg="StartContainer for \"612050aa4cf6b7fbd33a04b8ccbeb8aa255c235dab7f5430cd039bc043c3c534\" returns successfully" Jul 10 00:24:35.055031 containerd[1733]: time="2025-07-10T00:24:35.054971825Z" level=info msg="StartContainer for \"71283fe783848de36fe28edd813812e2406b5bbadc471159761606204c6dfa5c\" returns successfully" Jul 10 00:24:35.752616 kubelet[2759]: I0710 00:24:35.752597 2759 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:36.301140 kubelet[2759]: E0710 00:24:36.301102 2759 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4344.1.1-n-4c73015a89\" not found" node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:36.382042 kubelet[2759]: I0710 00:24:36.381864 2759 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-n-4c73015a89" Jul 10 
00:24:37.177835 kubelet[2759]: I0710 00:24:37.177803 2759 apiserver.go:52] "Watching apiserver" Jul 10 00:24:37.190256 kubelet[2759]: I0710 00:24:37.190228 2759 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:24:37.563152 kubelet[2759]: W0710 00:24:37.563017 2759 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:24:38.384981 systemd[1]: Reload requested from client PID 3026 ('systemctl') (unit session-9.scope)... Jul 10 00:24:38.384993 systemd[1]: Reloading... Jul 10 00:24:38.444939 zram_generator::config[3072]: No configuration found. Jul 10 00:24:38.525099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:24:38.617682 systemd[1]: Reloading finished in 232 ms. Jul 10 00:24:38.649097 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:24:38.660582 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:24:38.660785 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:38.660832 systemd[1]: kubelet.service: Consumed 487ms CPU time, 129.2M memory peak. Jul 10 00:24:38.662217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:24:39.162599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:24:39.171169 (kubelet)[3139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:24:39.206868 kubelet[3139]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:24:39.207132 kubelet[3139]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:24:39.207182 kubelet[3139]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:24:39.207348 kubelet[3139]: I0710 00:24:39.207327 3139 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:24:39.213648 kubelet[3139]: I0710 00:24:39.213625 3139 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:24:39.213648 kubelet[3139]: I0710 00:24:39.213648 3139 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:24:39.213936 kubelet[3139]: I0710 00:24:39.213904 3139 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:24:39.215098 kubelet[3139]: I0710 00:24:39.215080 3139 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:24:39.216423 kubelet[3139]: I0710 00:24:39.216304 3139 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:24:39.220412 kubelet[3139]: I0710 00:24:39.220396 3139 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 00:24:39.222164 kubelet[3139]: I0710 00:24:39.222149 3139 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:24:39.222248 kubelet[3139]: I0710 00:24:39.222218 3139 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:24:39.222325 kubelet[3139]: I0710 00:24:39.222307 3139 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:24:39.222452 kubelet[3139]: I0710 00:24:39.222324 3139 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344.1.1-n-4c73015a89","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:24:39.222568 kubelet[3139]: I0710 00:24:39.222458 3139 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:24:39.222568 kubelet[3139]: I0710 00:24:39.222466 3139 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:24:39.222568 kubelet[3139]: I0710 00:24:39.222486 3139 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:24:39.222789 kubelet[3139]: I0710 00:24:39.222779 3139 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:24:39.222825 kubelet[3139]: I0710 00:24:39.222793 3139 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:24:39.222825 kubelet[3139]: I0710 00:24:39.222818 3139 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:24:39.222825 kubelet[3139]: I0710 00:24:39.222828 3139 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:24:39.226384 kubelet[3139]: I0710 00:24:39.226207 3139 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 00:24:39.227619 kubelet[3139]: I0710 00:24:39.227530 3139 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:24:39.233017 kubelet[3139]: I0710 00:24:39.230571 3139 server.go:1274] "Started kubelet" Jul 10 00:24:39.235530 kubelet[3139]: I0710 00:24:39.231706 3139 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:24:39.237242 kubelet[3139]: I0710 00:24:39.237219 3139 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:24:39.238716 kubelet[3139]: I0710 00:24:39.231848 3139 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:24:39.239032 kubelet[3139]: I0710 00:24:39.239018 3139 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 
00:24:39.239707 kubelet[3139]: I0710 00:24:39.239692 3139 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:24:39.245678 kubelet[3139]: I0710 00:24:39.245550 3139 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:24:39.246932 kubelet[3139]: I0710 00:24:39.246752 3139 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:24:39.246932 kubelet[3139]: E0710 00:24:39.246923 3139 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4344.1.1-n-4c73015a89\" not found" Jul 10 00:24:39.248516 kubelet[3139]: I0710 00:24:39.248479 3139 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:24:39.248594 kubelet[3139]: I0710 00:24:39.248576 3139 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:24:39.250963 kubelet[3139]: I0710 00:24:39.250260 3139 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:24:39.251964 kubelet[3139]: I0710 00:24:39.251675 3139 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:24:39.252771 kubelet[3139]: I0710 00:24:39.252100 3139 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:24:39.253521 kubelet[3139]: I0710 00:24:39.251676 3139 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:24:39.253614 kubelet[3139]: I0710 00:24:39.253607 3139 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:24:39.253668 kubelet[3139]: I0710 00:24:39.253663 3139 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:24:39.253852 kubelet[3139]: E0710 00:24:39.253839 3139 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:24:39.258875 kubelet[3139]: E0710 00:24:39.258858 3139 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:24:39.260545 kubelet[3139]: I0710 00:24:39.260528 3139 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:24:39.294007 kubelet[3139]: I0710 00:24:39.293996 3139 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:24:39.294072 kubelet[3139]: I0710 00:24:39.294030 3139 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:24:39.294072 kubelet[3139]: I0710 00:24:39.294044 3139 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:24:39.294644 kubelet[3139]: I0710 00:24:39.294148 3139 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:24:39.294644 kubelet[3139]: I0710 00:24:39.294156 3139 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:24:39.294644 kubelet[3139]: I0710 00:24:39.294172 3139 policy_none.go:49] "None policy: Start" Jul 10 00:24:39.294771 kubelet[3139]: I0710 00:24:39.294762 3139 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:24:39.294833 kubelet[3139]: I0710 00:24:39.294819 3139 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:24:39.294951 kubelet[3139]: I0710 00:24:39.294940 3139 state_mem.go:75] "Updated machine memory state" Jul 10 00:24:39.297420 kubelet[3139]: I0710 00:24:39.297404 3139 
manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:24:39.297775 kubelet[3139]: I0710 00:24:39.297509 3139 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:24:39.297775 kubelet[3139]: I0710 00:24:39.297517 3139 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:24:39.297775 kubelet[3139]: I0710 00:24:39.297630 3139 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:24:39.361172 kubelet[3139]: W0710 00:24:39.361159 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:24:39.364185 kubelet[3139]: W0710 00:24:39.364162 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:24:39.364623 kubelet[3139]: W0710 00:24:39.364607 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:24:39.364683 kubelet[3139]: E0710 00:24:39.364643 3139 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-4c73015a89\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.395795 sudo[3173]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 00:24:39.396023 sudo[3173]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 00:24:39.399435 kubelet[3139]: I0710 00:24:39.399420 3139 kubelet_node_status.go:72] "Attempting to register node" node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.408320 kubelet[3139]: I0710 00:24:39.408301 3139 kubelet_node_status.go:111] "Node was previously registered" 
node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.408407 kubelet[3139]: I0710 00:24:39.408350 3139 kubelet_node_status.go:75] "Successfully registered node" node="ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.450372 kubelet[3139]: I0710 00:24:39.449672 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9077d6344a5ea6c9a832a6ddee87f8e-ca-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4c73015a89\" (UID: \"c9077d6344a5ea6c9a832a6ddee87f8e\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.450372 kubelet[3139]: I0710 00:24:39.450194 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9077d6344a5ea6c9a832a6ddee87f8e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344.1.1-n-4c73015a89\" (UID: \"c9077d6344a5ea6c9a832a6ddee87f8e\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.450372 kubelet[3139]: I0710 00:24:39.450218 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac38f38dff4cbf42e89fab4d16c78fe6-kubeconfig\") pod \"kube-controller-manager-ci-4344.1.1-n-4c73015a89\" (UID: \"ac38f38dff4cbf42e89fab4d16c78fe6\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.450372 kubelet[3139]: I0710 00:24:39.450239 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac38f38dff4cbf42e89fab4d16c78fe6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344.1.1-n-4c73015a89\" (UID: \"ac38f38dff4cbf42e89fab4d16c78fe6\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.450372 kubelet[3139]: I0710 00:24:39.450254 3139 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fb0d2ee3b39a3b91ba943d50acddb31-kubeconfig\") pod \"kube-scheduler-ci-4344.1.1-n-4c73015a89\" (UID: \"6fb0d2ee3b39a3b91ba943d50acddb31\") " pod="kube-system/kube-scheduler-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.450525 kubelet[3139]: I0710 00:24:39.450269 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9077d6344a5ea6c9a832a6ddee87f8e-k8s-certs\") pod \"kube-apiserver-ci-4344.1.1-n-4c73015a89\" (UID: \"c9077d6344a5ea6c9a832a6ddee87f8e\") " pod="kube-system/kube-apiserver-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.450525 kubelet[3139]: I0710 00:24:39.450284 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac38f38dff4cbf42e89fab4d16c78fe6-ca-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4c73015a89\" (UID: \"ac38f38dff4cbf42e89fab4d16c78fe6\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.450525 kubelet[3139]: I0710 00:24:39.450297 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac38f38dff4cbf42e89fab4d16c78fe6-flexvolume-dir\") pod \"kube-controller-manager-ci-4344.1.1-n-4c73015a89\" (UID: \"ac38f38dff4cbf42e89fab4d16c78fe6\") " pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.450525 kubelet[3139]: I0710 00:24:39.450310 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac38f38dff4cbf42e89fab4d16c78fe6-k8s-certs\") pod \"kube-controller-manager-ci-4344.1.1-n-4c73015a89\" (UID: \"ac38f38dff4cbf42e89fab4d16c78fe6\") " 
pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:39.845881 sudo[3173]: pam_unix(sudo:session): session closed for user root Jul 10 00:24:40.224456 kubelet[3139]: I0710 00:24:40.224436 3139 apiserver.go:52] "Watching apiserver" Jul 10 00:24:40.249575 kubelet[3139]: I0710 00:24:40.249557 3139 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:24:40.288532 kubelet[3139]: W0710 00:24:40.288318 3139 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 10 00:24:40.288532 kubelet[3139]: E0710 00:24:40.288366 3139 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4344.1.1-n-4c73015a89\" already exists" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4c73015a89" Jul 10 00:24:40.294951 kubelet[3139]: I0710 00:24:40.294552 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344.1.1-n-4c73015a89" podStartSLOduration=1.294516986 podStartE2EDuration="1.294516986s" podCreationTimestamp="2025-07-10 00:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:24:40.294330363 +0000 UTC m=+1.119807254" watchObservedRunningTime="2025-07-10 00:24:40.294516986 +0000 UTC m=+1.119993870" Jul 10 00:24:40.310135 kubelet[3139]: I0710 00:24:40.310101 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344.1.1-n-4c73015a89" podStartSLOduration=3.31008765 podStartE2EDuration="3.31008765s" podCreationTimestamp="2025-07-10 00:24:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:24:40.302560799 +0000 UTC m=+1.128037691" watchObservedRunningTime="2025-07-10 
00:24:40.31008765 +0000 UTC m=+1.135564540" Jul 10 00:24:40.982532 sudo[2159]: pam_unix(sudo:session): session closed for user root Jul 10 00:24:41.081845 sshd[2158]: Connection closed by 10.200.16.10 port 49494 Jul 10 00:24:41.082304 sshd-session[2156]: pam_unix(sshd:session): session closed for user core Jul 10 00:24:41.085220 systemd[1]: sshd@6-10.200.8.43:22-10.200.16.10:49494.service: Deactivated successfully. Jul 10 00:24:41.086803 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:24:41.087007 systemd[1]: session-9.scope: Consumed 2.730s CPU time, 267.4M memory peak. Jul 10 00:24:41.088100 systemd-logind[1691]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:24:41.089161 systemd-logind[1691]: Removed session 9. Jul 10 00:24:43.599392 kubelet[3139]: I0710 00:24:43.599358 3139 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:24:43.599893 kubelet[3139]: I0710 00:24:43.599747 3139 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:24:43.599937 containerd[1733]: time="2025-07-10T00:24:43.599609848Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 10 00:24:44.198005 kubelet[3139]: I0710 00:24:44.197253 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344.1.1-n-4c73015a89" podStartSLOduration=5.197235941 podStartE2EDuration="5.197235941s" podCreationTimestamp="2025-07-10 00:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:24:40.310292038 +0000 UTC m=+1.135768930" watchObservedRunningTime="2025-07-10 00:24:44.197235941 +0000 UTC m=+5.022712837" Jul 10 00:24:44.213394 systemd[1]: Created slice kubepods-besteffort-pod197d3a7a_d657_48df_946d_29b679673f89.slice - libcontainer container kubepods-besteffort-pod197d3a7a_d657_48df_946d_29b679673f89.slice. Jul 10 00:24:44.225647 systemd[1]: Created slice kubepods-burstable-pod9d33188f_d77b_4276_a0ea_7ae32153e715.slice - libcontainer container kubepods-burstable-pod9d33188f_d77b_4276_a0ea_7ae32153e715.slice. Jul 10 00:24:44.279622 kubelet[3139]: I0710 00:24:44.279598 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-bpf-maps\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279718 kubelet[3139]: I0710 00:24:44.279633 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-cgroup\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279718 kubelet[3139]: I0710 00:24:44.279652 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w2xr\" (UniqueName: 
\"kubernetes.io/projected/197d3a7a-d657-48df-946d-29b679673f89-kube-api-access-6w2xr\") pod \"kube-proxy-j7mr9\" (UID: \"197d3a7a-d657-48df-946d-29b679673f89\") " pod="kube-system/kube-proxy-j7mr9" Jul 10 00:24:44.279718 kubelet[3139]: I0710 00:24:44.279668 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phqt9\" (UniqueName: \"kubernetes.io/projected/9d33188f-d77b-4276-a0ea-7ae32153e715-kube-api-access-phqt9\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279718 kubelet[3139]: I0710 00:24:44.279682 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-lib-modules\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279718 kubelet[3139]: I0710 00:24:44.279697 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-config-path\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279817 kubelet[3139]: I0710 00:24:44.279710 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-host-proc-sys-kernel\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279817 kubelet[3139]: I0710 00:24:44.279723 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/197d3a7a-d657-48df-946d-29b679673f89-lib-modules\") pod 
\"kube-proxy-j7mr9\" (UID: \"197d3a7a-d657-48df-946d-29b679673f89\") " pod="kube-system/kube-proxy-j7mr9" Jul 10 00:24:44.279817 kubelet[3139]: I0710 00:24:44.279738 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-xtables-lock\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279817 kubelet[3139]: I0710 00:24:44.279753 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cni-path\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279817 kubelet[3139]: I0710 00:24:44.279767 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-etc-cni-netd\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279817 kubelet[3139]: I0710 00:24:44.279781 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d33188f-d77b-4276-a0ea-7ae32153e715-clustermesh-secrets\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279952 kubelet[3139]: I0710 00:24:44.279796 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/197d3a7a-d657-48df-946d-29b679673f89-xtables-lock\") pod \"kube-proxy-j7mr9\" (UID: \"197d3a7a-d657-48df-946d-29b679673f89\") " pod="kube-system/kube-proxy-j7mr9" Jul 10 00:24:44.279952 
kubelet[3139]: I0710 00:24:44.279810 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-hostproc\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279952 kubelet[3139]: I0710 00:24:44.279824 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-host-proc-sys-net\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279952 kubelet[3139]: I0710 00:24:44.279838 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-run\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279952 kubelet[3139]: I0710 00:24:44.279852 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d33188f-d77b-4276-a0ea-7ae32153e715-hubble-tls\") pod \"cilium-dnlk2\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " pod="kube-system/cilium-dnlk2" Jul 10 00:24:44.279952 kubelet[3139]: I0710 00:24:44.279869 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/197d3a7a-d657-48df-946d-29b679673f89-kube-proxy\") pod \"kube-proxy-j7mr9\" (UID: \"197d3a7a-d657-48df-946d-29b679673f89\") " pod="kube-system/kube-proxy-j7mr9" Jul 10 00:24:44.391548 kubelet[3139]: E0710 00:24:44.390993 3139 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not 
found Jul 10 00:24:44.391548 kubelet[3139]: E0710 00:24:44.391018 3139 projected.go:194] Error preparing data for projected volume kube-api-access-phqt9 for pod kube-system/cilium-dnlk2: configmap "kube-root-ca.crt" not found Jul 10 00:24:44.391548 kubelet[3139]: E0710 00:24:44.391064 3139 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d33188f-d77b-4276-a0ea-7ae32153e715-kube-api-access-phqt9 podName:9d33188f-d77b-4276-a0ea-7ae32153e715 nodeName:}" failed. No retries permitted until 2025-07-10 00:24:44.891046625 +0000 UTC m=+5.716523518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-phqt9" (UniqueName: "kubernetes.io/projected/9d33188f-d77b-4276-a0ea-7ae32153e715-kube-api-access-phqt9") pod "cilium-dnlk2" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715") : configmap "kube-root-ca.crt" not found Jul 10 00:24:44.393885 kubelet[3139]: E0710 00:24:44.393748 3139 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 10 00:24:44.394188 kubelet[3139]: E0710 00:24:44.394140 3139 projected.go:194] Error preparing data for projected volume kube-api-access-6w2xr for pod kube-system/kube-proxy-j7mr9: configmap "kube-root-ca.crt" not found Jul 10 00:24:44.394832 kubelet[3139]: E0710 00:24:44.394635 3139 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/197d3a7a-d657-48df-946d-29b679673f89-kube-api-access-6w2xr podName:197d3a7a-d657-48df-946d-29b679673f89 nodeName:}" failed. No retries permitted until 2025-07-10 00:24:44.894617957 +0000 UTC m=+5.720094841 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6w2xr" (UniqueName: "kubernetes.io/projected/197d3a7a-d657-48df-946d-29b679673f89-kube-api-access-6w2xr") pod "kube-proxy-j7mr9" (UID: "197d3a7a-d657-48df-946d-29b679673f89") : configmap "kube-root-ca.crt" not found Jul 10 00:24:44.760908 systemd[1]: Created slice kubepods-besteffort-pod88e7da1d_2c95_4ed6_9099_018072d2f0b9.slice - libcontainer container kubepods-besteffort-pod88e7da1d_2c95_4ed6_9099_018072d2f0b9.slice. Jul 10 00:24:44.783360 kubelet[3139]: I0710 00:24:44.783327 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88e7da1d-2c95-4ed6-9099-018072d2f0b9-cilium-config-path\") pod \"cilium-operator-5d85765b45-dc2dz\" (UID: \"88e7da1d-2c95-4ed6-9099-018072d2f0b9\") " pod="kube-system/cilium-operator-5d85765b45-dc2dz" Jul 10 00:24:44.783696 kubelet[3139]: I0710 00:24:44.783368 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9gjk\" (UniqueName: \"kubernetes.io/projected/88e7da1d-2c95-4ed6-9099-018072d2f0b9-kube-api-access-d9gjk\") pod \"cilium-operator-5d85765b45-dc2dz\" (UID: \"88e7da1d-2c95-4ed6-9099-018072d2f0b9\") " pod="kube-system/cilium-operator-5d85765b45-dc2dz" Jul 10 00:24:45.065537 containerd[1733]: time="2025-07-10T00:24:45.065431196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dc2dz,Uid:88e7da1d-2c95-4ed6-9099-018072d2f0b9,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:45.111951 containerd[1733]: time="2025-07-10T00:24:45.111895807Z" level=info msg="connecting to shim d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f" address="unix:///run/containerd/s/7287c3aff8e138daf24f47a2d87d419b39984fc86ee626ad506060c30d95259f" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:45.121335 containerd[1733]: time="2025-07-10T00:24:45.121117760Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j7mr9,Uid:197d3a7a-d657-48df-946d-29b679673f89,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:45.129190 containerd[1733]: time="2025-07-10T00:24:45.129142032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dnlk2,Uid:9d33188f-d77b-4276-a0ea-7ae32153e715,Namespace:kube-system,Attempt:0,}" Jul 10 00:24:45.130080 systemd[1]: Started cri-containerd-d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f.scope - libcontainer container d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f. Jul 10 00:24:45.166988 containerd[1733]: time="2025-07-10T00:24:45.166871489Z" level=info msg="connecting to shim d1414c57a9ad551a03839176cbdb635483badc020a6d6048ffce814626122fac" address="unix:///run/containerd/s/8ef5e3abc6cf970fd0b58fc920a574c34b7afb6e00bdef5159e58c990932142c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:45.168260 containerd[1733]: time="2025-07-10T00:24:45.168224956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-dc2dz,Uid:88e7da1d-2c95-4ed6-9099-018072d2f0b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f\"" Jul 10 00:24:45.170500 containerd[1733]: time="2025-07-10T00:24:45.170464652Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:24:45.183047 containerd[1733]: time="2025-07-10T00:24:45.182950758Z" level=info msg="connecting to shim e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8" address="unix:///run/containerd/s/be9baf963de8ed0a7be1bb293fddf58b5fb1e64b569ad7f99033b30d254b4bc6" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:24:45.188111 systemd[1]: Started cri-containerd-d1414c57a9ad551a03839176cbdb635483badc020a6d6048ffce814626122fac.scope - libcontainer container 
d1414c57a9ad551a03839176cbdb635483badc020a6d6048ffce814626122fac. Jul 10 00:24:45.204017 systemd[1]: Started cri-containerd-e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8.scope - libcontainer container e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8. Jul 10 00:24:45.232810 containerd[1733]: time="2025-07-10T00:24:45.232793214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j7mr9,Uid:197d3a7a-d657-48df-946d-29b679673f89,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1414c57a9ad551a03839176cbdb635483badc020a6d6048ffce814626122fac\"" Jul 10 00:24:45.234787 containerd[1733]: time="2025-07-10T00:24:45.234746751Z" level=info msg="CreateContainer within sandbox \"d1414c57a9ad551a03839176cbdb635483badc020a6d6048ffce814626122fac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:24:45.238305 containerd[1733]: time="2025-07-10T00:24:45.238254760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dnlk2,Uid:9d33188f-d77b-4276-a0ea-7ae32153e715,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\"" Jul 10 00:24:45.249170 containerd[1733]: time="2025-07-10T00:24:45.249148496Z" level=info msg="Container 7bac3f7659533de9b97df9c4dcc8e46ca261dae711a6816848ce91a5d6790cb1: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:45.262043 containerd[1733]: time="2025-07-10T00:24:45.262021960Z" level=info msg="CreateContainer within sandbox \"d1414c57a9ad551a03839176cbdb635483badc020a6d6048ffce814626122fac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7bac3f7659533de9b97df9c4dcc8e46ca261dae711a6816848ce91a5d6790cb1\"" Jul 10 00:24:45.262368 containerd[1733]: time="2025-07-10T00:24:45.262353021Z" level=info msg="StartContainer for \"7bac3f7659533de9b97df9c4dcc8e46ca261dae711a6816848ce91a5d6790cb1\"" Jul 10 00:24:45.263568 containerd[1733]: time="2025-07-10T00:24:45.263542647Z" 
level=info msg="connecting to shim 7bac3f7659533de9b97df9c4dcc8e46ca261dae711a6816848ce91a5d6790cb1" address="unix:///run/containerd/s/8ef5e3abc6cf970fd0b58fc920a574c34b7afb6e00bdef5159e58c990932142c" protocol=ttrpc version=3 Jul 10 00:24:45.276029 systemd[1]: Started cri-containerd-7bac3f7659533de9b97df9c4dcc8e46ca261dae711a6816848ce91a5d6790cb1.scope - libcontainer container 7bac3f7659533de9b97df9c4dcc8e46ca261dae711a6816848ce91a5d6790cb1. Jul 10 00:24:45.310343 containerd[1733]: time="2025-07-10T00:24:45.310310524Z" level=info msg="StartContainer for \"7bac3f7659533de9b97df9c4dcc8e46ca261dae711a6816848ce91a5d6790cb1\" returns successfully" Jul 10 00:24:46.299279 kubelet[3139]: I0710 00:24:46.299207 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j7mr9" podStartSLOduration=2.29919141 podStartE2EDuration="2.29919141s" podCreationTimestamp="2025-07-10 00:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:24:46.299059913 +0000 UTC m=+7.124536805" watchObservedRunningTime="2025-07-10 00:24:46.29919141 +0000 UTC m=+7.124668305" Jul 10 00:24:47.275175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856475147.mount: Deactivated successfully. 
Jul 10 00:24:47.753383 containerd[1733]: time="2025-07-10T00:24:47.753344926Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:47.755553 containerd[1733]: time="2025-07-10T00:24:47.755514383Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 10 00:24:47.757795 containerd[1733]: time="2025-07-10T00:24:47.757759190Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:47.759713 containerd[1733]: time="2025-07-10T00:24:47.759464480Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.588962561s" Jul 10 00:24:47.759713 containerd[1733]: time="2025-07-10T00:24:47.759497297Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 10 00:24:47.763225 containerd[1733]: time="2025-07-10T00:24:47.763202030Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:24:47.765118 containerd[1733]: time="2025-07-10T00:24:47.765090383Z" level=info msg="CreateContainer within sandbox 
\"d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:24:47.793039 containerd[1733]: time="2025-07-10T00:24:47.789999965Z" level=info msg="Container f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:47.804926 containerd[1733]: time="2025-07-10T00:24:47.804213084Z" level=info msg="CreateContainer within sandbox \"d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\"" Jul 10 00:24:47.805469 containerd[1733]: time="2025-07-10T00:24:47.805441497Z" level=info msg="StartContainer for \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\"" Jul 10 00:24:47.807051 containerd[1733]: time="2025-07-10T00:24:47.807001009Z" level=info msg="connecting to shim f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d" address="unix:///run/containerd/s/7287c3aff8e138daf24f47a2d87d419b39984fc86ee626ad506060c30d95259f" protocol=ttrpc version=3 Jul 10 00:24:47.831195 systemd[1]: Started cri-containerd-f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d.scope - libcontainer container f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d. 
Jul 10 00:24:47.852978 containerd[1733]: time="2025-07-10T00:24:47.852957757Z" level=info msg="StartContainer for \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" returns successfully" Jul 10 00:24:48.373937 kubelet[3139]: I0710 00:24:48.372085 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-dc2dz" podStartSLOduration=1.781289615 podStartE2EDuration="4.372064168s" podCreationTimestamp="2025-07-10 00:24:44 +0000 UTC" firstStartedPulling="2025-07-10 00:24:45.16982952 +0000 UTC m=+5.995306419" lastFinishedPulling="2025-07-10 00:24:47.760604082 +0000 UTC m=+8.586080972" observedRunningTime="2025-07-10 00:24:48.371499512 +0000 UTC m=+9.196976406" watchObservedRunningTime="2025-07-10 00:24:48.372064168 +0000 UTC m=+9.197541059" Jul 10 00:24:56.084497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount647870164.mount: Deactivated successfully. Jul 10 00:24:57.565820 containerd[1733]: time="2025-07-10T00:24:57.565782781Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:57.567876 containerd[1733]: time="2025-07-10T00:24:57.567842710Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 10 00:24:57.570292 containerd[1733]: time="2025-07-10T00:24:57.570257189Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:24:57.571181 containerd[1733]: time="2025-07-10T00:24:57.571104261Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.807778301s" Jul 10 00:24:57.571181 containerd[1733]: time="2025-07-10T00:24:57.571132202Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 10 00:24:57.572756 containerd[1733]: time="2025-07-10T00:24:57.572682782Z" level=info msg="CreateContainer within sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:24:57.585156 containerd[1733]: time="2025-07-10T00:24:57.585131035Z" level=info msg="Container cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:24:57.600993 containerd[1733]: time="2025-07-10T00:24:57.600970902Z" level=info msg="CreateContainer within sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\"" Jul 10 00:24:57.601313 containerd[1733]: time="2025-07-10T00:24:57.601278931Z" level=info msg="StartContainer for \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\"" Jul 10 00:24:57.602165 containerd[1733]: time="2025-07-10T00:24:57.602131954Z" level=info msg="connecting to shim cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e" address="unix:///run/containerd/s/be9baf963de8ed0a7be1bb293fddf58b5fb1e64b569ad7f99033b30d254b4bc6" protocol=ttrpc version=3 Jul 10 00:24:57.624046 systemd[1]: Started cri-containerd-cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e.scope - libcontainer 
container cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e. Jul 10 00:24:57.648358 containerd[1733]: time="2025-07-10T00:24:57.648338095Z" level=info msg="StartContainer for \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\" returns successfully" Jul 10 00:24:57.654976 systemd[1]: cri-containerd-cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e.scope: Deactivated successfully. Jul 10 00:24:57.658907 containerd[1733]: time="2025-07-10T00:24:57.658884027Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\" id:\"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\" pid:3602 exited_at:{seconds:1752107097 nanos:658559488}" Jul 10 00:24:57.658907 containerd[1733]: time="2025-07-10T00:24:57.658976118Z" level=info msg="received exit event container_id:\"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\" id:\"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\" pid:3602 exited_at:{seconds:1752107097 nanos:658559488}" Jul 10 00:24:57.670799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e-rootfs.mount: Deactivated successfully. 
Jul 10 00:25:02.322006 containerd[1733]: time="2025-07-10T00:25:02.321926277Z" level=info msg="CreateContainer within sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:25:02.342588 containerd[1733]: time="2025-07-10T00:25:02.342022515Z" level=info msg="Container 93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:02.353848 containerd[1733]: time="2025-07-10T00:25:02.353823713Z" level=info msg="CreateContainer within sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\"" Jul 10 00:25:02.354276 containerd[1733]: time="2025-07-10T00:25:02.354149964Z" level=info msg="StartContainer for \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\"" Jul 10 00:25:02.354987 containerd[1733]: time="2025-07-10T00:25:02.354962356Z" level=info msg="connecting to shim 93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57" address="unix:///run/containerd/s/be9baf963de8ed0a7be1bb293fddf58b5fb1e64b569ad7f99033b30d254b4bc6" protocol=ttrpc version=3 Jul 10 00:25:02.372049 systemd[1]: Started cri-containerd-93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57.scope - libcontainer container 93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57. Jul 10 00:25:02.395605 containerd[1733]: time="2025-07-10T00:25:02.395578196Z" level=info msg="StartContainer for \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\" returns successfully" Jul 10 00:25:02.403841 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:25:02.404204 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 10 00:25:02.404545 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:25:02.410178 containerd[1733]: time="2025-07-10T00:25:02.406743713Z" level=info msg="received exit event container_id:\"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\" id:\"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\" pid:3646 exited_at:{seconds:1752107102 nanos:406221271}" Jul 10 00:25:02.410178 containerd[1733]: time="2025-07-10T00:25:02.408062916Z" level=info msg="TaskExit event in podsandbox handler container_id:\"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\" id:\"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\" pid:3646 exited_at:{seconds:1752107102 nanos:406221271}" Jul 10 00:25:02.408300 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:25:02.409824 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:25:02.412842 systemd[1]: cri-containerd-93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57.scope: Deactivated successfully. Jul 10 00:25:02.425292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57-rootfs.mount: Deactivated successfully. Jul 10 00:25:02.431380 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 10 00:25:03.325229 containerd[1733]: time="2025-07-10T00:25:03.325137217Z" level=info msg="CreateContainer within sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:25:03.348325 containerd[1733]: time="2025-07-10T00:25:03.347990966Z" level=info msg="Container b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:03.366442 containerd[1733]: time="2025-07-10T00:25:03.366412024Z" level=info msg="CreateContainer within sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\"" Jul 10 00:25:03.366928 containerd[1733]: time="2025-07-10T00:25:03.366772601Z" level=info msg="StartContainer for \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\"" Jul 10 00:25:03.368123 containerd[1733]: time="2025-07-10T00:25:03.368087059Z" level=info msg="connecting to shim b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6" address="unix:///run/containerd/s/be9baf963de8ed0a7be1bb293fddf58b5fb1e64b569ad7f99033b30d254b4bc6" protocol=ttrpc version=3 Jul 10 00:25:03.389080 systemd[1]: Started cri-containerd-b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6.scope - libcontainer container b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6. Jul 10 00:25:03.415016 systemd[1]: cri-containerd-b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6.scope: Deactivated successfully. 
Jul 10 00:25:03.417861 containerd[1733]: time="2025-07-10T00:25:03.417378276Z" level=info msg="received exit event container_id:\"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\" id:\"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\" pid:3691 exited_at:{seconds:1752107103 nanos:417254815}" Jul 10 00:25:03.418632 containerd[1733]: time="2025-07-10T00:25:03.418612085Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\" id:\"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\" pid:3691 exited_at:{seconds:1752107103 nanos:417254815}" Jul 10 00:25:03.419114 containerd[1733]: time="2025-07-10T00:25:03.419098167Z" level=info msg="StartContainer for \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\" returns successfully" Jul 10 00:25:03.433386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6-rootfs.mount: Deactivated successfully. 
Jul 10 00:25:04.330083 containerd[1733]: time="2025-07-10T00:25:04.329927224Z" level=info msg="CreateContainer within sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:25:04.353264 containerd[1733]: time="2025-07-10T00:25:04.352723599Z" level=info msg="Container e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:04.365129 containerd[1733]: time="2025-07-10T00:25:04.365104344Z" level=info msg="CreateContainer within sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\"" Jul 10 00:25:04.365435 containerd[1733]: time="2025-07-10T00:25:04.365419481Z" level=info msg="StartContainer for \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\"" Jul 10 00:25:04.366259 containerd[1733]: time="2025-07-10T00:25:04.366208668Z" level=info msg="connecting to shim e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e" address="unix:///run/containerd/s/be9baf963de8ed0a7be1bb293fddf58b5fb1e64b569ad7f99033b30d254b4bc6" protocol=ttrpc version=3 Jul 10 00:25:04.383076 systemd[1]: Started cri-containerd-e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e.scope - libcontainer container e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e. Jul 10 00:25:04.400292 systemd[1]: cri-containerd-e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e.scope: Deactivated successfully. 
Jul 10 00:25:04.401507 containerd[1733]: time="2025-07-10T00:25:04.401448199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\" id:\"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\" pid:3733 exited_at:{seconds:1752107104 nanos:401148799}" Jul 10 00:25:04.406588 containerd[1733]: time="2025-07-10T00:25:04.406556620Z" level=info msg="received exit event container_id:\"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\" id:\"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\" pid:3733 exited_at:{seconds:1752107104 nanos:401148799}" Jul 10 00:25:04.412321 containerd[1733]: time="2025-07-10T00:25:04.412301447Z" level=info msg="StartContainer for \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\" returns successfully" Jul 10 00:25:04.420378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e-rootfs.mount: Deactivated successfully. 
Jul 10 00:25:05.335672 containerd[1733]: time="2025-07-10T00:25:05.334964585Z" level=info msg="CreateContainer within sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:25:05.359934 containerd[1733]: time="2025-07-10T00:25:05.359697804Z" level=info msg="Container 09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:05.372917 containerd[1733]: time="2025-07-10T00:25:05.372889836Z" level=info msg="CreateContainer within sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\"" Jul 10 00:25:05.373400 containerd[1733]: time="2025-07-10T00:25:05.373362003Z" level=info msg="StartContainer for \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\"" Jul 10 00:25:05.374422 containerd[1733]: time="2025-07-10T00:25:05.374383933Z" level=info msg="connecting to shim 09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4" address="unix:///run/containerd/s/be9baf963de8ed0a7be1bb293fddf58b5fb1e64b569ad7f99033b30d254b4bc6" protocol=ttrpc version=3 Jul 10 00:25:05.397052 systemd[1]: Started cri-containerd-09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4.scope - libcontainer container 09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4. 
Jul 10 00:25:05.422797 containerd[1733]: time="2025-07-10T00:25:05.422724222Z" level=info msg="StartContainer for \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" returns successfully" Jul 10 00:25:05.470422 containerd[1733]: time="2025-07-10T00:25:05.470398081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" id:\"c9dfb6778318b39b12808fc0b62bff29a197d53a822f4e992de5c15f7c7a3701\" pid:3802 exited_at:{seconds:1752107105 nanos:470095434}" Jul 10 00:25:05.485502 kubelet[3139]: I0710 00:25:05.485483 3139 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:25:05.521787 systemd[1]: Created slice kubepods-burstable-podf6e86b02_4b0a_4727_afb4_21dc2e024037.slice - libcontainer container kubepods-burstable-podf6e86b02_4b0a_4727_afb4_21dc2e024037.slice. Jul 10 00:25:05.534647 systemd[1]: Created slice kubepods-burstable-podea614c61_6d24_4934_80b2_7ca3b96db3b2.slice - libcontainer container kubepods-burstable-podea614c61_6d24_4934_80b2_7ca3b96db3b2.slice. 
Jul 10 00:25:05.618484 kubelet[3139]: I0710 00:25:05.618461 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6e86b02-4b0a-4727-afb4-21dc2e024037-config-volume\") pod \"coredns-7c65d6cfc9-fzd5s\" (UID: \"f6e86b02-4b0a-4727-afb4-21dc2e024037\") " pod="kube-system/coredns-7c65d6cfc9-fzd5s" Jul 10 00:25:05.618557 kubelet[3139]: I0710 00:25:05.618537 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea614c61-6d24-4934-80b2-7ca3b96db3b2-config-volume\") pod \"coredns-7c65d6cfc9-n6ppc\" (UID: \"ea614c61-6d24-4934-80b2-7ca3b96db3b2\") " pod="kube-system/coredns-7c65d6cfc9-n6ppc" Jul 10 00:25:05.618583 kubelet[3139]: I0710 00:25:05.618558 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4qm\" (UniqueName: \"kubernetes.io/projected/f6e86b02-4b0a-4727-afb4-21dc2e024037-kube-api-access-mf4qm\") pod \"coredns-7c65d6cfc9-fzd5s\" (UID: \"f6e86b02-4b0a-4727-afb4-21dc2e024037\") " pod="kube-system/coredns-7c65d6cfc9-fzd5s" Jul 10 00:25:05.618617 kubelet[3139]: I0710 00:25:05.618602 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngcx9\" (UniqueName: \"kubernetes.io/projected/ea614c61-6d24-4934-80b2-7ca3b96db3b2-kube-api-access-ngcx9\") pod \"coredns-7c65d6cfc9-n6ppc\" (UID: \"ea614c61-6d24-4934-80b2-7ca3b96db3b2\") " pod="kube-system/coredns-7c65d6cfc9-n6ppc" Jul 10 00:25:05.825885 containerd[1733]: time="2025-07-10T00:25:05.825859566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fzd5s,Uid:f6e86b02-4b0a-4727-afb4-21dc2e024037,Namespace:kube-system,Attempt:0,}" Jul 10 00:25:05.840794 containerd[1733]: time="2025-07-10T00:25:05.840737224Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-n6ppc,Uid:ea614c61-6d24-4934-80b2-7ca3b96db3b2,Namespace:kube-system,Attempt:0,}" Jul 10 00:25:07.463811 systemd-networkd[1354]: cilium_host: Link UP Jul 10 00:25:07.465653 systemd-networkd[1354]: cilium_net: Link UP Jul 10 00:25:07.465821 systemd-networkd[1354]: cilium_net: Gained carrier Jul 10 00:25:07.465900 systemd-networkd[1354]: cilium_host: Gained carrier Jul 10 00:25:07.588264 systemd-networkd[1354]: cilium_vxlan: Link UP Jul 10 00:25:07.588270 systemd-networkd[1354]: cilium_vxlan: Gained carrier Jul 10 00:25:07.782946 kernel: NET: Registered PF_ALG protocol family Jul 10 00:25:07.913990 systemd-networkd[1354]: cilium_net: Gained IPv6LL Jul 10 00:25:08.098028 systemd-networkd[1354]: cilium_host: Gained IPv6LL Jul 10 00:25:08.310983 systemd-networkd[1354]: lxc_health: Link UP Jul 10 00:25:08.319174 systemd-networkd[1354]: lxc_health: Gained carrier Jul 10 00:25:08.856503 systemd-networkd[1354]: lxc284346060e51: Link UP Jul 10 00:25:08.857164 kernel: eth0: renamed from tmp3fa40 Jul 10 00:25:08.858036 systemd-networkd[1354]: lxc284346060e51: Gained carrier Jul 10 00:25:08.866063 systemd-networkd[1354]: cilium_vxlan: Gained IPv6LL Jul 10 00:25:08.884771 systemd-networkd[1354]: lxc46c35a8f74ca: Link UP Jul 10 00:25:08.886335 kernel: eth0: renamed from tmp07821 Jul 10 00:25:08.887527 systemd-networkd[1354]: lxc46c35a8f74ca: Gained carrier Jul 10 00:25:09.153775 kubelet[3139]: I0710 00:25:09.153678 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dnlk2" podStartSLOduration=12.820881554 podStartE2EDuration="25.153659216s" podCreationTimestamp="2025-07-10 00:24:44 +0000 UTC" firstStartedPulling="2025-07-10 00:24:45.238857037 +0000 UTC m=+6.064333919" lastFinishedPulling="2025-07-10 00:24:57.571634691 +0000 UTC m=+18.397111581" observedRunningTime="2025-07-10 00:25:06.356526438 +0000 UTC m=+27.182003334" watchObservedRunningTime="2025-07-10 00:25:09.153659216 +0000 UTC 
m=+29.979136111" Jul 10 00:25:09.506113 systemd-networkd[1354]: lxc_health: Gained IPv6LL Jul 10 00:25:10.402119 systemd-networkd[1354]: lxc46c35a8f74ca: Gained IPv6LL Jul 10 00:25:10.466071 systemd-networkd[1354]: lxc284346060e51: Gained IPv6LL Jul 10 00:25:11.372082 containerd[1733]: time="2025-07-10T00:25:11.371974502Z" level=info msg="connecting to shim 3fa4084ce19fc4a37e20bf5bd488d76bde51dc228ba501b215222bc7f48b9495" address="unix:///run/containerd/s/10b97b4355da24168d645e7eff6fd85c4139f3abb5cb50b354b9c2584f0e1921" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:11.373006 containerd[1733]: time="2025-07-10T00:25:11.372951541Z" level=info msg="connecting to shim 07821e0c12bcbc615064797ebc9de3332d8dddd0981c4e1708e245e18064ab85" address="unix:///run/containerd/s/e0aa667866357b5ba31ed1506f87ec1d030b802c7bd22763afccc1c710628280" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:25:11.398035 systemd[1]: Started cri-containerd-3fa4084ce19fc4a37e20bf5bd488d76bde51dc228ba501b215222bc7f48b9495.scope - libcontainer container 3fa4084ce19fc4a37e20bf5bd488d76bde51dc228ba501b215222bc7f48b9495. Jul 10 00:25:11.401065 systemd[1]: Started cri-containerd-07821e0c12bcbc615064797ebc9de3332d8dddd0981c4e1708e245e18064ab85.scope - libcontainer container 07821e0c12bcbc615064797ebc9de3332d8dddd0981c4e1708e245e18064ab85. 
Jul 10 00:25:11.446111 containerd[1733]: time="2025-07-10T00:25:11.446091897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n6ppc,Uid:ea614c61-6d24-4934-80b2-7ca3b96db3b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"07821e0c12bcbc615064797ebc9de3332d8dddd0981c4e1708e245e18064ab85\"" Jul 10 00:25:11.448243 containerd[1733]: time="2025-07-10T00:25:11.448152164Z" level=info msg="CreateContainer within sandbox \"07821e0c12bcbc615064797ebc9de3332d8dddd0981c4e1708e245e18064ab85\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:25:11.449223 containerd[1733]: time="2025-07-10T00:25:11.449194682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fzd5s,Uid:f6e86b02-4b0a-4727-afb4-21dc2e024037,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fa4084ce19fc4a37e20bf5bd488d76bde51dc228ba501b215222bc7f48b9495\"" Jul 10 00:25:11.450866 containerd[1733]: time="2025-07-10T00:25:11.450838929Z" level=info msg="CreateContainer within sandbox \"3fa4084ce19fc4a37e20bf5bd488d76bde51dc228ba501b215222bc7f48b9495\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:25:11.470374 containerd[1733]: time="2025-07-10T00:25:11.470188853Z" level=info msg="Container fe78a4a889478a0f129f0056ff2d4af611bdc9aaa0201d26de6bfae665013c44: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:11.472650 containerd[1733]: time="2025-07-10T00:25:11.472624645Z" level=info msg="Container b383f35abc31dc60b2e27d14702cab1d399b714ff2caefd3e7030bf5c39f9e87: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:25:11.484547 containerd[1733]: time="2025-07-10T00:25:11.484526109Z" level=info msg="CreateContainer within sandbox \"07821e0c12bcbc615064797ebc9de3332d8dddd0981c4e1708e245e18064ab85\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe78a4a889478a0f129f0056ff2d4af611bdc9aaa0201d26de6bfae665013c44\"" Jul 10 00:25:11.485026 containerd[1733]: 
time="2025-07-10T00:25:11.484858463Z" level=info msg="StartContainer for \"fe78a4a889478a0f129f0056ff2d4af611bdc9aaa0201d26de6bfae665013c44\"" Jul 10 00:25:11.486395 containerd[1733]: time="2025-07-10T00:25:11.486371083Z" level=info msg="connecting to shim fe78a4a889478a0f129f0056ff2d4af611bdc9aaa0201d26de6bfae665013c44" address="unix:///run/containerd/s/e0aa667866357b5ba31ed1506f87ec1d030b802c7bd22763afccc1c710628280" protocol=ttrpc version=3 Jul 10 00:25:11.488273 containerd[1733]: time="2025-07-10T00:25:11.487802318Z" level=info msg="CreateContainer within sandbox \"3fa4084ce19fc4a37e20bf5bd488d76bde51dc228ba501b215222bc7f48b9495\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b383f35abc31dc60b2e27d14702cab1d399b714ff2caefd3e7030bf5c39f9e87\"" Jul 10 00:25:11.488273 containerd[1733]: time="2025-07-10T00:25:11.488135280Z" level=info msg="StartContainer for \"b383f35abc31dc60b2e27d14702cab1d399b714ff2caefd3e7030bf5c39f9e87\"" Jul 10 00:25:11.488774 containerd[1733]: time="2025-07-10T00:25:11.488742703Z" level=info msg="connecting to shim b383f35abc31dc60b2e27d14702cab1d399b714ff2caefd3e7030bf5c39f9e87" address="unix:///run/containerd/s/10b97b4355da24168d645e7eff6fd85c4139f3abb5cb50b354b9c2584f0e1921" protocol=ttrpc version=3 Jul 10 00:25:11.503032 systemd[1]: Started cri-containerd-fe78a4a889478a0f129f0056ff2d4af611bdc9aaa0201d26de6bfae665013c44.scope - libcontainer container fe78a4a889478a0f129f0056ff2d4af611bdc9aaa0201d26de6bfae665013c44. Jul 10 00:25:11.506020 systemd[1]: Started cri-containerd-b383f35abc31dc60b2e27d14702cab1d399b714ff2caefd3e7030bf5c39f9e87.scope - libcontainer container b383f35abc31dc60b2e27d14702cab1d399b714ff2caefd3e7030bf5c39f9e87. 
Jul 10 00:25:11.543276 containerd[1733]: time="2025-07-10T00:25:11.543252536Z" level=info msg="StartContainer for \"fe78a4a889478a0f129f0056ff2d4af611bdc9aaa0201d26de6bfae665013c44\" returns successfully" Jul 10 00:25:11.544673 containerd[1733]: time="2025-07-10T00:25:11.544062925Z" level=info msg="StartContainer for \"b383f35abc31dc60b2e27d14702cab1d399b714ff2caefd3e7030bf5c39f9e87\" returns successfully" Jul 10 00:25:12.359477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount506203839.mount: Deactivated successfully. Jul 10 00:25:12.364622 kubelet[3139]: I0710 00:25:12.364572 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-n6ppc" podStartSLOduration=28.364557351 podStartE2EDuration="28.364557351s" podCreationTimestamp="2025-07-10 00:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:25:12.363631817 +0000 UTC m=+33.189108725" watchObservedRunningTime="2025-07-10 00:25:12.364557351 +0000 UTC m=+33.190034241" Jul 10 00:25:12.377952 kubelet[3139]: I0710 00:25:12.377601 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fzd5s" podStartSLOduration=28.377586767 podStartE2EDuration="28.377586767s" podCreationTimestamp="2025-07-10 00:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:25:12.377213478 +0000 UTC m=+33.202690369" watchObservedRunningTime="2025-07-10 00:25:12.377586767 +0000 UTC m=+33.203063660" Jul 10 00:26:15.308828 systemd[1]: Started sshd@7-10.200.8.43:22-10.200.16.10:59840.service - OpenSSH per-connection server daemon (10.200.16.10:59840). 
Jul 10 00:26:15.937963 sshd[4452]: Accepted publickey for core from 10.200.16.10 port 59840 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:15.938950 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:15.942962 systemd-logind[1691]: New session 10 of user core. Jul 10 00:26:15.948068 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:26:16.440293 sshd[4456]: Connection closed by 10.200.16.10 port 59840 Jul 10 00:26:16.440692 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:16.443385 systemd[1]: sshd@7-10.200.8.43:22-10.200.16.10:59840.service: Deactivated successfully. Jul 10 00:26:16.444907 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:26:16.445678 systemd-logind[1691]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:26:16.446792 systemd-logind[1691]: Removed session 10. Jul 10 00:26:21.560978 systemd[1]: Started sshd@8-10.200.8.43:22-10.200.16.10:55760.service - OpenSSH per-connection server daemon (10.200.16.10:55760). Jul 10 00:26:22.186616 sshd[4470]: Accepted publickey for core from 10.200.16.10 port 55760 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:22.187878 sshd-session[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:22.192217 systemd-logind[1691]: New session 11 of user core. Jul 10 00:26:22.202070 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:26:22.671859 sshd[4472]: Connection closed by 10.200.16.10 port 55760 Jul 10 00:26:22.672297 sshd-session[4470]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:22.675136 systemd[1]: sshd@8-10.200.8.43:22-10.200.16.10:55760.service: Deactivated successfully. Jul 10 00:26:22.676817 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:26:22.677538 systemd-logind[1691]: Session 11 logged out. 
Waiting for processes to exit. Jul 10 00:26:22.678738 systemd-logind[1691]: Removed session 11. Jul 10 00:26:27.782637 systemd[1]: Started sshd@9-10.200.8.43:22-10.200.16.10:55774.service - OpenSSH per-connection server daemon (10.200.16.10:55774). Jul 10 00:26:28.413419 sshd[4485]: Accepted publickey for core from 10.200.16.10 port 55774 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:28.414660 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:28.418098 systemd-logind[1691]: New session 12 of user core. Jul 10 00:26:28.424019 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:26:28.896947 sshd[4487]: Connection closed by 10.200.16.10 port 55774 Jul 10 00:26:28.897316 sshd-session[4485]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:28.899764 systemd[1]: sshd@9-10.200.8.43:22-10.200.16.10:55774.service: Deactivated successfully. Jul 10 00:26:28.901369 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:26:28.902108 systemd-logind[1691]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:26:28.903391 systemd-logind[1691]: Removed session 12. Jul 10 00:26:34.010670 systemd[1]: Started sshd@10-10.200.8.43:22-10.200.16.10:58198.service - OpenSSH per-connection server daemon (10.200.16.10:58198). Jul 10 00:26:34.641286 sshd[4500]: Accepted publickey for core from 10.200.16.10 port 58198 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:34.642569 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:34.646768 systemd-logind[1691]: New session 13 of user core. Jul 10 00:26:34.651074 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 10 00:26:35.123159 sshd[4502]: Connection closed by 10.200.16.10 port 58198 Jul 10 00:26:35.123529 sshd-session[4500]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:35.125648 systemd[1]: sshd@10-10.200.8.43:22-10.200.16.10:58198.service: Deactivated successfully. Jul 10 00:26:35.127241 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:26:35.128424 systemd-logind[1691]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:26:35.129792 systemd-logind[1691]: Removed session 13. Jul 10 00:26:35.233573 systemd[1]: Started sshd@11-10.200.8.43:22-10.200.16.10:58206.service - OpenSSH per-connection server daemon (10.200.16.10:58206). Jul 10 00:26:35.858311 sshd[4515]: Accepted publickey for core from 10.200.16.10 port 58206 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:35.859283 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:35.863453 systemd-logind[1691]: New session 14 of user core. Jul 10 00:26:35.868029 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:26:36.368424 sshd[4517]: Connection closed by 10.200.16.10 port 58206 Jul 10 00:26:36.368979 sshd-session[4515]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:36.371102 systemd[1]: sshd@11-10.200.8.43:22-10.200.16.10:58206.service: Deactivated successfully. Jul 10 00:26:36.372491 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:26:36.373597 systemd-logind[1691]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:26:36.375121 systemd-logind[1691]: Removed session 14. Jul 10 00:26:36.479237 systemd[1]: Started sshd@12-10.200.8.43:22-10.200.16.10:58216.service - OpenSSH per-connection server daemon (10.200.16.10:58216). 
Jul 10 00:26:37.106522 sshd[4527]: Accepted publickey for core from 10.200.16.10 port 58216 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:37.107819 sshd-session[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:37.112015 systemd-logind[1691]: New session 15 of user core. Jul 10 00:26:37.122066 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:26:37.591264 sshd[4529]: Connection closed by 10.200.16.10 port 58216 Jul 10 00:26:37.591683 sshd-session[4527]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:37.594466 systemd[1]: sshd@12-10.200.8.43:22-10.200.16.10:58216.service: Deactivated successfully. Jul 10 00:26:37.596028 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:26:37.596719 systemd-logind[1691]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:26:37.598031 systemd-logind[1691]: Removed session 15. Jul 10 00:26:42.708645 systemd[1]: Started sshd@13-10.200.8.43:22-10.200.16.10:39928.service - OpenSSH per-connection server daemon (10.200.16.10:39928). Jul 10 00:26:43.342657 sshd[4543]: Accepted publickey for core from 10.200.16.10 port 39928 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:43.343857 sshd-session[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:43.347941 systemd-logind[1691]: New session 16 of user core. Jul 10 00:26:43.353040 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:26:43.827029 sshd[4545]: Connection closed by 10.200.16.10 port 39928 Jul 10 00:26:43.827527 sshd-session[4543]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:43.829824 systemd[1]: sshd@13-10.200.8.43:22-10.200.16.10:39928.service: Deactivated successfully. Jul 10 00:26:43.831379 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:26:43.833225 systemd-logind[1691]: Session 16 logged out. 
Waiting for processes to exit. Jul 10 00:26:43.834283 systemd-logind[1691]: Removed session 16. Jul 10 00:26:43.940301 systemd[1]: Started sshd@14-10.200.8.43:22-10.200.16.10:39934.service - OpenSSH per-connection server daemon (10.200.16.10:39934). Jul 10 00:26:44.566905 sshd[4557]: Accepted publickey for core from 10.200.16.10 port 39934 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:44.567842 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:44.571168 systemd-logind[1691]: New session 17 of user core. Jul 10 00:26:44.578048 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:26:45.119082 sshd[4559]: Connection closed by 10.200.16.10 port 39934 Jul 10 00:26:45.119624 sshd-session[4557]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:45.122175 systemd[1]: sshd@14-10.200.8.43:22-10.200.16.10:39934.service: Deactivated successfully. Jul 10 00:26:45.123984 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:26:45.125311 systemd-logind[1691]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:26:45.126624 systemd-logind[1691]: Removed session 17. Jul 10 00:26:45.236759 systemd[1]: Started sshd@15-10.200.8.43:22-10.200.16.10:39944.service - OpenSSH per-connection server daemon (10.200.16.10:39944). Jul 10 00:26:45.861114 sshd[4568]: Accepted publickey for core from 10.200.16.10 port 39944 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:45.862015 sshd-session[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:45.865770 systemd-logind[1691]: New session 18 of user core. Jul 10 00:26:45.874045 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 10 00:26:47.573106 sshd[4572]: Connection closed by 10.200.16.10 port 39944 Jul 10 00:26:47.573687 sshd-session[4568]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:47.576185 systemd[1]: sshd@15-10.200.8.43:22-10.200.16.10:39944.service: Deactivated successfully. Jul 10 00:26:47.578057 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:26:47.578244 systemd[1]: session-18.scope: Consumed 337ms CPU time, 67M memory peak. Jul 10 00:26:47.579492 systemd-logind[1691]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:26:47.580795 systemd-logind[1691]: Removed session 18. Jul 10 00:26:47.685311 systemd[1]: Started sshd@16-10.200.8.43:22-10.200.16.10:39948.service - OpenSSH per-connection server daemon (10.200.16.10:39948). Jul 10 00:26:48.309567 sshd[4589]: Accepted publickey for core from 10.200.16.10 port 39948 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:48.310488 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:48.314186 systemd-logind[1691]: New session 19 of user core. Jul 10 00:26:48.320038 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:26:48.867764 sshd[4591]: Connection closed by 10.200.16.10 port 39948 Jul 10 00:26:48.868189 sshd-session[4589]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:48.870649 systemd[1]: sshd@16-10.200.8.43:22-10.200.16.10:39948.service: Deactivated successfully. Jul 10 00:26:48.872079 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:26:48.872757 systemd-logind[1691]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:26:48.874012 systemd-logind[1691]: Removed session 19. Jul 10 00:26:48.982552 systemd[1]: Started sshd@17-10.200.8.43:22-10.200.16.10:39950.service - OpenSSH per-connection server daemon (10.200.16.10:39950). 
Jul 10 00:26:49.623655 sshd[4600]: Accepted publickey for core from 10.200.16.10 port 39950 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:49.624841 sshd-session[4600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:49.628608 systemd-logind[1691]: New session 20 of user core. Jul 10 00:26:49.633059 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:26:50.109941 sshd[4602]: Connection closed by 10.200.16.10 port 39950 Jul 10 00:26:50.110529 sshd-session[4600]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:50.112682 systemd[1]: sshd@17-10.200.8.43:22-10.200.16.10:39950.service: Deactivated successfully. Jul 10 00:26:50.114230 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:26:50.115333 systemd-logind[1691]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:26:50.116578 systemd-logind[1691]: Removed session 20. Jul 10 00:26:55.225247 systemd[1]: Started sshd@18-10.200.8.43:22-10.200.16.10:33474.service - OpenSSH per-connection server daemon (10.200.16.10:33474). Jul 10 00:26:55.851418 sshd[4617]: Accepted publickey for core from 10.200.16.10 port 33474 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:26:55.852425 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:26:55.856335 systemd-logind[1691]: New session 21 of user core. Jul 10 00:26:55.865058 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:26:56.333460 sshd[4619]: Connection closed by 10.200.16.10 port 33474 Jul 10 00:26:56.333864 sshd-session[4617]: pam_unix(sshd:session): session closed for user core Jul 10 00:26:56.336448 systemd[1]: sshd@18-10.200.8.43:22-10.200.16.10:33474.service: Deactivated successfully. Jul 10 00:26:56.338062 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:26:56.338754 systemd-logind[1691]: Session 21 logged out. 
Waiting for processes to exit. Jul 10 00:26:56.340042 systemd-logind[1691]: Removed session 21. Jul 10 00:27:01.444624 systemd[1]: Started sshd@19-10.200.8.43:22-10.200.16.10:40826.service - OpenSSH per-connection server daemon (10.200.16.10:40826). Jul 10 00:27:02.080818 sshd[4631]: Accepted publickey for core from 10.200.16.10 port 40826 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:02.082149 sshd-session[4631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:02.086398 systemd-logind[1691]: New session 22 of user core. Jul 10 00:27:02.092066 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 00:27:02.562187 sshd[4633]: Connection closed by 10.200.16.10 port 40826 Jul 10 00:27:02.562625 sshd-session[4631]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:02.565538 systemd[1]: sshd@19-10.200.8.43:22-10.200.16.10:40826.service: Deactivated successfully. Jul 10 00:27:02.567219 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:27:02.567876 systemd-logind[1691]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:27:02.569345 systemd-logind[1691]: Removed session 22. Jul 10 00:27:07.675246 systemd[1]: Started sshd@20-10.200.8.43:22-10.200.16.10:40830.service - OpenSSH per-connection server daemon (10.200.16.10:40830). Jul 10 00:27:08.301566 sshd[4645]: Accepted publickey for core from 10.200.16.10 port 40830 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:08.302736 sshd-session[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:08.307085 systemd-logind[1691]: New session 23 of user core. Jul 10 00:27:08.313035 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 10 00:27:08.786205 sshd[4647]: Connection closed by 10.200.16.10 port 40830 Jul 10 00:27:08.786595 sshd-session[4645]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:08.789175 systemd[1]: sshd@20-10.200.8.43:22-10.200.16.10:40830.service: Deactivated successfully. Jul 10 00:27:08.790712 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:27:08.791400 systemd-logind[1691]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:27:08.792481 systemd-logind[1691]: Removed session 23. Jul 10 00:27:08.900396 systemd[1]: Started sshd@21-10.200.8.43:22-10.200.16.10:40832.service - OpenSSH per-connection server daemon (10.200.16.10:40832). Jul 10 00:27:09.530584 sshd[4659]: Accepted publickey for core from 10.200.16.10 port 40832 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:09.532144 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:09.537184 systemd-logind[1691]: New session 24 of user core. Jul 10 00:27:09.543087 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 10 00:27:11.161375 containerd[1733]: time="2025-07-10T00:27:11.161197139Z" level=info msg="StopContainer for \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" with timeout 30 (s)" Jul 10 00:27:11.163094 containerd[1733]: time="2025-07-10T00:27:11.163055158Z" level=info msg="Stop container \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" with signal terminated" Jul 10 00:27:11.170776 containerd[1733]: time="2025-07-10T00:27:11.170745903Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:27:11.173035 systemd[1]: cri-containerd-f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d.scope: Deactivated successfully. Jul 10 00:27:11.176704 containerd[1733]: time="2025-07-10T00:27:11.176654009Z" level=info msg="received exit event container_id:\"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" id:\"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" pid:3537 exited_at:{seconds:1752107231 nanos:176010753}" Jul 10 00:27:11.177199 containerd[1733]: time="2025-07-10T00:27:11.177160139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" id:\"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" pid:3537 exited_at:{seconds:1752107231 nanos:176010753}" Jul 10 00:27:11.177685 containerd[1733]: time="2025-07-10T00:27:11.177665096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" id:\"cffcb160a71102f332e4b4ac83d5ab03a99f0cf62cbb2c39c2d60103c1fbf890\" pid:4680 exited_at:{seconds:1752107231 nanos:176956336}" Jul 10 00:27:11.180099 containerd[1733]: time="2025-07-10T00:27:11.180047995Z" level=info 
msg="StopContainer for \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" with timeout 2 (s)" Jul 10 00:27:11.180444 containerd[1733]: time="2025-07-10T00:27:11.180394375Z" level=info msg="Stop container \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" with signal terminated" Jul 10 00:27:11.189762 systemd-networkd[1354]: lxc_health: Link DOWN Jul 10 00:27:11.189767 systemd-networkd[1354]: lxc_health: Lost carrier Jul 10 00:27:11.199527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d-rootfs.mount: Deactivated successfully. Jul 10 00:27:11.206493 systemd[1]: cri-containerd-09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4.scope: Deactivated successfully. Jul 10 00:27:11.206814 systemd[1]: cri-containerd-09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4.scope: Consumed 4.483s CPU time, 123.5M memory peak, 152K read from disk, 13.3M written to disk. Jul 10 00:27:11.207132 containerd[1733]: time="2025-07-10T00:27:11.206986026Z" level=info msg="received exit event container_id:\"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" id:\"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" pid:3772 exited_at:{seconds:1752107231 nanos:206563177}" Jul 10 00:27:11.207425 containerd[1733]: time="2025-07-10T00:27:11.207398432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" id:\"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" pid:3772 exited_at:{seconds:1752107231 nanos:206563177}" Jul 10 00:27:11.220723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4-rootfs.mount: Deactivated successfully. 
Jul 10 00:27:11.288501 containerd[1733]: time="2025-07-10T00:27:11.288482042Z" level=info msg="StopContainer for \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" returns successfully" Jul 10 00:27:11.289287 containerd[1733]: time="2025-07-10T00:27:11.288985114Z" level=info msg="StopPodSandbox for \"d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f\"" Jul 10 00:27:11.289287 containerd[1733]: time="2025-07-10T00:27:11.289032840Z" level=info msg="Container to stop \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:11.290321 containerd[1733]: time="2025-07-10T00:27:11.290302775Z" level=info msg="StopContainer for \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" returns successfully" Jul 10 00:27:11.290851 containerd[1733]: time="2025-07-10T00:27:11.290828092Z" level=info msg="StopPodSandbox for \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\"" Jul 10 00:27:11.290907 containerd[1733]: time="2025-07-10T00:27:11.290885252Z" level=info msg="Container to stop \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:11.290907 containerd[1733]: time="2025-07-10T00:27:11.290899447Z" level=info msg="Container to stop \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:11.290976 containerd[1733]: time="2025-07-10T00:27:11.290907403Z" level=info msg="Container to stop \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:11.290976 containerd[1733]: time="2025-07-10T00:27:11.290966207Z" level=info msg="Container to stop \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\" must be in running or 
unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:11.291014 containerd[1733]: time="2025-07-10T00:27:11.290974602Z" level=info msg="Container to stop \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:27:11.296282 systemd[1]: cri-containerd-d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f.scope: Deactivated successfully. Jul 10 00:27:11.297500 containerd[1733]: time="2025-07-10T00:27:11.297437798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f\" id:\"d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f\" pid:3246 exit_status:137 exited_at:{seconds:1752107231 nanos:296747225}" Jul 10 00:27:11.298335 systemd[1]: cri-containerd-e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8.scope: Deactivated successfully. Jul 10 00:27:11.321695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8-rootfs.mount: Deactivated successfully. Jul 10 00:27:11.326983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f-rootfs.mount: Deactivated successfully. 
Jul 10 00:27:11.337072 containerd[1733]: time="2025-07-10T00:27:11.337034722Z" level=info msg="shim disconnected" id=e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8 namespace=k8s.io Jul 10 00:27:11.337615 containerd[1733]: time="2025-07-10T00:27:11.337054129Z" level=warning msg="cleaning up after shim disconnected" id=e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8 namespace=k8s.io Jul 10 00:27:11.337615 containerd[1733]: time="2025-07-10T00:27:11.337485088Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:27:11.338762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f-shm.mount: Deactivated successfully. Jul 10 00:27:11.339171 containerd[1733]: time="2025-07-10T00:27:11.339143231Z" level=info msg="received exit event sandbox_id:\"d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f\" exit_status:137 exited_at:{seconds:1752107231 nanos:296747225}" Jul 10 00:27:11.339337 containerd[1733]: time="2025-07-10T00:27:11.337084131Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" id:\"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" pid:3329 exit_status:137 exited_at:{seconds:1752107231 nanos:299359586}" Jul 10 00:27:11.339669 containerd[1733]: time="2025-07-10T00:27:11.339445926Z" level=info msg="received exit event sandbox_id:\"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" exit_status:137 exited_at:{seconds:1752107231 nanos:299359586}" Jul 10 00:27:11.340026 containerd[1733]: time="2025-07-10T00:27:11.339450970Z" level=info msg="shim disconnected" id=d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f namespace=k8s.io Jul 10 00:27:11.340175 containerd[1733]: time="2025-07-10T00:27:11.340164627Z" level=warning msg="cleaning up after shim disconnected" 
id=d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f namespace=k8s.io Jul 10 00:27:11.340226 containerd[1733]: time="2025-07-10T00:27:11.340205896Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:27:11.344039 containerd[1733]: time="2025-07-10T00:27:11.343999525Z" level=info msg="TearDown network for sandbox \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" successfully" Jul 10 00:27:11.344039 containerd[1733]: time="2025-07-10T00:27:11.344039852Z" level=info msg="StopPodSandbox for \"e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8\" returns successfully" Jul 10 00:27:11.344525 containerd[1733]: time="2025-07-10T00:27:11.344484670Z" level=info msg="TearDown network for sandbox \"d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f\" successfully" Jul 10 00:27:11.344525 containerd[1733]: time="2025-07-10T00:27:11.344500506Z" level=info msg="StopPodSandbox for \"d6dd62412b30bb1204a809dbf8b025652bdfce17c2839a3fcb7cb092999ecf8f\" returns successfully" Jul 10 00:27:11.376147 kubelet[3139]: I0710 00:27:11.376121 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9gjk\" (UniqueName: \"kubernetes.io/projected/88e7da1d-2c95-4ed6-9099-018072d2f0b9-kube-api-access-d9gjk\") pod \"88e7da1d-2c95-4ed6-9099-018072d2f0b9\" (UID: \"88e7da1d-2c95-4ed6-9099-018072d2f0b9\") " Jul 10 00:27:11.376398 kubelet[3139]: I0710 00:27:11.376156 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88e7da1d-2c95-4ed6-9099-018072d2f0b9-cilium-config-path\") pod \"88e7da1d-2c95-4ed6-9099-018072d2f0b9\" (UID: \"88e7da1d-2c95-4ed6-9099-018072d2f0b9\") " Jul 10 00:27:11.379547 kubelet[3139]: I0710 00:27:11.379472 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e7da1d-2c95-4ed6-9099-018072d2f0b9-cilium-config-path" 
(OuterVolumeSpecName: "cilium-config-path") pod "88e7da1d-2c95-4ed6-9099-018072d2f0b9" (UID: "88e7da1d-2c95-4ed6-9099-018072d2f0b9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:27:11.380156 kubelet[3139]: I0710 00:27:11.380091 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e7da1d-2c95-4ed6-9099-018072d2f0b9-kube-api-access-d9gjk" (OuterVolumeSpecName: "kube-api-access-d9gjk") pod "88e7da1d-2c95-4ed6-9099-018072d2f0b9" (UID: "88e7da1d-2c95-4ed6-9099-018072d2f0b9"). InnerVolumeSpecName "kube-api-access-d9gjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:27:11.477284 kubelet[3139]: I0710 00:27:11.477219 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phqt9\" (UniqueName: \"kubernetes.io/projected/9d33188f-d77b-4276-a0ea-7ae32153e715-kube-api-access-phqt9\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477284 kubelet[3139]: I0710 00:27:11.477245 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-lib-modules\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477284 kubelet[3139]: I0710 00:27:11.477260 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-hostproc\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477284 kubelet[3139]: I0710 00:27:11.477274 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cni-path\") pod 
\"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477407 kubelet[3139]: I0710 00:27:11.477287 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-run\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477407 kubelet[3139]: I0710 00:27:11.477301 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-cgroup\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477407 kubelet[3139]: I0710 00:27:11.477317 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-config-path\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477407 kubelet[3139]: I0710 00:27:11.477331 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-xtables-lock\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477407 kubelet[3139]: I0710 00:27:11.477343 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-etc-cni-netd\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477407 kubelet[3139]: I0710 00:27:11.477358 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-bpf-maps\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477522 kubelet[3139]: I0710 00:27:11.477373 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-host-proc-sys-kernel\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477522 kubelet[3139]: I0710 00:27:11.477391 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d33188f-d77b-4276-a0ea-7ae32153e715-hubble-tls\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477522 kubelet[3139]: I0710 00:27:11.477408 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d33188f-d77b-4276-a0ea-7ae32153e715-clustermesh-secrets\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477522 kubelet[3139]: I0710 00:27:11.477422 3139 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-host-proc-sys-net\") pod \"9d33188f-d77b-4276-a0ea-7ae32153e715\" (UID: \"9d33188f-d77b-4276-a0ea-7ae32153e715\") " Jul 10 00:27:11.477522 kubelet[3139]: I0710 00:27:11.477460 3139 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9gjk\" (UniqueName: \"kubernetes.io/projected/88e7da1d-2c95-4ed6-9099-018072d2f0b9-kube-api-access-d9gjk\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.477522 kubelet[3139]: I0710 00:27:11.477470 3139 reconciler_common.go:293] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88e7da1d-2c95-4ed6-9099-018072d2f0b9-cilium-config-path\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.477639 kubelet[3139]: I0710 00:27:11.477497 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:27:11.479150 kubelet[3139]: I0710 00:27:11.479124 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:27:11.479391 kubelet[3139]: I0710 00:27:11.479159 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:27:11.479391 kubelet[3139]: I0710 00:27:11.479171 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:27:11.479391 kubelet[3139]: I0710 00:27:11.479183 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:27:11.481448 kubelet[3139]: I0710 00:27:11.481404 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:27:11.481448 kubelet[3139]: I0710 00:27:11.481439 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-hostproc" (OuterVolumeSpecName: "hostproc") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:27:11.481540 kubelet[3139]: I0710 00:27:11.481452 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cni-path" (OuterVolumeSpecName: "cni-path") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:27:11.481540 kubelet[3139]: I0710 00:27:11.481466 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:27:11.481540 kubelet[3139]: I0710 00:27:11.481477 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:27:11.481760 kubelet[3139]: I0710 00:27:11.481738 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d33188f-d77b-4276-a0ea-7ae32153e715-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:27:11.481801 kubelet[3139]: I0710 00:27:11.481784 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d33188f-d77b-4276-a0ea-7ae32153e715-kube-api-access-phqt9" (OuterVolumeSpecName: "kube-api-access-phqt9") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "kube-api-access-phqt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:27:11.482186 kubelet[3139]: I0710 00:27:11.482158 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d33188f-d77b-4276-a0ea-7ae32153e715-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:27:11.482532 kubelet[3139]: I0710 00:27:11.482511 3139 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9d33188f-d77b-4276-a0ea-7ae32153e715" (UID: "9d33188f-d77b-4276-a0ea-7ae32153e715"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:27:11.558726 kubelet[3139]: I0710 00:27:11.558660 3139 scope.go:117] "RemoveContainer" containerID="09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4" Jul 10 00:27:11.569159 containerd[1733]: time="2025-07-10T00:27:11.568983101Z" level=info msg="RemoveContainer for \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\"" Jul 10 00:27:11.570049 systemd[1]: Removed slice kubepods-burstable-pod9d33188f_d77b_4276_a0ea_7ae32153e715.slice - libcontainer container kubepods-burstable-pod9d33188f_d77b_4276_a0ea_7ae32153e715.slice. Jul 10 00:27:11.570151 systemd[1]: kubepods-burstable-pod9d33188f_d77b_4276_a0ea_7ae32153e715.slice: Consumed 4.546s CPU time, 123.9M memory peak, 152K read from disk, 13.3M written to disk. Jul 10 00:27:11.573790 systemd[1]: Removed slice kubepods-besteffort-pod88e7da1d_2c95_4ed6_9099_018072d2f0b9.slice - libcontainer container kubepods-besteffort-pod88e7da1d_2c95_4ed6_9099_018072d2f0b9.slice. 
Jul 10 00:27:11.577124 containerd[1733]: time="2025-07-10T00:27:11.576981735Z" level=info msg="RemoveContainer for \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" returns successfully" Jul 10 00:27:11.577840 kubelet[3139]: I0710 00:27:11.577563 3139 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-etc-cni-netd\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.577840 kubelet[3139]: I0710 00:27:11.577579 3139 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-bpf-maps\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.577840 kubelet[3139]: I0710 00:27:11.577588 3139 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-host-proc-sys-kernel\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.577840 kubelet[3139]: I0710 00:27:11.577596 3139 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9d33188f-d77b-4276-a0ea-7ae32153e715-hubble-tls\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.577840 kubelet[3139]: I0710 00:27:11.577605 3139 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9d33188f-d77b-4276-a0ea-7ae32153e715-clustermesh-secrets\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.577840 kubelet[3139]: I0710 00:27:11.577613 3139 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-host-proc-sys-net\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.577840 kubelet[3139]: I0710 00:27:11.577621 3139 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phqt9\" (UniqueName: \"kubernetes.io/projected/9d33188f-d77b-4276-a0ea-7ae32153e715-kube-api-access-phqt9\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.577840 kubelet[3139]: I0710 00:27:11.577628 3139 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-lib-modules\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.578075 kubelet[3139]: I0710 00:27:11.577636 3139 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-hostproc\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.578075 kubelet[3139]: I0710 00:27:11.577644 3139 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cni-path\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.578075 kubelet[3139]: I0710 00:27:11.577651 3139 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-run\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.578075 kubelet[3139]: I0710 00:27:11.577658 3139 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-cgroup\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.578075 kubelet[3139]: I0710 00:27:11.577666 3139 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9d33188f-d77b-4276-a0ea-7ae32153e715-cilium-config-path\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.578075 kubelet[3139]: I0710 00:27:11.577673 3139 
reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d33188f-d77b-4276-a0ea-7ae32153e715-xtables-lock\") on node \"ci-4344.1.1-n-4c73015a89\" DevicePath \"\"" Jul 10 00:27:11.578075 kubelet[3139]: I0710 00:27:11.577767 3139 scope.go:117] "RemoveContainer" containerID="e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e" Jul 10 00:27:11.579397 containerd[1733]: time="2025-07-10T00:27:11.578983819Z" level=info msg="RemoveContainer for \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\"" Jul 10 00:27:11.585856 containerd[1733]: time="2025-07-10T00:27:11.585823850Z" level=info msg="RemoveContainer for \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\" returns successfully" Jul 10 00:27:11.586109 kubelet[3139]: I0710 00:27:11.586084 3139 scope.go:117] "RemoveContainer" containerID="b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6" Jul 10 00:27:11.589035 containerd[1733]: time="2025-07-10T00:27:11.588999006Z" level=info msg="RemoveContainer for \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\"" Jul 10 00:27:11.595529 containerd[1733]: time="2025-07-10T00:27:11.595493031Z" level=info msg="RemoveContainer for \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\" returns successfully" Jul 10 00:27:11.595665 kubelet[3139]: I0710 00:27:11.595655 3139 scope.go:117] "RemoveContainer" containerID="93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57" Jul 10 00:27:11.597541 containerd[1733]: time="2025-07-10T00:27:11.597483417Z" level=info msg="RemoveContainer for \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\"" Jul 10 00:27:11.604417 containerd[1733]: time="2025-07-10T00:27:11.604386196Z" level=info msg="RemoveContainer for \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\" returns successfully" Jul 10 00:27:11.604561 kubelet[3139]: I0710 00:27:11.604543 3139 scope.go:117] 
"RemoveContainer" containerID="cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e" Jul 10 00:27:11.605656 containerd[1733]: time="2025-07-10T00:27:11.605634960Z" level=info msg="RemoveContainer for \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\"" Jul 10 00:27:11.611726 containerd[1733]: time="2025-07-10T00:27:11.611692725Z" level=info msg="RemoveContainer for \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\" returns successfully" Jul 10 00:27:11.611854 kubelet[3139]: I0710 00:27:11.611840 3139 scope.go:117] "RemoveContainer" containerID="09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4" Jul 10 00:27:11.612085 containerd[1733]: time="2025-07-10T00:27:11.612061176Z" level=error msg="ContainerStatus for \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\": not found" Jul 10 00:27:11.612229 kubelet[3139]: E0710 00:27:11.612207 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\": not found" containerID="09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4" Jul 10 00:27:11.612290 kubelet[3139]: I0710 00:27:11.612230 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4"} err="failed to get container status \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\": rpc error: code = NotFound desc = an error occurred when try to find container \"09e8ad077c4c06549ce9312bc7ead97bbdbfe2950bd00cc62eb6d6f8b4e2aaa4\": not found" Jul 10 00:27:11.612319 kubelet[3139]: I0710 00:27:11.612291 3139 scope.go:117] "RemoveContainer" 
containerID="e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e" Jul 10 00:27:11.612491 containerd[1733]: time="2025-07-10T00:27:11.612469139Z" level=error msg="ContainerStatus for \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\": not found" Jul 10 00:27:11.612562 kubelet[3139]: E0710 00:27:11.612548 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\": not found" containerID="e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e" Jul 10 00:27:11.612593 kubelet[3139]: I0710 00:27:11.612567 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e"} err="failed to get container status \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0622566d22467f0002372d844a24d808520da7edb747f98c159aad83fee2d6e\": not found" Jul 10 00:27:11.612593 kubelet[3139]: I0710 00:27:11.612582 3139 scope.go:117] "RemoveContainer" containerID="b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6" Jul 10 00:27:11.612744 containerd[1733]: time="2025-07-10T00:27:11.612720036Z" level=error msg="ContainerStatus for \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\": not found" Jul 10 00:27:11.612859 kubelet[3139]: E0710 00:27:11.612846 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\": not found" containerID="b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6" Jul 10 00:27:11.612891 kubelet[3139]: I0710 00:27:11.612861 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6"} err="failed to get container status \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b03bafdff8bc62f7c78a4a4c7e851542757f5aa08cc7aaba4c675480339901c6\": not found" Jul 10 00:27:11.612891 kubelet[3139]: I0710 00:27:11.612873 3139 scope.go:117] "RemoveContainer" containerID="93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57" Jul 10 00:27:11.613026 containerd[1733]: time="2025-07-10T00:27:11.612998837Z" level=error msg="ContainerStatus for \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\": not found" Jul 10 00:27:11.613098 kubelet[3139]: E0710 00:27:11.613080 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\": not found" containerID="93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57" Jul 10 00:27:11.613125 kubelet[3139]: I0710 00:27:11.613101 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57"} err="failed to get container status \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"93adea239dda2fd3fc6ad756c2de1db1db67dc84cc01d7d8c64dcc2f62e28a57\": not found" Jul 10 00:27:11.613125 kubelet[3139]: I0710 00:27:11.613115 3139 scope.go:117] "RemoveContainer" containerID="cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e" Jul 10 00:27:11.613239 containerd[1733]: time="2025-07-10T00:27:11.613220263Z" level=error msg="ContainerStatus for \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\": not found" Jul 10 00:27:11.613333 kubelet[3139]: E0710 00:27:11.613296 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\": not found" containerID="cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e" Jul 10 00:27:11.613363 kubelet[3139]: I0710 00:27:11.613325 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e"} err="failed to get container status \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbc65fc8d2b7505887a54b29f6c402c097c0eb31f013e78d593941fbb80d021e\": not found" Jul 10 00:27:11.613363 kubelet[3139]: I0710 00:27:11.613340 3139 scope.go:117] "RemoveContainer" containerID="f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d" Jul 10 00:27:11.614460 containerd[1733]: time="2025-07-10T00:27:11.614437674Z" level=info msg="RemoveContainer for \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\"" Jul 10 00:27:11.620991 containerd[1733]: time="2025-07-10T00:27:11.620957311Z" level=info msg="RemoveContainer for 
\"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" returns successfully" Jul 10 00:27:11.621115 kubelet[3139]: I0710 00:27:11.621105 3139 scope.go:117] "RemoveContainer" containerID="f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d" Jul 10 00:27:11.621339 containerd[1733]: time="2025-07-10T00:27:11.621311376Z" level=error msg="ContainerStatus for \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\": not found" Jul 10 00:27:11.621425 kubelet[3139]: E0710 00:27:11.621407 3139 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\": not found" containerID="f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d" Jul 10 00:27:11.621472 kubelet[3139]: I0710 00:27:11.621428 3139 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d"} err="failed to get container status \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0f78a881139428875d4458640fe883e863269dabdceda872e05c48bf8ec263d\": not found" Jul 10 00:27:12.198038 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9be6ed0914a6a682e2385a25f264d2e3ddb0ffcbd460fb67f7e2c5f1331cdd8-shm.mount: Deactivated successfully. Jul 10 00:27:12.198124 systemd[1]: var-lib-kubelet-pods-9d33188f\x2dd77b\x2d4276\x2da0ea\x2d7ae32153e715-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dphqt9.mount: Deactivated successfully. 
Jul 10 00:27:12.198180 systemd[1]: var-lib-kubelet-pods-88e7da1d\x2d2c95\x2d4ed6\x2d9099\x2d018072d2f0b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd9gjk.mount: Deactivated successfully. Jul 10 00:27:12.198233 systemd[1]: var-lib-kubelet-pods-9d33188f\x2dd77b\x2d4276\x2da0ea\x2d7ae32153e715-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:27:12.198281 systemd[1]: var-lib-kubelet-pods-9d33188f\x2dd77b\x2d4276\x2da0ea\x2d7ae32153e715-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:27:13.209415 sshd[4661]: Connection closed by 10.200.16.10 port 40832 Jul 10 00:27:13.210085 sshd-session[4659]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:13.213802 systemd[1]: sshd@21-10.200.8.43:22-10.200.16.10:40832.service: Deactivated successfully. Jul 10 00:27:13.215493 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 00:27:13.216228 systemd-logind[1691]: Session 24 logged out. Waiting for processes to exit. Jul 10 00:27:13.217567 systemd-logind[1691]: Removed session 24. Jul 10 00:27:13.256031 kubelet[3139]: I0710 00:27:13.256007 3139 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e7da1d-2c95-4ed6-9099-018072d2f0b9" path="/var/lib/kubelet/pods/88e7da1d-2c95-4ed6-9099-018072d2f0b9/volumes" Jul 10 00:27:13.256347 kubelet[3139]: I0710 00:27:13.256328 3139 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d33188f-d77b-4276-a0ea-7ae32153e715" path="/var/lib/kubelet/pods/9d33188f-d77b-4276-a0ea-7ae32153e715/volumes" Jul 10 00:27:13.319644 systemd[1]: Started sshd@22-10.200.8.43:22-10.200.16.10:47872.service - OpenSSH per-connection server daemon (10.200.16.10:47872). 
Jul 10 00:27:13.946474 sshd[4816]: Accepted publickey for core from 10.200.16.10 port 47872 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:13.947955 sshd-session[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:13.951941 systemd-logind[1691]: New session 25 of user core. Jul 10 00:27:13.955082 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 10 00:27:14.328742 kubelet[3139]: E0710 00:27:14.328629 3139 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:27:14.639399 kubelet[3139]: E0710 00:27:14.639360 3139 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d33188f-d77b-4276-a0ea-7ae32153e715" containerName="mount-cgroup" Jul 10 00:27:14.639399 kubelet[3139]: E0710 00:27:14.639386 3139 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d33188f-d77b-4276-a0ea-7ae32153e715" containerName="apply-sysctl-overwrites" Jul 10 00:27:14.639399 kubelet[3139]: E0710 00:27:14.639393 3139 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d33188f-d77b-4276-a0ea-7ae32153e715" containerName="mount-bpf-fs" Jul 10 00:27:14.639399 kubelet[3139]: E0710 00:27:14.639400 3139 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d33188f-d77b-4276-a0ea-7ae32153e715" containerName="clean-cilium-state" Jul 10 00:27:14.639558 kubelet[3139]: E0710 00:27:14.639407 3139 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9d33188f-d77b-4276-a0ea-7ae32153e715" containerName="cilium-agent" Jul 10 00:27:14.639558 kubelet[3139]: E0710 00:27:14.639413 3139 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88e7da1d-2c95-4ed6-9099-018072d2f0b9" containerName="cilium-operator" Jul 10 00:27:14.639558 kubelet[3139]: I0710 00:27:14.639437 3139 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="88e7da1d-2c95-4ed6-9099-018072d2f0b9" containerName="cilium-operator" Jul 10 00:27:14.639558 kubelet[3139]: I0710 00:27:14.639442 3139 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d33188f-d77b-4276-a0ea-7ae32153e715" containerName="cilium-agent" Jul 10 00:27:14.649501 systemd[1]: Created slice kubepods-burstable-podd0b61f39_90d4_4f37_a7c8_879cf9ea675c.slice - libcontainer container kubepods-burstable-podd0b61f39_90d4_4f37_a7c8_879cf9ea675c.slice. Jul 10 00:27:14.693801 kubelet[3139]: I0710 00:27:14.693763 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-cilium-config-path\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.693888 kubelet[3139]: I0710 00:27:14.693810 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-host-proc-sys-net\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.693888 kubelet[3139]: I0710 00:27:14.693829 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-etc-cni-netd\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.693888 kubelet[3139]: I0710 00:27:14.693844 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-lib-modules\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " 
pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.693888 kubelet[3139]: I0710 00:27:14.693860 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-xtables-lock\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.693888 kubelet[3139]: I0710 00:27:14.693874 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-cni-path\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.694006 kubelet[3139]: I0710 00:27:14.693893 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-hubble-tls\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.694256 kubelet[3139]: I0710 00:27:14.694221 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-cilium-ipsec-secrets\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.694353 kubelet[3139]: I0710 00:27:14.694311 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-hostproc\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.694498 kubelet[3139]: I0710 00:27:14.694433 3139 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-cilium-cgroup\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.694498 kubelet[3139]: I0710 00:27:14.694464 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-clustermesh-secrets\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.694563 kubelet[3139]: I0710 00:27:14.694481 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-bpf-maps\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.694662 kubelet[3139]: I0710 00:27:14.694642 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-cilium-run\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.694691 kubelet[3139]: I0710 00:27:14.694667 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-host-proc-sys-kernel\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.694828 kubelet[3139]: I0710 00:27:14.694761 3139 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9cmw\" (UniqueName: 
\"kubernetes.io/projected/d0b61f39-90d4-4f37-a7c8-879cf9ea675c-kube-api-access-c9cmw\") pod \"cilium-zjcm9\" (UID: \"d0b61f39-90d4-4f37-a7c8-879cf9ea675c\") " pod="kube-system/cilium-zjcm9" Jul 10 00:27:14.758778 sshd[4818]: Connection closed by 10.200.16.10 port 47872 Jul 10 00:27:14.759184 sshd-session[4816]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:14.761328 systemd[1]: sshd@22-10.200.8.43:22-10.200.16.10:47872.service: Deactivated successfully. Jul 10 00:27:14.762947 systemd[1]: session-25.scope: Deactivated successfully. Jul 10 00:27:14.764509 systemd-logind[1691]: Session 25 logged out. Waiting for processes to exit. Jul 10 00:27:14.765151 systemd-logind[1691]: Removed session 25. Jul 10 00:27:14.870296 systemd[1]: Started sshd@23-10.200.8.43:22-10.200.16.10:47874.service - OpenSSH per-connection server daemon (10.200.16.10:47874). Jul 10 00:27:14.955165 containerd[1733]: time="2025-07-10T00:27:14.955096040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zjcm9,Uid:d0b61f39-90d4-4f37-a7c8-879cf9ea675c,Namespace:kube-system,Attempt:0,}" Jul 10 00:27:14.988943 containerd[1733]: time="2025-07-10T00:27:14.988885033Z" level=info msg="connecting to shim cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e" address="unix:///run/containerd/s/d7f380dbf62d7def1539976a648caa3d90a0e0201ce34d0147eb01f8f563560c" namespace=k8s.io protocol=ttrpc version=3 Jul 10 00:27:15.010069 systemd[1]: Started cri-containerd-cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e.scope - libcontainer container cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e. 
Jul 10 00:27:15.030393 containerd[1733]: time="2025-07-10T00:27:15.030300620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zjcm9,Uid:d0b61f39-90d4-4f37-a7c8-879cf9ea675c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\"" Jul 10 00:27:15.032443 containerd[1733]: time="2025-07-10T00:27:15.032415848Z" level=info msg="CreateContainer within sandbox \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:27:15.043586 containerd[1733]: time="2025-07-10T00:27:15.043560434Z" level=info msg="Container 2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:15.056534 containerd[1733]: time="2025-07-10T00:27:15.056418278Z" level=info msg="CreateContainer within sandbox \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05\"" Jul 10 00:27:15.056825 containerd[1733]: time="2025-07-10T00:27:15.056799502Z" level=info msg="StartContainer for \"2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05\"" Jul 10 00:27:15.057659 containerd[1733]: time="2025-07-10T00:27:15.057608607Z" level=info msg="connecting to shim 2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05" address="unix:///run/containerd/s/d7f380dbf62d7def1539976a648caa3d90a0e0201ce34d0147eb01f8f563560c" protocol=ttrpc version=3 Jul 10 00:27:15.075046 systemd[1]: Started cri-containerd-2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05.scope - libcontainer container 2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05. 
Jul 10 00:27:15.127231 containerd[1733]: time="2025-07-10T00:27:15.127169241Z" level=info msg="StartContainer for \"2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05\" returns successfully" Jul 10 00:27:15.152432 systemd[1]: cri-containerd-2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05.scope: Deactivated successfully. Jul 10 00:27:15.155574 containerd[1733]: time="2025-07-10T00:27:15.155544912Z" level=info msg="received exit event container_id:\"2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05\" id:\"2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05\" pid:4896 exited_at:{seconds:1752107235 nanos:155232175}" Jul 10 00:27:15.155722 containerd[1733]: time="2025-07-10T00:27:15.155707055Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05\" id:\"2639fe0789c4856f4878e6d331513114dbe13ae00697a50a7a1f8d58dfbe8b05\" pid:4896 exited_at:{seconds:1752107235 nanos:155232175}" Jul 10 00:27:15.500819 sshd[4834]: Accepted publickey for core from 10.200.16.10 port 47874 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:15.502185 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:15.506065 systemd-logind[1691]: New session 26 of user core. Jul 10 00:27:15.509089 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 10 00:27:15.580096 containerd[1733]: time="2025-07-10T00:27:15.580074873Z" level=info msg="CreateContainer within sandbox \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:27:15.595703 containerd[1733]: time="2025-07-10T00:27:15.595675197Z" level=info msg="Container 10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:15.612001 containerd[1733]: time="2025-07-10T00:27:15.611973565Z" level=info msg="CreateContainer within sandbox \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c\"" Jul 10 00:27:15.613279 containerd[1733]: time="2025-07-10T00:27:15.612370084Z" level=info msg="StartContainer for \"10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c\"" Jul 10 00:27:15.613279 containerd[1733]: time="2025-07-10T00:27:15.613026440Z" level=info msg="connecting to shim 10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c" address="unix:///run/containerd/s/d7f380dbf62d7def1539976a648caa3d90a0e0201ce34d0147eb01f8f563560c" protocol=ttrpc version=3 Jul 10 00:27:15.630035 systemd[1]: Started cri-containerd-10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c.scope - libcontainer container 10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c. Jul 10 00:27:15.652410 containerd[1733]: time="2025-07-10T00:27:15.652391763Z" level=info msg="StartContainer for \"10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c\" returns successfully" Jul 10 00:27:15.654129 systemd[1]: cri-containerd-10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c.scope: Deactivated successfully. 
Jul 10 00:27:15.654999 containerd[1733]: time="2025-07-10T00:27:15.654980353Z" level=info msg="received exit event container_id:\"10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c\" id:\"10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c\" pid:4944 exited_at:{seconds:1752107235 nanos:654596020}" Jul 10 00:27:15.655096 containerd[1733]: time="2025-07-10T00:27:15.654990688Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c\" id:\"10ad2db880bb1e1ddfd7f62e800834f04ffe597955691c5a5b76efaf8e88ed6c\" pid:4944 exited_at:{seconds:1752107235 nanos:654596020}" Jul 10 00:27:15.802043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459741286.mount: Deactivated successfully. Jul 10 00:27:15.939212 sshd[4930]: Connection closed by 10.200.16.10 port 47874 Jul 10 00:27:15.939760 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:15.942748 systemd[1]: sshd@23-10.200.8.43:22-10.200.16.10:47874.service: Deactivated successfully. Jul 10 00:27:15.944234 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 00:27:15.945336 systemd-logind[1691]: Session 26 logged out. Waiting for processes to exit. Jul 10 00:27:15.946378 systemd-logind[1691]: Removed session 26. Jul 10 00:27:16.053319 systemd[1]: Started sshd@24-10.200.8.43:22-10.200.16.10:47882.service - OpenSSH per-connection server daemon (10.200.16.10:47882). 
Jul 10 00:27:16.583957 containerd[1733]: time="2025-07-10T00:27:16.583551542Z" level=info msg="CreateContainer within sandbox \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:27:16.600646 containerd[1733]: time="2025-07-10T00:27:16.600609158Z" level=info msg="Container 75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:16.603941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1283724246.mount: Deactivated successfully. Jul 10 00:27:16.613719 containerd[1733]: time="2025-07-10T00:27:16.613695031Z" level=info msg="CreateContainer within sandbox \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2\"" Jul 10 00:27:16.614124 containerd[1733]: time="2025-07-10T00:27:16.614022222Z" level=info msg="StartContainer for \"75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2\"" Jul 10 00:27:16.615377 containerd[1733]: time="2025-07-10T00:27:16.615336474Z" level=info msg="connecting to shim 75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2" address="unix:///run/containerd/s/d7f380dbf62d7def1539976a648caa3d90a0e0201ce34d0147eb01f8f563560c" protocol=ttrpc version=3 Jul 10 00:27:16.638057 systemd[1]: Started cri-containerd-75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2.scope - libcontainer container 75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2. Jul 10 00:27:16.662153 systemd[1]: cri-containerd-75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2.scope: Deactivated successfully. 
Jul 10 00:27:16.663386 containerd[1733]: time="2025-07-10T00:27:16.663364440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2\" id:\"75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2\" pid:4994 exited_at:{seconds:1752107236 nanos:662972441}" Jul 10 00:27:16.664172 containerd[1733]: time="2025-07-10T00:27:16.664147962Z" level=info msg="received exit event container_id:\"75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2\" id:\"75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2\" pid:4994 exited_at:{seconds:1752107236 nanos:662972441}" Jul 10 00:27:16.670185 containerd[1733]: time="2025-07-10T00:27:16.670158259Z" level=info msg="StartContainer for \"75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2\" returns successfully" Jul 10 00:27:16.678996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75e3b53145f97c425ec10ca44a44ba7e57ecc7fcf48414ce6664b0c9c739bcd2-rootfs.mount: Deactivated successfully. Jul 10 00:27:16.682854 sshd[4980]: Accepted publickey for core from 10.200.16.10 port 47882 ssh2: RSA SHA256:fzafY2iLoj7qFnOd6qpPKPPcyyg42N0FbP0oWsOOjEU Jul 10 00:27:16.683808 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:27:16.686737 systemd-logind[1691]: New session 27 of user core. Jul 10 00:27:16.690042 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 10 00:27:17.591868 containerd[1733]: time="2025-07-10T00:27:17.591705434Z" level=info msg="CreateContainer within sandbox \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:27:17.610163 containerd[1733]: time="2025-07-10T00:27:17.609614457Z" level=info msg="Container b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:17.620541 containerd[1733]: time="2025-07-10T00:27:17.620517385Z" level=info msg="CreateContainer within sandbox \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f\"" Jul 10 00:27:17.620884 containerd[1733]: time="2025-07-10T00:27:17.620789561Z" level=info msg="StartContainer for \"b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f\"" Jul 10 00:27:17.621862 containerd[1733]: time="2025-07-10T00:27:17.621771533Z" level=info msg="connecting to shim b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f" address="unix:///run/containerd/s/d7f380dbf62d7def1539976a648caa3d90a0e0201ce34d0147eb01f8f563560c" protocol=ttrpc version=3 Jul 10 00:27:17.642070 systemd[1]: Started cri-containerd-b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f.scope - libcontainer container b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f. Jul 10 00:27:17.660889 systemd[1]: cri-containerd-b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f.scope: Deactivated successfully. 
Jul 10 00:27:17.662695 containerd[1733]: time="2025-07-10T00:27:17.662663621Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f\" id:\"b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f\" pid:5044 exited_at:{seconds:1752107237 nanos:661517039}" Jul 10 00:27:17.665135 containerd[1733]: time="2025-07-10T00:27:17.665103702Z" level=info msg="received exit event container_id:\"b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f\" id:\"b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f\" pid:5044 exited_at:{seconds:1752107237 nanos:661517039}" Jul 10 00:27:17.670180 containerd[1733]: time="2025-07-10T00:27:17.670155004Z" level=info msg="StartContainer for \"b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f\" returns successfully" Jul 10 00:27:17.678762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b65e5d6f8c0add96186a1301f83c21ac56cfd0c4f56a1f3dc85aa0f245d68e1f-rootfs.mount: Deactivated successfully. 
Jul 10 00:27:18.596357 containerd[1733]: time="2025-07-10T00:27:18.595828411Z" level=info msg="CreateContainer within sandbox \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:27:18.613310 containerd[1733]: time="2025-07-10T00:27:18.612249554Z" level=info msg="Container 0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d: CDI devices from CRI Config.CDIDevices: []" Jul 10 00:27:18.627254 containerd[1733]: time="2025-07-10T00:27:18.627230145Z" level=info msg="CreateContainer within sandbox \"cf7f13f30ad6f0d8a0eabe383b6f8eeab8f0eea94cb42e5a51307869cc930e2e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d\"" Jul 10 00:27:18.627688 containerd[1733]: time="2025-07-10T00:27:18.627664842Z" level=info msg="StartContainer for \"0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d\"" Jul 10 00:27:18.628613 containerd[1733]: time="2025-07-10T00:27:18.628583735Z" level=info msg="connecting to shim 0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d" address="unix:///run/containerd/s/d7f380dbf62d7def1539976a648caa3d90a0e0201ce34d0147eb01f8f563560c" protocol=ttrpc version=3 Jul 10 00:27:18.647039 systemd[1]: Started cri-containerd-0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d.scope - libcontainer container 0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d. 
Jul 10 00:27:18.672829 containerd[1733]: time="2025-07-10T00:27:18.672811810Z" level=info msg="StartContainer for \"0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d\" returns successfully" Jul 10 00:27:18.736222 containerd[1733]: time="2025-07-10T00:27:18.736137384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d\" id:\"153258df26aa6c8ddef77aba53ef15f0a82ed98284c25391b2978c84d5a0ae94\" pid:5115 exited_at:{seconds:1752107238 nanos:735731473}" Jul 10 00:27:18.975931 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Jul 10 00:27:21.159606 containerd[1733]: time="2025-07-10T00:27:21.159569631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d\" id:\"7d180403e09986b78cf89ce469eb4bceb2186457ccf9798bcd2e5f96357407e3\" pid:5539 exit_status:1 exited_at:{seconds:1752107241 nanos:159182043}" Jul 10 00:27:21.308411 systemd-networkd[1354]: lxc_health: Link UP Jul 10 00:27:21.317038 systemd-networkd[1354]: lxc_health: Gained carrier Jul 10 00:27:22.626127 systemd-networkd[1354]: lxc_health: Gained IPv6LL Jul 10 00:27:22.994003 kubelet[3139]: I0710 00:27:22.993867 3139 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zjcm9" podStartSLOduration=8.993849165 podStartE2EDuration="8.993849165s" podCreationTimestamp="2025-07-10 00:27:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:27:19.614180837 +0000 UTC m=+160.439657729" watchObservedRunningTime="2025-07-10 00:27:22.993849165 +0000 UTC m=+163.819326057" Jul 10 00:27:23.295505 containerd[1733]: time="2025-07-10T00:27:23.295251526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d\" 
id:\"7522426665b96e815b2fe51c4569db7131b2cda2837b02d7a969dd8fabf7fa1b\" pid:5637 exited_at:{seconds:1752107243 nanos:294685432}" Jul 10 00:27:25.365265 containerd[1733]: time="2025-07-10T00:27:25.365227267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d\" id:\"3393e1fbbcb49a0e8009f8dfe7f76a744497d74d6d95db7b9432ca5e2973bacb\" pid:5682 exited_at:{seconds:1752107245 nanos:364842128}" Jul 10 00:27:27.509288 containerd[1733]: time="2025-07-10T00:27:27.509238687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d6243d6cc6d620c32acf5c3f696155b75f62f8ff98f152d53bf0d756693c59d\" id:\"bc92ed0e151e17f3fa299bb4376af22bcfefed358bf2ba16bbebcc33e0ffab9a\" pid:5705 exited_at:{seconds:1752107247 nanos:509049218}" Jul 10 00:27:27.612202 sshd[5024]: Connection closed by 10.200.16.10 port 47882 Jul 10 00:27:27.612813 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Jul 10 00:27:27.616028 systemd[1]: sshd@24-10.200.8.43:22-10.200.16.10:47882.service: Deactivated successfully. Jul 10 00:27:27.617776 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 00:27:27.618958 systemd-logind[1691]: Session 27 logged out. Waiting for processes to exit. Jul 10 00:27:27.620139 systemd-logind[1691]: Removed session 27.