Aug 19 08:17:01.952241 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 18 22:19:37 -00 2025
Aug 19 08:17:01.952261 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f
Aug 19 08:17:01.952269 kernel: BIOS-provided physical RAM map:
Aug 19 08:17:01.952273 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 19 08:17:01.952277 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Aug 19 08:17:01.952281 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Aug 19 08:17:01.952286 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Aug 19 08:17:01.952292 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Aug 19 08:17:01.952296 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Aug 19 08:17:01.952300 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Aug 19 08:17:01.952305 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Aug 19 08:17:01.952309 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Aug 19 08:17:01.952313 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Aug 19 08:17:01.952317 kernel: printk: legacy bootconsole [earlyser0] enabled
Aug 19 08:17:01.952323 kernel: NX (Execute Disable) protection: active
Aug 19 08:17:01.952328 kernel: APIC: Static calls initialized
Aug 19 08:17:01.952332 kernel: efi: EFI v2.7 by Microsoft
Aug 19 08:17:01.952337 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3f495018 RNG=0x3ffd2018
Aug 19 08:17:01.952341 kernel: random: crng init done
Aug 19 08:17:01.952345 kernel: secureboot: Secure boot disabled
Aug 19 08:17:01.952350 kernel: SMBIOS 3.1.0 present.
Aug 19 08:17:01.952354 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Aug 19 08:17:01.952359 kernel: DMI: Memory slots populated: 2/2
Aug 19 08:17:01.952364 kernel: Hypervisor detected: Microsoft Hyper-V
Aug 19 08:17:01.952368 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Aug 19 08:17:01.952373 kernel: Hyper-V: Nested features: 0x3e0101
Aug 19 08:17:01.952377 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Aug 19 08:17:01.952381 kernel: Hyper-V: Using hypercall for remote TLB flush
Aug 19 08:17:01.952386 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 19 08:17:01.952390 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 19 08:17:01.952394 kernel: tsc: Detected 2299.999 MHz processor
Aug 19 08:17:01.952399 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 19 08:17:01.952404 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 19 08:17:01.952409 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Aug 19 08:17:01.952415 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 19 08:17:01.952420 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 19 08:17:01.952424 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Aug 19 08:17:01.952429 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Aug 19 08:17:01.952433 kernel: Using GB pages for direct mapping
Aug 19 08:17:01.952438 kernel: ACPI: Early table checksum verification disabled
Aug 19 08:17:01.952445 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Aug 19 08:17:01.952451 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 19 08:17:01.952456 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 19 08:17:01.952460 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Aug 19 08:17:01.952465 kernel: ACPI: FACS 0x000000003FFFE000 000040
Aug 19 08:17:01.952470 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 19 08:17:01.952474 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 19 08:17:01.952480 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 19 08:17:01.952485 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Aug 19 08:17:01.952490 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Aug 19 08:17:01.952494 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 19 08:17:01.952499 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Aug 19 08:17:01.952504 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Aug 19 08:17:01.952508 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Aug 19 08:17:01.952513 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Aug 19 08:17:01.952518 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Aug 19 08:17:01.952524 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Aug 19 08:17:01.952528 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Aug 19 08:17:01.952533 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Aug 19 08:17:01.952537 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Aug 19 08:17:01.952542 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Aug 19 08:17:01.952547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Aug 19 08:17:01.952552 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Aug 19 08:17:01.952610 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Aug 19 08:17:01.952618 kernel: Zone ranges:
Aug 19 08:17:01.952628 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 19 08:17:01.952649 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 19 08:17:01.952654 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Aug 19 08:17:01.952659 kernel: Device empty
Aug 19 08:17:01.952664 kernel: Movable zone start for each node
Aug 19 08:17:01.952669 kernel: Early memory node ranges
Aug 19 08:17:01.952674 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 19 08:17:01.952679 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Aug 19 08:17:01.952683 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Aug 19 08:17:01.952690 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Aug 19 08:17:01.952695 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Aug 19 08:17:01.952699 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Aug 19 08:17:01.952704 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 19 08:17:01.952709 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 19 08:17:01.952713 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Aug 19 08:17:01.952718 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Aug 19 08:17:01.952723 kernel: ACPI: PM-Timer IO Port: 0x408
Aug 19 08:17:01.952728 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 19 08:17:01.952734 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 19 08:17:01.952738 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 19 08:17:01.952743 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Aug 19 08:17:01.952748 kernel: TSC deadline timer available
Aug 19 08:17:01.952752 kernel: CPU topo: Max. logical packages: 1
Aug 19 08:17:01.952757 kernel: CPU topo: Max. logical dies: 1
Aug 19 08:17:01.952761 kernel: CPU topo: Max. dies per package: 1
Aug 19 08:17:01.952766 kernel: CPU topo: Max. threads per core: 2
Aug 19 08:17:01.952770 kernel: CPU topo: Num. cores per package: 1
Aug 19 08:17:01.952776 kernel: CPU topo: Num. threads per package: 2
Aug 19 08:17:01.952781 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 19 08:17:01.952786 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Aug 19 08:17:01.952790 kernel: Booting paravirtualized kernel on Hyper-V
Aug 19 08:17:01.952795 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 19 08:17:01.952800 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 19 08:17:01.952805 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 19 08:17:01.952810 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 19 08:17:01.952814 kernel: pcpu-alloc: [0] 0 1
Aug 19 08:17:01.952820 kernel: Hyper-V: PV spinlocks enabled
Aug 19 08:17:01.952825 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 19 08:17:01.952832 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f
Aug 19 08:17:01.952836 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 19 08:17:01.952841 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 19 08:17:01.952845 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 19 08:17:01.952850 kernel: Fallback order for Node 0: 0
Aug 19 08:17:01.952854 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Aug 19 08:17:01.952859 kernel: Policy zone: Normal
Aug 19 08:17:01.952864 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 19 08:17:01.952868 kernel: software IO TLB: area num 2.
Aug 19 08:17:01.952872 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 19 08:17:01.952877 kernel: ftrace: allocating 40101 entries in 157 pages
Aug 19 08:17:01.952881 kernel: ftrace: allocated 157 pages with 5 groups
Aug 19 08:17:01.952885 kernel: Dynamic Preempt: voluntary
Aug 19 08:17:01.952889 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 19 08:17:01.952895 kernel: rcu: RCU event tracing is enabled.
Aug 19 08:17:01.952905 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 19 08:17:01.952910 kernel: Trampoline variant of Tasks RCU enabled.
Aug 19 08:17:01.952915 kernel: Rude variant of Tasks RCU enabled.
Aug 19 08:17:01.952921 kernel: Tracing variant of Tasks RCU enabled.
Aug 19 08:17:01.952926 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 19 08:17:01.952931 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 19 08:17:01.952935 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 19 08:17:01.952940 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 19 08:17:01.952945 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 19 08:17:01.952950 kernel: Using NULL legacy PIC
Aug 19 08:17:01.952955 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Aug 19 08:17:01.952960 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 19 08:17:01.952965 kernel: Console: colour dummy device 80x25
Aug 19 08:17:01.952969 kernel: printk: legacy console [tty1] enabled
Aug 19 08:17:01.952974 kernel: printk: legacy console [ttyS0] enabled
Aug 19 08:17:01.952979 kernel: printk: legacy bootconsole [earlyser0] disabled
Aug 19 08:17:01.952983 kernel: ACPI: Core revision 20240827
Aug 19 08:17:01.952989 kernel: Failed to register legacy timer interrupt
Aug 19 08:17:01.952994 kernel: APIC: Switch to symmetric I/O mode setup
Aug 19 08:17:01.952998 kernel: x2apic enabled
Aug 19 08:17:01.953003 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 19 08:17:01.953007 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0
Aug 19 08:17:01.953012 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Aug 19 08:17:01.953017 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Aug 19 08:17:01.953021 kernel: Hyper-V: Using IPI hypercalls
Aug 19 08:17:01.953026 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Aug 19 08:17:01.953032 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Aug 19 08:17:01.953037 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Aug 19 08:17:01.953041 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Aug 19 08:17:01.953046 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Aug 19 08:17:01.953051 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Aug 19 08:17:01.953076 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Aug 19 08:17:01.953083 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Aug 19 08:17:01.953088 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 19 08:17:01.953092 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 19 08:17:01.953099 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 19 08:17:01.953103 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 19 08:17:01.953108 kernel: Spectre V2 : Mitigation: Retpolines
Aug 19 08:17:01.953112 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 19 08:17:01.953117 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 19 08:17:01.953122 kernel: RETBleed: Vulnerable
Aug 19 08:17:01.953126 kernel: Speculative Store Bypass: Vulnerable
Aug 19 08:17:01.953131 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 19 08:17:01.953135 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 19 08:17:01.953140 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 19 08:17:01.953144 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 19 08:17:01.953150 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 19 08:17:01.953155 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 19 08:17:01.953159 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 19 08:17:01.953164 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Aug 19 08:17:01.953168 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Aug 19 08:17:01.953173 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Aug 19 08:17:01.953178 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 19 08:17:01.953182 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Aug 19 08:17:01.953187 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Aug 19 08:17:01.953191 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Aug 19 08:17:01.953197 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Aug 19 08:17:01.953202 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Aug 19 08:17:01.953206 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Aug 19 08:17:01.953211 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Aug 19 08:17:01.953216 kernel: Freeing SMP alternatives memory: 32K
Aug 19 08:17:01.953220 kernel: pid_max: default: 32768 minimum: 301
Aug 19 08:17:01.953225 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 19 08:17:01.953229 kernel: landlock: Up and running.
Aug 19 08:17:01.953234 kernel: SELinux: Initializing.
Aug 19 08:17:01.953238 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 19 08:17:01.953243 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 19 08:17:01.953247 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Aug 19 08:17:01.953253 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Aug 19 08:17:01.953258 kernel: signal: max sigframe size: 11952
Aug 19 08:17:01.953263 kernel: rcu: Hierarchical SRCU implementation.
Aug 19 08:17:01.953268 kernel: rcu: Max phase no-delay instances is 400.
Aug 19 08:17:01.953272 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 19 08:17:01.953277 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 19 08:17:01.953282 kernel: smp: Bringing up secondary CPUs ...
Aug 19 08:17:01.953287 kernel: smpboot: x86: Booting SMP configuration:
Aug 19 08:17:01.953291 kernel: .... node #0, CPUs: #1
Aug 19 08:17:01.953297 kernel: smp: Brought up 1 node, 2 CPUs
Aug 19 08:17:01.953302 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Aug 19 08:17:01.953307 kernel: Memory: 8077020K/8383228K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54040K init, 2928K bss, 299992K reserved, 0K cma-reserved)
Aug 19 08:17:01.953312 kernel: devtmpfs: initialized
Aug 19 08:17:01.953317 kernel: x86/mm: Memory block size: 128MB
Aug 19 08:17:01.953321 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Aug 19 08:17:01.953326 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 19 08:17:01.953331 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 19 08:17:01.953335 kernel: pinctrl core: initialized pinctrl subsystem
Aug 19 08:17:01.953341 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 19 08:17:01.953346 kernel: audit: initializing netlink subsys (disabled)
Aug 19 08:17:01.953351 kernel: audit: type=2000 audit(1755591418.029:1): state=initialized audit_enabled=0 res=1
Aug 19 08:17:01.953355 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 19 08:17:01.953360 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 19 08:17:01.953365 kernel: cpuidle: using governor menu
Aug 19 08:17:01.953369 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 19 08:17:01.953374 kernel: dca service started, version 1.12.1
Aug 19 08:17:01.953379 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Aug 19 08:17:01.953385 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Aug 19 08:17:01.953389 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 19 08:17:01.953394 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 19 08:17:01.953399 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 19 08:17:01.953404 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 19 08:17:01.953409 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 19 08:17:01.953414 kernel: ACPI: Added _OSI(Module Device)
Aug 19 08:17:01.953419 kernel: ACPI: Added _OSI(Processor Device)
Aug 19 08:17:01.953424 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 19 08:17:01.953430 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 19 08:17:01.953436 kernel: ACPI: Interpreter enabled
Aug 19 08:17:01.953441 kernel: ACPI: PM: (supports S0 S5)
Aug 19 08:17:01.953446 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 19 08:17:01.953451 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 19 08:17:01.953456 kernel: PCI: Ignoring E820 reservations for host bridge windows
Aug 19 08:17:01.953461 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Aug 19 08:17:01.953466 kernel: iommu: Default domain type: Translated
Aug 19 08:17:01.953471 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 19 08:17:01.953478 kernel: efivars: Registered efivars operations
Aug 19 08:17:01.953483 kernel: PCI: Using ACPI for IRQ routing
Aug 19 08:17:01.953488 kernel: PCI: System does not support PCI
Aug 19 08:17:01.953493 kernel: vgaarb: loaded
Aug 19 08:17:01.953498 kernel: clocksource: Switched to clocksource tsc-early
Aug 19 08:17:01.953503 kernel: VFS: Disk quotas dquot_6.6.0
Aug 19 08:17:01.953508 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 19 08:17:01.953514 kernel: pnp: PnP ACPI init
Aug 19 08:17:01.953519 kernel: pnp: PnP ACPI: found 3 devices
Aug 19 08:17:01.953526 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 19 08:17:01.953531 kernel: NET: Registered PF_INET protocol family
Aug 19 08:17:01.953536 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 19 08:17:01.953541 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 19 08:17:01.953546 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 19 08:17:01.953561 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 19 08:17:01.953569 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Aug 19 08:17:01.953576 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 19 08:17:01.953582 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 19 08:17:01.953590 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 19 08:17:01.953597 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 19 08:17:01.953604 kernel: NET: Registered PF_XDP protocol family
Aug 19 08:17:01.953611 kernel: PCI: CLS 0 bytes, default 64
Aug 19 08:17:01.953619 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 19 08:17:01.953628 kernel: software IO TLB: mapped [mem 0x000000003a9c4000-0x000000003e9c4000] (64MB)
Aug 19 08:17:01.953635 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Aug 19 08:17:01.953644 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Aug 19 08:17:01.953653 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Aug 19 08:17:01.953663 kernel: clocksource: Switched to clocksource tsc
Aug 19 08:17:01.953670 kernel: Initialise system trusted keyrings
Aug 19 08:17:01.953678 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 19 08:17:01.953686 kernel: Key type asymmetric registered
Aug 19 08:17:01.953694 kernel: Asymmetric key parser 'x509' registered
Aug 19 08:17:01.953701 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 19 08:17:01.953709 kernel: io scheduler mq-deadline registered
Aug 19 08:17:01.953717 kernel: io scheduler kyber registered
Aug 19 08:17:01.953724 kernel: io scheduler bfq registered
Aug 19 08:17:01.953734 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 19 08:17:01.953742 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 19 08:17:01.953750 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 19 08:17:01.953758 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 19 08:17:01.953767 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Aug 19 08:17:01.953775 kernel: i8042: PNP: No PS/2 controller found.
Aug 19 08:17:01.953903 kernel: rtc_cmos 00:02: registered as rtc0
Aug 19 08:17:01.953975 kernel: rtc_cmos 00:02: setting system clock to 2025-08-19T08:17:01 UTC (1755591421)
Aug 19 08:17:01.954043 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Aug 19 08:17:01.954053 kernel: intel_pstate: Intel P-state driver initializing
Aug 19 08:17:01.954062 kernel: efifb: probing for efifb
Aug 19 08:17:01.954071 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Aug 19 08:17:01.954079 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Aug 19 08:17:01.954087 kernel: efifb: scrolling: redraw
Aug 19 08:17:01.954095 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 19 08:17:01.954105 kernel: Console: switching to colour frame buffer device 128x48
Aug 19 08:17:01.954116 kernel: fb0: EFI VGA frame buffer device
Aug 19 08:17:01.954124 kernel: pstore: Using crash dump compression: deflate
Aug 19 08:17:01.954133 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 19 08:17:01.954142 kernel: NET: Registered PF_INET6 protocol family
Aug 19 08:17:01.954151 kernel: Segment Routing with IPv6
Aug 19 08:17:01.954159 kernel: In-situ OAM (IOAM) with IPv6
Aug 19 08:17:01.954168 kernel: NET: Registered PF_PACKET protocol family
Aug 19 08:17:01.954177 kernel: Key type dns_resolver registered
Aug 19 08:17:01.954186 kernel: IPI shorthand broadcast: enabled
Aug 19 08:17:01.954197 kernel: sched_clock: Marking stable (2906003769, 92716405)->(3290203698, -291483524)
Aug 19 08:17:01.954205 kernel: registered taskstats version 1
Aug 19 08:17:01.954214 kernel: Loading compiled-in X.509 certificates
Aug 19 08:17:01.954223 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: 93a065b103c00d4b81cc5822e4e7f9674e63afaf'
Aug 19 08:17:01.954231 kernel: Demotion targets for Node 0: null
Aug 19 08:17:01.954240 kernel: Key type .fscrypt registered
Aug 19 08:17:01.954249 kernel: Key type fscrypt-provisioning registered
Aug 19 08:17:01.954258 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 19 08:17:01.954267 kernel: ima: Allocated hash algorithm: sha1
Aug 19 08:17:01.954277 kernel: ima: No architecture policies found
Aug 19 08:17:01.954286 kernel: clk: Disabling unused clocks
Aug 19 08:17:01.954295 kernel: Warning: unable to open an initial console.
Aug 19 08:17:01.954304 kernel: Freeing unused kernel image (initmem) memory: 54040K
Aug 19 08:17:01.954312 kernel: Write protecting the kernel read-only data: 24576k
Aug 19 08:17:01.954322 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 19 08:17:01.954330 kernel: Run /init as init process
Aug 19 08:17:01.954338 kernel: with arguments:
Aug 19 08:17:01.954347 kernel: /init
Aug 19 08:17:01.954357 kernel: with environment:
Aug 19 08:17:01.954366 kernel: HOME=/
Aug 19 08:17:01.954375 kernel: TERM=linux
Aug 19 08:17:01.954383 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 19 08:17:01.954394 systemd[1]: Successfully made /usr/ read-only.
Aug 19 08:17:01.954406 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 19 08:17:01.954416 systemd[1]: Detected virtualization microsoft.
Aug 19 08:17:01.954426 systemd[1]: Detected architecture x86-64.
Aug 19 08:17:01.954436 systemd[1]: Running in initrd.
Aug 19 08:17:01.954446 systemd[1]: No hostname configured, using default hostname.
Aug 19 08:17:01.954455 systemd[1]: Hostname set to .
Aug 19 08:17:01.954465 systemd[1]: Initializing machine ID from random generator.
Aug 19 08:17:01.954474 systemd[1]: Queued start job for default target initrd.target.
Aug 19 08:17:01.954483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 08:17:01.954492 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 19 08:17:01.954502 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 19 08:17:01.954514 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 19 08:17:01.954523 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 19 08:17:01.954534 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 19 08:17:01.954551 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 19 08:17:01.954575 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 19 08:17:01.954584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 19 08:17:01.954596 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 19 08:17:01.954606 systemd[1]: Reached target paths.target - Path Units.
Aug 19 08:17:01.954615 systemd[1]: Reached target slices.target - Slice Units.
Aug 19 08:17:01.954624 systemd[1]: Reached target swap.target - Swaps.
Aug 19 08:17:01.954638 systemd[1]: Reached target timers.target - Timer Units.
Aug 19 08:17:01.954648 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 19 08:17:01.954658 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 19 08:17:01.954667 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 19 08:17:01.954676 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 19 08:17:01.954688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 08:17:01.954697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 19 08:17:01.954706 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 08:17:01.954714 systemd[1]: Reached target sockets.target - Socket Units.
Aug 19 08:17:01.954723 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 19 08:17:01.954731 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 19 08:17:01.954740 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 19 08:17:01.954749 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 19 08:17:01.954760 systemd[1]: Starting systemd-fsck-usr.service...
Aug 19 08:17:01.954770 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 19 08:17:01.954779 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 19 08:17:01.954798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 08:17:01.954827 systemd-journald[205]: Collecting audit messages is disabled.
Aug 19 08:17:01.954856 systemd-journald[205]: Journal started
Aug 19 08:17:01.954880 systemd-journald[205]: Runtime Journal (/run/log/journal/810f502f04434bf0868db91d1a233669) is 8M, max 158.9M, 150.9M free.
Aug 19 08:17:01.960013 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 19 08:17:01.966073 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 19 08:17:01.968254 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 08:17:01.970714 systemd[1]: Finished systemd-fsck-usr.service.
Aug 19 08:17:01.975666 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 19 08:17:01.976638 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 19 08:17:01.989168 systemd-modules-load[207]: Inserted module 'overlay'
Aug 19 08:17:01.995362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 08:17:02.001671 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 19 08:17:02.010991 systemd-tmpfiles[216]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 19 08:17:02.014692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 19 08:17:02.026570 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 19 08:17:02.028261 systemd-modules-load[207]: Inserted module 'br_netfilter' Aug 19 08:17:02.030635 kernel: Bridge firewalling registered Aug 19 08:17:02.029678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:17:02.031311 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:17:02.033652 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:17:02.038153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:17:02.052103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:17:02.055339 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:17:02.060150 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:17:02.065130 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 19 08:17:02.074538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:17:02.086540 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=cc23dd01793203541561c15ffc568736bb5dae0d652141296dd11bf777bdf42f Aug 19 08:17:02.120298 systemd-resolved[247]: Positive Trust Anchors: Aug 19 08:17:02.121852 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 19 08:17:02.121888 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:17:02.140096 systemd-resolved[247]: Defaulting to hostname 'linux'. Aug 19 08:17:02.142498 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:17:02.146488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:17:02.157572 kernel: SCSI subsystem initialized Aug 19 08:17:02.164570 kernel: Loading iSCSI transport class v2.0-870. Aug 19 08:17:02.172581 kernel: iscsi: registered transport (tcp) Aug 19 08:17:02.187571 kernel: iscsi: registered transport (qla4xxx) Aug 19 08:17:02.187604 kernel: QLogic iSCSI HBA Driver Aug 19 08:17:02.199438 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:17:02.207304 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:17:02.208455 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:17:02.236921 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 19 08:17:02.243547 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 19 08:17:02.283574 kernel: raid6: avx512x4 gen() 45136 MB/s Aug 19 08:17:02.300565 kernel: raid6: avx512x2 gen() 44154 MB/s Aug 19 08:17:02.318564 kernel: raid6: avx512x1 gen() 28875 MB/s Aug 19 08:17:02.336566 kernel: raid6: avx2x4 gen() 42344 MB/s Aug 19 08:17:02.353564 kernel: raid6: avx2x2 gen() 42695 MB/s Aug 19 08:17:02.371131 kernel: raid6: avx2x1 gen() 30464 MB/s Aug 19 08:17:02.371214 kernel: raid6: using algorithm avx512x4 gen() 45136 MB/s Aug 19 08:17:02.389629 kernel: raid6: .... xor() 7421 MB/s, rmw enabled Aug 19 08:17:02.389708 kernel: raid6: using avx512x2 recovery algorithm Aug 19 08:17:02.406573 kernel: xor: automatically using best checksumming function avx Aug 19 08:17:02.518575 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 19 08:17:02.523581 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:17:02.528654 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:17:02.550787 systemd-udevd[456]: Using default interface naming scheme 'v255'. Aug 19 08:17:02.555392 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:17:02.561023 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 19 08:17:02.577040 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Aug 19 08:17:02.593891 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:17:02.597396 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:17:02.625451 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:17:02.632148 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Aug 19 08:17:02.676594 kernel: cryptd: max_cpu_qlen set to 1000 Aug 19 08:17:02.685952 kernel: hv_vmbus: Vmbus version:5.3 Aug 19 08:17:02.691600 kernel: AES CTR mode by8 optimization enabled Aug 19 08:17:02.696184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:17:02.696531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:17:02.705475 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:17:02.710668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:17:02.721771 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:17:02.722607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:17:02.729324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:17:02.743744 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 19 08:17:02.743779 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Aug 19 08:17:02.743882 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 19 08:17:02.749570 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 19 08:17:02.755571 kernel: hv_vmbus: registering driver hv_pci Aug 19 08:17:02.760576 kernel: hv_vmbus: registering driver hv_netvsc Aug 19 08:17:02.765581 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Aug 19 08:17:02.770582 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 19 08:17:02.778333 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Aug 19 08:17:02.780584 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Aug 19 08:17:02.786967 kernel: hv_vmbus: registering driver hid_hyperv Aug 19 08:17:02.786997 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddeba60 (unnamed net_device) (uninitialized): VF slot 1 added Aug 19 08:17:02.787135 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Aug 19 08:17:02.795580 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Aug 19 08:17:02.799577 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 19 08:17:02.799603 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Aug 19 08:17:02.801261 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 19 08:17:02.804207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 08:17:02.822630 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Aug 19 08:17:02.824566 kernel: PTP clock support registered Aug 19 08:17:02.827741 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Aug 19 08:17:02.827894 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Aug 19 08:17:02.835754 kernel: hv_vmbus: registering driver hv_storvsc Aug 19 08:17:02.838574 kernel: scsi host0: storvsc_host_t Aug 19 08:17:02.840570 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Aug 19 08:17:02.840612 kernel: hv_utils: Registering HyperV Utility Driver Aug 19 08:17:02.842379 kernel: hv_vmbus: registering driver hv_utils Aug 19 08:17:02.844398 kernel: hv_utils: Heartbeat IC version 3.0 Aug 19 08:17:02.846082 kernel: hv_utils: Shutdown IC version 3.2 Aug 19 08:17:02.847970 kernel: hv_utils: TimeSync IC version 4.0 Aug 19 08:17:02.557071 systemd-resolved[247]: Clock change detected. Flushing caches. Aug 19 08:17:02.563271 systemd-journald[205]: Time jumped backwards, rotating. 
Aug 19 08:17:02.571602 kernel: nvme nvme0: pci function c05b:00:00.0 Aug 19 08:17:02.571803 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Aug 19 08:17:02.728738 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 19 08:17:02.734696 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 08:17:02.739139 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 19 08:17:02.739338 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 19 08:17:02.740736 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 19 08:17:02.757730 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#279 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Aug 19 08:17:02.772703 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#192 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Aug 19 08:17:03.071713 kernel: nvme nvme0: using unchecked data buffer Aug 19 08:17:03.266718 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Aug 19 08:17:03.278957 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Aug 19 08:17:03.300955 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Aug 19 08:17:03.317781 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Aug 19 08:17:03.323835 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Aug 19 08:17:03.325943 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 19 08:17:03.338033 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:17:03.343336 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:17:03.347638 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:17:03.352910 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Aug 19 08:17:03.358113 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 19 08:17:03.372703 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 08:17:03.375458 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:17:03.387695 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 08:17:03.532701 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Aug 19 08:17:03.536845 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Aug 19 08:17:03.537006 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Aug 19 08:17:03.538542 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Aug 19 08:17:03.543859 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Aug 19 08:17:03.546709 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Aug 19 08:17:03.551856 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Aug 19 08:17:03.551903 kernel: pci 7870:00:00.0: enabling Extended Tags Aug 19 08:17:03.567335 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Aug 19 08:17:03.567500 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Aug 19 08:17:03.571813 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Aug 19 08:17:03.575889 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) Aug 19 08:17:03.585694 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Aug 19 08:17:03.588356 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddeba60 eth0: VF registering: eth1 Aug 19 08:17:03.588511 kernel: mana 7870:00:00.0 eth1: joined to eth0 Aug 19 08:17:03.592698 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Aug 19 08:17:04.386696 disk-uuid[674]: The operation has completed successfully. 
Aug 19 08:17:04.388706 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 19 08:17:04.426881 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 19 08:17:04.426970 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 19 08:17:04.466200 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 19 08:17:04.490640 sh[715]: Success Aug 19 08:17:04.541184 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 19 08:17:04.541258 kernel: device-mapper: uevent: version 1.0.3 Aug 19 08:17:04.541276 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 19 08:17:04.551702 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 19 08:17:04.763399 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 19 08:17:04.770775 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 19 08:17:04.787624 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 19 08:17:04.800242 kernel: BTRFS: device fsid 99050df3-5e04-4f37-acde-dec46aab7896 devid 1 transid 39 /dev/mapper/usr (254:0) scanned by mount (728) Aug 19 08:17:04.800291 kernel: BTRFS info (device dm-0): first mount of filesystem 99050df3-5e04-4f37-acde-dec46aab7896 Aug 19 08:17:04.801515 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:17:04.803028 kernel: BTRFS info (device dm-0): using free-space-tree Aug 19 08:17:05.075712 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 19 08:17:05.078423 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:17:05.081133 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 19 08:17:05.081803 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Aug 19 08:17:05.089389 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 19 08:17:05.112709 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (761) Aug 19 08:17:05.114773 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:17:05.114815 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:17:05.115815 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 19 08:17:05.136698 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:17:05.137055 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 19 08:17:05.140125 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 19 08:17:05.166134 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:17:05.170583 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:17:05.192455 systemd-networkd[897]: lo: Link UP Aug 19 08:17:05.192462 systemd-networkd[897]: lo: Gained carrier Aug 19 08:17:05.197759 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Aug 19 08:17:05.193383 systemd-networkd[897]: Enumeration completed Aug 19 08:17:05.193772 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:17:05.206422 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Aug 19 08:17:05.206572 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddeba60 eth0: Data path switched to VF: enP30832s1 Aug 19 08:17:05.193776 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:17:05.194067 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Aug 19 08:17:05.196707 systemd[1]: Reached target network.target - Network. Aug 19 08:17:05.205731 systemd-networkd[897]: enP30832s1: Link UP Aug 19 08:17:05.205797 systemd-networkd[897]: eth0: Link UP Aug 19 08:17:05.205931 systemd-networkd[897]: eth0: Gained carrier Aug 19 08:17:05.205943 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:17:05.222983 systemd-networkd[897]: enP30832s1: Gained carrier Aug 19 08:17:05.237710 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 19 08:17:06.217940 ignition[850]: Ignition 2.21.0 Aug 19 08:17:06.217951 ignition[850]: Stage: fetch-offline Aug 19 08:17:06.218024 ignition[850]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:17:06.219962 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:17:06.218030 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 19 08:17:06.222793 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 19 08:17:06.218106 ignition[850]: parsed url from cmdline: "" Aug 19 08:17:06.218109 ignition[850]: no config URL provided Aug 19 08:17:06.218112 ignition[850]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 08:17:06.218117 ignition[850]: no config at "/usr/lib/ignition/user.ign" Aug 19 08:17:06.218121 ignition[850]: failed to fetch config: resource requires networking Aug 19 08:17:06.218257 ignition[850]: Ignition finished successfully Aug 19 08:17:06.242726 ignition[917]: Ignition 2.21.0 Aug 19 08:17:06.242735 ignition[917]: Stage: fetch Aug 19 08:17:06.242910 ignition[917]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:17:06.242917 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 19 08:17:06.242980 ignition[917]: parsed url from cmdline: "" Aug 19 08:17:06.242982 ignition[917]: no config URL provided Aug 19 08:17:06.242986 ignition[917]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 08:17:06.242991 ignition[917]: no config at "/usr/lib/ignition/user.ign" Aug 19 08:17:06.243020 ignition[917]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 19 08:17:06.306655 ignition[917]: GET result: OK Aug 19 08:17:06.306735 ignition[917]: config has been read from IMDS userdata Aug 19 08:17:06.306760 ignition[917]: parsing config with SHA512: aaf9edf3b8e79280a4ff0f9f13b1104928d4cd4b39882d576054ee43d2eebd345babdbf234f085154d3d048edab7a0298414c2f000b746c5554f083c0176d8d1 Aug 19 08:17:06.309852 unknown[917]: fetched base config from "system" Aug 19 08:17:06.310173 ignition[917]: fetch: fetch complete Aug 19 08:17:06.309858 unknown[917]: fetched base config from "system" Aug 19 08:17:06.310177 ignition[917]: fetch: fetch passed Aug 19 08:17:06.309863 unknown[917]: fetched user config from "azure" Aug 19 08:17:06.310212 ignition[917]: Ignition finished successfully Aug 19 08:17:06.312591 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 19 08:17:06.315186 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 19 08:17:06.337633 ignition[924]: Ignition 2.21.0 Aug 19 08:17:06.337643 ignition[924]: Stage: kargs Aug 19 08:17:06.339832 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 19 08:17:06.337817 ignition[924]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:17:06.343429 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 19 08:17:06.337825 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 19 08:17:06.338526 ignition[924]: kargs: kargs passed Aug 19 08:17:06.338557 ignition[924]: Ignition finished successfully Aug 19 08:17:06.359494 ignition[930]: Ignition 2.21.0 Aug 19 08:17:06.359513 ignition[930]: Stage: disks Aug 19 08:17:06.359695 ignition[930]: no configs at "/usr/lib/ignition/base.d" Aug 19 08:17:06.361231 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 19 08:17:06.359702 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 19 08:17:06.365134 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 19 08:17:06.360449 ignition[930]: disks: disks passed Aug 19 08:17:06.367736 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 19 08:17:06.360475 ignition[930]: Ignition finished successfully Aug 19 08:17:06.375721 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:17:06.376940 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:17:06.379724 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:17:06.384925 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 19 08:17:06.507496 systemd-fsck[938]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Aug 19 08:17:06.510988 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 19 08:17:06.515594 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 19 08:17:06.761699 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 41966107-04fa-426e-9830-6b4efa50e27b r/w with ordered data mode. Quota mode: none. Aug 19 08:17:06.762140 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 19 08:17:06.766092 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 19 08:17:06.783809 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:17:06.793761 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 19 08:17:06.798183 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 19 08:17:06.802202 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 19 08:17:06.811935 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (947) Aug 19 08:17:06.811955 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:17:06.811965 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:17:06.811974 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 19 08:17:06.802229 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:17:06.816419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 08:17:06.817659 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 19 08:17:06.823794 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 19 08:17:07.092818 systemd-networkd[897]: eth0: Gained IPv6LL Aug 19 08:17:07.318370 coreos-metadata[949]: Aug 19 08:17:07.318 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 19 08:17:07.322227 coreos-metadata[949]: Aug 19 08:17:07.322 INFO Fetch successful Aug 19 08:17:07.324771 coreos-metadata[949]: Aug 19 08:17:07.322 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 19 08:17:07.331013 coreos-metadata[949]: Aug 19 08:17:07.330 INFO Fetch successful Aug 19 08:17:07.345450 coreos-metadata[949]: Aug 19 08:17:07.345 INFO wrote hostname ci-4426.0.0-a-f24a6b1c57 to /sysroot/etc/hostname Aug 19 08:17:07.349044 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 19 08:17:07.526013 initrd-setup-root[977]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 08:17:07.558281 initrd-setup-root[984]: cut: /sysroot/etc/group: No such file or directory Aug 19 08:17:07.590635 initrd-setup-root[991]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 08:17:07.611924 initrd-setup-root[998]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 08:17:08.552483 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 19 08:17:08.554244 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 19 08:17:08.565948 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 08:17:08.577374 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 08:17:08.581027 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:17:08.597144 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 19 08:17:08.622906 ignition[1066]: INFO : Ignition 2.21.0 Aug 19 08:17:08.622906 ignition[1066]: INFO : Stage: mount Aug 19 08:17:08.630461 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:17:08.630461 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 19 08:17:08.630461 ignition[1066]: INFO : mount: mount passed Aug 19 08:17:08.630461 ignition[1066]: INFO : Ignition finished successfully Aug 19 08:17:08.624786 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 08:17:08.628780 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 08:17:08.650651 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 08:17:08.671885 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1078) Aug 19 08:17:08.671925 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 43dd0637-5e0b-4b8d-a544-a82ca0652f6f Aug 19 08:17:08.672812 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 19 08:17:08.673735 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Aug 19 08:17:08.678657 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 19 08:17:08.702045 ignition[1094]: INFO : Ignition 2.21.0 Aug 19 08:17:08.702045 ignition[1094]: INFO : Stage: files Aug 19 08:17:08.705978 ignition[1094]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:17:08.705978 ignition[1094]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 19 08:17:08.705978 ignition[1094]: DEBUG : files: compiled without relabeling support, skipping Aug 19 08:17:08.718566 ignition[1094]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 08:17:08.721771 ignition[1094]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 08:17:08.739128 ignition[1094]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 08:17:08.741076 ignition[1094]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 08:17:08.741076 ignition[1094]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 08:17:08.740491 unknown[1094]: wrote ssh authorized keys file for user: core Aug 19 08:17:08.770735 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 19 08:17:08.773233 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 19 08:17:08.812997 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 19 08:17:08.933225 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 19 08:17:08.937760 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 08:17:08.937760 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Aug 19 08:17:09.145698 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 19 08:17:09.331226 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 08:17:09.335847 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 19 08:17:09.335847 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 08:17:09.335847 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:17:09.335847 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 08:17:09.335847 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:17:09.335847 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 08:17:09.335847 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:17:09.335847 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 08:17:09.358782 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 08:17:09.358782 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 19 08:17:09.358782 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 19 08:17:09.358782 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 19 08:17:09.358782 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 19 08:17:09.358782 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 19 08:17:09.604838 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 19 08:17:10.196802 ignition[1094]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 19 08:17:10.196802 ignition[1094]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 19 08:17:10.243259 ignition[1094]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:17:10.251450 ignition[1094]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 08:17:10.251450 ignition[1094]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 19 08:17:10.251450 ignition[1094]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 19 08:17:10.266123 ignition[1094]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 08:17:10.266123 ignition[1094]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 19 08:17:10.266123 ignition[1094]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 19 08:17:10.266123 ignition[1094]: INFO : files: files passed Aug 19 08:17:10.266123 ignition[1094]: INFO : Ignition finished successfully Aug 19 08:17:10.255381 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 19 08:17:10.259224 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 08:17:10.267272 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 08:17:10.272667 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 19 08:17:10.272758 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 19 08:17:10.292731 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:17:10.292731 initrd-setup-root-after-ignition[1124]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:17:10.302453 initrd-setup-root-after-ignition[1128]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 08:17:10.295914 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:17:10.299058 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 08:17:10.302956 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 19 08:17:10.350086 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 08:17:10.350166 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 08:17:10.354901 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 19 08:17:10.359524 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 19 08:17:10.363364 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 08:17:10.365036 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 19 08:17:10.380877 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:17:10.383426 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 19 08:17:10.406606 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:17:10.407108 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:17:10.407347 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 08:17:10.407955 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 08:17:10.408066 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 08:17:10.408623 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 08:17:10.408938 systemd[1]: Stopped target basic.target - Basic System. Aug 19 08:17:10.409511 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 08:17:10.409763 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 08:17:10.410042 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 08:17:10.410567 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 08:17:10.410914 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 08:17:10.411179 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 08:17:10.411468 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 08:17:10.411670 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 08:17:10.412187 systemd[1]: Stopped target swap.target - Swaps. 
Aug 19 08:17:10.412447 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 08:17:10.412542 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 08:17:10.413384 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:17:10.413737 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:17:10.413965 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 08:17:10.414572 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:17:10.455873 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 08:17:10.456008 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 08:17:10.458117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 08:17:10.458212 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 08:17:10.462847 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 08:17:10.462971 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 19 08:17:10.465396 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 19 08:17:10.465494 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 19 08:17:10.473507 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 08:17:10.483845 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 19 08:17:10.501377 ignition[1148]: INFO : Ignition 2.21.0 Aug 19 08:17:10.501377 ignition[1148]: INFO : Stage: umount Aug 19 08:17:10.501377 ignition[1148]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 08:17:10.501377 ignition[1148]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 19 08:17:10.498460 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Aug 19 08:17:10.517803 ignition[1148]: INFO : umount: umount passed Aug 19 08:17:10.517803 ignition[1148]: INFO : Ignition finished successfully Aug 19 08:17:10.498611 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:17:10.509632 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 08:17:10.509826 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 08:17:10.522171 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 08:17:10.522250 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 08:17:10.525005 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 08:17:10.525084 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 08:17:10.536018 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 08:17:10.536066 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 08:17:10.538915 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 08:17:10.538954 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 08:17:10.539142 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 19 08:17:10.539170 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 19 08:17:10.539423 systemd[1]: Stopped target network.target - Network. Aug 19 08:17:10.539448 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 08:17:10.539475 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 08:17:10.539677 systemd[1]: Stopped target paths.target - Path Units. Aug 19 08:17:10.539710 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 08:17:10.544758 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:17:10.551239 systemd[1]: Stopped target slices.target - Slice Units. 
Aug 19 08:17:10.555535 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 08:17:10.558767 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 08:17:10.558803 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 08:17:10.558972 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 08:17:10.559000 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 08:17:10.559263 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 08:17:10.559305 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 08:17:10.566754 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 19 08:17:10.566794 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 08:17:10.568985 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 08:17:10.573802 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 08:17:10.578842 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 08:17:10.584588 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 08:17:10.584675 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 08:17:10.590974 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 08:17:10.591215 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 08:17:10.591303 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 19 08:17:10.595039 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 08:17:10.595678 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 08:17:10.598725 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 08:17:10.598759 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:17:10.604762 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Aug 19 08:17:10.608534 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 08:17:10.608584 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 08:17:10.614287 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 08:17:10.614330 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:17:10.631315 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 19 08:17:10.657760 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddeba60 eth0: Data path switched from VF: enP30832s1 Aug 19 08:17:10.657911 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Aug 19 08:17:10.631361 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 19 08:17:10.632177 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 19 08:17:10.632211 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:17:10.632850 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:17:10.633882 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 19 08:17:10.633932 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:17:10.657021 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 19 08:17:10.657115 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:17:10.658955 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 19 08:17:10.659025 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 19 08:17:10.666251 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 19 08:17:10.666316 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 19 08:17:10.670772 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Aug 19 08:17:10.670802 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 19 08:17:10.674737 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 19 08:17:10.674779 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 19 08:17:10.679021 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 19 08:17:10.679060 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 19 08:17:10.680458 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 19 08:17:10.680491 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 19 08:17:10.681337 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 19 08:17:10.681416 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 19 08:17:10.681455 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:17:10.683510 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 19 08:17:10.683566 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:17:10.692337 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 19 08:17:10.692377 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:17:10.696997 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 19 08:17:10.697030 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:17:10.702754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 08:17:10.702796 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:17:10.707984 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. 
Aug 19 08:17:10.708033 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Aug 19 08:17:10.708066 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 19 08:17:10.708100 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 19 08:17:10.708421 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 19 08:17:10.708507 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 19 08:17:11.035759 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 19 08:17:11.035852 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 19 08:17:11.039690 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 19 08:17:11.045755 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 19 08:17:11.045818 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 19 08:17:11.051375 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 19 08:17:11.062177 systemd[1]: Switching root. Aug 19 08:17:11.146469 systemd-journald[205]: Journal stopped Aug 19 08:17:16.714470 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). 
Aug 19 08:17:16.714496 kernel: SELinux: policy capability network_peer_controls=1 Aug 19 08:17:16.714508 kernel: SELinux: policy capability open_perms=1 Aug 19 08:17:16.714516 kernel: SELinux: policy capability extended_socket_class=1 Aug 19 08:17:16.714523 kernel: SELinux: policy capability always_check_network=0 Aug 19 08:17:16.714531 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 19 08:17:16.714540 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 19 08:17:16.714549 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 19 08:17:16.714557 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 19 08:17:16.714565 kernel: SELinux: policy capability userspace_initial_context=0 Aug 19 08:17:16.714572 kernel: audit: type=1403 audit(1755591432.172:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 19 08:17:16.714582 systemd[1]: Successfully loaded SELinux policy in 144.417ms. Aug 19 08:17:16.714594 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.080ms. Aug 19 08:17:16.714604 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 19 08:17:16.714615 systemd[1]: Detected virtualization microsoft. Aug 19 08:17:16.714625 systemd[1]: Detected architecture x86-64. Aug 19 08:17:16.714634 systemd[1]: Detected first boot. Aug 19 08:17:16.714644 systemd[1]: Hostname set to . Aug 19 08:17:16.714655 systemd[1]: Initializing machine ID from random generator. Aug 19 08:17:16.714665 zram_generator::config[1192]: No configuration found. 
Aug 19 08:17:16.714675 kernel: Guest personality initialized and is inactive Aug 19 08:17:16.717904 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Aug 19 08:17:16.717926 kernel: Initialized host personality Aug 19 08:17:16.717934 kernel: NET: Registered PF_VSOCK protocol family Aug 19 08:17:16.717946 systemd[1]: Populated /etc with preset unit settings. Aug 19 08:17:16.717962 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 19 08:17:16.717972 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 19 08:17:16.717982 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 19 08:17:16.717991 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 19 08:17:16.718002 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 19 08:17:16.718015 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 19 08:17:16.718030 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 19 08:17:16.718041 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 19 08:17:16.718054 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 19 08:17:16.718066 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 19 08:17:16.718078 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 19 08:17:16.718090 systemd[1]: Created slice user.slice - User and Session Slice. Aug 19 08:17:16.718103 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 08:17:16.718117 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 08:17:16.718127 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Aug 19 08:17:16.718143 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 19 08:17:16.718160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 19 08:17:16.718173 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 19 08:17:16.718185 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 19 08:17:16.718199 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 19 08:17:16.718210 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 19 08:17:16.718223 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 19 08:17:16.718238 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 19 08:17:16.718253 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 19 08:17:16.718264 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 19 08:17:16.718279 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 08:17:16.718291 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 08:17:16.718304 systemd[1]: Reached target slices.target - Slice Units. Aug 19 08:17:16.718316 systemd[1]: Reached target swap.target - Swaps. Aug 19 08:17:16.718328 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 19 08:17:16.718342 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 19 08:17:16.718356 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 19 08:17:16.718373 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 19 08:17:16.718385 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 19 08:17:16.718400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Aug 19 08:17:16.718413 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 19 08:17:16.718431 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 19 08:17:16.718444 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 19 08:17:16.718454 systemd[1]: Mounting media.mount - External Media Directory... Aug 19 08:17:16.718465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:17:16.718478 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 19 08:17:16.718493 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 19 08:17:16.718504 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 19 08:17:16.718517 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 19 08:17:16.718530 systemd[1]: Reached target machines.target - Containers. Aug 19 08:17:16.718545 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 19 08:17:16.718558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:17:16.724667 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 19 08:17:16.724701 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 19 08:17:16.724711 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:17:16.724719 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:17:16.724729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:17:16.724738 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Aug 19 08:17:16.724750 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:17:16.724760 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 19 08:17:16.724769 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 19 08:17:16.724778 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 19 08:17:16.724789 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 19 08:17:16.724798 systemd[1]: Stopped systemd-fsck-usr.service. Aug 19 08:17:16.724809 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:17:16.724818 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 19 08:17:16.724829 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 19 08:17:16.724837 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 19 08:17:16.724846 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 19 08:17:16.724856 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 19 08:17:16.724889 systemd-journald[1275]: Collecting audit messages is disabled. Aug 19 08:17:16.724914 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 08:17:16.724923 systemd[1]: verity-setup.service: Deactivated successfully. Aug 19 08:17:16.724933 systemd[1]: Stopped verity-setup.service. Aug 19 08:17:16.724943 systemd-journald[1275]: Journal started Aug 19 08:17:16.724964 systemd-journald[1275]: Runtime Journal (/run/log/journal/c3a773cace54420da8e4f4e565810593) is 8M, max 158.9M, 150.9M free. 
Aug 19 08:17:16.344736 systemd[1]: Queued start job for default target multi-user.target. Aug 19 08:17:16.352068 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 19 08:17:16.352349 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 19 08:17:16.729700 kernel: loop: module loaded Aug 19 08:17:16.734704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:17:16.740698 systemd[1]: Started systemd-journald.service - Journal Service. Aug 19 08:17:16.742354 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 19 08:17:16.744664 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 19 08:17:16.747318 systemd[1]: Mounted media.mount - External Media Directory. Aug 19 08:17:16.749267 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 19 08:17:16.752857 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 19 08:17:16.754961 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 19 08:17:16.757333 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 19 08:17:16.760053 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 19 08:17:16.761877 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 19 08:17:16.764574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:17:16.764731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:17:16.767168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:17:16.767317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:17:16.770206 kernel: fuse: init (API version 7.41) Aug 19 08:17:16.773102 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Aug 19 08:17:16.774449 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 19 08:17:16.776843 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:17:16.777829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:17:16.779894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 19 08:17:16.782775 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 19 08:17:16.784655 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 19 08:17:16.793186 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 19 08:17:16.796704 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 19 08:17:16.806895 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 19 08:17:16.808950 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 19 08:17:16.808984 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 08:17:16.812630 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 19 08:17:16.818756 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 19 08:17:16.820310 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:17:16.828499 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 19 08:17:16.831830 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 19 08:17:16.842254 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 19 08:17:16.844818 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 19 08:17:16.847004 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:17:16.850787 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:17:16.855794 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 19 08:17:16.865541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 19 08:17:16.870817 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 19 08:17:16.873336 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 19 08:17:16.875886 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 19 08:17:16.878051 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 19 08:17:16.888997 systemd-journald[1275]: Time spent on flushing to /var/log/journal/c3a773cace54420da8e4f4e565810593 is 32.534ms for 988 entries. Aug 19 08:17:16.888997 systemd-journald[1275]: System Journal (/var/log/journal/c3a773cace54420da8e4f4e565810593) is 11.9M, max 2.6G, 2.6G free. Aug 19 08:17:16.986293 systemd-journald[1275]: Received client request to flush runtime journal. Aug 19 08:17:16.986335 kernel: ACPI: bus type drm_connector registered Aug 19 08:17:16.986356 systemd-journald[1275]: /var/log/journal/c3a773cace54420da8e4f4e565810593/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Aug 19 08:17:16.986385 systemd-journald[1275]: Rotating system journal. Aug 19 08:17:16.986409 kernel: loop0: detected capacity change from 0 to 111000 Aug 19 08:17:16.895035 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Aug 19 08:17:16.898848 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 19 08:17:16.903825 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 19 08:17:16.905922 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 08:17:16.926908 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:17:16.927727 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:17:16.955760 systemd-tmpfiles[1331]: ACLs are not supported, ignoring. Aug 19 08:17:16.955769 systemd-tmpfiles[1331]: ACLs are not supported, ignoring. Aug 19 08:17:16.957669 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 19 08:17:16.961479 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 19 08:17:16.967829 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:17:16.986881 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 19 08:17:16.989996 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 19 08:17:17.050440 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 19 08:17:17.053604 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 19 08:17:17.073326 systemd-tmpfiles[1352]: ACLs are not supported, ignoring. Aug 19 08:17:17.073345 systemd-tmpfiles[1352]: ACLs are not supported, ignoring. Aug 19 08:17:17.075305 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 19 08:17:17.323701 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 19 08:17:17.340703 kernel: loop1: detected capacity change from 0 to 229808 Aug 19 08:17:17.353454 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Aug 19 08:17:17.381696 kernel: loop2: detected capacity change from 0 to 128016 Aug 19 08:17:17.690700 kernel: loop3: detected capacity change from 0 to 29256 Aug 19 08:17:18.023932 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 19 08:17:18.028986 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 08:17:18.057449 systemd-udevd[1360]: Using default interface naming scheme 'v255'. Aug 19 08:17:18.102715 kernel: loop4: detected capacity change from 0 to 111000 Aug 19 08:17:18.113707 kernel: loop5: detected capacity change from 0 to 229808 Aug 19 08:17:18.131713 kernel: loop6: detected capacity change from 0 to 128016 Aug 19 08:17:18.145696 kernel: loop7: detected capacity change from 0 to 29256 Aug 19 08:17:18.153963 (sd-merge)[1362]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Aug 19 08:17:18.154316 (sd-merge)[1362]: Merged extensions into '/usr'. Aug 19 08:17:18.157424 systemd[1]: Reload requested from client PID 1330 ('systemd-sysext') (unit systemd-sysext.service)... Aug 19 08:17:18.157435 systemd[1]: Reloading... Aug 19 08:17:18.203724 zram_generator::config[1388]: No configuration found. 
Aug 19 08:17:18.397705 kernel: hv_vmbus: registering driver hyperv_fb Aug 19 08:17:18.409704 kernel: mousedev: PS/2 mouse device common for all mice Aug 19 08:17:18.482763 kernel: hv_vmbus: registering driver hv_balloon Aug 19 08:17:18.489779 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 19 08:17:18.489825 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Aug 19 08:17:18.492704 kernel: Console: switching to colour dummy device 80x25 Aug 19 08:17:18.496742 kernel: Console: switching to colour frame buffer device 128x48 Aug 19 08:17:18.503700 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 19 08:17:18.537947 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 19 08:17:18.538485 systemd[1]: Reloading finished in 380 ms. Aug 19 08:17:18.555115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 08:17:18.561979 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 19 08:17:18.582788 systemd[1]: Starting ensure-sysext.service... Aug 19 08:17:18.586840 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 08:17:18.593701 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#278 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Aug 19 08:17:18.596861 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 19 08:17:18.619985 systemd[1]: Reload requested from client PID 1506 ('systemctl') (unit ensure-sysext.service)... Aug 19 08:17:18.620003 systemd[1]: Reloading... Aug 19 08:17:18.657820 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 19 08:17:18.657844 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Aug 19 08:17:18.658069 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 19 08:17:18.658300 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 19 08:17:18.659027 systemd-tmpfiles[1508]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 19 08:17:18.659259 systemd-tmpfiles[1508]: ACLs are not supported, ignoring. Aug 19 08:17:18.659302 systemd-tmpfiles[1508]: ACLs are not supported, ignoring. Aug 19 08:17:18.669240 systemd-tmpfiles[1508]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:17:18.669255 systemd-tmpfiles[1508]: Skipping /boot Aug 19 08:17:18.677490 systemd-tmpfiles[1508]: Detected autofs mount point /boot during canonicalization of boot. Aug 19 08:17:18.677498 systemd-tmpfiles[1508]: Skipping /boot Aug 19 08:17:18.712362 zram_generator::config[1534]: No configuration found. Aug 19 08:17:18.888771 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Aug 19 08:17:18.961326 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Aug 19 08:17:18.963384 systemd[1]: Reloading finished in 343 ms. Aug 19 08:17:18.984038 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 08:17:19.018547 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:17:19.019529 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 08:17:19.024302 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 19 08:17:19.026768 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:17:19.028404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Aug 19 08:17:19.031376 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:17:19.035963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:17:19.037798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:17:19.039960 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 19 08:17:19.042226 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:17:19.043284 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 19 08:17:19.048926 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 19 08:17:19.053449 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 19 08:17:19.061345 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 19 08:17:19.067778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 08:17:19.069170 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:17:19.075338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:17:19.075487 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:17:19.076430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:17:19.077008 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:17:19.077861 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Aug 19 08:17:19.078032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 19 08:17:19.091364 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 19 08:17:19.097905 systemd[1]: Finished ensure-sysext.service. Aug 19 08:17:19.101540 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:17:19.101781 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 19 08:17:19.103458 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 19 08:17:19.106553 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 19 08:17:19.112457 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 19 08:17:19.115823 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 19 08:17:19.116340 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 19 08:17:19.116370 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 19 08:17:19.116429 systemd[1]: Reached target time-set.target - System Time Set. Aug 19 08:17:19.116598 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 19 08:17:19.122039 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 19 08:17:19.131775 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 19 08:17:19.132450 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 19 08:17:19.133628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 19 08:17:19.134895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 19 08:17:19.136629 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 19 08:17:19.138092 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 19 08:17:19.138651 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 19 08:17:19.141119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 19 08:17:19.141281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 19 08:17:19.141666 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 19 08:17:19.154243 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 19 08:17:19.187160 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 19 08:17:19.210471 augenrules[1666]: No rules Aug 19 08:17:19.211448 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 08:17:19.211648 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 08:17:19.292205 systemd-networkd[1507]: lo: Link UP Aug 19 08:17:19.292392 systemd-networkd[1507]: lo: Gained carrier Aug 19 08:17:19.293315 systemd-networkd[1507]: Enumeration completed Aug 19 08:17:19.293423 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 08:17:19.293574 systemd-networkd[1507]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:17:19.293583 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 19 08:17:19.294708 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Aug 19 08:17:19.297352 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 19 08:17:19.298551 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Aug 19 08:17:19.299716 kernel: hv_netvsc f8615163-0000-1000-2000-6045bddeba60 eth0: Data path switched to VF: enP30832s1 Aug 19 08:17:19.299999 systemd-networkd[1507]: enP30832s1: Link UP Aug 19 08:17:19.300071 systemd-networkd[1507]: eth0: Link UP Aug 19 08:17:19.300073 systemd-networkd[1507]: eth0: Gained carrier Aug 19 08:17:19.300088 systemd-networkd[1507]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:17:19.300108 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 19 08:17:19.306900 systemd-networkd[1507]: enP30832s1: Gained carrier Aug 19 08:17:19.314753 systemd-networkd[1507]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 19 08:17:19.330100 systemd-resolved[1618]: Positive Trust Anchors:
Aug 19 08:17:19.330113 systemd-resolved[1618]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 19 08:17:19.330148 systemd-resolved[1618]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 19 08:17:19.347378 systemd-resolved[1618]: Using system hostname 'ci-4426.0.0-a-f24a6b1c57'. Aug 19 08:17:19.348438 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 19 08:17:19.348813 systemd[1]: Reached target network.target - Network. Aug 19 08:17:19.348843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 19 08:17:19.436529 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 19 08:17:19.441279 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 08:17:19.644426 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 19 08:17:19.646243 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 19 08:17:20.916814 systemd-networkd[1507]: eth0: Gained IPv6LL Aug 19 08:17:20.918834 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 19 08:17:20.920314 systemd[1]: Reached target network-online.target - Network is Online.
Aug 19 08:17:22.671223 ldconfig[1325]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 19 08:17:22.680055 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 19 08:17:22.685133 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 19 08:17:22.699159 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 19 08:17:22.701898 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 08:17:22.704855 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 19 08:17:22.706513 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 19 08:17:22.708165 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 19 08:17:22.709876 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 19 08:17:22.712812 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 19 08:17:22.715736 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 19 08:17:22.718736 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 19 08:17:22.718770 systemd[1]: Reached target paths.target - Path Units. Aug 19 08:17:22.719951 systemd[1]: Reached target timers.target - Timer Units. Aug 19 08:17:22.723602 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 19 08:17:22.726087 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 19 08:17:22.730355 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 19 08:17:22.733877 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Aug 19 08:17:22.737772 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 19 08:17:22.745122 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 19 08:17:22.747974 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 19 08:17:22.751297 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 19 08:17:22.754395 systemd[1]: Reached target sockets.target - Socket Units. Aug 19 08:17:22.755600 systemd[1]: Reached target basic.target - Basic System. Aug 19 08:17:22.757763 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:17:22.757785 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 19 08:17:22.759425 systemd[1]: Starting chronyd.service - NTP client/server... Aug 19 08:17:22.763126 systemd[1]: Starting containerd.service - containerd container runtime... Aug 19 08:17:22.766711 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 19 08:17:22.771546 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 19 08:17:22.779788 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 19 08:17:22.788034 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 19 08:17:22.791258 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 19 08:17:22.794962 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 19 08:17:22.795837 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 19 08:17:22.798245 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). 
Aug 19 08:17:22.802813 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Aug 19 08:17:22.807839 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Aug 19 08:17:22.810148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 08:17:22.814800 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 19 08:17:22.816881 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Refreshing passwd entry cache Aug 19 08:17:22.817174 oslogin_cache_refresh[1696]: Refreshing passwd entry cache Aug 19 08:17:22.820798 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 19 08:17:22.826830 jq[1694]: false Aug 19 08:17:22.827152 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 19 08:17:22.828127 KVP[1697]: KVP starting; pid is:1697 Aug 19 08:17:22.833317 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 19 08:17:22.836771 oslogin_cache_refresh[1696]: Failure getting users, quitting Aug 19 08:17:22.837561 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Failure getting users, quitting Aug 19 08:17:22.837561 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:17:22.837561 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Refreshing group entry cache Aug 19 08:17:22.836784 oslogin_cache_refresh[1696]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 19 08:17:22.836818 oslogin_cache_refresh[1696]: Refreshing group entry cache Aug 19 08:17:22.838323 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Aug 19 08:17:22.840207 KVP[1697]: KVP LIC Version: 3.1 Aug 19 08:17:22.840750 kernel: hv_utils: KVP IC version 4.0 Aug 19 08:17:22.850190 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 19 08:17:22.852078 chronyd[1686]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Aug 19 08:17:22.856662 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Failure getting groups, quitting Aug 19 08:17:22.856662 google_oslogin_nss_cache[1696]: oslogin_cache_refresh[1696]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:17:22.856753 extend-filesystems[1695]: Found /dev/nvme0n1p6 Aug 19 08:17:22.854351 oslogin_cache_refresh[1696]: Failure getting groups, quitting Aug 19 08:17:22.855075 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 19 08:17:22.854362 oslogin_cache_refresh[1696]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 19 08:17:22.855489 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 19 08:17:22.857879 systemd[1]: Starting update-engine.service - Update Engine... Aug 19 08:17:22.861592 chronyd[1686]: Timezone right/UTC failed leap second check, ignoring Aug 19 08:17:22.861902 chronyd[1686]: Loaded seccomp filter (level 2) Aug 19 08:17:22.864514 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 19 08:17:22.867358 systemd[1]: Started chronyd.service - NTP client/server. Aug 19 08:17:22.871081 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 19 08:17:22.875158 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Aug 19 08:17:22.876234 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 19 08:17:22.876483 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 19 08:17:22.876654 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 19 08:17:22.885222 extend-filesystems[1695]: Found /dev/nvme0n1p9 Aug 19 08:17:22.886577 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 19 08:17:22.887184 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 19 08:17:22.893906 extend-filesystems[1695]: Checking size of /dev/nvme0n1p9 Aug 19 08:17:22.895945 systemd[1]: motdgen.service: Deactivated successfully. Aug 19 08:17:22.897872 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 19 08:17:22.904067 jq[1713]: true Aug 19 08:17:22.913216 (ntainerd)[1729]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 19 08:17:22.921575 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 19 08:17:22.929364 jq[1738]: true Aug 19 08:17:22.973935 update_engine[1712]: I20250819 08:17:22.971899 1712 main.cc:92] Flatcar Update Engine starting Aug 19 08:17:22.983450 tar[1721]: linux-amd64/LICENSE Aug 19 08:17:22.983450 tar[1721]: linux-amd64/helm Aug 19 08:17:22.987907 systemd-logind[1709]: New seat seat0. Aug 19 08:17:22.992202 systemd-logind[1709]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 19 08:17:22.992342 systemd[1]: Started systemd-logind.service - User Login Management. Aug 19 08:17:23.015423 bash[1760]: Updated "/home/core/.ssh/authorized_keys" Aug 19 08:17:23.016269 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 19 08:17:23.020341 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Aug 19 08:17:23.023869 extend-filesystems[1695]: Old size kept for /dev/nvme0n1p9 Aug 19 08:17:23.025706 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 19 08:17:23.025945 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 19 08:17:23.041965 dbus-daemon[1689]: [system] SELinux support is enabled Aug 19 08:17:23.042827 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 19 08:17:23.048274 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 19 08:17:23.048299 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 19 08:17:23.053321 update_engine[1712]: I20250819 08:17:23.052623 1712 update_check_scheduler.cc:74] Next update check in 7m32s Aug 19 08:17:23.053798 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 19 08:17:23.053818 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 19 08:17:23.060116 systemd[1]: Started update-engine.service - Update Engine. Aug 19 08:17:23.063080 dbus-daemon[1689]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 19 08:17:23.066844 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Aug 19 08:17:23.121140 coreos-metadata[1688]: Aug 19 08:17:23.120 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 19 08:17:23.126306 coreos-metadata[1688]: Aug 19 08:17:23.125 INFO Fetch successful Aug 19 08:17:23.128858 coreos-metadata[1688]: Aug 19 08:17:23.126 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Aug 19 08:17:23.129906 coreos-metadata[1688]: Aug 19 08:17:23.129 INFO Fetch successful Aug 19 08:17:23.132202 coreos-metadata[1688]: Aug 19 08:17:23.131 INFO Fetching http://168.63.129.16/machine/552457c9-384e-43b9-adeb-0899fc9570d6/68389136%2D4fbe%2D4209%2D9c66%2D5b4b0c7258cd.%5Fci%2D4426.0.0%2Da%2Df24a6b1c57?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Aug 19 08:17:23.133477 coreos-metadata[1688]: Aug 19 08:17:23.132 INFO Fetch successful Aug 19 08:17:23.133477 coreos-metadata[1688]: Aug 19 08:17:23.133 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Aug 19 08:17:23.146268 coreos-metadata[1688]: Aug 19 08:17:23.146 INFO Fetch successful Aug 19 08:17:23.217102 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 19 08:17:23.219947 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 19 08:17:23.466285 locksmithd[1773]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 19 08:17:23.598423 sshd_keygen[1725]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 19 08:17:23.623963 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 19 08:17:23.631910 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 19 08:17:23.636387 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Aug 19 08:17:23.661821 systemd[1]: issuegen.service: Deactivated successfully. Aug 19 08:17:23.662383 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Aug 19 08:17:23.670800 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 19 08:17:23.683010 tar[1721]: linux-amd64/README.md Aug 19 08:17:23.689666 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Aug 19 08:17:23.702569 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 19 08:17:23.706171 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 19 08:17:23.710566 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 19 08:17:23.713913 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 19 08:17:23.716110 systemd[1]: Reached target getty.target - Login Prompts. Aug 19 08:17:23.788733 containerd[1729]: time="2025-08-19T08:17:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 19 08:17:23.789701 containerd[1729]: time="2025-08-19T08:17:23.789209093Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Aug 19 08:17:23.799347 containerd[1729]: time="2025-08-19T08:17:23.799222888Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.789µs" Aug 19 08:17:23.799347 containerd[1729]: time="2025-08-19T08:17:23.799253853Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 19 08:17:23.799347 containerd[1729]: time="2025-08-19T08:17:23.799272431Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 19 08:17:23.799444 containerd[1729]: time="2025-08-19T08:17:23.799396813Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 19 08:17:23.799444 containerd[1729]: time="2025-08-19T08:17:23.799409053Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 19 08:17:23.799444 containerd[1729]: time="2025-08-19T08:17:23.799430268Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:17:23.799513 containerd[1729]: time="2025-08-19T08:17:23.799474983Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 19 08:17:23.799513 containerd[1729]: time="2025-08-19T08:17:23.799484727Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:17:23.799690 containerd[1729]: time="2025-08-19T08:17:23.799664720Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 19 08:17:23.799722 containerd[1729]: time="2025-08-19T08:17:23.799677991Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:17:23.799722 containerd[1729]: time="2025-08-19T08:17:23.799702562Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 19 08:17:23.799722 containerd[1729]: time="2025-08-19T08:17:23.799710507Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 19 08:17:23.799795 containerd[1729]: time="2025-08-19T08:17:23.799773691Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 19 08:17:23.800301 containerd[1729]: time="2025-08-19T08:17:23.799933127Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 19 08:17:23.800301 containerd[1729]: time="2025-08-19T08:17:23.799961065Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 19 08:17:23.800301 containerd[1729]: time="2025-08-19T08:17:23.799970114Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 19 08:17:23.800301 containerd[1729]: time="2025-08-19T08:17:23.799994467Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 19 08:17:23.800301 containerd[1729]: time="2025-08-19T08:17:23.800187100Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 19 08:17:23.800301 containerd[1729]: time="2025-08-19T08:17:23.800218329Z" level=info msg="metadata content store policy set" policy=shared Aug 19 08:17:23.814667 containerd[1729]: time="2025-08-19T08:17:23.814613579Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 19 08:17:23.814835 containerd[1729]: time="2025-08-19T08:17:23.814814373Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 19 08:17:23.814917 containerd[1729]: time="2025-08-19T08:17:23.814905977Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 19 08:17:23.814972 containerd[1729]: time="2025-08-19T08:17:23.814938830Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 19 08:17:23.814972 containerd[1729]: time="2025-08-19T08:17:23.814967725Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 19 08:17:23.815025 containerd[1729]: time="2025-08-19T08:17:23.814981653Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 19 08:17:23.815025 containerd[1729]: time="2025-08-19T08:17:23.814993767Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 19 08:17:23.815025 containerd[1729]: time="2025-08-19T08:17:23.815005160Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 19 08:17:23.815025 containerd[1729]: time="2025-08-19T08:17:23.815016647Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 19 08:17:23.815098 containerd[1729]: time="2025-08-19T08:17:23.815027433Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 19 08:17:23.815098 containerd[1729]: time="2025-08-19T08:17:23.815038921Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 19 08:17:23.815098 containerd[1729]: time="2025-08-19T08:17:23.815059166Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815172978Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815191912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815205830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815215657Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815225601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815235493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815245991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815256357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815267380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815277231Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815286383Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815342108Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815354494Z" level=info msg="Start snapshots syncer" Aug 19 08:17:23.816214 containerd[1729]: time="2025-08-19T08:17:23.815385511Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 19 08:17:23.816502 containerd[1729]: time="2025-08-19T08:17:23.815646828Z" level=info msg="starting cri plugin"
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 19 08:17:23.816502 containerd[1729]: time="2025-08-19T08:17:23.815702608Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815767504Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815878445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815896308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815906648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815916306Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815927666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815937988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815947489Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815971806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815986680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.815998887Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.816026745Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.816039669Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 19 08:17:23.816624 containerd[1729]: time="2025-08-19T08:17:23.816048182Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:17:23.816877 containerd[1729]: time="2025-08-19T08:17:23.816057219Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 19 08:17:23.816877 containerd[1729]: time="2025-08-19T08:17:23.816064400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 19 08:17:23.816877 containerd[1729]: time="2025-08-19T08:17:23.816104721Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 19 08:17:23.816877 containerd[1729]: time="2025-08-19T08:17:23.816116168Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 19 08:17:23.816877 containerd[1729]: time="2025-08-19T08:17:23.816130740Z" level=info msg="runtime interface created" Aug 19 08:17:23.816877 containerd[1729]: time="2025-08-19T08:17:23.816135981Z" level=info msg="created NRI interface" Aug 19 08:17:23.816877 containerd[1729]: time="2025-08-19T08:17:23.816144033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 19 08:17:23.816877 containerd[1729]: time="2025-08-19T08:17:23.816154021Z" level=info msg="Connect containerd service" Aug 19 08:17:23.816877 containerd[1729]: time="2025-08-19T08:17:23.816174814Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 19 08:17:23.817029 
containerd[1729]: time="2025-08-19T08:17:23.816878656Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:17:24.152754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 08:17:24.159995 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 08:17:24.619626 kubelet[1845]: E0819 08:17:24.619576 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 08:17:24.621442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 08:17:24.621574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 08:17:24.621907 systemd[1]: kubelet.service: Consumed 890ms CPU time, 266M memory peak. Aug 19 08:17:24.679485 containerd[1729]: time="2025-08-19T08:17:24.679310175Z" level=info msg="Start subscribing containerd event" Aug 19 08:17:24.679485 containerd[1729]: time="2025-08-19T08:17:24.679363933Z" level=info msg="Start recovering state" Aug 19 08:17:24.679485 containerd[1729]: time="2025-08-19T08:17:24.679460702Z" level=info msg="Start event monitor" Aug 19 08:17:24.679485 containerd[1729]: time="2025-08-19T08:17:24.679464263Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 19 08:17:24.679635 containerd[1729]: time="2025-08-19T08:17:24.679504373Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 19 08:17:24.679635 containerd[1729]: time="2025-08-19T08:17:24.679471658Z" level=info msg="Start cni network conf syncer for default" Aug 19 08:17:24.679635 containerd[1729]: time="2025-08-19T08:17:24.679519266Z" level=info msg="Start streaming server" Aug 19 08:17:24.679635 containerd[1729]: time="2025-08-19T08:17:24.679530964Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 19 08:17:24.679635 containerd[1729]: time="2025-08-19T08:17:24.679538714Z" level=info msg="runtime interface starting up..." Aug 19 08:17:24.679635 containerd[1729]: time="2025-08-19T08:17:24.679544383Z" level=info msg="starting plugins..." Aug 19 08:17:24.679635 containerd[1729]: time="2025-08-19T08:17:24.679557035Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 19 08:17:24.679774 containerd[1729]: time="2025-08-19T08:17:24.679641569Z" level=info msg="containerd successfully booted in 0.891238s" Aug 19 08:17:24.679798 systemd[1]: Started containerd.service - containerd container runtime. Aug 19 08:17:24.682761 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 19 08:17:24.687466 systemd[1]: Startup finished in 3.035s (kernel) + 10.614s (initrd) + 12.657s (userspace) = 26.307s. Aug 19 08:17:24.893223 login[1828]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Aug 19 08:17:24.893590 login[1827]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 19 08:17:24.903229 systemd-logind[1709]: New session 2 of user core. Aug 19 08:17:24.904175 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 19 08:17:24.905116 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 19 08:17:24.928099 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 19 08:17:24.929847 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Aug 19 08:17:24.944129 (systemd)[1862]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 19 08:17:24.945742 systemd-logind[1709]: New session c1 of user core. Aug 19 08:17:25.078651 systemd[1862]: Queued start job for default target default.target. Aug 19 08:17:25.085338 systemd[1862]: Created slice app.slice - User Application Slice. Aug 19 08:17:25.085363 systemd[1862]: Reached target paths.target - Paths. Aug 19 08:17:25.085467 systemd[1862]: Reached target timers.target - Timers. Aug 19 08:17:25.086287 systemd[1862]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 19 08:17:25.095989 systemd[1862]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 19 08:17:25.096049 systemd[1862]: Reached target sockets.target - Sockets. Aug 19 08:17:25.096082 systemd[1862]: Reached target basic.target - Basic System. Aug 19 08:17:25.096140 systemd[1862]: Reached target default.target - Main User Target. Aug 19 08:17:25.096161 systemd[1862]: Startup finished in 145ms. Aug 19 08:17:25.096296 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 19 08:17:25.105833 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 19 08:17:25.202420 waagent[1824]: 2025-08-19T08:17:25.202310Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Aug 19 08:17:25.203779 waagent[1824]: 2025-08-19T08:17:25.203734Z INFO Daemon Daemon OS: flatcar 4426.0.0 Aug 19 08:17:25.203870 waagent[1824]: 2025-08-19T08:17:25.203846Z INFO Daemon Daemon Python: 3.11.13 Aug 19 08:17:25.204224 waagent[1824]: 2025-08-19T08:17:25.204176Z INFO Daemon Daemon Run daemon Aug 19 08:17:25.204345 waagent[1824]: 2025-08-19T08:17:25.204323Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4426.0.0' Aug 19 08:17:25.208022 waagent[1824]: 2025-08-19T08:17:25.204389Z INFO Daemon Daemon Using waagent for provisioning Aug 19 08:17:25.208202 waagent[1824]: 2025-08-19T08:17:25.208178Z INFO Daemon Daemon Activate resource disk Aug 19 08:17:25.208271 waagent[1824]: 2025-08-19T08:17:25.208251Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 19 08:17:25.210763 waagent[1824]: 2025-08-19T08:17:25.210553Z INFO Daemon Daemon Found device: None Aug 19 08:17:25.210977 waagent[1824]: 2025-08-19T08:17:25.210955Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 19 08:17:25.211119 waagent[1824]: 2025-08-19T08:17:25.211104Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 19 08:17:25.211592 waagent[1824]: 2025-08-19T08:17:25.211548Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 19 08:17:25.211894 waagent[1824]: 2025-08-19T08:17:25.211658Z INFO Daemon Daemon Running default provisioning handler Aug 19 08:17:25.224486 waagent[1824]: 2025-08-19T08:17:25.223989Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Aug 19 08:17:25.224828 waagent[1824]: 2025-08-19T08:17:25.224799Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 19 08:17:25.225003 waagent[1824]: 2025-08-19T08:17:25.224986Z INFO Daemon Daemon cloud-init is enabled: False Aug 19 08:17:25.225277 waagent[1824]: 2025-08-19T08:17:25.225262Z INFO Daemon Daemon Copying ovf-env.xml Aug 19 08:17:25.302274 waagent[1824]: 2025-08-19T08:17:25.302226Z INFO Daemon Daemon Successfully mounted dvd Aug 19 08:17:25.315724 waagent[1824]: 2025-08-19T08:17:25.315660Z INFO Daemon Daemon Detect protocol endpoint Aug 19 08:17:25.316377 waagent[1824]: 2025-08-19T08:17:25.316345Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 19 08:17:25.317018 waagent[1824]: 2025-08-19T08:17:25.316996Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Aug 19 08:17:25.317275 waagent[1824]: 2025-08-19T08:17:25.317260Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 19 08:17:25.317979 waagent[1824]: 2025-08-19T08:17:25.317961Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 19 08:17:25.318383 waagent[1824]: 2025-08-19T08:17:25.318368Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 19 08:17:25.325994 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 19 08:17:25.327704 waagent[1824]: 2025-08-19T08:17:25.327641Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 19 08:17:25.328989 waagent[1824]: 2025-08-19T08:17:25.328347Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 19 08:17:25.328989 waagent[1824]: 2025-08-19T08:17:25.328591Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 19 08:17:25.413166 waagent[1824]: 2025-08-19T08:17:25.413118Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 19 08:17:25.414564 waagent[1824]: 2025-08-19T08:17:25.414531Z INFO Daemon Daemon Forcing an update of the goal state. 
Aug 19 08:17:25.418765 waagent[1824]: 2025-08-19T08:17:25.418735Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 19 08:17:25.434891 waagent[1824]: 2025-08-19T08:17:25.434864Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Aug 19 08:17:25.437365 waagent[1824]: 2025-08-19T08:17:25.435805Z INFO Daemon Aug 19 08:17:25.437365 waagent[1824]: 2025-08-19T08:17:25.436665Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 849f56b0-f65c-42b5-a4bd-426d8c3aa9e1 eTag: 16091515249871456318 source: Fabric] Aug 19 08:17:25.437365 waagent[1824]: 2025-08-19T08:17:25.437578Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Aug 19 08:17:25.437365 waagent[1824]: 2025-08-19T08:17:25.437949Z INFO Daemon Aug 19 08:17:25.437365 waagent[1824]: 2025-08-19T08:17:25.438452Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Aug 19 08:17:25.444695 waagent[1824]: 2025-08-19T08:17:25.444656Z INFO Daemon Daemon Downloading artifacts profile blob Aug 19 08:17:25.525350 waagent[1824]: 2025-08-19T08:17:25.525307Z INFO Daemon Downloaded certificate {'thumbprint': 'EF9E5116CA5212CF7ACE9FA07E6AB805659BC755', 'hasPrivateKey': True} Aug 19 08:17:25.529510 waagent[1824]: 2025-08-19T08:17:25.526242Z INFO Daemon Fetch goal state completed Aug 19 08:17:25.538359 waagent[1824]: 2025-08-19T08:17:25.538304Z INFO Daemon Daemon Starting provisioning Aug 19 08:17:25.539360 waagent[1824]: 2025-08-19T08:17:25.538771Z INFO Daemon Daemon Handle ovf-env.xml. 
Aug 19 08:17:25.539360 waagent[1824]: 2025-08-19T08:17:25.539003Z INFO Daemon Daemon Set hostname [ci-4426.0.0-a-f24a6b1c57] Aug 19 08:17:25.557246 waagent[1824]: 2025-08-19T08:17:25.557210Z INFO Daemon Daemon Publish hostname [ci-4426.0.0-a-f24a6b1c57] Aug 19 08:17:25.563091 waagent[1824]: 2025-08-19T08:17:25.557865Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 19 08:17:25.563091 waagent[1824]: 2025-08-19T08:17:25.558509Z INFO Daemon Daemon Primary interface is [eth0] Aug 19 08:17:25.565546 systemd-networkd[1507]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 08:17:25.565554 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 08:17:25.565576 systemd-networkd[1507]: eth0: DHCP lease lost Aug 19 08:17:25.566426 waagent[1824]: 2025-08-19T08:17:25.566379Z INFO Daemon Daemon Create user account if not exists Aug 19 08:17:25.567545 waagent[1824]: 2025-08-19T08:17:25.567271Z INFO Daemon Daemon User core already exists, skip useradd Aug 19 08:17:25.569218 waagent[1824]: 2025-08-19T08:17:25.567559Z INFO Daemon Daemon Configure sudoer Aug 19 08:17:25.571052 waagent[1824]: 2025-08-19T08:17:25.570996Z INFO Daemon Daemon Configure sshd Aug 19 08:17:25.574872 waagent[1824]: 2025-08-19T08:17:25.574830Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Aug 19 08:17:25.580772 waagent[1824]: 2025-08-19T08:17:25.575357Z INFO Daemon Daemon Deploy ssh public key. Aug 19 08:17:25.591746 systemd-networkd[1507]: eth0: DHCPv4 address 10.200.8.11/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 19 08:17:25.895168 login[1828]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 19 08:17:25.899595 systemd-logind[1709]: New session 1 of user core. 
Aug 19 08:17:25.905799 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 19 08:17:26.642924 waagent[1824]: 2025-08-19T08:17:26.642867Z INFO Daemon Daemon Provisioning complete Aug 19 08:17:26.653510 waagent[1824]: 2025-08-19T08:17:26.653477Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 19 08:17:26.656555 waagent[1824]: 2025-08-19T08:17:26.654103Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Aug 19 08:17:26.656555 waagent[1824]: 2025-08-19T08:17:26.654366Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Aug 19 08:17:26.754585 waagent[1914]: 2025-08-19T08:17:26.754524Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Aug 19 08:17:26.754866 waagent[1914]: 2025-08-19T08:17:26.754620Z INFO ExtHandler ExtHandler OS: flatcar 4426.0.0 Aug 19 08:17:26.754866 waagent[1914]: 2025-08-19T08:17:26.754657Z INFO ExtHandler ExtHandler Python: 3.11.13 Aug 19 08:17:26.754866 waagent[1914]: 2025-08-19T08:17:26.754715Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Aug 19 08:17:26.791556 waagent[1914]: 2025-08-19T08:17:26.791505Z INFO ExtHandler ExtHandler Distro: flatcar-4426.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Aug 19 08:17:26.791712 waagent[1914]: 2025-08-19T08:17:26.791666Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 19 08:17:26.791777 waagent[1914]: 2025-08-19T08:17:26.791738Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 19 08:17:26.796392 waagent[1914]: 2025-08-19T08:17:26.796347Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 19 08:17:26.802021 waagent[1914]: 2025-08-19T08:17:26.801995Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 19 08:17:26.802327 waagent[1914]: 
2025-08-19T08:17:26.802301Z INFO ExtHandler Aug 19 08:17:26.802364 waagent[1914]: 2025-08-19T08:17:26.802348Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: df0debb9-043b-4544-b0fa-604d445c5953 eTag: 16091515249871456318 source: Fabric] Aug 19 08:17:26.802552 waagent[1914]: 2025-08-19T08:17:26.802528Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Aug 19 08:17:26.802860 waagent[1914]: 2025-08-19T08:17:26.802836Z INFO ExtHandler Aug 19 08:17:26.802900 waagent[1914]: 2025-08-19T08:17:26.802873Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 19 08:17:26.809464 waagent[1914]: 2025-08-19T08:17:26.809430Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 19 08:17:26.874824 waagent[1914]: 2025-08-19T08:17:26.874779Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EF9E5116CA5212CF7ACE9FA07E6AB805659BC755', 'hasPrivateKey': True} Aug 19 08:17:26.875142 waagent[1914]: 2025-08-19T08:17:26.875117Z INFO ExtHandler Fetch goal state completed Aug 19 08:17:26.891352 waagent[1914]: 2025-08-19T08:17:26.891305Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025) Aug 19 08:17:26.895483 waagent[1914]: 2025-08-19T08:17:26.895404Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1914 Aug 19 08:17:26.895544 waagent[1914]: 2025-08-19T08:17:26.895521Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Aug 19 08:17:26.895833 waagent[1914]: 2025-08-19T08:17:26.895811Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Aug 19 08:17:26.896850 waagent[1914]: 2025-08-19T08:17:26.896817Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4426.0.0', '', 'Flatcar Container Linux by Kinvolk'] Aug 19 08:17:26.897128 waagent[1914]: 2025-08-19T08:17:26.897101Z INFO 
ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4426.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Aug 19 08:17:26.897237 waagent[1914]: 2025-08-19T08:17:26.897215Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Aug 19 08:17:26.897614 waagent[1914]: 2025-08-19T08:17:26.897590Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 19 08:17:26.932368 waagent[1914]: 2025-08-19T08:17:26.932342Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 19 08:17:26.932496 waagent[1914]: 2025-08-19T08:17:26.932475Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 19 08:17:26.937550 waagent[1914]: 2025-08-19T08:17:26.937525Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 19 08:17:26.942597 systemd[1]: Reload requested from client PID 1929 ('systemctl') (unit waagent.service)... Aug 19 08:17:26.942607 systemd[1]: Reloading... Aug 19 08:17:27.015702 zram_generator::config[1968]: No configuration found. Aug 19 08:17:27.186329 systemd[1]: Reloading finished in 243 ms. Aug 19 08:17:27.197723 waagent[1914]: 2025-08-19T08:17:27.197194Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Aug 19 08:17:27.197723 waagent[1914]: 2025-08-19T08:17:27.197333Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Aug 19 08:17:27.534423 waagent[1914]: 2025-08-19T08:17:27.534359Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Aug 19 08:17:27.534712 waagent[1914]: 2025-08-19T08:17:27.534665Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. 
cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Aug 19 08:17:27.535359 waagent[1914]: 2025-08-19T08:17:27.535328Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 19 08:17:27.535567 waagent[1914]: 2025-08-19T08:17:27.535532Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 19 08:17:27.535628 waagent[1914]: 2025-08-19T08:17:27.535603Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 19 08:17:27.535807 waagent[1914]: 2025-08-19T08:17:27.535778Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 19 08:17:27.536135 waagent[1914]: 2025-08-19T08:17:27.536110Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Aug 19 08:17:27.536295 waagent[1914]: 2025-08-19T08:17:27.536270Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 19 08:17:27.536328 waagent[1914]: 2025-08-19T08:17:27.536313Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 19 08:17:27.536378 waagent[1914]: 2025-08-19T08:17:27.536358Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 19 08:17:27.536482 waagent[1914]: 2025-08-19T08:17:27.536461Z INFO EnvHandler ExtHandler Configure routes Aug 19 08:17:27.536535 waagent[1914]: 2025-08-19T08:17:27.536509Z INFO EnvHandler ExtHandler Gateway:None Aug 19 08:17:27.536579 waagent[1914]: 2025-08-19T08:17:27.536529Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Aug 19 08:17:27.536895 waagent[1914]: 2025-08-19T08:17:27.536859Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 19 08:17:27.536997 waagent[1914]: 2025-08-19T08:17:27.536930Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 19 08:17:27.537041 waagent[1914]: 2025-08-19T08:17:27.537018Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 19 08:17:27.537407 waagent[1914]: 2025-08-19T08:17:27.537257Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 19 08:17:27.537407 waagent[1914]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 19 08:17:27.537407 waagent[1914]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Aug 19 08:17:27.537407 waagent[1914]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 19 08:17:27.537407 waagent[1914]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 19 08:17:27.537407 waagent[1914]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 19 08:17:27.537407 waagent[1914]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 19 08:17:27.537742 waagent[1914]: 2025-08-19T08:17:27.537671Z INFO EnvHandler ExtHandler Routes:None Aug 19 08:17:27.546450 waagent[1914]: 2025-08-19T08:17:27.546413Z INFO ExtHandler ExtHandler Aug 19 08:17:27.546516 waagent[1914]: 2025-08-19T08:17:27.546476Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 0f3f87ae-8b52-4a8a-a4d1-9d5452119844 correlation fe4b6d56-7a3e-4433-a995-dfd48191b4e5 created: 2025-08-19T08:16:29.002495Z] Aug 19 08:17:27.546785 waagent[1914]: 2025-08-19T08:17:27.546761Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Aug 19 08:17:27.547170 waagent[1914]: 2025-08-19T08:17:27.547148Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Aug 19 08:17:27.586505 waagent[1914]: 2025-08-19T08:17:27.586084Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Aug 19 08:17:27.586505 waagent[1914]: Try `iptables -h' or 'iptables --help' for more information.)
Aug 19 08:17:27.586505 waagent[1914]: 2025-08-19T08:17:27.586436Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A6B4A03E-9DC7-4CB7-BB7B-993C54A9CF10;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Aug 19 08:17:27.612740 waagent[1914]: 2025-08-19T08:17:27.612676Z INFO MonitorHandler ExtHandler Network interfaces:
Aug 19 08:17:27.612740 waagent[1914]: Executing ['ip', '-a', '-o', 'link']:
Aug 19 08:17:27.612740 waagent[1914]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Aug 19 08:17:27.612740 waagent[1914]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:de:ba:60 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Aug 19 08:17:27.612740 waagent[1914]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:de:ba:60 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Aug 19 08:17:27.612740 waagent[1914]: Executing ['ip', '-4', '-a', '-o', 'address']:
Aug 19 08:17:27.612740 waagent[1914]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Aug 19 08:17:27.612740 waagent[1914]: 2: eth0 inet 10.200.8.11/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Aug 19 08:17:27.612740 waagent[1914]: Executing ['ip', '-6', '-a', '-o', 'address']:
Aug 19 08:17:27.612740 waagent[1914]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Aug 19 08:17:27.612740 waagent[1914]: 2: eth0 inet6 fe80::6245:bdff:fede:ba60/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Aug 19 08:17:27.640428 waagent[1914]: 2025-08-19T08:17:27.640383Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Aug 19 08:17:27.640428 waagent[1914]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 19 08:17:27.640428 waagent[1914]: pkts bytes target prot opt in out source destination
Aug 19 08:17:27.640428 waagent[1914]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Aug 19 08:17:27.640428 waagent[1914]: pkts bytes target prot opt in out source destination
Aug 19 08:17:27.640428 waagent[1914]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 19 08:17:27.640428 waagent[1914]: pkts bytes target prot opt in out source destination
Aug 19 08:17:27.640428 waagent[1914]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Aug 19 08:17:27.640428 waagent[1914]: 7 940 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Aug 19 08:17:27.640428 waagent[1914]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Aug 19 08:17:27.643134 waagent[1914]: 2025-08-19T08:17:27.643087Z INFO EnvHandler ExtHandler Current Firewall rules:
Aug 19 08:17:27.643134 waagent[1914]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 19 08:17:27.643134 waagent[1914]: pkts bytes target prot opt in out source destination
Aug 19 08:17:27.643134 waagent[1914]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Aug 19 08:17:27.643134 waagent[1914]: pkts bytes target prot opt in out source destination
Aug 19 08:17:27.643134 waagent[1914]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 19 08:17:27.643134 waagent[1914]: pkts bytes target prot opt in out source destination
Aug 19 08:17:27.643134 waagent[1914]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Aug 19 08:17:27.643134 waagent[1914]: 8 992 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Aug 19 08:17:27.643134 waagent[1914]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Aug 19 08:17:34.734644 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 19 08:17:34.735959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 08:17:35.227587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 08:17:35.232931 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 19 08:17:35.264557 kubelet[2066]: E0819 08:17:35.264528 2066 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 19 08:17:35.267401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 08:17:35.267525 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 19 08:17:35.267834 systemd[1]: kubelet.service: Consumed 123ms CPU time, 108.4M memory peak.
Aug 19 08:17:45.484655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 19 08:17:45.486014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 08:17:45.705177 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 19 08:17:45.706190 systemd[1]: Started sshd@0-10.200.8.11:22-10.200.16.10:46382.service - OpenSSH per-connection server daemon (10.200.16.10:46382).
Aug 19 08:17:46.109722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 08:17:46.113968 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 19 08:17:46.145691 kubelet[2085]: E0819 08:17:46.145641 2085 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 19 08:17:46.147103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 08:17:46.147231 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 19 08:17:46.147572 systemd[1]: kubelet.service: Consumed 120ms CPU time, 110.5M memory peak.
Aug 19 08:17:46.510663 sshd[2077]: Accepted publickey for core from 10.200.16.10 port 46382 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc
Aug 19 08:17:46.511802 sshd-session[2077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:17:46.515762 systemd-logind[1709]: New session 3 of user core.
Aug 19 08:17:46.521804 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 19 08:17:46.648126 chronyd[1686]: Selected source PHC0
Aug 19 08:17:47.078043 systemd[1]: Started sshd@1-10.200.8.11:22-10.200.16.10:46392.service - OpenSSH per-connection server daemon (10.200.16.10:46392).
Aug 19 08:17:47.715076 sshd[2095]: Accepted publickey for core from 10.200.16.10 port 46392 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc
Aug 19 08:17:47.716236 sshd-session[2095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:17:47.720331 systemd-logind[1709]: New session 4 of user core.
Aug 19 08:17:47.727839 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 19 08:17:48.164599 sshd[2098]: Connection closed by 10.200.16.10 port 46392
Aug 19 08:17:48.165136 sshd-session[2095]: pam_unix(sshd:session): session closed for user core
Aug 19 08:17:48.167888 systemd[1]: sshd@1-10.200.8.11:22-10.200.16.10:46392.service: Deactivated successfully.
Aug 19 08:17:48.169424 systemd[1]: session-4.scope: Deactivated successfully.
Aug 19 08:17:48.170979 systemd-logind[1709]: Session 4 logged out. Waiting for processes to exit.
Aug 19 08:17:48.171888 systemd-logind[1709]: Removed session 4.
Aug 19 08:17:48.285913 systemd[1]: Started sshd@2-10.200.8.11:22-10.200.16.10:46404.service - OpenSSH per-connection server daemon (10.200.16.10:46404).
Aug 19 08:17:48.924991 sshd[2104]: Accepted publickey for core from 10.200.16.10 port 46404 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc
Aug 19 08:17:48.926008 sshd-session[2104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:17:48.929820 systemd-logind[1709]: New session 5 of user core.
Aug 19 08:17:48.935854 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 19 08:17:49.377307 sshd[2107]: Connection closed by 10.200.16.10 port 46404
Aug 19 08:17:49.377846 sshd-session[2104]: pam_unix(sshd:session): session closed for user core
Aug 19 08:17:49.381030 systemd[1]: sshd@2-10.200.8.11:22-10.200.16.10:46404.service: Deactivated successfully.
Aug 19 08:17:49.382401 systemd[1]: session-5.scope: Deactivated successfully.
Aug 19 08:17:49.383162 systemd-logind[1709]: Session 5 logged out. Waiting for processes to exit.
Aug 19 08:17:49.384159 systemd-logind[1709]: Removed session 5.
Aug 19 08:17:49.493201 systemd[1]: Started sshd@3-10.200.8.11:22-10.200.16.10:46408.service - OpenSSH per-connection server daemon (10.200.16.10:46408).
Aug 19 08:17:50.144922 sshd[2113]: Accepted publickey for core from 10.200.16.10 port 46408 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc
Aug 19 08:17:50.145987 sshd-session[2113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:17:50.150005 systemd-logind[1709]: New session 6 of user core.
Aug 19 08:17:50.156824 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 19 08:17:50.594245 sshd[2116]: Connection closed by 10.200.16.10 port 46408
Aug 19 08:17:50.594764 sshd-session[2113]: pam_unix(sshd:session): session closed for user core
Aug 19 08:17:50.597430 systemd[1]: sshd@3-10.200.8.11:22-10.200.16.10:46408.service: Deactivated successfully.
Aug 19 08:17:50.598826 systemd[1]: session-6.scope: Deactivated successfully.
Aug 19 08:17:50.600039 systemd-logind[1709]: Session 6 logged out. Waiting for processes to exit.
Aug 19 08:17:50.601118 systemd-logind[1709]: Removed session 6.
Aug 19 08:17:50.710767 systemd[1]: Started sshd@4-10.200.8.11:22-10.200.16.10:55758.service - OpenSSH per-connection server daemon (10.200.16.10:55758).
Aug 19 08:17:51.349556 sshd[2122]: Accepted publickey for core from 10.200.16.10 port 55758 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc
Aug 19 08:17:51.350614 sshd-session[2122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:17:51.354746 systemd-logind[1709]: New session 7 of user core.
Aug 19 08:17:51.360824 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 19 08:17:51.811543 sudo[2126]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 19 08:17:51.811770 sudo[2126]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 08:17:51.823488 sudo[2126]: pam_unix(sudo:session): session closed for user root
Aug 19 08:17:51.926271 sshd[2125]: Connection closed by 10.200.16.10 port 55758
Aug 19 08:17:51.928140 sshd-session[2122]: pam_unix(sshd:session): session closed for user core
Aug 19 08:17:51.930883 systemd[1]: sshd@4-10.200.8.11:22-10.200.16.10:55758.service: Deactivated successfully.
Aug 19 08:17:51.932409 systemd[1]: session-7.scope: Deactivated successfully.
Aug 19 08:17:51.933629 systemd-logind[1709]: Session 7 logged out. Waiting for processes to exit.
Aug 19 08:17:51.934672 systemd-logind[1709]: Removed session 7.
Aug 19 08:17:52.044350 systemd[1]: Started sshd@5-10.200.8.11:22-10.200.16.10:55760.service - OpenSSH per-connection server daemon (10.200.16.10:55760).
Aug 19 08:17:52.684350 sshd[2132]: Accepted publickey for core from 10.200.16.10 port 55760 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc
Aug 19 08:17:52.685437 sshd-session[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:17:52.689412 systemd-logind[1709]: New session 8 of user core.
Aug 19 08:17:52.696837 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 19 08:17:53.032458 sudo[2137]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 19 08:17:53.032672 sudo[2137]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 08:17:53.038194 sudo[2137]: pam_unix(sudo:session): session closed for user root
Aug 19 08:17:53.041695 sudo[2136]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 19 08:17:53.041899 sudo[2136]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 08:17:53.048712 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 19 08:17:53.079409 augenrules[2159]: No rules
Aug 19 08:17:53.080333 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 19 08:17:53.080512 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 19 08:17:53.081226 sudo[2136]: pam_unix(sudo:session): session closed for user root
Aug 19 08:17:53.184961 sshd[2135]: Connection closed by 10.200.16.10 port 55760
Aug 19 08:17:53.185341 sshd-session[2132]: pam_unix(sshd:session): session closed for user core
Aug 19 08:17:53.187636 systemd[1]: sshd@5-10.200.8.11:22-10.200.16.10:55760.service: Deactivated successfully.
Aug 19 08:17:53.189007 systemd[1]: session-8.scope: Deactivated successfully.
Aug 19 08:17:53.190155 systemd-logind[1709]: Session 8 logged out. Waiting for processes to exit.
Aug 19 08:17:53.191111 systemd-logind[1709]: Removed session 8.
Aug 19 08:17:53.296007 systemd[1]: Started sshd@6-10.200.8.11:22-10.200.16.10:55776.service - OpenSSH per-connection server daemon (10.200.16.10:55776).
Aug 19 08:17:53.938978 sshd[2168]: Accepted publickey for core from 10.200.16.10 port 55776 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc
Aug 19 08:17:53.940025 sshd-session[2168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 08:17:53.944000 systemd-logind[1709]: New session 9 of user core.
Aug 19 08:17:53.953832 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 19 08:17:54.286628 sudo[2172]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 19 08:17:54.286864 sudo[2172]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 19 08:17:55.402723 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 19 08:17:55.415958 (dockerd)[2191]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 19 08:17:56.234566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 19 08:17:56.235981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 08:17:56.516693 dockerd[2191]: time="2025-08-19T08:17:56.516210328Z" level=info msg="Starting up"
Aug 19 08:17:56.517368 dockerd[2191]: time="2025-08-19T08:17:56.517328584Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Aug 19 08:17:56.526563 dockerd[2191]: time="2025-08-19T08:17:56.526531526Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Aug 19 08:17:56.850964 dockerd[2191]: time="2025-08-19T08:17:56.850920025Z" level=info msg="Loading containers: start."
Aug 19 08:17:56.865617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 08:17:56.872991 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 19 08:17:56.878901 kernel: Initializing XFRM netlink socket
Aug 19 08:17:56.910848 kubelet[2220]: E0819 08:17:56.910817 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 19 08:17:56.911823 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 08:17:56.911937 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 19 08:17:56.912177 systemd[1]: kubelet.service: Consumed 130ms CPU time, 111.1M memory peak.
Aug 19 08:17:57.309549 systemd-networkd[1507]: docker0: Link UP
Aug 19 08:17:57.337564 dockerd[2191]: time="2025-08-19T08:17:57.337524907Z" level=info msg="Loading containers: done."
Aug 19 08:17:57.389592 dockerd[2191]: time="2025-08-19T08:17:57.389549598Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 19 08:17:57.389724 dockerd[2191]: time="2025-08-19T08:17:57.389631410Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Aug 19 08:17:57.389724 dockerd[2191]: time="2025-08-19T08:17:57.389719913Z" level=info msg="Initializing buildkit"
Aug 19 08:17:57.425373 dockerd[2191]: time="2025-08-19T08:17:57.425343383Z" level=info msg="Completed buildkit initialization"
Aug 19 08:17:57.431367 dockerd[2191]: time="2025-08-19T08:17:57.431316471Z" level=info msg="Daemon has completed initialization"
Aug 19 08:17:57.431930 dockerd[2191]: time="2025-08-19T08:17:57.431375251Z" level=info msg="API listen on /run/docker.sock"
Aug 19 08:17:57.431577 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 19 08:17:58.301389 containerd[1729]: time="2025-08-19T08:17:58.301351152Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Aug 19 08:17:58.981242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150337459.mount: Deactivated successfully.
Aug 19 08:18:00.137232 containerd[1729]: time="2025-08-19T08:18:00.137183652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:00.139058 containerd[1729]: time="2025-08-19T08:18:00.139023513Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078672"
Aug 19 08:18:00.141869 containerd[1729]: time="2025-08-19T08:18:00.141827307Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:00.145145 containerd[1729]: time="2025-08-19T08:18:00.145104467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:00.145986 containerd[1729]: time="2025-08-19T08:18:00.145765401Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 1.84437797s"
Aug 19 08:18:00.145986 containerd[1729]: time="2025-08-19T08:18:00.145801257Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\""
Aug 19 08:18:00.146463 containerd[1729]: time="2025-08-19T08:18:00.146444999Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Aug 19 08:18:01.373305 containerd[1729]: time="2025-08-19T08:18:01.373257893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:01.375425 containerd[1729]: time="2025-08-19T08:18:01.375388578Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018074"
Aug 19 08:18:01.381184 containerd[1729]: time="2025-08-19T08:18:01.381141607Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:01.384579 containerd[1729]: time="2025-08-19T08:18:01.384530717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:01.385265 containerd[1729]: time="2025-08-19T08:18:01.385114532Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.238644032s"
Aug 19 08:18:01.385265 containerd[1729]: time="2025-08-19T08:18:01.385145262Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\""
Aug 19 08:18:01.385708 containerd[1729]: time="2025-08-19T08:18:01.385693212Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Aug 19 08:18:02.538624 containerd[1729]: time="2025-08-19T08:18:02.538578690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:02.541111 containerd[1729]: time="2025-08-19T08:18:02.541077312Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153919"
Aug 19 08:18:02.544040 containerd[1729]: time="2025-08-19T08:18:02.544004331Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:02.547723 containerd[1729]: time="2025-08-19T08:18:02.547670216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:02.548239 containerd[1729]: time="2025-08-19T08:18:02.548119055Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 1.162369238s"
Aug 19 08:18:02.548239 containerd[1729]: time="2025-08-19T08:18:02.548148352Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\""
Aug 19 08:18:02.548712 containerd[1729]: time="2025-08-19T08:18:02.548668961Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Aug 19 08:18:03.468734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24488233.mount: Deactivated successfully.
Aug 19 08:18:03.821811 containerd[1729]: time="2025-08-19T08:18:03.821766456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:03.823990 containerd[1729]: time="2025-08-19T08:18:03.823956266Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899634"
Aug 19 08:18:03.826417 containerd[1729]: time="2025-08-19T08:18:03.826380014Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:03.829112 containerd[1729]: time="2025-08-19T08:18:03.829073376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:03.829462 containerd[1729]: time="2025-08-19T08:18:03.829340382Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 1.280628267s"
Aug 19 08:18:03.829462 containerd[1729]: time="2025-08-19T08:18:03.829368411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\""
Aug 19 08:18:03.829928 containerd[1729]: time="2025-08-19T08:18:03.829910629Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Aug 19 08:18:04.382509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount713633617.mount: Deactivated successfully.
Aug 19 08:18:05.252252 containerd[1729]: time="2025-08-19T08:18:05.252205479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:05.254215 containerd[1729]: time="2025-08-19T08:18:05.254186664Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Aug 19 08:18:05.259002 containerd[1729]: time="2025-08-19T08:18:05.258954910Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:05.262908 containerd[1729]: time="2025-08-19T08:18:05.262729979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:05.263349 containerd[1729]: time="2025-08-19T08:18:05.263325311Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.433381495s"
Aug 19 08:18:05.263388 containerd[1729]: time="2025-08-19T08:18:05.263357652Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Aug 19 08:18:05.263882 containerd[1729]: time="2025-08-19T08:18:05.263859641Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 19 08:18:05.695958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2643081730.mount: Deactivated successfully.
Aug 19 08:18:05.710034 containerd[1729]: time="2025-08-19T08:18:05.709996472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 19 08:18:05.711961 containerd[1729]: time="2025-08-19T08:18:05.711925506Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Aug 19 08:18:05.714330 containerd[1729]: time="2025-08-19T08:18:05.714290151Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 19 08:18:05.717398 containerd[1729]: time="2025-08-19T08:18:05.717356861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 19 08:18:05.717891 containerd[1729]: time="2025-08-19T08:18:05.717772967Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 453.888803ms"
Aug 19 08:18:05.717891 containerd[1729]: time="2025-08-19T08:18:05.717800360Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 19 08:18:05.718272 containerd[1729]: time="2025-08-19T08:18:05.718244616Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Aug 19 08:18:06.219293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756401369.mount: Deactivated successfully.
Aug 19 08:18:06.600710 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Aug 19 08:18:06.985149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Aug 19 08:18:06.987804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 08:18:07.703801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 08:18:07.710150 (kubelet)[2595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 19 08:18:07.752207 kubelet[2595]: E0819 08:18:07.752183 2595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 19 08:18:07.754258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 08:18:07.754383 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 19 08:18:07.754865 systemd[1]: kubelet.service: Consumed 128ms CPU time, 110.1M memory peak.
Aug 19 08:18:08.379841 update_engine[1712]: I20250819 08:18:08.379781 1712 update_attempter.cc:509] Updating boot flags...
Aug 19 08:18:08.549606 containerd[1729]: time="2025-08-19T08:18:08.549560363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:08.552455 containerd[1729]: time="2025-08-19T08:18:08.552200634Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377879"
Aug 19 08:18:08.554895 containerd[1729]: time="2025-08-19T08:18:08.554858435Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:08.558731 containerd[1729]: time="2025-08-19T08:18:08.558703139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 08:18:08.559517 containerd[1729]: time="2025-08-19T08:18:08.559477346Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.84119211s"
Aug 19 08:18:08.559517 containerd[1729]: time="2025-08-19T08:18:08.559511079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Aug 19 08:18:11.565889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 08:18:11.566033 systemd[1]: kubelet.service: Consumed 128ms CPU time, 110.1M memory peak.
Aug 19 08:18:11.568286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 08:18:11.588436 systemd[1]: Reload requested from client PID 2660 ('systemctl') (unit session-9.scope)...
Aug 19 08:18:11.588447 systemd[1]: Reloading...
Aug 19 08:18:11.656717 zram_generator::config[2702]: No configuration found.
Aug 19 08:18:12.613927 systemd[1]: Reloading finished in 1025 ms.
Aug 19 08:18:13.443780 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 19 08:18:13.443877 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 19 08:18:13.444161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 08:18:13.445829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 08:18:20.443756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 08:18:20.451942 (kubelet)[2773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 19 08:18:20.483540 kubelet[2773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 19 08:18:20.483540 kubelet[2773]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 19 08:18:20.483540 kubelet[2773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 19 08:18:20.483811 kubelet[2773]: I0819 08:18:20.483577 2773 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 08:18:20.690175 kubelet[2773]: I0819 08:18:20.690144 2773 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 19 08:18:20.690175 kubelet[2773]: I0819 08:18:20.690168 2773 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 08:18:20.690369 kubelet[2773]: I0819 08:18:20.690358 2773 server.go:956] "Client rotation is on, will bootstrap in background" Aug 19 08:18:20.716238 kubelet[2773]: I0819 08:18:20.715541 2773 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 08:18:20.716322 kubelet[2773]: E0819 08:18:20.716246 2773 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 19 08:18:20.723914 kubelet[2773]: I0819 08:18:20.723890 2773 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 08:18:20.726693 kubelet[2773]: I0819 08:18:20.726671 2773 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 08:18:20.726906 kubelet[2773]: I0819 08:18:20.726873 2773 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 08:18:20.727052 kubelet[2773]: I0819 08:18:20.726902 2773 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.0.0-a-f24a6b1c57","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 08:18:20.727052 kubelet[2773]: I0819 08:18:20.727049 2773 topology_manager.go:138] "Creating topology manager with none policy" Aug 19 
08:18:20.727187 kubelet[2773]: I0819 08:18:20.727059 2773 container_manager_linux.go:303] "Creating device plugin manager" Aug 19 08:18:20.727187 kubelet[2773]: I0819 08:18:20.727164 2773 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:18:20.728957 kubelet[2773]: I0819 08:18:20.728941 2773 kubelet.go:480] "Attempting to sync node with API server" Aug 19 08:18:20.728957 kubelet[2773]: I0819 08:18:20.728957 2773 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 08:18:20.729129 kubelet[2773]: I0819 08:18:20.728977 2773 kubelet.go:386] "Adding apiserver pod source" Aug 19 08:18:20.729129 kubelet[2773]: I0819 08:18:20.728991 2773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 08:18:20.738654 kubelet[2773]: I0819 08:18:20.738498 2773 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 08:18:20.738889 kubelet[2773]: I0819 08:18:20.738873 2773 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 19 08:18:20.739928 kubelet[2773]: E0819 08:18:20.739756 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 19 08:18:20.739928 kubelet[2773]: E0819 08:18:20.739825 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-a-f24a6b1c57&limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 19 
08:18:20.739928 kubelet[2773]: W0819 08:18:20.739832 2773 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 19 08:18:20.741592 kubelet[2773]: I0819 08:18:20.741576 2773 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 08:18:20.741649 kubelet[2773]: I0819 08:18:20.741618 2773 server.go:1289] "Started kubelet" Aug 19 08:18:20.742790 kubelet[2773]: I0819 08:18:20.742763 2773 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 08:18:20.743608 kubelet[2773]: I0819 08:18:20.743558 2773 server.go:317] "Adding debug handlers to kubelet server" Aug 19 08:18:20.744906 kubelet[2773]: I0819 08:18:20.744858 2773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 08:18:20.745235 kubelet[2773]: I0819 08:18:20.745221 2773 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 08:18:20.746600 kubelet[2773]: E0819 08:18:20.745396 2773 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.0.0-a-f24a6b1c57.185d1d2b1c65c0d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.0.0-a-f24a6b1c57,UID:ci-4426.0.0-a-f24a6b1c57,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.0.0-a-f24a6b1c57,},FirstTimestamp:2025-08-19 08:18:20.741591253 +0000 UTC m=+0.286529371,LastTimestamp:2025-08-19 08:18:20.741591253 +0000 UTC m=+0.286529371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.0.0-a-f24a6b1c57,}" Aug 19 08:18:20.748083 kubelet[2773]: 
I0819 08:18:20.748069 2773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 08:18:20.748331 kubelet[2773]: I0819 08:18:20.748319 2773 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 08:18:20.750893 kubelet[2773]: E0819 08:18:20.750874 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:20.751241 kubelet[2773]: I0819 08:18:20.751231 2773 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 19 08:18:20.751512 kubelet[2773]: I0819 08:18:20.751502 2773 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 08:18:20.751609 kubelet[2773]: I0819 08:18:20.751602 2773 reconciler.go:26] "Reconciler: start to sync state" Aug 19 08:18:20.752172 kubelet[2773]: E0819 08:18:20.752157 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 19 08:18:20.753978 kubelet[2773]: E0819 08:18:20.753924 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-a-f24a6b1c57?timeout=10s\": dial tcp 10.200.8.11:6443: connect: connection refused" interval="200ms" Aug 19 08:18:20.754251 kubelet[2773]: E0819 08:18:20.754238 2773 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 08:18:20.754633 kubelet[2773]: I0819 08:18:20.754623 2773 factory.go:223] Registration of the containerd container factory successfully Aug 19 08:18:20.754709 kubelet[2773]: I0819 08:18:20.754704 2773 factory.go:223] Registration of the systemd container factory successfully Aug 19 08:18:20.754799 kubelet[2773]: I0819 08:18:20.754789 2773 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 08:18:20.767510 kubelet[2773]: I0819 08:18:20.767473 2773 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 19 08:18:20.768387 kubelet[2773]: I0819 08:18:20.768365 2773 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 19 08:18:20.768445 kubelet[2773]: I0819 08:18:20.768399 2773 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 19 08:18:20.768445 kubelet[2773]: I0819 08:18:20.768415 2773 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 19 08:18:20.768445 kubelet[2773]: I0819 08:18:20.768422 2773 kubelet.go:2436] "Starting kubelet main sync loop" Aug 19 08:18:20.768515 kubelet[2773]: E0819 08:18:20.768451 2773 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 08:18:20.774034 kubelet[2773]: I0819 08:18:20.773875 2773 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 08:18:20.774034 kubelet[2773]: I0819 08:18:20.773889 2773 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 08:18:20.774034 kubelet[2773]: I0819 08:18:20.773902 2773 state_mem.go:36] "Initialized new in-memory state store" Aug 19 08:18:20.774425 kubelet[2773]: E0819 08:18:20.774395 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 19 08:18:20.852007 kubelet[2773]: E0819 08:18:20.851984 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:20.869283 kubelet[2773]: E0819 08:18:20.869259 2773 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:18:20.952511 kubelet[2773]: E0819 08:18:20.952460 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:20.955020 kubelet[2773]: E0819 08:18:20.954996 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-a-f24a6b1c57?timeout=10s\": dial tcp 10.200.8.11:6443: connect: connection refused" interval="400ms" 
Aug 19 08:18:21.053455 kubelet[2773]: E0819 08:18:21.053422 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:21.069674 kubelet[2773]: E0819 08:18:21.069646 2773 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:18:21.154061 kubelet[2773]: E0819 08:18:21.154028 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:21.254882 kubelet[2773]: E0819 08:18:21.254849 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:21.355419 kubelet[2773]: E0819 08:18:21.355298 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:21.355677 kubelet[2773]: E0819 08:18:21.355639 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-a-f24a6b1c57?timeout=10s\": dial tcp 10.200.8.11:6443: connect: connection refused" interval="800ms" Aug 19 08:18:21.456154 kubelet[2773]: E0819 08:18:21.456101 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:21.470270 kubelet[2773]: E0819 08:18:21.470245 2773 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:18:21.556724 kubelet[2773]: E0819 08:18:21.556661 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:21.624380 kubelet[2773]: E0819 08:18:21.624290 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.200.8.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 19 08:18:21.656789 kubelet[2773]: E0819 08:18:21.656758 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:21.757049 kubelet[2773]: E0819 08:18:21.757011 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:21.857391 kubelet[2773]: E0819 08:18:21.857348 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:21.957851 kubelet[2773]: E0819 08:18:21.957764 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:22.058264 kubelet[2773]: E0819 08:18:22.058214 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:22.629918 kubelet[2773]: E0819 08:18:22.071739 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-a-f24a6b1c57&limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 19 08:18:22.629918 kubelet[2773]: E0819 08:18:22.078259 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Aug 19 08:18:22.629918 kubelet[2773]: E0819 08:18:22.126048 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 19 08:18:22.629918 kubelet[2773]: E0819 08:18:22.156913 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-a-f24a6b1c57?timeout=10s\": dial tcp 10.200.8.11:6443: connect: connection refused" interval="1.6s" Aug 19 08:18:22.629918 kubelet[2773]: E0819 08:18:22.158983 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:22.629918 kubelet[2773]: E0819 08:18:22.259807 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:22.629918 kubelet[2773]: E0819 08:18:22.270950 2773 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 08:18:22.629918 kubelet[2773]: E0819 08:18:22.360350 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:22.630385 kubelet[2773]: E0819 08:18:22.460804 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:22.630385 kubelet[2773]: E0819 08:18:22.561198 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:22.631419 kubelet[2773]: I0819 08:18:22.631131 2773 policy_none.go:49] "None policy: Start" Aug 19 08:18:22.631419 
kubelet[2773]: I0819 08:18:22.631168 2773 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 08:18:22.631419 kubelet[2773]: I0819 08:18:22.631183 2773 state_mem.go:35] "Initializing new in-memory state store" Aug 19 08:18:22.661653 kubelet[2773]: E0819 08:18:22.661632 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:22.762126 kubelet[2773]: E0819 08:18:22.762092 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:22.798779 kubelet[2773]: E0819 08:18:22.798735 2773 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 19 08:18:22.862285 kubelet[2773]: E0819 08:18:22.862253 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:22.962841 kubelet[2773]: E0819 08:18:22.962736 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:23.063185 kubelet[2773]: E0819 08:18:23.063139 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:23.163638 kubelet[2773]: E0819 08:18:23.163594 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:23.529749 kubelet[2773]: E0819 08:18:23.264355 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:23.529749 
kubelet[2773]: E0819 08:18:23.364796 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:23.529749 kubelet[2773]: E0819 08:18:23.465242 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:23.565738 kubelet[2773]: E0819 08:18:23.565712 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:23.666174 kubelet[2773]: E0819 08:18:23.666126 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:23.717945 kubelet[2773]: E0819 08:18:23.717916 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 19 08:18:23.758140 kubelet[2773]: E0819 08:18:23.758120 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-a-f24a6b1c57?timeout=10s\": dial tcp 10.200.8.11:6443: connect: connection refused" interval="3.2s" Aug 19 08:18:23.767227 kubelet[2773]: E0819 08:18:23.767200 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:23.831421 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 08:18:23.844946 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 08:18:23.847439 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 19 08:18:23.858175 kubelet[2773]: E0819 08:18:23.858145 2773 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 19 08:18:23.858326 kubelet[2773]: I0819 08:18:23.858311 2773 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 08:18:23.858361 kubelet[2773]: I0819 08:18:23.858328 2773 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 08:18:23.858872 kubelet[2773]: I0819 08:18:23.858601 2773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 08:18:23.859664 kubelet[2773]: E0819 08:18:23.859648 2773 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 19 08:18:23.859949 kubelet[2773]: E0819 08:18:23.859841 2773 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.0.0-a-f24a6b1c57\" not found" Aug 19 08:18:23.962758 kubelet[2773]: I0819 08:18:23.962711 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.587836 kubelet[2773]: E0819 08:18:23.963177 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.11:6443/api/v1/nodes\": dial tcp 10.200.8.11:6443: connect: connection refused" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.587836 kubelet[2773]: I0819 08:18:23.967438 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93d13223a3291d0a1de7d7694fcd7305-ca-certs\") pod \"kube-controller-manager-ci-4426.0.0-a-f24a6b1c57\" (UID: \"93d13223a3291d0a1de7d7694fcd7305\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.587836 kubelet[2773]: I0819 08:18:23.967467 2773 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/93d13223a3291d0a1de7d7694fcd7305-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.0.0-a-f24a6b1c57\" (UID: \"93d13223a3291d0a1de7d7694fcd7305\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.587836 kubelet[2773]: I0819 08:18:23.967488 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93d13223a3291d0a1de7d7694fcd7305-k8s-certs\") pod \"kube-controller-manager-ci-4426.0.0-a-f24a6b1c57\" (UID: \"93d13223a3291d0a1de7d7694fcd7305\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.587836 kubelet[2773]: I0819 08:18:23.967524 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/93d13223a3291d0a1de7d7694fcd7305-kubeconfig\") pod \"kube-controller-manager-ci-4426.0.0-a-f24a6b1c57\" (UID: \"93d13223a3291d0a1de7d7694fcd7305\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.588015 kubelet[2773]: I0819 08:18:23.967543 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93d13223a3291d0a1de7d7694fcd7305-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.0.0-a-f24a6b1c57\" (UID: \"93d13223a3291d0a1de7d7694fcd7305\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.588015 kubelet[2773]: I0819 08:18:24.165628 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.588015 kubelet[2773]: E0819 08:18:24.165965 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.8.11:6443/api/v1/nodes\": dial tcp 10.200.8.11:6443: connect: connection refused" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.588015 kubelet[2773]: E0819 08:18:24.336377 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-a-f24a6b1c57&limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 19 08:18:24.588015 kubelet[2773]: I0819 08:18:24.568074 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.588015 kubelet[2773]: E0819 08:18:24.568381 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.11:6443/api/v1/nodes\": dial tcp 10.200.8.11:6443: connect: connection refused" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:24.592028 kubelet[2773]: E0819 08:18:24.591980 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 19 08:18:25.500305 kubelet[2773]: E0819 08:18:25.104657 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 19 08:18:25.500305 kubelet[2773]: I0819 08:18:25.370132 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:25.500305 kubelet[2773]: E0819 08:18:25.370443 2773 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.11:6443/api/v1/nodes\": dial tcp 10.200.8.11:6443: connect: connection refused" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:26.244297 systemd[1]: Created slice kubepods-burstable-pod93d13223a3291d0a1de7d7694fcd7305.slice - libcontainer container kubepods-burstable-pod93d13223a3291d0a1de7d7694fcd7305.slice. Aug 19 08:18:26.252224 kubelet[2773]: E0819 08:18:26.252183 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:26.252953 containerd[1729]: time="2025-08-19T08:18:26.252924425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.0.0-a-f24a6b1c57,Uid:93d13223a3291d0a1de7d7694fcd7305,Namespace:kube-system,Attempt:0,}" Aug 19 08:18:26.278080 kubelet[2773]: I0819 08:18:26.278053 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5c10ec70072d92b6b09cd59b6a8d57b-kubeconfig\") pod \"kube-scheduler-ci-4426.0.0-a-f24a6b1c57\" (UID: \"f5c10ec70072d92b6b09cd59b6a8d57b\") " pod="kube-system/kube-scheduler-ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:26.959412 kubelet[2773]: E0819 08:18:26.959369 2773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.0.0-a-f24a6b1c57?timeout=10s\": dial tcp 10.200.8.11:6443: connect: connection refused" interval="6.4s" Aug 19 08:18:27.532193 kubelet[2773]: I0819 08:18:26.972591 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:27.532193 kubelet[2773]: E0819 08:18:26.972860 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.8.11:6443/api/v1/nodes\": dial tcp 10.200.8.11:6443: connect: connection refused" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:27.532193 kubelet[2773]: E0819 08:18:27.147312 2773 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 19 08:18:27.895716 systemd[1]: Created slice kubepods-burstable-podf5c10ec70072d92b6b09cd59b6a8d57b.slice - libcontainer container kubepods-burstable-podf5c10ec70072d92b6b09cd59b6a8d57b.slice. Aug 19 08:18:27.897430 kubelet[2773]: E0819 08:18:27.897409 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57" Aug 19 08:18:27.897929 containerd[1729]: time="2025-08-19T08:18:27.897899354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.0.0-a-f24a6b1c57,Uid:f5c10ec70072d92b6b09cd59b6a8d57b,Namespace:kube-system,Attempt:0,}" Aug 19 08:18:27.951215 systemd[1]: Created slice kubepods-burstable-pod4b62c0fd2283cc091e3675a600e23b57.slice - libcontainer container kubepods-burstable-pod4b62c0fd2283cc091e3675a600e23b57.slice. 
Aug 19 08:18:27.952754 kubelet[2773]: E0819 08:18:27.952736 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:27.982176 kubelet[2773]: E0819 08:18:27.982113 2773 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.0.0-a-f24a6b1c57.185d1d2b1c65c0d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.0.0-a-f24a6b1c57,UID:ci-4426.0.0-a-f24a6b1c57,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.0.0-a-f24a6b1c57,},FirstTimestamp:2025-08-19 08:18:20.741591253 +0000 UTC m=+0.286529371,LastTimestamp:2025-08-19 08:18:20.741591253 +0000 UTC m=+0.286529371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.0.0-a-f24a6b1c57,}"
Aug 19 08:18:27.987259 kubelet[2773]: I0819 08:18:27.987234 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b62c0fd2283cc091e3675a600e23b57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.0.0-a-f24a6b1c57\" (UID: \"4b62c0fd2283cc091e3675a600e23b57\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:27.987318 kubelet[2773]: I0819 08:18:27.987271 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b62c0fd2283cc091e3675a600e23b57-ca-certs\") pod \"kube-apiserver-ci-4426.0.0-a-f24a6b1c57\" (UID: \"4b62c0fd2283cc091e3675a600e23b57\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:27.987318 kubelet[2773]: I0819 08:18:27.987290 2773 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b62c0fd2283cc091e3675a600e23b57-k8s-certs\") pod \"kube-apiserver-ci-4426.0.0-a-f24a6b1c57\" (UID: \"4b62c0fd2283cc091e3675a600e23b57\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:28.025869 kubelet[2773]: E0819 08:18:28.025845 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Aug 19 08:18:28.244010 kubelet[2773]: E0819 08:18:28.243979 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Aug 19 08:18:28.253906 containerd[1729]: time="2025-08-19T08:18:28.253875812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.0.0-a-f24a6b1c57,Uid:4b62c0fd2283cc091e3675a600e23b57,Namespace:kube-system,Attempt:0,}"
Aug 19 08:18:28.580852 kubelet[2773]: E0819 08:18:28.580753 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.0.0-a-f24a6b1c57&limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Aug 19 08:18:29.031128 kubelet[2773]: E0819 08:18:28.928176 2773 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.11:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.11:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Aug 19 08:18:29.149618 containerd[1729]: time="2025-08-19T08:18:29.148895671Z" level=info msg="connecting to shim be5cca1aabb1c5414af353808f9bb3ffac6ab693be548b9c6f72ae2b07073a5c" address="unix:///run/containerd/s/d8e53fae48c84bb07eb908cb96a1992d8a5ccd57c061a166f7f4b9f7c433046b" namespace=k8s.io protocol=ttrpc version=3
Aug 19 08:18:29.204860 systemd[1]: Started cri-containerd-be5cca1aabb1c5414af353808f9bb3ffac6ab693be548b9c6f72ae2b07073a5c.scope - libcontainer container be5cca1aabb1c5414af353808f9bb3ffac6ab693be548b9c6f72ae2b07073a5c.
Aug 19 08:18:29.331224 containerd[1729]: time="2025-08-19T08:18:29.331133878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.0.0-a-f24a6b1c57,Uid:93d13223a3291d0a1de7d7694fcd7305,Namespace:kube-system,Attempt:0,} returns sandbox id \"be5cca1aabb1c5414af353808f9bb3ffac6ab693be548b9c6f72ae2b07073a5c\""
Aug 19 08:18:29.379336 containerd[1729]: time="2025-08-19T08:18:29.379303020Z" level=info msg="CreateContainer within sandbox \"be5cca1aabb1c5414af353808f9bb3ffac6ab693be548b9c6f72ae2b07073a5c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 19 08:18:29.748664 containerd[1729]: time="2025-08-19T08:18:29.748585143Z" level=info msg="connecting to shim 393556a9f0e21c9ac6999f86aa68facdeb571000fc529f5638b7fee70f99f57e" address="unix:///run/containerd/s/0899e1a07ac86c4188cd7f46f880b4f8be446a6b6956d678c3bd5988593bb27e" namespace=k8s.io protocol=ttrpc version=3
Aug 19 08:18:29.767814 systemd[1]: Started cri-containerd-393556a9f0e21c9ac6999f86aa68facdeb571000fc529f5638b7fee70f99f57e.scope - libcontainer container 393556a9f0e21c9ac6999f86aa68facdeb571000fc529f5638b7fee70f99f57e.
Aug 19 08:18:29.984464 containerd[1729]: time="2025-08-19T08:18:29.984422264Z" level=info msg="Container c594d96f5eb7b84b9003090fb738eaceeeeaf57140a8931acc73bc7e626accbd: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:18:30.042414 containerd[1729]: time="2025-08-19T08:18:30.042297018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.0.0-a-f24a6b1c57,Uid:f5c10ec70072d92b6b09cd59b6a8d57b,Namespace:kube-system,Attempt:0,} returns sandbox id \"393556a9f0e21c9ac6999f86aa68facdeb571000fc529f5638b7fee70f99f57e\""
Aug 19 08:18:30.046966 containerd[1729]: time="2025-08-19T08:18:30.046938499Z" level=info msg="connecting to shim 2f0e2c85d7164a20d08fab1e316f70ec2834c7c04451d909647ad60e91b5f07f" address="unix:///run/containerd/s/79926e3b8dbddc7f57d040ac2fd33965d7cf5a3db87d5c902f07f35dd6557661" namespace=k8s.io protocol=ttrpc version=3
Aug 19 08:18:30.066826 systemd[1]: Started cri-containerd-2f0e2c85d7164a20d08fab1e316f70ec2834c7c04451d909647ad60e91b5f07f.scope - libcontainer container 2f0e2c85d7164a20d08fab1e316f70ec2834c7c04451d909647ad60e91b5f07f.
Aug 19 08:18:30.133872 containerd[1729]: time="2025-08-19T08:18:30.133840518Z" level=info msg="CreateContainer within sandbox \"393556a9f0e21c9ac6999f86aa68facdeb571000fc529f5638b7fee70f99f57e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 19 08:18:30.174492 kubelet[2773]: I0819 08:18:30.174468 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:30.174811 kubelet[2773]: E0819 08:18:30.174771 2773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.11:6443/api/v1/nodes\": dial tcp 10.200.8.11:6443: connect: connection refused" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:30.483621 containerd[1729]: time="2025-08-19T08:18:30.481051353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.0.0-a-f24a6b1c57,Uid:4b62c0fd2283cc091e3675a600e23b57,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f0e2c85d7164a20d08fab1e316f70ec2834c7c04451d909647ad60e91b5f07f\""
Aug 19 08:18:30.485797 containerd[1729]: time="2025-08-19T08:18:30.485770114Z" level=info msg="CreateContainer within sandbox \"be5cca1aabb1c5414af353808f9bb3ffac6ab693be548b9c6f72ae2b07073a5c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c594d96f5eb7b84b9003090fb738eaceeeeaf57140a8931acc73bc7e626accbd\""
Aug 19 08:18:30.486354 containerd[1729]: time="2025-08-19T08:18:30.486318854Z" level=info msg="StartContainer for \"c594d96f5eb7b84b9003090fb738eaceeeeaf57140a8931acc73bc7e626accbd\""
Aug 19 08:18:30.487325 containerd[1729]: time="2025-08-19T08:18:30.487298639Z" level=info msg="connecting to shim c594d96f5eb7b84b9003090fb738eaceeeeaf57140a8931acc73bc7e626accbd" address="unix:///run/containerd/s/d8e53fae48c84bb07eb908cb96a1992d8a5ccd57c061a166f7f4b9f7c433046b" protocol=ttrpc version=3
Aug 19 08:18:30.509838 systemd[1]: Started cri-containerd-c594d96f5eb7b84b9003090fb738eaceeeeaf57140a8931acc73bc7e626accbd.scope - libcontainer container c594d96f5eb7b84b9003090fb738eaceeeeaf57140a8931acc73bc7e626accbd.
Aug 19 08:18:30.590611 containerd[1729]: time="2025-08-19T08:18:30.590521058Z" level=info msg="CreateContainer within sandbox \"2f0e2c85d7164a20d08fab1e316f70ec2834c7c04451d909647ad60e91b5f07f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 19 08:18:30.597311 containerd[1729]: time="2025-08-19T08:18:30.597206728Z" level=info msg="StartContainer for \"c594d96f5eb7b84b9003090fb738eaceeeeaf57140a8931acc73bc7e626accbd\" returns successfully"
Aug 19 08:18:30.790899 kubelet[2773]: E0819 08:18:30.789131 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:30.896097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766531026.mount: Deactivated successfully.
Aug 19 08:18:31.086268 containerd[1729]: time="2025-08-19T08:18:31.086166818Z" level=info msg="Container 0ab86123621038bad13e9fc4d881dc3abfe2c49a19450dbd5e7d49e40eadfe8b: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:18:31.089482 containerd[1729]: time="2025-08-19T08:18:31.089455651Z" level=info msg="Container 0a0c726e97b4904c91bb84a78817c65db749b17b8b94fbaa91eec2772f6aea54: CDI devices from CRI Config.CDIDevices: []"
Aug 19 08:18:31.105253 containerd[1729]: time="2025-08-19T08:18:31.105226539Z" level=info msg="CreateContainer within sandbox \"393556a9f0e21c9ac6999f86aa68facdeb571000fc529f5638b7fee70f99f57e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0ab86123621038bad13e9fc4d881dc3abfe2c49a19450dbd5e7d49e40eadfe8b\""
Aug 19 08:18:31.105817 containerd[1729]: time="2025-08-19T08:18:31.105714541Z" level=info msg="StartContainer for \"0ab86123621038bad13e9fc4d881dc3abfe2c49a19450dbd5e7d49e40eadfe8b\""
Aug 19 08:18:31.106857 containerd[1729]: time="2025-08-19T08:18:31.106802157Z" level=info msg="connecting to shim 0ab86123621038bad13e9fc4d881dc3abfe2c49a19450dbd5e7d49e40eadfe8b" address="unix:///run/containerd/s/0899e1a07ac86c4188cd7f46f880b4f8be446a6b6956d678c3bd5988593bb27e" protocol=ttrpc version=3
Aug 19 08:18:31.113402 containerd[1729]: time="2025-08-19T08:18:31.113376882Z" level=info msg="CreateContainer within sandbox \"2f0e2c85d7164a20d08fab1e316f70ec2834c7c04451d909647ad60e91b5f07f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a0c726e97b4904c91bb84a78817c65db749b17b8b94fbaa91eec2772f6aea54\""
Aug 19 08:18:31.114618 containerd[1729]: time="2025-08-19T08:18:31.114602764Z" level=info msg="StartContainer for \"0a0c726e97b4904c91bb84a78817c65db749b17b8b94fbaa91eec2772f6aea54\""
Aug 19 08:18:31.115498 containerd[1729]: time="2025-08-19T08:18:31.115478200Z" level=info msg="connecting to shim 0a0c726e97b4904c91bb84a78817c65db749b17b8b94fbaa91eec2772f6aea54" address="unix:///run/containerd/s/79926e3b8dbddc7f57d040ac2fd33965d7cf5a3db87d5c902f07f35dd6557661" protocol=ttrpc version=3
Aug 19 08:18:31.132866 systemd[1]: Started cri-containerd-0ab86123621038bad13e9fc4d881dc3abfe2c49a19450dbd5e7d49e40eadfe8b.scope - libcontainer container 0ab86123621038bad13e9fc4d881dc3abfe2c49a19450dbd5e7d49e40eadfe8b.
Aug 19 08:18:31.147824 systemd[1]: Started cri-containerd-0a0c726e97b4904c91bb84a78817c65db749b17b8b94fbaa91eec2772f6aea54.scope - libcontainer container 0a0c726e97b4904c91bb84a78817c65db749b17b8b94fbaa91eec2772f6aea54.
Aug 19 08:18:31.207825 containerd[1729]: time="2025-08-19T08:18:31.207803213Z" level=info msg="StartContainer for \"0ab86123621038bad13e9fc4d881dc3abfe2c49a19450dbd5e7d49e40eadfe8b\" returns successfully"
Aug 19 08:18:31.224745 containerd[1729]: time="2025-08-19T08:18:31.224724927Z" level=info msg="StartContainer for \"0a0c726e97b4904c91bb84a78817c65db749b17b8b94fbaa91eec2772f6aea54\" returns successfully"
Aug 19 08:18:31.795641 kubelet[2773]: E0819 08:18:31.795530 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:31.797000 kubelet[2773]: E0819 08:18:31.796854 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:31.797773 kubelet[2773]: E0819 08:18:31.797760 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:32.800706 kubelet[2773]: E0819 08:18:32.800292 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:32.800706 kubelet[2773]: E0819 08:18:32.800620 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:33.211228 kubelet[2773]: E0819 08:18:33.211134 2773 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4426.0.0-a-f24a6b1c57" not found
Aug 19 08:18:33.362172 kubelet[2773]: E0819 08:18:33.362132 2773 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:33.557379 kubelet[2773]: E0819 08:18:33.557349 2773 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4426.0.0-a-f24a6b1c57" not found
Aug 19 08:18:33.800423 kubelet[2773]: E0819 08:18:33.800385 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:33.800751 kubelet[2773]: E0819 08:18:33.800740 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:33.860016 kubelet[2773]: E0819 08:18:33.859928 2773 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.0.0-a-f24a6b1c57\" not found"
Aug 19 08:18:33.996411 kubelet[2773]: E0819 08:18:33.996372 2773 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4426.0.0-a-f24a6b1c57" not found
Aug 19 08:18:34.617022 kubelet[2773]: E0819 08:18:34.616995 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:34.801350 kubelet[2773]: E0819 08:18:34.801325 2773 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:34.910552 kubelet[2773]: E0819 08:18:34.910454 2773 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4426.0.0-a-f24a6b1c57" not found
Aug 19 08:18:36.576407 kubelet[2773]: I0819 08:18:36.576381 2773 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:36.583172 kubelet[2773]: I0819 08:18:36.583138 2773 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:36.583172 kubelet[2773]: E0819 08:18:36.583172 2773 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4426.0.0-a-f24a6b1c57\": node \"ci-4426.0.0-a-f24a6b1c57\" not found"
Aug 19 08:18:36.589993 kubelet[2773]: E0819 08:18:36.589951 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found"
Aug 19 08:18:36.690448 kubelet[2773]: E0819 08:18:36.690419 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found"
Aug 19 08:18:36.790554 kubelet[2773]: E0819 08:18:36.790501 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found"
Aug 19 08:18:36.891235 kubelet[2773]: E0819 08:18:36.891137 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found"
Aug 19 08:18:36.991803 kubelet[2773]: E0819 08:18:36.991769 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found"
Aug 19 08:18:37.080692 systemd[1]: Reload requested from client PID 3046 ('systemctl') (unit session-9.scope)...
Aug 19 08:18:37.080706 systemd[1]: Reloading...
Aug 19 08:18:37.092552 kubelet[2773]: E0819 08:18:37.092511 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found"
Aug 19 08:18:37.173706 zram_generator::config[3099]: No configuration found.
Aug 19 08:18:37.192952 kubelet[2773]: E0819 08:18:37.192915 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found"
Aug 19 08:18:37.293865 kubelet[2773]: E0819 08:18:37.293831 2773 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4426.0.0-a-f24a6b1c57\" not found"
Aug 19 08:18:37.347864 systemd[1]: Reloading finished in 266 ms.
Aug 19 08:18:37.370255 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 08:18:37.380473 systemd[1]: kubelet.service: Deactivated successfully.
Aug 19 08:18:37.380660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 08:18:37.380723 systemd[1]: kubelet.service: Consumed 599ms CPU time, 130M memory peak.
Aug 19 08:18:37.381981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 08:18:38.298927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 08:18:38.307968 (kubelet)[3160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 19 08:18:38.344701 kubelet[3160]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 19 08:18:38.344701 kubelet[3160]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 19 08:18:38.344701 kubelet[3160]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 19 08:18:38.344701 kubelet[3160]: I0819 08:18:38.343194 3160 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 19 08:18:38.349456 kubelet[3160]: I0819 08:18:38.349436 3160 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Aug 19 08:18:38.349577 kubelet[3160]: I0819 08:18:38.349569 3160 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 19 08:18:38.349848 kubelet[3160]: I0819 08:18:38.349837 3160 server.go:956] "Client rotation is on, will bootstrap in background"
Aug 19 08:18:38.351090 kubelet[3160]: I0819 08:18:38.351073 3160 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Aug 19 08:18:38.353277 kubelet[3160]: I0819 08:18:38.353229 3160 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 19 08:18:38.356282 kubelet[3160]: I0819 08:18:38.356272 3160 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 19 08:18:38.358754 kubelet[3160]: I0819 08:18:38.358735 3160 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 19 08:18:38.359005 kubelet[3160]: I0819 08:18:38.358987 3160 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 19 08:18:38.359177 kubelet[3160]: I0819 08:18:38.359051 3160 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.0.0-a-f24a6b1c57","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 19 08:18:38.359279 kubelet[3160]: I0819 08:18:38.359272 3160 topology_manager.go:138] "Creating topology manager with none policy"
Aug 19 08:18:38.359315 kubelet[3160]: I0819 08:18:38.359311 3160 container_manager_linux.go:303] "Creating device plugin manager"
Aug 19 08:18:38.359383 kubelet[3160]: I0819 08:18:38.359378 3160 state_mem.go:36] "Initialized new in-memory state store"
Aug 19 08:18:38.359542 kubelet[3160]: I0819 08:18:38.359536 3160 kubelet.go:480] "Attempting to sync node with API server"
Aug 19 08:18:38.360432 kubelet[3160]: I0819 08:18:38.360419 3160 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 19 08:18:38.360522 kubelet[3160]: I0819 08:18:38.360517 3160 kubelet.go:386] "Adding apiserver pod source"
Aug 19 08:18:38.360560 kubelet[3160]: I0819 08:18:38.360555 3160 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 19 08:18:38.372004 kubelet[3160]: I0819 08:18:38.371989 3160 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Aug 19 08:18:38.372510 kubelet[3160]: I0819 08:18:38.372501 3160 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Aug 19 08:18:38.375503 kubelet[3160]: I0819 08:18:38.375492 3160 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 19 08:18:38.375593 kubelet[3160]: I0819 08:18:38.375588 3160 server.go:1289] "Started kubelet"
Aug 19 08:18:38.378811 kubelet[3160]: I0819 08:18:38.378800 3160 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 19 08:18:38.384476 kubelet[3160]: I0819 08:18:38.384458 3160 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 19 08:18:38.387846 kubelet[3160]: I0819 08:18:38.378930 3160 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 19 08:18:38.388084 kubelet[3160]: I0819 08:18:38.388073 3160 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 19 08:18:38.389239 kubelet[3160]: I0819 08:18:38.378885 3160 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Aug 19 08:18:38.390524 kubelet[3160]: I0819 08:18:38.390331 3160 server.go:317] "Adding debug handlers to kubelet server"
Aug 19 08:18:38.392509 kubelet[3160]: I0819 08:18:38.392345 3160 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 19 08:18:38.392917 kubelet[3160]: I0819 08:18:38.392751 3160 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 19 08:18:38.393250 kubelet[3160]: I0819 08:18:38.393161 3160 reconciler.go:26] "Reconciler: start to sync state"
Aug 19 08:18:38.400470 kubelet[3160]: I0819 08:18:38.399806 3160 factory.go:223] Registration of the containerd container factory successfully
Aug 19 08:18:38.400470 kubelet[3160]: I0819 08:18:38.399819 3160 factory.go:223] Registration of the systemd container factory successfully
Aug 19 08:18:38.400470 kubelet[3160]: I0819 08:18:38.399912 3160 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 19 08:18:38.405469 kubelet[3160]: I0819 08:18:38.405443 3160 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Aug 19 08:18:38.410331 kubelet[3160]: I0819 08:18:38.409647 3160 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Aug 19 08:18:38.410331 kubelet[3160]: I0819 08:18:38.409663 3160 status_manager.go:230] "Starting to sync pod status with apiserver"
Aug 19 08:18:38.410331 kubelet[3160]: I0819 08:18:38.409675 3160 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 19 08:18:38.410331 kubelet[3160]: I0819 08:18:38.410024 3160 kubelet.go:2436] "Starting kubelet main sync loop"
Aug 19 08:18:38.410331 kubelet[3160]: E0819 08:18:38.410071 3160 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 19 08:18:38.411360 kubelet[3160]: E0819 08:18:38.411331 3160 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 19 08:18:38.441469 kubelet[3160]: I0819 08:18:38.441449 3160 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 19 08:18:38.441561 kubelet[3160]: I0819 08:18:38.441554 3160 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 19 08:18:38.441595 kubelet[3160]: I0819 08:18:38.441592 3160 state_mem.go:36] "Initialized new in-memory state store"
Aug 19 08:18:38.441729 kubelet[3160]: I0819 08:18:38.441717 3160 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 19 08:18:38.441768 kubelet[3160]: I0819 08:18:38.441757 3160 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 19 08:18:38.441791 kubelet[3160]: I0819 08:18:38.441788 3160 policy_none.go:49] "None policy: Start"
Aug 19 08:18:38.441829 kubelet[3160]: I0819 08:18:38.441826 3160 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 19 08:18:38.441852 kubelet[3160]: I0819 08:18:38.441850 3160 state_mem.go:35] "Initializing new in-memory state store"
Aug 19 08:18:38.441928 kubelet[3160]: I0819 08:18:38.441925 3160 state_mem.go:75] "Updated machine memory state"
Aug 19 08:18:38.444484 kubelet[3160]: E0819 08:18:38.444473 3160 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Aug 19 08:18:38.445054 kubelet[3160]: I0819 08:18:38.445044 3160 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 19 08:18:38.445726 kubelet[3160]: I0819 08:18:38.445700 3160 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 19 08:18:38.446968 kubelet[3160]: E0819 08:18:38.446322 3160 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 19 08:18:38.448190 kubelet[3160]: I0819 08:18:38.448168 3160 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 19 08:18:38.509892 sudo[3199]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 19 08:18:38.510138 sudo[3199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Aug 19 08:18:38.510832 kubelet[3160]: I0819 08:18:38.510671 3160 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.511352 kubelet[3160]: I0819 08:18:38.511039 3160 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.512652 kubelet[3160]: I0819 08:18:38.511147 3160 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.524969 kubelet[3160]: I0819 08:18:38.524954 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Aug 19 08:18:38.528437 kubelet[3160]: I0819 08:18:38.528390 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Aug 19 08:18:38.528596 kubelet[3160]: I0819 08:18:38.528543 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Aug 19 08:18:38.555408 kubelet[3160]: I0819 08:18:38.555330 3160 kubelet_node_status.go:75] "Attempting to register node" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.564542 kubelet[3160]: I0819 08:18:38.564525 3160 kubelet_node_status.go:124] "Node was previously registered" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.564601 kubelet[3160]: I0819 08:18:38.564591 3160 kubelet_node_status.go:78] "Successfully registered node" node="ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.594837 kubelet[3160]: I0819 08:18:38.594815 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5c10ec70072d92b6b09cd59b6a8d57b-kubeconfig\") pod \"kube-scheduler-ci-4426.0.0-a-f24a6b1c57\" (UID: \"f5c10ec70072d92b6b09cd59b6a8d57b\") " pod="kube-system/kube-scheduler-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.594900 kubelet[3160]: I0819 08:18:38.594847 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b62c0fd2283cc091e3675a600e23b57-k8s-certs\") pod \"kube-apiserver-ci-4426.0.0-a-f24a6b1c57\" (UID: \"4b62c0fd2283cc091e3675a600e23b57\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.594900 kubelet[3160]: I0819 08:18:38.594867 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93d13223a3291d0a1de7d7694fcd7305-ca-certs\") pod \"kube-controller-manager-ci-4426.0.0-a-f24a6b1c57\" (UID: \"93d13223a3291d0a1de7d7694fcd7305\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.594900 kubelet[3160]: I0819 08:18:38.594882 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/93d13223a3291d0a1de7d7694fcd7305-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.0.0-a-f24a6b1c57\" (UID: \"93d13223a3291d0a1de7d7694fcd7305\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.594983 kubelet[3160]: I0819 08:18:38.594900 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93d13223a3291d0a1de7d7694fcd7305-k8s-certs\") pod \"kube-controller-manager-ci-4426.0.0-a-f24a6b1c57\" (UID: \"93d13223a3291d0a1de7d7694fcd7305\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.594983 kubelet[3160]: I0819 08:18:38.594916 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b62c0fd2283cc091e3675a600e23b57-ca-certs\") pod \"kube-apiserver-ci-4426.0.0-a-f24a6b1c57\" (UID: \"4b62c0fd2283cc091e3675a600e23b57\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.594983 kubelet[3160]: I0819 08:18:38.594935 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b62c0fd2283cc091e3675a600e23b57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.0.0-a-f24a6b1c57\" (UID: \"4b62c0fd2283cc091e3675a600e23b57\") " pod="kube-system/kube-apiserver-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.594983 kubelet[3160]: I0819 08:18:38.594954 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/93d13223a3291d0a1de7d7694fcd7305-kubeconfig\") pod \"kube-controller-manager-ci-4426.0.0-a-f24a6b1c57\" (UID: \"93d13223a3291d0a1de7d7694fcd7305\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.594983 kubelet[3160]: I0819 08:18:38.594970 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93d13223a3291d0a1de7d7694fcd7305-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.0.0-a-f24a6b1c57\" (UID: \"93d13223a3291d0a1de7d7694fcd7305\") " pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:38.827847 sudo[3199]: pam_unix(sudo:session): session closed for user root
Aug 19 08:18:39.365486 kubelet[3160]: I0819 08:18:39.365459 3160 apiserver.go:52] "Watching apiserver"
Aug 19 08:18:39.393939 kubelet[3160]: I0819 08:18:39.393907 3160 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 19 08:18:39.423988 kubelet[3160]: I0819 08:18:39.423787 3160 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:39.424174 kubelet[3160]: I0819 08:18:39.423787 3160 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:39.433708 kubelet[3160]: I0819 08:18:39.433475 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Aug 19 08:18:39.433879 kubelet[3160]: E0819 08:18:39.433821 3160 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4426.0.0-a-f24a6b1c57\" already exists" pod="kube-system/kube-scheduler-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:39.434227 kubelet[3160]: I0819 08:18:39.434193 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Aug 19 08:18:39.434227 kubelet[3160]: E0819 08:18:39.434228 3160 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4426.0.0-a-f24a6b1c57\" already exists" pod="kube-system/kube-apiserver-ci-4426.0.0-a-f24a6b1c57"
Aug 19 08:18:39.449054 kubelet[3160]: I0819 08:18:39.448995 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.0.0-a-f24a6b1c57" podStartSLOduration=1.4489822540000001 podStartE2EDuration="1.448982254s" podCreationTimestamp="2025-08-19 08:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:18:39.439693758 +0000 UTC m=+1.128925806" watchObservedRunningTime="2025-08-19 08:18:39.448982254 +0000 UTC m=+1.138214307"
Aug 19 08:18:39.457557 kubelet[3160]: I0819 08:18:39.457507 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426.0.0-a-f24a6b1c57" podStartSLOduration=1.457498623 podStartE2EDuration="1.457498623s" podCreationTimestamp="2025-08-19 08:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:18:39.449242558 +0000 UTC m=+1.138474612" watchObservedRunningTime="2025-08-19 08:18:39.457498623 +0000 UTC m=+1.146730673"
Aug 19 08:18:39.984709 sudo[2172]: pam_unix(sudo:session): session closed for user root
Aug 19 08:18:40.093265 sshd[2171]: Connection closed by 10.200.16.10 port 55776
Aug 19 08:18:40.093784 sshd-session[2168]: pam_unix(sshd:session): session closed for user core
Aug 19 08:18:40.096947 systemd[1]: sshd@6-10.200.8.11:22-10.200.16.10:55776.service: Deactivated successfully.
Aug 19 08:18:40.098605 systemd[1]: session-9.scope: Deactivated successfully.
Aug 19 08:18:40.099062 systemd[1]: session-9.scope: Consumed 4.294s CPU time, 271.9M memory peak.
Aug 19 08:18:40.100350 systemd-logind[1709]: Session 9 logged out. Waiting for processes to exit.
Aug 19 08:18:40.101664 systemd-logind[1709]: Removed session 9.
Aug 19 08:18:41.252562 kubelet[3160]: I0819 08:18:41.252536 3160 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 19 08:18:41.252913 containerd[1729]: time="2025-08-19T08:18:41.252835378Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 19 08:18:41.253069 kubelet[3160]: I0819 08:18:41.252974 3160 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 19 08:18:42.472357 kubelet[3160]: I0819 08:18:42.472301 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.0.0-a-f24a6b1c57" podStartSLOduration=4.472269837 podStartE2EDuration="4.472269837s" podCreationTimestamp="2025-08-19 08:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:18:39.458109819 +0000 UTC m=+1.147341867" watchObservedRunningTime="2025-08-19 08:18:42.472269837 +0000 UTC m=+4.161501889" Aug 19 08:18:42.523990 systemd[1]: Created slice kubepods-besteffort-poda3d51b82_e2cd_4cbd_9137_c16c7aa87dff.slice - libcontainer container kubepods-besteffort-poda3d51b82_e2cd_4cbd_9137_c16c7aa87dff.slice. Aug 19 08:18:42.539149 systemd[1]: Created slice kubepods-burstable-pod7dbf04a3_6786_4d9e_915d_4a55e6d504c5.slice - libcontainer container kubepods-burstable-pod7dbf04a3_6786_4d9e_915d_4a55e6d504c5.slice. Aug 19 08:18:42.591464 systemd[1]: Created slice kubepods-besteffort-podb79f9c3d_fe19_4e77_bcac_fd599f32717c.slice - libcontainer container kubepods-besteffort-podb79f9c3d_fe19_4e77_bcac_fd599f32717c.slice. 
Aug 19 08:18:42.619964 kubelet[3160]: I0819 08:18:42.619932 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-host-proc-sys-net\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620065 kubelet[3160]: I0819 08:18:42.619974 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms5vx\" (UniqueName: \"kubernetes.io/projected/b79f9c3d-fe19-4e77-bcac-fd599f32717c-kube-api-access-ms5vx\") pod \"cilium-operator-6c4d7847fc-gnxxw\" (UID: \"b79f9c3d-fe19-4e77-bcac-fd599f32717c\") " pod="kube-system/cilium-operator-6c4d7847fc-gnxxw" Aug 19 08:18:42.620065 kubelet[3160]: I0819 08:18:42.619993 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3d51b82-e2cd-4cbd-9137-c16c7aa87dff-xtables-lock\") pod \"kube-proxy-rh8zv\" (UID: \"a3d51b82-e2cd-4cbd-9137-c16c7aa87dff\") " pod="kube-system/kube-proxy-rh8zv" Aug 19 08:18:42.620065 kubelet[3160]: I0819 08:18:42.620011 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-lib-modules\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620065 kubelet[3160]: I0819 08:18:42.620029 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cni-path\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620065 kubelet[3160]: I0819 08:18:42.620045 3160 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-xtables-lock\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620192 kubelet[3160]: I0819 08:18:42.620060 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-config-path\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620192 kubelet[3160]: I0819 08:18:42.620076 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a3d51b82-e2cd-4cbd-9137-c16c7aa87dff-kube-proxy\") pod \"kube-proxy-rh8zv\" (UID: \"a3d51b82-e2cd-4cbd-9137-c16c7aa87dff\") " pod="kube-system/kube-proxy-rh8zv" Aug 19 08:18:42.620192 kubelet[3160]: I0819 08:18:42.620091 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-run\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620192 kubelet[3160]: I0819 08:18:42.620107 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-clustermesh-secrets\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620192 kubelet[3160]: I0819 08:18:42.620129 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-hubble-tls\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620192 kubelet[3160]: I0819 08:18:42.620145 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-bpf-maps\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620327 kubelet[3160]: I0819 08:18:42.620168 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-hostproc\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620327 kubelet[3160]: I0819 08:18:42.620185 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-etc-cni-netd\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620327 kubelet[3160]: I0819 08:18:42.620202 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-host-proc-sys-kernel\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620327 kubelet[3160]: I0819 08:18:42.620219 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8qqg\" (UniqueName: \"kubernetes.io/projected/a3d51b82-e2cd-4cbd-9137-c16c7aa87dff-kube-api-access-l8qqg\") pod \"kube-proxy-rh8zv\" (UID: 
\"a3d51b82-e2cd-4cbd-9137-c16c7aa87dff\") " pod="kube-system/kube-proxy-rh8zv" Aug 19 08:18:42.620327 kubelet[3160]: I0819 08:18:42.620235 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3d51b82-e2cd-4cbd-9137-c16c7aa87dff-lib-modules\") pod \"kube-proxy-rh8zv\" (UID: \"a3d51b82-e2cd-4cbd-9137-c16c7aa87dff\") " pod="kube-system/kube-proxy-rh8zv" Aug 19 08:18:42.620442 kubelet[3160]: I0819 08:18:42.620252 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-cgroup\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620442 kubelet[3160]: I0819 08:18:42.620268 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v95n6\" (UniqueName: \"kubernetes.io/projected/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-kube-api-access-v95n6\") pod \"cilium-mmbjp\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " pod="kube-system/cilium-mmbjp" Aug 19 08:18:42.620442 kubelet[3160]: I0819 08:18:42.620291 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b79f9c3d-fe19-4e77-bcac-fd599f32717c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-gnxxw\" (UID: \"b79f9c3d-fe19-4e77-bcac-fd599f32717c\") " pod="kube-system/cilium-operator-6c4d7847fc-gnxxw" Aug 19 08:18:42.834730 containerd[1729]: time="2025-08-19T08:18:42.834674797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rh8zv,Uid:a3d51b82-e2cd-4cbd-9137-c16c7aa87dff,Namespace:kube-system,Attempt:0,}" Aug 19 08:18:42.843194 containerd[1729]: time="2025-08-19T08:18:42.843160874Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-mmbjp,Uid:7dbf04a3-6786-4d9e-915d-4a55e6d504c5,Namespace:kube-system,Attempt:0,}" Aug 19 08:18:42.896640 containerd[1729]: time="2025-08-19T08:18:42.896251835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gnxxw,Uid:b79f9c3d-fe19-4e77-bcac-fd599f32717c,Namespace:kube-system,Attempt:0,}" Aug 19 08:18:42.897895 containerd[1729]: time="2025-08-19T08:18:42.897869717Z" level=info msg="connecting to shim 452cd2fc94e2e4484be707f3311f9fcf4ffef9474f9b469aea1dc6dc8e2b38e6" address="unix:///run/containerd/s/c1ddf54336d978d8c90cc6cefe1edfce9b524beb21f1019a7fe5b3a58c91e496" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:18:42.918031 containerd[1729]: time="2025-08-19T08:18:42.917809444Z" level=info msg="connecting to shim 07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228" address="unix:///run/containerd/s/d8f81b3332dc98f4ba7a347a3f992634b101d99a307d106cace6765989b82e46" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:18:42.922874 systemd[1]: Started cri-containerd-452cd2fc94e2e4484be707f3311f9fcf4ffef9474f9b469aea1dc6dc8e2b38e6.scope - libcontainer container 452cd2fc94e2e4484be707f3311f9fcf4ffef9474f9b469aea1dc6dc8e2b38e6. Aug 19 08:18:42.943953 systemd[1]: Started cri-containerd-07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228.scope - libcontainer container 07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228. 
Aug 19 08:18:42.947834 containerd[1729]: time="2025-08-19T08:18:42.947809680Z" level=info msg="connecting to shim 8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c" address="unix:///run/containerd/s/e8c2e0731d22b0fd1096b3ba068732b822770f6eb2839a6bc1db05fd2d7eb095" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:18:42.959567 containerd[1729]: time="2025-08-19T08:18:42.959540111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rh8zv,Uid:a3d51b82-e2cd-4cbd-9137-c16c7aa87dff,Namespace:kube-system,Attempt:0,} returns sandbox id \"452cd2fc94e2e4484be707f3311f9fcf4ffef9474f9b469aea1dc6dc8e2b38e6\"" Aug 19 08:18:42.969770 containerd[1729]: time="2025-08-19T08:18:42.969747440Z" level=info msg="CreateContainer within sandbox \"452cd2fc94e2e4484be707f3311f9fcf4ffef9474f9b469aea1dc6dc8e2b38e6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 19 08:18:42.974925 systemd[1]: Started cri-containerd-8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c.scope - libcontainer container 8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c. 
Aug 19 08:18:42.988526 containerd[1729]: time="2025-08-19T08:18:42.988495119Z" level=info msg="Container a37c26ace89317c4763b75c32313cd4e23e8eeab8ea949a4fb77ae102fa201b1: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:18:43.001119 containerd[1729]: time="2025-08-19T08:18:43.001095693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mmbjp,Uid:7dbf04a3-6786-4d9e-915d-4a55e6d504c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\"" Aug 19 08:18:43.002803 containerd[1729]: time="2025-08-19T08:18:43.002574300Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 19 08:18:43.008979 containerd[1729]: time="2025-08-19T08:18:43.008954970Z" level=info msg="CreateContainer within sandbox \"452cd2fc94e2e4484be707f3311f9fcf4ffef9474f9b469aea1dc6dc8e2b38e6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a37c26ace89317c4763b75c32313cd4e23e8eeab8ea949a4fb77ae102fa201b1\"" Aug 19 08:18:43.009480 containerd[1729]: time="2025-08-19T08:18:43.009447256Z" level=info msg="StartContainer for \"a37c26ace89317c4763b75c32313cd4e23e8eeab8ea949a4fb77ae102fa201b1\"" Aug 19 08:18:43.010636 containerd[1729]: time="2025-08-19T08:18:43.010599034Z" level=info msg="connecting to shim a37c26ace89317c4763b75c32313cd4e23e8eeab8ea949a4fb77ae102fa201b1" address="unix:///run/containerd/s/c1ddf54336d978d8c90cc6cefe1edfce9b524beb21f1019a7fe5b3a58c91e496" protocol=ttrpc version=3 Aug 19 08:18:43.027280 containerd[1729]: time="2025-08-19T08:18:43.027255358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-gnxxw,Uid:b79f9c3d-fe19-4e77-bcac-fd599f32717c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\"" Aug 19 08:18:43.029804 systemd[1]: Started 
cri-containerd-a37c26ace89317c4763b75c32313cd4e23e8eeab8ea949a4fb77ae102fa201b1.scope - libcontainer container a37c26ace89317c4763b75c32313cd4e23e8eeab8ea949a4fb77ae102fa201b1. Aug 19 08:18:43.055749 containerd[1729]: time="2025-08-19T08:18:43.055713883Z" level=info msg="StartContainer for \"a37c26ace89317c4763b75c32313cd4e23e8eeab8ea949a4fb77ae102fa201b1\" returns successfully" Aug 19 08:18:43.442275 kubelet[3160]: I0819 08:18:43.442210 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rh8zv" podStartSLOduration=1.442192637 podStartE2EDuration="1.442192637s" podCreationTimestamp="2025-08-19 08:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:18:43.44206266 +0000 UTC m=+5.131294708" watchObservedRunningTime="2025-08-19 08:18:43.442192637 +0000 UTC m=+5.131424680" Aug 19 08:18:57.262617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192792486.mount: Deactivated successfully. 
Aug 19 08:19:01.649657 containerd[1729]: time="2025-08-19T08:19:01.649611588Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:19:01.652434 containerd[1729]: time="2025-08-19T08:19:01.652317963Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 19 08:19:01.654663 containerd[1729]: time="2025-08-19T08:19:01.654642437Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:19:01.655631 containerd[1729]: time="2025-08-19T08:19:01.655608418Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 18.653009618s" Aug 19 08:19:01.655767 containerd[1729]: time="2025-08-19T08:19:01.655710602Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 19 08:19:01.656928 containerd[1729]: time="2025-08-19T08:19:01.656906325Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 19 08:19:01.661786 containerd[1729]: time="2025-08-19T08:19:01.661762569Z" level=info msg="CreateContainer within sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 19 08:19:01.681468 containerd[1729]: time="2025-08-19T08:19:01.680878886Z" level=info msg="Container a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:19:01.699614 containerd[1729]: time="2025-08-19T08:19:01.699589357Z" level=info msg="CreateContainer within sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\"" Aug 19 08:19:01.699993 containerd[1729]: time="2025-08-19T08:19:01.699964811Z" level=info msg="StartContainer for \"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\"" Aug 19 08:19:01.700892 containerd[1729]: time="2025-08-19T08:19:01.700838227Z" level=info msg="connecting to shim a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e" address="unix:///run/containerd/s/d8f81b3332dc98f4ba7a347a3f992634b101d99a307d106cace6765989b82e46" protocol=ttrpc version=3 Aug 19 08:19:01.718825 systemd[1]: Started cri-containerd-a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e.scope - libcontainer container a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e. Aug 19 08:19:01.744573 containerd[1729]: time="2025-08-19T08:19:01.744552005Z" level=info msg="StartContainer for \"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\" returns successfully" Aug 19 08:19:01.751447 systemd[1]: cri-containerd-a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e.scope: Deactivated successfully. 
Aug 19 08:19:01.752545 containerd[1729]: time="2025-08-19T08:19:01.752523232Z" level=info msg="received exit event container_id:\"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\" id:\"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\" pid:3576 exited_at:{seconds:1755591541 nanos:751836150}" Aug 19 08:19:01.752741 containerd[1729]: time="2025-08-19T08:19:01.752720617Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\" id:\"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\" pid:3576 exited_at:{seconds:1755591541 nanos:751836150}" Aug 19 08:19:01.767066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e-rootfs.mount: Deactivated successfully. Aug 19 08:19:06.081845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount65311543.mount: Deactivated successfully. Aug 19 08:19:06.477886 containerd[1729]: time="2025-08-19T08:19:06.477785105Z" level=info msg="CreateContainer within sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 19 08:19:06.515650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445140643.mount: Deactivated successfully. Aug 19 08:19:06.518043 containerd[1729]: time="2025-08-19T08:19:06.517985245Z" level=info msg="Container 28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:19:06.523877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1717149613.mount: Deactivated successfully. 
Aug 19 08:19:06.539066 containerd[1729]: time="2025-08-19T08:19:06.539033747Z" level=info msg="CreateContainer within sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\"" Aug 19 08:19:06.539795 containerd[1729]: time="2025-08-19T08:19:06.539729886Z" level=info msg="StartContainer for \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\"" Aug 19 08:19:06.540472 containerd[1729]: time="2025-08-19T08:19:06.540429178Z" level=info msg="connecting to shim 28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e" address="unix:///run/containerd/s/d8f81b3332dc98f4ba7a347a3f992634b101d99a307d106cace6765989b82e46" protocol=ttrpc version=3 Aug 19 08:19:06.574819 systemd[1]: Started cri-containerd-28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e.scope - libcontainer container 28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e. Aug 19 08:19:06.614897 containerd[1729]: time="2025-08-19T08:19:06.614851863Z" level=info msg="StartContainer for \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\" returns successfully" Aug 19 08:19:06.625980 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 08:19:06.626299 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:19:06.626451 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:19:06.628213 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 19 08:19:06.636017 systemd[1]: cri-containerd-28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e.scope: Deactivated successfully. 
Aug 19 08:19:06.637240 containerd[1729]: time="2025-08-19T08:19:06.637217154Z" level=info msg="received exit event container_id:\"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\" id:\"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\" pid:3632 exited_at:{seconds:1755591546 nanos:636812697}" Aug 19 08:19:06.637424 containerd[1729]: time="2025-08-19T08:19:06.637351693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\" id:\"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\" pid:3632 exited_at:{seconds:1755591546 nanos:636812697}" Aug 19 08:19:06.654704 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 19 08:19:06.934164 containerd[1729]: time="2025-08-19T08:19:06.934127890Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:19:06.936880 containerd[1729]: time="2025-08-19T08:19:06.936848534Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 19 08:19:06.940694 containerd[1729]: time="2025-08-19T08:19:06.940636734Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 08:19:06.941590 containerd[1729]: time="2025-08-19T08:19:06.941507455Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.284389489s" Aug 19 08:19:06.941590 containerd[1729]: time="2025-08-19T08:19:06.941535269Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 19 08:19:06.947819 containerd[1729]: time="2025-08-19T08:19:06.947793645Z" level=info msg="CreateContainer within sandbox \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 19 08:19:06.967328 containerd[1729]: time="2025-08-19T08:19:06.967304503Z" level=info msg="Container adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:19:06.979279 containerd[1729]: time="2025-08-19T08:19:06.979256989Z" level=info msg="CreateContainer within sandbox \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\"" Aug 19 08:19:06.980512 containerd[1729]: time="2025-08-19T08:19:06.979674691Z" level=info msg="StartContainer for \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\"" Aug 19 08:19:06.980579 containerd[1729]: time="2025-08-19T08:19:06.980529869Z" level=info msg="connecting to shim adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3" address="unix:///run/containerd/s/e8c2e0731d22b0fd1096b3ba068732b822770f6eb2839a6bc1db05fd2d7eb095" protocol=ttrpc version=3 Aug 19 08:19:06.997845 systemd[1]: Started cri-containerd-adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3.scope - libcontainer container adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3. 
Aug 19 08:19:07.022448 containerd[1729]: time="2025-08-19T08:19:07.022419536Z" level=info msg="StartContainer for \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" returns successfully" Aug 19 08:19:07.487639 containerd[1729]: time="2025-08-19T08:19:07.487584834Z" level=info msg="CreateContainer within sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 19 08:19:07.513812 containerd[1729]: time="2025-08-19T08:19:07.513779097Z" level=info msg="Container 35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:19:07.531705 containerd[1729]: time="2025-08-19T08:19:07.530772981Z" level=info msg="CreateContainer within sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\"" Aug 19 08:19:07.532173 containerd[1729]: time="2025-08-19T08:19:07.532150350Z" level=info msg="StartContainer for \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\"" Aug 19 08:19:07.533751 containerd[1729]: time="2025-08-19T08:19:07.533727997Z" level=info msg="connecting to shim 35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542" address="unix:///run/containerd/s/d8f81b3332dc98f4ba7a347a3f992634b101d99a307d106cace6765989b82e46" protocol=ttrpc version=3 Aug 19 08:19:07.570826 systemd[1]: Started cri-containerd-35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542.scope - libcontainer container 35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542. 
Aug 19 08:19:07.614354 kubelet[3160]: I0819 08:19:07.614302 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-gnxxw" podStartSLOduration=1.700274409 podStartE2EDuration="25.614286023s" podCreationTimestamp="2025-08-19 08:18:42 +0000 UTC" firstStartedPulling="2025-08-19 08:18:43.028053972 +0000 UTC m=+4.717286017" lastFinishedPulling="2025-08-19 08:19:06.942065584 +0000 UTC m=+28.631297631" observedRunningTime="2025-08-19 08:19:07.549046445 +0000 UTC m=+29.238278495" watchObservedRunningTime="2025-08-19 08:19:07.614286023 +0000 UTC m=+29.303518079" Aug 19 08:19:07.644663 systemd[1]: cri-containerd-35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542.scope: Deactivated successfully. Aug 19 08:19:07.647862 containerd[1729]: time="2025-08-19T08:19:07.647661233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\" id:\"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\" pid:3717 exited_at:{seconds:1755591547 nanos:646605573}" Aug 19 08:19:07.648468 containerd[1729]: time="2025-08-19T08:19:07.648360560Z" level=info msg="received exit event container_id:\"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\" id:\"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\" pid:3717 exited_at:{seconds:1755591547 nanos:646605573}" Aug 19 08:19:07.660138 containerd[1729]: time="2025-08-19T08:19:07.660081956Z" level=info msg="StartContainer for \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\" returns successfully" Aug 19 08:19:07.674734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542-rootfs.mount: Deactivated successfully. 
Aug 19 08:19:08.495720 containerd[1729]: time="2025-08-19T08:19:08.495288463Z" level=info msg="CreateContainer within sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 19 08:19:08.520296 containerd[1729]: time="2025-08-19T08:19:08.520268222Z" level=info msg="Container c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:19:08.534805 containerd[1729]: time="2025-08-19T08:19:08.534777629Z" level=info msg="CreateContainer within sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\"" Aug 19 08:19:08.535348 containerd[1729]: time="2025-08-19T08:19:08.535287193Z" level=info msg="StartContainer for \"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\"" Aug 19 08:19:08.536716 containerd[1729]: time="2025-08-19T08:19:08.536662430Z" level=info msg="connecting to shim c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3" address="unix:///run/containerd/s/d8f81b3332dc98f4ba7a347a3f992634b101d99a307d106cace6765989b82e46" protocol=ttrpc version=3 Aug 19 08:19:08.552807 systemd[1]: Started cri-containerd-c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3.scope - libcontainer container c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3. Aug 19 08:19:08.570847 systemd[1]: cri-containerd-c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3.scope: Deactivated successfully. 
Aug 19 08:19:08.572109 containerd[1729]: time="2025-08-19T08:19:08.572077110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\" id:\"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\" pid:3759 exited_at:{seconds:1755591548 nanos:571828328}" Aug 19 08:19:08.574359 containerd[1729]: time="2025-08-19T08:19:08.574329740Z" level=info msg="received exit event container_id:\"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\" id:\"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\" pid:3759 exited_at:{seconds:1755591548 nanos:571828328}" Aug 19 08:19:08.575630 containerd[1729]: time="2025-08-19T08:19:08.575610725Z" level=info msg="StartContainer for \"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\" returns successfully" Aug 19 08:19:08.594980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3-rootfs.mount: Deactivated successfully. 
Aug 19 08:19:09.492065 containerd[1729]: time="2025-08-19T08:19:09.491889708Z" level=info msg="CreateContainer within sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 19 08:19:09.515019 containerd[1729]: time="2025-08-19T08:19:09.514778873Z" level=info msg="Container c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:19:09.527620 containerd[1729]: time="2025-08-19T08:19:09.527588979Z" level=info msg="CreateContainer within sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\"" Aug 19 08:19:09.528713 containerd[1729]: time="2025-08-19T08:19:09.528135182Z" level=info msg="StartContainer for \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\"" Aug 19 08:19:09.529153 containerd[1729]: time="2025-08-19T08:19:09.529117569Z" level=info msg="connecting to shim c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7" address="unix:///run/containerd/s/d8f81b3332dc98f4ba7a347a3f992634b101d99a307d106cace6765989b82e46" protocol=ttrpc version=3 Aug 19 08:19:09.555838 systemd[1]: Started cri-containerd-c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7.scope - libcontainer container c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7. 
Aug 19 08:19:09.583187 containerd[1729]: time="2025-08-19T08:19:09.583161669Z" level=info msg="StartContainer for \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" returns successfully" Aug 19 08:19:09.636316 containerd[1729]: time="2025-08-19T08:19:09.636285935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" id:\"73c22890c9a5a9ff9c81b7ebf88d7e5b47b5f494148281679d322328d2c05213\" pid:3828 exited_at:{seconds:1755591549 nanos:636074863}" Aug 19 08:19:09.700907 kubelet[3160]: I0819 08:19:09.700886 3160 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 19 08:19:09.746261 systemd[1]: Created slice kubepods-burstable-pod8685a4b9_f664_4704_b7f9_3217ddbe5bdd.slice - libcontainer container kubepods-burstable-pod8685a4b9_f664_4704_b7f9_3217ddbe5bdd.slice. Aug 19 08:19:09.751548 systemd[1]: Created slice kubepods-burstable-pod43f3e7e8_73f0_4b0f_88cd_6b9fc3c3a3b5.slice - libcontainer container kubepods-burstable-pod43f3e7e8_73f0_4b0f_88cd_6b9fc3c3a3b5.slice. 
Aug 19 08:19:09.807293 kubelet[3160]: I0819 08:19:09.807235 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7tjj\" (UniqueName: \"kubernetes.io/projected/8685a4b9-f664-4704-b7f9-3217ddbe5bdd-kube-api-access-h7tjj\") pod \"coredns-674b8bbfcf-rjhm7\" (UID: \"8685a4b9-f664-4704-b7f9-3217ddbe5bdd\") " pod="kube-system/coredns-674b8bbfcf-rjhm7" Aug 19 08:19:09.807293 kubelet[3160]: I0819 08:19:09.807269 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8685a4b9-f664-4704-b7f9-3217ddbe5bdd-config-volume\") pod \"coredns-674b8bbfcf-rjhm7\" (UID: \"8685a4b9-f664-4704-b7f9-3217ddbe5bdd\") " pod="kube-system/coredns-674b8bbfcf-rjhm7" Aug 19 08:19:09.807508 kubelet[3160]: I0819 08:19:09.807384 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43f3e7e8-73f0-4b0f-88cd-6b9fc3c3a3b5-config-volume\") pod \"coredns-674b8bbfcf-zrfvt\" (UID: \"43f3e7e8-73f0-4b0f-88cd-6b9fc3c3a3b5\") " pod="kube-system/coredns-674b8bbfcf-zrfvt" Aug 19 08:19:09.807508 kubelet[3160]: I0819 08:19:09.807406 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl9d5\" (UniqueName: \"kubernetes.io/projected/43f3e7e8-73f0-4b0f-88cd-6b9fc3c3a3b5-kube-api-access-pl9d5\") pod \"coredns-674b8bbfcf-zrfvt\" (UID: \"43f3e7e8-73f0-4b0f-88cd-6b9fc3c3a3b5\") " pod="kube-system/coredns-674b8bbfcf-zrfvt" Aug 19 08:19:10.049294 containerd[1729]: time="2025-08-19T08:19:10.049047948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rjhm7,Uid:8685a4b9-f664-4704-b7f9-3217ddbe5bdd,Namespace:kube-system,Attempt:0,}" Aug 19 08:19:10.054894 containerd[1729]: time="2025-08-19T08:19:10.054850692Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-zrfvt,Uid:43f3e7e8-73f0-4b0f-88cd-6b9fc3c3a3b5,Namespace:kube-system,Attempt:0,}" Aug 19 08:19:11.741665 systemd-networkd[1507]: cilium_host: Link UP Aug 19 08:19:11.742862 systemd-networkd[1507]: cilium_net: Link UP Aug 19 08:19:11.743611 systemd-networkd[1507]: cilium_net: Gained carrier Aug 19 08:19:11.745019 systemd-networkd[1507]: cilium_host: Gained carrier Aug 19 08:19:11.745402 systemd-networkd[1507]: cilium_net: Gained IPv6LL Aug 19 08:19:11.888764 systemd-networkd[1507]: cilium_vxlan: Link UP Aug 19 08:19:11.888958 systemd-networkd[1507]: cilium_vxlan: Gained carrier Aug 19 08:19:12.090712 kernel: NET: Registered PF_ALG protocol family Aug 19 08:19:12.404789 systemd-networkd[1507]: cilium_host: Gained IPv6LL Aug 19 08:19:12.606259 systemd-networkd[1507]: lxc_health: Link UP Aug 19 08:19:12.612764 systemd-networkd[1507]: lxc_health: Gained carrier Aug 19 08:19:12.865478 kubelet[3160]: I0819 08:19:12.865370 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mmbjp" podStartSLOduration=12.211133883 podStartE2EDuration="30.865354716s" podCreationTimestamp="2025-08-19 08:18:42 +0000 UTC" firstStartedPulling="2025-08-19 08:18:43.002285492 +0000 UTC m=+4.691517541" lastFinishedPulling="2025-08-19 08:19:01.65650633 +0000 UTC m=+23.345738374" observedRunningTime="2025-08-19 08:19:10.509741181 +0000 UTC m=+32.198973232" watchObservedRunningTime="2025-08-19 08:19:12.865354716 +0000 UTC m=+34.554586814" Aug 19 08:19:13.087936 systemd-networkd[1507]: lxcfd51e3553478: Link UP Aug 19 08:19:13.090790 kernel: eth0: renamed from tmpf5c69 Aug 19 08:19:13.093193 systemd-networkd[1507]: lxcfd51e3553478: Gained carrier Aug 19 08:19:13.117370 systemd-networkd[1507]: lxc412ad7e7269e: Link UP Aug 19 08:19:13.117773 kernel: eth0: renamed from tmpe4f9b Aug 19 08:19:13.119823 systemd-networkd[1507]: lxc412ad7e7269e: Gained carrier Aug 19 08:19:13.172810 systemd-networkd[1507]: cilium_vxlan: 
Gained IPv6LL Aug 19 08:19:14.581768 systemd-networkd[1507]: lxc_health: Gained IPv6LL Aug 19 08:19:14.772829 systemd-networkd[1507]: lxcfd51e3553478: Gained IPv6LL Aug 19 08:19:15.094744 systemd-networkd[1507]: lxc412ad7e7269e: Gained IPv6LL Aug 19 08:19:15.746716 containerd[1729]: time="2025-08-19T08:19:15.744767171Z" level=info msg="connecting to shim e4f9b663642626d7578a6448ceea45aec018d54bd736da5379f7e8fdb51a275a" address="unix:///run/containerd/s/7a08cc95e82658bbf2f58988600f65503ac99f3f50fe278dc092a2aafa6f8f7e" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:19:15.784154 containerd[1729]: time="2025-08-19T08:19:15.784113713Z" level=info msg="connecting to shim f5c69ebc97c9207b03e68fed37d762b93cd386d23fb7806ef58fb4b8c491ac92" address="unix:///run/containerd/s/16ed9f6d886b707e86c4d69da91e0ac5fec81f9a1a39adaee08da9890b96d443" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:19:15.785932 systemd[1]: Started cri-containerd-e4f9b663642626d7578a6448ceea45aec018d54bd736da5379f7e8fdb51a275a.scope - libcontainer container e4f9b663642626d7578a6448ceea45aec018d54bd736da5379f7e8fdb51a275a. Aug 19 08:19:15.819906 systemd[1]: Started cri-containerd-f5c69ebc97c9207b03e68fed37d762b93cd386d23fb7806ef58fb4b8c491ac92.scope - libcontainer container f5c69ebc97c9207b03e68fed37d762b93cd386d23fb7806ef58fb4b8c491ac92. 
Aug 19 08:19:15.852981 containerd[1729]: time="2025-08-19T08:19:15.852954153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zrfvt,Uid:43f3e7e8-73f0-4b0f-88cd-6b9fc3c3a3b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4f9b663642626d7578a6448ceea45aec018d54bd736da5379f7e8fdb51a275a\"" Aug 19 08:19:15.863018 containerd[1729]: time="2025-08-19T08:19:15.862902591Z" level=info msg="CreateContainer within sandbox \"e4f9b663642626d7578a6448ceea45aec018d54bd736da5379f7e8fdb51a275a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:19:15.864621 containerd[1729]: time="2025-08-19T08:19:15.864601563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rjhm7,Uid:8685a4b9-f664-4704-b7f9-3217ddbe5bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5c69ebc97c9207b03e68fed37d762b93cd386d23fb7806ef58fb4b8c491ac92\"" Aug 19 08:19:15.871346 containerd[1729]: time="2025-08-19T08:19:15.871329753Z" level=info msg="CreateContainer within sandbox \"f5c69ebc97c9207b03e68fed37d762b93cd386d23fb7806ef58fb4b8c491ac92\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 08:19:15.885624 containerd[1729]: time="2025-08-19T08:19:15.885551408Z" level=info msg="Container 0ed46e109fac815fea58c49c4a17f01db18c2382857a0e82a104ef47c795babc: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:19:15.890288 containerd[1729]: time="2025-08-19T08:19:15.890265103Z" level=info msg="Container 2813ed4699a4016ea81471a910370e2e5ba7f3278689b625915d22a16af5db9d: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:19:15.920869 containerd[1729]: time="2025-08-19T08:19:15.920842595Z" level=info msg="CreateContainer within sandbox \"e4f9b663642626d7578a6448ceea45aec018d54bd736da5379f7e8fdb51a275a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ed46e109fac815fea58c49c4a17f01db18c2382857a0e82a104ef47c795babc\"" Aug 19 08:19:15.922145 containerd[1729]: 
time="2025-08-19T08:19:15.921306136Z" level=info msg="StartContainer for \"0ed46e109fac815fea58c49c4a17f01db18c2382857a0e82a104ef47c795babc\"" Aug 19 08:19:15.922145 containerd[1729]: time="2025-08-19T08:19:15.922127079Z" level=info msg="connecting to shim 0ed46e109fac815fea58c49c4a17f01db18c2382857a0e82a104ef47c795babc" address="unix:///run/containerd/s/7a08cc95e82658bbf2f58988600f65503ac99f3f50fe278dc092a2aafa6f8f7e" protocol=ttrpc version=3 Aug 19 08:19:15.928098 containerd[1729]: time="2025-08-19T08:19:15.928001640Z" level=info msg="CreateContainer within sandbox \"f5c69ebc97c9207b03e68fed37d762b93cd386d23fb7806ef58fb4b8c491ac92\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2813ed4699a4016ea81471a910370e2e5ba7f3278689b625915d22a16af5db9d\"" Aug 19 08:19:15.929514 containerd[1729]: time="2025-08-19T08:19:15.928745298Z" level=info msg="StartContainer for \"2813ed4699a4016ea81471a910370e2e5ba7f3278689b625915d22a16af5db9d\"" Aug 19 08:19:15.929514 containerd[1729]: time="2025-08-19T08:19:15.929453366Z" level=info msg="connecting to shim 2813ed4699a4016ea81471a910370e2e5ba7f3278689b625915d22a16af5db9d" address="unix:///run/containerd/s/16ed9f6d886b707e86c4d69da91e0ac5fec81f9a1a39adaee08da9890b96d443" protocol=ttrpc version=3 Aug 19 08:19:15.944813 systemd[1]: Started cri-containerd-0ed46e109fac815fea58c49c4a17f01db18c2382857a0e82a104ef47c795babc.scope - libcontainer container 0ed46e109fac815fea58c49c4a17f01db18c2382857a0e82a104ef47c795babc. Aug 19 08:19:15.948308 systemd[1]: Started cri-containerd-2813ed4699a4016ea81471a910370e2e5ba7f3278689b625915d22a16af5db9d.scope - libcontainer container 2813ed4699a4016ea81471a910370e2e5ba7f3278689b625915d22a16af5db9d. 
Aug 19 08:19:15.985716 containerd[1729]: time="2025-08-19T08:19:15.985638749Z" level=info msg="StartContainer for \"2813ed4699a4016ea81471a910370e2e5ba7f3278689b625915d22a16af5db9d\" returns successfully" Aug 19 08:19:15.986372 containerd[1729]: time="2025-08-19T08:19:15.985993750Z" level=info msg="StartContainer for \"0ed46e109fac815fea58c49c4a17f01db18c2382857a0e82a104ef47c795babc\" returns successfully" Aug 19 08:19:16.533218 kubelet[3160]: I0819 08:19:16.532654 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rjhm7" podStartSLOduration=34.532638404 podStartE2EDuration="34.532638404s" podCreationTimestamp="2025-08-19 08:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:19:16.516156231 +0000 UTC m=+38.205388282" watchObservedRunningTime="2025-08-19 08:19:16.532638404 +0000 UTC m=+38.221870455" Aug 19 08:19:16.561884 kubelet[3160]: I0819 08:19:16.561838 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zrfvt" podStartSLOduration=34.561823067 podStartE2EDuration="34.561823067s" podCreationTimestamp="2025-08-19 08:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:19:16.561596548 +0000 UTC m=+38.250828601" watchObservedRunningTime="2025-08-19 08:19:16.561823067 +0000 UTC m=+38.251055125" Aug 19 08:19:16.734324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902428528.mount: Deactivated successfully. Aug 19 08:20:19.313569 systemd[1]: Started sshd@7-10.200.8.11:22-10.200.16.10:36344.service - OpenSSH per-connection server daemon (10.200.16.10:36344). 
Aug 19 08:20:19.953640 sshd[4484]: Accepted publickey for core from 10.200.16.10 port 36344 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:19.954834 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:19.959297 systemd-logind[1709]: New session 10 of user core. Aug 19 08:20:19.962855 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 19 08:20:20.463293 sshd[4487]: Connection closed by 10.200.16.10 port 36344 Aug 19 08:20:20.463850 sshd-session[4484]: pam_unix(sshd:session): session closed for user core Aug 19 08:20:20.466981 systemd[1]: sshd@7-10.200.8.11:22-10.200.16.10:36344.service: Deactivated successfully. Aug 19 08:20:20.468580 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 08:20:20.469240 systemd-logind[1709]: Session 10 logged out. Waiting for processes to exit. Aug 19 08:20:20.470379 systemd-logind[1709]: Removed session 10. Aug 19 08:20:25.580303 systemd[1]: Started sshd@8-10.200.8.11:22-10.200.16.10:46764.service - OpenSSH per-connection server daemon (10.200.16.10:46764). Aug 19 08:20:26.223712 sshd[4500]: Accepted publickey for core from 10.200.16.10 port 46764 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:26.224898 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:26.228810 systemd-logind[1709]: New session 11 of user core. Aug 19 08:20:26.234843 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 08:20:26.790475 sshd[4503]: Connection closed by 10.200.16.10 port 46764 Aug 19 08:20:26.791064 sshd-session[4500]: pam_unix(sshd:session): session closed for user core Aug 19 08:20:26.793985 systemd[1]: sshd@8-10.200.8.11:22-10.200.16.10:46764.service: Deactivated successfully. Aug 19 08:20:26.795638 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 08:20:26.797307 systemd-logind[1709]: Session 11 logged out. 
Waiting for processes to exit. Aug 19 08:20:26.798115 systemd-logind[1709]: Removed session 11. Aug 19 08:20:31.905763 systemd[1]: Started sshd@9-10.200.8.11:22-10.200.16.10:53338.service - OpenSSH per-connection server daemon (10.200.16.10:53338). Aug 19 08:20:32.545590 sshd[4518]: Accepted publickey for core from 10.200.16.10 port 53338 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:32.546664 sshd-session[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:32.550778 systemd-logind[1709]: New session 12 of user core. Aug 19 08:20:32.554857 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 19 08:20:33.115801 sshd[4521]: Connection closed by 10.200.16.10 port 53338 Aug 19 08:20:33.116385 sshd-session[4518]: pam_unix(sshd:session): session closed for user core Aug 19 08:20:33.119830 systemd[1]: sshd@9-10.200.8.11:22-10.200.16.10:53338.service: Deactivated successfully. Aug 19 08:20:33.121547 systemd[1]: session-12.scope: Deactivated successfully. Aug 19 08:20:33.122306 systemd-logind[1709]: Session 12 logged out. Waiting for processes to exit. Aug 19 08:20:33.123469 systemd-logind[1709]: Removed session 12. Aug 19 08:20:38.243831 systemd[1]: Started sshd@10-10.200.8.11:22-10.200.16.10:53350.service - OpenSSH per-connection server daemon (10.200.16.10:53350). Aug 19 08:20:38.911268 sshd[4535]: Accepted publickey for core from 10.200.16.10 port 53350 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:38.912459 sshd-session[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:38.916818 systemd-logind[1709]: New session 13 of user core. Aug 19 08:20:38.921855 systemd[1]: Started session-13.scope - Session 13 of User core. 
Aug 19 08:20:39.430095 sshd[4540]: Connection closed by 10.200.16.10 port 53350 Aug 19 08:20:39.430655 sshd-session[4535]: pam_unix(sshd:session): session closed for user core Aug 19 08:20:39.434121 systemd[1]: sshd@10-10.200.8.11:22-10.200.16.10:53350.service: Deactivated successfully. Aug 19 08:20:39.435869 systemd[1]: session-13.scope: Deactivated successfully. Aug 19 08:20:39.436598 systemd-logind[1709]: Session 13 logged out. Waiting for processes to exit. Aug 19 08:20:39.437887 systemd-logind[1709]: Removed session 13. Aug 19 08:20:39.559415 systemd[1]: Started sshd@11-10.200.8.11:22-10.200.16.10:53364.service - OpenSSH per-connection server daemon (10.200.16.10:53364). Aug 19 08:20:40.234923 sshd[4553]: Accepted publickey for core from 10.200.16.10 port 53364 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:40.236078 sshd-session[4553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:40.239866 systemd-logind[1709]: New session 14 of user core. Aug 19 08:20:40.244845 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 19 08:20:40.784495 sshd[4556]: Connection closed by 10.200.16.10 port 53364 Aug 19 08:20:40.785041 sshd-session[4553]: pam_unix(sshd:session): session closed for user core Aug 19 08:20:40.788452 systemd[1]: sshd@11-10.200.8.11:22-10.200.16.10:53364.service: Deactivated successfully. Aug 19 08:20:40.790119 systemd[1]: session-14.scope: Deactivated successfully. Aug 19 08:20:40.790899 systemd-logind[1709]: Session 14 logged out. Waiting for processes to exit. Aug 19 08:20:40.792119 systemd-logind[1709]: Removed session 14. Aug 19 08:20:40.904427 systemd[1]: Started sshd@12-10.200.8.11:22-10.200.16.10:35412.service - OpenSSH per-connection server daemon (10.200.16.10:35412). 
Aug 19 08:20:41.580288 sshd[4567]: Accepted publickey for core from 10.200.16.10 port 35412 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:41.581475 sshd-session[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:41.585907 systemd-logind[1709]: New session 15 of user core. Aug 19 08:20:41.590828 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 19 08:20:42.112071 sshd[4570]: Connection closed by 10.200.16.10 port 35412 Aug 19 08:20:42.112601 sshd-session[4567]: pam_unix(sshd:session): session closed for user core Aug 19 08:20:42.115486 systemd[1]: sshd@12-10.200.8.11:22-10.200.16.10:35412.service: Deactivated successfully. Aug 19 08:20:42.117281 systemd[1]: session-15.scope: Deactivated successfully. Aug 19 08:20:42.119044 systemd-logind[1709]: Session 15 logged out. Waiting for processes to exit. Aug 19 08:20:42.119941 systemd-logind[1709]: Removed session 15. Aug 19 08:20:47.234382 systemd[1]: Started sshd@13-10.200.8.11:22-10.200.16.10:35416.service - OpenSSH per-connection server daemon (10.200.16.10:35416). Aug 19 08:20:47.905417 sshd[4584]: Accepted publickey for core from 10.200.16.10 port 35416 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:47.906496 sshd-session[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:47.910479 systemd-logind[1709]: New session 16 of user core. Aug 19 08:20:47.916816 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 19 08:20:48.425618 sshd[4587]: Connection closed by 10.200.16.10 port 35416 Aug 19 08:20:48.426195 sshd-session[4584]: pam_unix(sshd:session): session closed for user core Aug 19 08:20:48.429440 systemd[1]: sshd@13-10.200.8.11:22-10.200.16.10:35416.service: Deactivated successfully. Aug 19 08:20:48.431304 systemd[1]: session-16.scope: Deactivated successfully. Aug 19 08:20:48.432726 systemd-logind[1709]: Session 16 logged out. 
Waiting for processes to exit. Aug 19 08:20:48.434038 systemd-logind[1709]: Removed session 16. Aug 19 08:20:53.543640 systemd[1]: Started sshd@14-10.200.8.11:22-10.200.16.10:56770.service - OpenSSH per-connection server daemon (10.200.16.10:56770). Aug 19 08:20:54.214575 sshd[4599]: Accepted publickey for core from 10.200.16.10 port 56770 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:54.215663 sshd-session[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:54.219852 systemd-logind[1709]: New session 17 of user core. Aug 19 08:20:54.225832 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 19 08:20:54.735547 sshd[4602]: Connection closed by 10.200.16.10 port 56770 Aug 19 08:20:54.736140 sshd-session[4599]: pam_unix(sshd:session): session closed for user core Aug 19 08:20:54.739546 systemd[1]: sshd@14-10.200.8.11:22-10.200.16.10:56770.service: Deactivated successfully. Aug 19 08:20:54.741461 systemd[1]: session-17.scope: Deactivated successfully. Aug 19 08:20:54.742398 systemd-logind[1709]: Session 17 logged out. Waiting for processes to exit. Aug 19 08:20:54.743479 systemd-logind[1709]: Removed session 17. Aug 19 08:20:54.853605 systemd[1]: Started sshd@15-10.200.8.11:22-10.200.16.10:56786.service - OpenSSH per-connection server daemon (10.200.16.10:56786). Aug 19 08:20:55.526168 sshd[4614]: Accepted publickey for core from 10.200.16.10 port 56786 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:55.527295 sshd-session[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:55.531211 systemd-logind[1709]: New session 18 of user core. Aug 19 08:20:55.534839 systemd[1]: Started session-18.scope - Session 18 of User core. 
Aug 19 08:20:56.108260 sshd[4617]: Connection closed by 10.200.16.10 port 56786 Aug 19 08:20:56.108867 sshd-session[4614]: pam_unix(sshd:session): session closed for user core Aug 19 08:20:56.112254 systemd[1]: sshd@15-10.200.8.11:22-10.200.16.10:56786.service: Deactivated successfully. Aug 19 08:20:56.114282 systemd[1]: session-18.scope: Deactivated successfully. Aug 19 08:20:56.115163 systemd-logind[1709]: Session 18 logged out. Waiting for processes to exit. Aug 19 08:20:56.116454 systemd-logind[1709]: Removed session 18. Aug 19 08:20:56.225626 systemd[1]: Started sshd@16-10.200.8.11:22-10.200.16.10:56796.service - OpenSSH per-connection server daemon (10.200.16.10:56796). Aug 19 08:20:56.895369 sshd[4626]: Accepted publickey for core from 10.200.16.10 port 56796 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:56.896473 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:56.899957 systemd-logind[1709]: New session 19 of user core. Aug 19 08:20:56.904838 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 19 08:20:58.600710 sshd[4629]: Connection closed by 10.200.16.10 port 56796 Aug 19 08:20:58.601257 sshd-session[4626]: pam_unix(sshd:session): session closed for user core Aug 19 08:20:58.604047 systemd[1]: sshd@16-10.200.8.11:22-10.200.16.10:56796.service: Deactivated successfully. Aug 19 08:20:58.606036 systemd[1]: session-19.scope: Deactivated successfully. Aug 19 08:20:58.607288 systemd-logind[1709]: Session 19 logged out. Waiting for processes to exit. Aug 19 08:20:58.608024 systemd-logind[1709]: Removed session 19. Aug 19 08:20:58.727315 systemd[1]: Started sshd@17-10.200.8.11:22-10.200.16.10:56802.service - OpenSSH per-connection server daemon (10.200.16.10:56802). 
Aug 19 08:20:59.398103 sshd[4646]: Accepted publickey for core from 10.200.16.10 port 56802 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:20:59.399292 sshd-session[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:20:59.403330 systemd-logind[1709]: New session 20 of user core. Aug 19 08:20:59.409852 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 19 08:21:00.003499 sshd[4649]: Connection closed by 10.200.16.10 port 56802 Aug 19 08:21:00.004122 sshd-session[4646]: pam_unix(sshd:session): session closed for user core Aug 19 08:21:00.007147 systemd[1]: sshd@17-10.200.8.11:22-10.200.16.10:56802.service: Deactivated successfully. Aug 19 08:21:00.009626 systemd[1]: session-20.scope: Deactivated successfully. Aug 19 08:21:00.010433 systemd-logind[1709]: Session 20 logged out. Waiting for processes to exit. Aug 19 08:21:00.012352 systemd-logind[1709]: Removed session 20. Aug 19 08:21:00.120578 systemd[1]: Started sshd@18-10.200.8.11:22-10.200.16.10:56818.service - OpenSSH per-connection server daemon (10.200.16.10:56818). Aug 19 08:21:00.795161 sshd[4659]: Accepted publickey for core from 10.200.16.10 port 56818 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:21:00.796323 sshd-session[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:21:00.800657 systemd-logind[1709]: New session 21 of user core. Aug 19 08:21:00.806881 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 19 08:21:01.308867 sshd[4662]: Connection closed by 10.200.16.10 port 56818 Aug 19 08:21:01.309444 sshd-session[4659]: pam_unix(sshd:session): session closed for user core Aug 19 08:21:01.312527 systemd[1]: sshd@18-10.200.8.11:22-10.200.16.10:56818.service: Deactivated successfully. Aug 19 08:21:01.314088 systemd[1]: session-21.scope: Deactivated successfully. Aug 19 08:21:01.314840 systemd-logind[1709]: Session 21 logged out. 
Waiting for processes to exit. Aug 19 08:21:01.316049 systemd-logind[1709]: Removed session 21. Aug 19 08:21:06.427830 systemd[1]: Started sshd@19-10.200.8.11:22-10.200.16.10:34118.service - OpenSSH per-connection server daemon (10.200.16.10:34118). Aug 19 08:21:07.095917 sshd[4676]: Accepted publickey for core from 10.200.16.10 port 34118 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:21:07.097056 sshd-session[4676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:21:07.101443 systemd-logind[1709]: New session 22 of user core. Aug 19 08:21:07.104872 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 19 08:21:07.615870 sshd[4679]: Connection closed by 10.200.16.10 port 34118 Aug 19 08:21:07.616421 sshd-session[4676]: pam_unix(sshd:session): session closed for user core Aug 19 08:21:07.619231 systemd[1]: sshd@19-10.200.8.11:22-10.200.16.10:34118.service: Deactivated successfully. Aug 19 08:21:07.620953 systemd[1]: session-22.scope: Deactivated successfully. Aug 19 08:21:07.622129 systemd-logind[1709]: Session 22 logged out. Waiting for processes to exit. Aug 19 08:21:07.623101 systemd-logind[1709]: Removed session 22. Aug 19 08:21:12.753610 systemd[1]: Started sshd@20-10.200.8.11:22-10.200.16.10:45520.service - OpenSSH per-connection server daemon (10.200.16.10:45520). Aug 19 08:21:13.456763 sshd[4691]: Accepted publickey for core from 10.200.16.10 port 45520 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:21:13.457931 sshd-session[4691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:21:13.461759 systemd-logind[1709]: New session 23 of user core. Aug 19 08:21:13.465840 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 19 08:21:15.729970 sshd[4696]: Connection closed by 10.200.16.10 port 45520 Aug 19 08:21:15.730519 sshd-session[4691]: pam_unix(sshd:session): session closed for user core Aug 19 08:21:15.733364 systemd[1]: sshd@20-10.200.8.11:22-10.200.16.10:45520.service: Deactivated successfully. Aug 19 08:21:15.734945 systemd[1]: session-23.scope: Deactivated successfully. Aug 19 08:21:15.736173 systemd-logind[1709]: Session 23 logged out. Waiting for processes to exit. Aug 19 08:21:15.737346 systemd-logind[1709]: Removed session 23. Aug 19 08:21:15.864766 systemd[1]: Started sshd@21-10.200.8.11:22-10.200.16.10:45536.service - OpenSSH per-connection server daemon (10.200.16.10:45536). Aug 19 08:21:16.568096 sshd[4708]: Accepted publickey for core from 10.200.16.10 port 45536 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:21:16.569237 sshd-session[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:21:16.572737 systemd-logind[1709]: New session 24 of user core. Aug 19 08:21:16.576819 systemd[1]: Started session-24.scope - Session 24 of User core. 
Aug 19 08:21:18.266858 containerd[1729]: time="2025-08-19T08:21:18.266735509Z" level=info msg="StopContainer for \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" with timeout 30 (s)" Aug 19 08:21:18.268424 containerd[1729]: time="2025-08-19T08:21:18.268370970Z" level=info msg="Stop container \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" with signal terminated" Aug 19 08:21:18.275619 containerd[1729]: time="2025-08-19T08:21:18.275312408Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 08:21:18.278947 systemd[1]: cri-containerd-adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3.scope: Deactivated successfully. Aug 19 08:21:18.281431 containerd[1729]: time="2025-08-19T08:21:18.281251052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" id:\"ef14396d095ec5232b744d4642e861c402f213dcafacb2a200871e59c28bcf3e\" pid:4731 exited_at:{seconds:1755591678 nanos:280673200}" Aug 19 08:21:18.281431 containerd[1729]: time="2025-08-19T08:21:18.281309248Z" level=info msg="received exit event container_id:\"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" id:\"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" pid:3681 exited_at:{seconds:1755591678 nanos:281150082}" Aug 19 08:21:18.281731 containerd[1729]: time="2025-08-19T08:21:18.281677199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" id:\"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" pid:3681 exited_at:{seconds:1755591678 nanos:281150082}" Aug 19 08:21:18.282605 containerd[1729]: time="2025-08-19T08:21:18.282552641Z" level=info 
msg="StopContainer for \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" with timeout 2 (s)" Aug 19 08:21:18.283187 containerd[1729]: time="2025-08-19T08:21:18.282822567Z" level=info msg="Stop container \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" with signal terminated" Aug 19 08:21:18.290819 systemd-networkd[1507]: lxc_health: Link DOWN Aug 19 08:21:18.290825 systemd-networkd[1507]: lxc_health: Lost carrier Aug 19 08:21:18.306264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3-rootfs.mount: Deactivated successfully. Aug 19 08:21:18.307251 systemd[1]: cri-containerd-c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7.scope: Deactivated successfully. Aug 19 08:21:18.308414 systemd[1]: cri-containerd-c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7.scope: Consumed 4.848s CPU time, 124M memory peak, 136K read from disk, 13.3M written to disk. Aug 19 08:21:18.309102 containerd[1729]: time="2025-08-19T08:21:18.309069096Z" level=info msg="received exit event container_id:\"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" id:\"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" pid:3797 exited_at:{seconds:1755591678 nanos:308154703}" Aug 19 08:21:18.309405 containerd[1729]: time="2025-08-19T08:21:18.309387330Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" id:\"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" pid:3797 exited_at:{seconds:1755591678 nanos:308154703}" Aug 19 08:21:18.323673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7-rootfs.mount: Deactivated successfully. 
Aug 19 08:21:18.350003 containerd[1729]: time="2025-08-19T08:21:18.349970261Z" level=info msg="StopContainer for \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" returns successfully" Aug 19 08:21:18.350632 containerd[1729]: time="2025-08-19T08:21:18.350598720Z" level=info msg="StopPodSandbox for \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\"" Aug 19 08:21:18.350710 containerd[1729]: time="2025-08-19T08:21:18.350672551Z" level=info msg="Container to stop \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:21:18.350741 containerd[1729]: time="2025-08-19T08:21:18.350711753Z" level=info msg="Container to stop \"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:21:18.350741 containerd[1729]: time="2025-08-19T08:21:18.350722483Z" level=info msg="Container to stop \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:21:18.350741 containerd[1729]: time="2025-08-19T08:21:18.350733382Z" level=info msg="Container to stop \"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:21:18.350808 containerd[1729]: time="2025-08-19T08:21:18.350741837Z" level=info msg="Container to stop \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:21:18.354699 containerd[1729]: time="2025-08-19T08:21:18.354613753Z" level=info msg="StopContainer for \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" returns successfully" Aug 19 08:21:18.355300 containerd[1729]: time="2025-08-19T08:21:18.355042005Z" level=info msg="StopPodSandbox for 
\"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\"" Aug 19 08:21:18.355300 containerd[1729]: time="2025-08-19T08:21:18.355091392Z" level=info msg="Container to stop \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 08:21:18.357326 systemd[1]: cri-containerd-07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228.scope: Deactivated successfully. Aug 19 08:21:18.359436 containerd[1729]: time="2025-08-19T08:21:18.359413707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" id:\"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" pid:3334 exit_status:137 exited_at:{seconds:1755591678 nanos:359023979}" Aug 19 08:21:18.365258 systemd[1]: cri-containerd-8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c.scope: Deactivated successfully. Aug 19 08:21:18.389037 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228-rootfs.mount: Deactivated successfully. Aug 19 08:21:18.396401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c-rootfs.mount: Deactivated successfully. 
Aug 19 08:21:18.415717 containerd[1729]: time="2025-08-19T08:21:18.415474573Z" level=info msg="shim disconnected" id=8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c namespace=k8s.io Aug 19 08:21:18.415717 containerd[1729]: time="2025-08-19T08:21:18.415496747Z" level=warning msg="cleaning up after shim disconnected" id=8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c namespace=k8s.io Aug 19 08:21:18.415717 containerd[1729]: time="2025-08-19T08:21:18.415503675Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 08:21:18.416908 containerd[1729]: time="2025-08-19T08:21:18.416882343Z" level=info msg="shim disconnected" id=07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228 namespace=k8s.io Aug 19 08:21:18.417013 containerd[1729]: time="2025-08-19T08:21:18.416995240Z" level=warning msg="cleaning up after shim disconnected" id=07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228 namespace=k8s.io Aug 19 08:21:18.417039 containerd[1729]: time="2025-08-19T08:21:18.417013314Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 08:21:18.428965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c-shm.mount: Deactivated successfully. 
Aug 19 08:21:18.429244 containerd[1729]: time="2025-08-19T08:21:18.429083227Z" level=info msg="received exit event sandbox_id:\"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" exit_status:137 exited_at:{seconds:1755591678 nanos:370932806}" Aug 19 08:21:18.429244 containerd[1729]: time="2025-08-19T08:21:18.429162220Z" level=info msg="TearDown network for sandbox \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" successfully" Aug 19 08:21:18.429244 containerd[1729]: time="2025-08-19T08:21:18.429175116Z" level=info msg="StopPodSandbox for \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" returns successfully" Aug 19 08:21:18.431113 containerd[1729]: time="2025-08-19T08:21:18.431049453Z" level=info msg="received exit event sandbox_id:\"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" exit_status:137 exited_at:{seconds:1755591678 nanos:359023979}" Aug 19 08:21:18.431372 containerd[1729]: time="2025-08-19T08:21:18.431354997Z" level=info msg="TearDown network for sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" successfully" Aug 19 08:21:18.431413 containerd[1729]: time="2025-08-19T08:21:18.431373230Z" level=info msg="StopPodSandbox for \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" returns successfully" Aug 19 08:21:18.431852 containerd[1729]: time="2025-08-19T08:21:18.431094564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" id:\"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" pid:3354 exit_status:137 exited_at:{seconds:1755591678 nanos:370932806}" Aug 19 08:21:18.485308 kubelet[3160]: E0819 08:21:18.485275 3160 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 19 08:21:18.519445 kubelet[3160]: I0819 
08:21:18.518612 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:21:18.519445 kubelet[3160]: I0819 08:21:18.518633 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-lib-modules\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519445 kubelet[3160]: I0819 08:21:18.518670 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-hubble-tls\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519445 kubelet[3160]: I0819 08:21:18.518696 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-bpf-maps\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519445 kubelet[3160]: I0819 08:21:18.518711 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-hostproc\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519445 kubelet[3160]: I0819 08:21:18.518725 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-xtables-lock\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519629 kubelet[3160]: I0819 08:21:18.518745 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b79f9c3d-fe19-4e77-bcac-fd599f32717c-cilium-config-path\") pod \"b79f9c3d-fe19-4e77-bcac-fd599f32717c\" (UID: \"b79f9c3d-fe19-4e77-bcac-fd599f32717c\") " Aug 19 08:21:18.519629 kubelet[3160]: I0819 08:21:18.518759 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-run\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519629 kubelet[3160]: I0819 08:21:18.518772 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-cgroup\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519629 kubelet[3160]: I0819 08:21:18.518790 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-host-proc-sys-net\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519629 kubelet[3160]: I0819 08:21:18.518805 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cni-path\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519629 kubelet[3160]: I0819 08:21:18.518826 3160 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-clustermesh-secrets\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519790 kubelet[3160]: I0819 08:21:18.518846 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-config-path\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519790 kubelet[3160]: I0819 08:21:18.518865 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v95n6\" (UniqueName: \"kubernetes.io/projected/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-kube-api-access-v95n6\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519790 kubelet[3160]: I0819 08:21:18.518882 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-etc-cni-netd\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519790 kubelet[3160]: I0819 08:21:18.518899 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-host-proc-sys-kernel\") pod \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\" (UID: \"7dbf04a3-6786-4d9e-915d-4a55e6d504c5\") " Aug 19 08:21:18.519790 kubelet[3160]: I0819 08:21:18.518920 3160 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms5vx\" (UniqueName: \"kubernetes.io/projected/b79f9c3d-fe19-4e77-bcac-fd599f32717c-kube-api-access-ms5vx\") 
pod \"b79f9c3d-fe19-4e77-bcac-fd599f32717c\" (UID: \"b79f9c3d-fe19-4e77-bcac-fd599f32717c\") " Aug 19 08:21:18.519790 kubelet[3160]: I0819 08:21:18.518957 3160 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-lib-modules\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.520159 kubelet[3160]: I0819 08:21:18.520132 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:21:18.521208 kubelet[3160]: I0819 08:21:18.521178 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:21:18.521208 kubelet[3160]: I0819 08:21:18.521208 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cni-path" (OuterVolumeSpecName: "cni-path") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:21:18.521778 kubelet[3160]: I0819 08:21:18.521726 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:21:18.521778 kubelet[3160]: I0819 08:21:18.521758 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-hostproc" (OuterVolumeSpecName: "hostproc") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:21:18.521778 kubelet[3160]: I0819 08:21:18.521777 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:21:18.523223 kubelet[3160]: I0819 08:21:18.523195 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:21:18.523287 kubelet[3160]: I0819 08:21:18.523271 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 08:21:18.523406 kubelet[3160]: I0819 08:21:18.523319 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79f9c3d-fe19-4e77-bcac-fd599f32717c-kube-api-access-ms5vx" (OuterVolumeSpecName: "kube-api-access-ms5vx") pod "b79f9c3d-fe19-4e77-bcac-fd599f32717c" (UID: "b79f9c3d-fe19-4e77-bcac-fd599f32717c"). InnerVolumeSpecName "kube-api-access-ms5vx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 08:21:18.523484 kubelet[3160]: I0819 08:21:18.523470 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:21:18.523561 kubelet[3160]: I0819 08:21:18.523549 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 08:21:18.525559 kubelet[3160]: I0819 08:21:18.525467 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b79f9c3d-fe19-4e77-bcac-fd599f32717c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b79f9c3d-fe19-4e77-bcac-fd599f32717c" (UID: "b79f9c3d-fe19-4e77-bcac-fd599f32717c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 08:21:18.526200 kubelet[3160]: I0819 08:21:18.526181 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 19 08:21:18.526675 kubelet[3160]: I0819 08:21:18.526634 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-kube-api-access-v95n6" (OuterVolumeSpecName: "kube-api-access-v95n6") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "kube-api-access-v95n6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 08:21:18.527115 kubelet[3160]: I0819 08:21:18.527087 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7dbf04a3-6786-4d9e-915d-4a55e6d504c5" (UID: "7dbf04a3-6786-4d9e-915d-4a55e6d504c5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 08:21:18.620075 kubelet[3160]: I0819 08:21:18.620051 3160 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v95n6\" (UniqueName: \"kubernetes.io/projected/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-kube-api-access-v95n6\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620075 kubelet[3160]: I0819 08:21:18.620075 3160 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-etc-cni-netd\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620200 kubelet[3160]: I0819 08:21:18.620084 3160 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-host-proc-sys-kernel\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620200 kubelet[3160]: I0819 08:21:18.620092 3160 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ms5vx\" (UniqueName: \"kubernetes.io/projected/b79f9c3d-fe19-4e77-bcac-fd599f32717c-kube-api-access-ms5vx\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620200 kubelet[3160]: I0819 08:21:18.620101 3160 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-hubble-tls\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620200 kubelet[3160]: I0819 08:21:18.620112 3160 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-bpf-maps\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620200 kubelet[3160]: I0819 08:21:18.620120 3160 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-hostproc\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620200 kubelet[3160]: I0819 08:21:18.620128 3160 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-xtables-lock\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620200 kubelet[3160]: I0819 08:21:18.620136 3160 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b79f9c3d-fe19-4e77-bcac-fd599f32717c-cilium-config-path\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620200 kubelet[3160]: I0819 08:21:18.620146 3160 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-run\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620334 kubelet[3160]: I0819 08:21:18.620156 3160 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-cgroup\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620334 kubelet[3160]: I0819 08:21:18.620164 3160 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-host-proc-sys-net\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620334 kubelet[3160]: I0819 08:21:18.620174 3160 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cni-path\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620334 kubelet[3160]: I0819 08:21:18.620185 3160 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-clustermesh-secrets\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.620334 kubelet[3160]: I0819 08:21:18.620194 3160 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7dbf04a3-6786-4d9e-915d-4a55e6d504c5-cilium-config-path\") on node \"ci-4426.0.0-a-f24a6b1c57\" DevicePath \"\"" Aug 19 08:21:18.703242 kubelet[3160]: I0819 08:21:18.703197 3160 scope.go:117] "RemoveContainer" containerID="adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3" Aug 19 08:21:18.708513 containerd[1729]: time="2025-08-19T08:21:18.708435852Z" level=info msg="RemoveContainer for \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\"" Aug 19 08:21:18.711364 systemd[1]: Removed slice kubepods-besteffort-podb79f9c3d_fe19_4e77_bcac_fd599f32717c.slice - libcontainer container kubepods-besteffort-podb79f9c3d_fe19_4e77_bcac_fd599f32717c.slice. Aug 19 08:21:18.717364 containerd[1729]: time="2025-08-19T08:21:18.717291578Z" level=info msg="RemoveContainer for \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" returns successfully" Aug 19 08:21:18.719705 kubelet[3160]: I0819 08:21:18.718940 3160 scope.go:117] "RemoveContainer" containerID="adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3" Aug 19 08:21:18.720761 systemd[1]: Removed slice kubepods-burstable-pod7dbf04a3_6786_4d9e_915d_4a55e6d504c5.slice - libcontainer container kubepods-burstable-pod7dbf04a3_6786_4d9e_915d_4a55e6d504c5.slice. Aug 19 08:21:18.721829 systemd[1]: kubepods-burstable-pod7dbf04a3_6786_4d9e_915d_4a55e6d504c5.slice: Consumed 4.917s CPU time, 124.5M memory peak, 136K read from disk, 13.3M written to disk. 
Aug 19 08:21:18.728954 containerd[1729]: time="2025-08-19T08:21:18.728767157Z" level=error msg="ContainerStatus for \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\": not found" Aug 19 08:21:18.730080 kubelet[3160]: E0819 08:21:18.730054 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\": not found" containerID="adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3" Aug 19 08:21:18.730153 kubelet[3160]: I0819 08:21:18.730079 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3"} err="failed to get container status \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"adc3d025cdb6cd82f4c1eb200db74b57c2f3cad52c9c23bc29ff4d299d1526e3\": not found" Aug 19 08:21:18.730153 kubelet[3160]: I0819 08:21:18.730112 3160 scope.go:117] "RemoveContainer" containerID="c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7" Aug 19 08:21:18.731414 containerd[1729]: time="2025-08-19T08:21:18.731387645Z" level=info msg="RemoveContainer for \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\"" Aug 19 08:21:18.738252 containerd[1729]: time="2025-08-19T08:21:18.738162613Z" level=info msg="RemoveContainer for \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" returns successfully" Aug 19 08:21:18.739560 kubelet[3160]: I0819 08:21:18.739533 3160 scope.go:117] "RemoveContainer" containerID="c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3" Aug 19 08:21:18.742077 
containerd[1729]: time="2025-08-19T08:21:18.742046651Z" level=info msg="RemoveContainer for \"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\"" Aug 19 08:21:18.748003 containerd[1729]: time="2025-08-19T08:21:18.747974340Z" level=info msg="RemoveContainer for \"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\" returns successfully" Aug 19 08:21:18.748127 kubelet[3160]: I0819 08:21:18.748104 3160 scope.go:117] "RemoveContainer" containerID="35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542" Aug 19 08:21:18.749570 containerd[1729]: time="2025-08-19T08:21:18.749540770Z" level=info msg="RemoveContainer for \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\"" Aug 19 08:21:18.755559 containerd[1729]: time="2025-08-19T08:21:18.755522899Z" level=info msg="RemoveContainer for \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\" returns successfully" Aug 19 08:21:18.755708 kubelet[3160]: I0819 08:21:18.755696 3160 scope.go:117] "RemoveContainer" containerID="28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e" Aug 19 08:21:18.756951 containerd[1729]: time="2025-08-19T08:21:18.756918793Z" level=info msg="RemoveContainer for \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\"" Aug 19 08:21:18.763928 containerd[1729]: time="2025-08-19T08:21:18.763893226Z" level=info msg="RemoveContainer for \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\" returns successfully" Aug 19 08:21:18.764059 kubelet[3160]: I0819 08:21:18.764047 3160 scope.go:117] "RemoveContainer" containerID="a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e" Aug 19 08:21:18.765216 containerd[1729]: time="2025-08-19T08:21:18.765195061Z" level=info msg="RemoveContainer for \"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\"" Aug 19 08:21:18.772058 containerd[1729]: time="2025-08-19T08:21:18.771827938Z" level=info msg="RemoveContainer for 
\"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\" returns successfully" Aug 19 08:21:18.772119 kubelet[3160]: I0819 08:21:18.771986 3160 scope.go:117] "RemoveContainer" containerID="c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7" Aug 19 08:21:18.772248 containerd[1729]: time="2025-08-19T08:21:18.772140107Z" level=error msg="ContainerStatus for \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\": not found" Aug 19 08:21:18.772279 kubelet[3160]: E0819 08:21:18.772237 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\": not found" containerID="c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7" Aug 19 08:21:18.772279 kubelet[3160]: I0819 08:21:18.772264 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7"} err="failed to get container status \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7bbc6450a5cd6eb6edf731ee5bb5544c8d95686fd975bad435789789a96b0b7\": not found" Aug 19 08:21:18.772335 kubelet[3160]: I0819 08:21:18.772282 3160 scope.go:117] "RemoveContainer" containerID="c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3" Aug 19 08:21:18.772529 containerd[1729]: time="2025-08-19T08:21:18.772504016Z" level=error msg="ContainerStatus for \"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\": not found" Aug 19 08:21:18.772650 kubelet[3160]: E0819 08:21:18.772596 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\": not found" containerID="c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3" Aug 19 08:21:18.772650 kubelet[3160]: I0819 08:21:18.772624 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3"} err="failed to get container status \"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7974e892c4a6be62191675bf29c9c6f1c2800df1928e6634ddb330da5c18bd3\": not found" Aug 19 08:21:18.772650 kubelet[3160]: I0819 08:21:18.772640 3160 scope.go:117] "RemoveContainer" containerID="35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542" Aug 19 08:21:18.772906 kubelet[3160]: E0819 08:21:18.772892 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\": not found" containerID="35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542" Aug 19 08:21:18.772941 containerd[1729]: time="2025-08-19T08:21:18.772788863Z" level=error msg="ContainerStatus for \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\": not found" Aug 19 08:21:18.772968 kubelet[3160]: I0819 08:21:18.772909 3160 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542"} err="failed to get container status \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\": rpc error: code = NotFound desc = an error occurred when try to find container \"35dba19155251073e86b135879e248ff14597a4c31c74bdabda59e92d3d6b542\": not found" Aug 19 08:21:18.772968 kubelet[3160]: I0819 08:21:18.772961 3160 scope.go:117] "RemoveContainer" containerID="28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e" Aug 19 08:21:18.773162 containerd[1729]: time="2025-08-19T08:21:18.773112786Z" level=error msg="ContainerStatus for \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\": not found" Aug 19 08:21:18.773249 kubelet[3160]: E0819 08:21:18.773210 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\": not found" containerID="28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e" Aug 19 08:21:18.773249 kubelet[3160]: I0819 08:21:18.773227 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e"} err="failed to get container status \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\": rpc error: code = NotFound desc = an error occurred when try to find container \"28a9911eb8a4ae1858af8bfbbdffccdf96bfc81a1ca0dae8a9857776a2fe007e\": not found" Aug 19 08:21:18.773311 kubelet[3160]: I0819 08:21:18.773240 3160 scope.go:117] "RemoveContainer" containerID="a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e" Aug 19 08:21:18.773513 containerd[1729]: 
time="2025-08-19T08:21:18.773466099Z" level=error msg="ContainerStatus for \"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\": not found" Aug 19 08:21:18.773604 kubelet[3160]: E0819 08:21:18.773544 3160 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\": not found" containerID="a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e" Aug 19 08:21:18.773604 kubelet[3160]: I0819 08:21:18.773562 3160 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e"} err="failed to get container status \"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a08bae5b05f2b4bb1f4c57a56ed98774b03d5e4fda070d50c2e335cd308bf70e\": not found" Aug 19 08:21:19.306163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228-shm.mount: Deactivated successfully. Aug 19 08:21:19.306255 systemd[1]: var-lib-kubelet-pods-7dbf04a3\x2d6786\x2d4d9e\x2d915d\x2d4a55e6d504c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv95n6.mount: Deactivated successfully. Aug 19 08:21:19.306324 systemd[1]: var-lib-kubelet-pods-7dbf04a3\x2d6786\x2d4d9e\x2d915d\x2d4a55e6d504c5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 19 08:21:19.306378 systemd[1]: var-lib-kubelet-pods-b79f9c3d\x2dfe19\x2d4e77\x2dbcac\x2dfd599f32717c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dms5vx.mount: Deactivated successfully. 
Aug 19 08:21:19.306439 systemd[1]: var-lib-kubelet-pods-7dbf04a3\x2d6786\x2d4d9e\x2d915d\x2d4a55e6d504c5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 19 08:21:20.298251 sshd[4711]: Connection closed by 10.200.16.10 port 45536 Aug 19 08:21:20.298584 sshd-session[4708]: pam_unix(sshd:session): session closed for user core Aug 19 08:21:20.302580 systemd[1]: sshd@21-10.200.8.11:22-10.200.16.10:45536.service: Deactivated successfully. Aug 19 08:21:20.304085 systemd[1]: session-24.scope: Deactivated successfully. Aug 19 08:21:20.304811 systemd-logind[1709]: Session 24 logged out. Waiting for processes to exit. Aug 19 08:21:20.306054 systemd-logind[1709]: Removed session 24. Aug 19 08:21:20.412742 kubelet[3160]: I0819 08:21:20.412711 3160 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dbf04a3-6786-4d9e-915d-4a55e6d504c5" path="/var/lib/kubelet/pods/7dbf04a3-6786-4d9e-915d-4a55e6d504c5/volumes" Aug 19 08:21:20.413229 kubelet[3160]: I0819 08:21:20.413210 3160 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79f9c3d-fe19-4e77-bcac-fd599f32717c" path="/var/lib/kubelet/pods/b79f9c3d-fe19-4e77-bcac-fd599f32717c/volumes" Aug 19 08:21:20.429549 systemd[1]: Started sshd@22-10.200.8.11:22-10.200.16.10:47774.service - OpenSSH per-connection server daemon (10.200.16.10:47774). Aug 19 08:21:21.133520 sshd[4865]: Accepted publickey for core from 10.200.16.10 port 47774 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:21:21.134638 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:21:21.138774 systemd-logind[1709]: New session 25 of user core. Aug 19 08:21:21.142833 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 19 08:21:21.966134 systemd[1]: Created slice kubepods-burstable-podd7719c73_7194_4f34_bbce_c5b000912de4.slice - libcontainer container kubepods-burstable-podd7719c73_7194_4f34_bbce_c5b000912de4.slice. Aug 19 08:21:22.040076 kubelet[3160]: I0819 08:21:22.040024 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7719c73-7194-4f34-bbce-c5b000912de4-xtables-lock\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040076 kubelet[3160]: I0819 08:21:22.040078 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7719c73-7194-4f34-bbce-c5b000912de4-hostproc\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040523 kubelet[3160]: I0819 08:21:22.040096 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7719c73-7194-4f34-bbce-c5b000912de4-hubble-tls\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040523 kubelet[3160]: I0819 08:21:22.040129 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7719c73-7194-4f34-bbce-c5b000912de4-bpf-maps\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040523 kubelet[3160]: I0819 08:21:22.040144 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d7719c73-7194-4f34-bbce-c5b000912de4-cilium-ipsec-secrets\") pod \"cilium-k5hd9\" (UID: 
\"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040523 kubelet[3160]: I0819 08:21:22.040160 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7719c73-7194-4f34-bbce-c5b000912de4-host-proc-sys-kernel\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040523 kubelet[3160]: I0819 08:21:22.040177 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7719c73-7194-4f34-bbce-c5b000912de4-clustermesh-secrets\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040523 kubelet[3160]: I0819 08:21:22.040203 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7719c73-7194-4f34-bbce-c5b000912de4-host-proc-sys-net\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040630 kubelet[3160]: I0819 08:21:22.040229 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7719c73-7194-4f34-bbce-c5b000912de4-cilium-run\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040630 kubelet[3160]: I0819 08:21:22.040246 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d7719c73-7194-4f34-bbce-c5b000912de4-cilium-cgroup\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040630 
kubelet[3160]: I0819 08:21:22.040270 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7719c73-7194-4f34-bbce-c5b000912de4-cilium-config-path\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040630 kubelet[3160]: I0819 08:21:22.040291 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7719c73-7194-4f34-bbce-c5b000912de4-cni-path\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040630 kubelet[3160]: I0819 08:21:22.040310 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7719c73-7194-4f34-bbce-c5b000912de4-etc-cni-netd\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040630 kubelet[3160]: I0819 08:21:22.040327 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7719c73-7194-4f34-bbce-c5b000912de4-lib-modules\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.040751 kubelet[3160]: I0819 08:21:22.040342 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pn4f\" (UniqueName: \"kubernetes.io/projected/d7719c73-7194-4f34-bbce-c5b000912de4-kube-api-access-2pn4f\") pod \"cilium-k5hd9\" (UID: \"d7719c73-7194-4f34-bbce-c5b000912de4\") " pod="kube-system/cilium-k5hd9" Aug 19 08:21:22.197727 sshd[4868]: Connection closed by 10.200.16.10 port 47774 Aug 19 08:21:22.198182 sshd-session[4865]: pam_unix(sshd:session): 
session closed for user core Aug 19 08:21:22.201333 systemd[1]: sshd@22-10.200.8.11:22-10.200.16.10:47774.service: Deactivated successfully. Aug 19 08:21:22.202960 systemd[1]: session-25.scope: Deactivated successfully. Aug 19 08:21:22.203642 systemd-logind[1709]: Session 25 logged out. Waiting for processes to exit. Aug 19 08:21:22.205006 systemd-logind[1709]: Removed session 25. Aug 19 08:21:22.233676 kubelet[3160]: I0819 08:21:22.232526 3160 setters.go:618] "Node became not ready" node="ci-4426.0.0-a-f24a6b1c57" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-19T08:21:22Z","lastTransitionTime":"2025-08-19T08:21:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 19 08:21:22.270472 containerd[1729]: time="2025-08-19T08:21:22.270432198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5hd9,Uid:d7719c73-7194-4f34-bbce-c5b000912de4,Namespace:kube-system,Attempt:0,}" Aug 19 08:21:22.297562 containerd[1729]: time="2025-08-19T08:21:22.297395383Z" level=info msg="connecting to shim 066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2" address="unix:///run/containerd/s/793951d3575518a945a220019e0c6d72dcaf8cb9007247e7e7e7e489bd6b0205" namespace=k8s.io protocol=ttrpc version=3 Aug 19 08:21:22.324827 systemd[1]: Started cri-containerd-066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2.scope - libcontainer container 066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2. Aug 19 08:21:22.326413 systemd[1]: Started sshd@23-10.200.8.11:22-10.200.16.10:47786.service - OpenSSH per-connection server daemon (10.200.16.10:47786). 
Aug 19 08:21:22.361600 containerd[1729]: time="2025-08-19T08:21:22.361575935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k5hd9,Uid:d7719c73-7194-4f34-bbce-c5b000912de4,Namespace:kube-system,Attempt:0,} returns sandbox id \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\"" Aug 19 08:21:22.369983 containerd[1729]: time="2025-08-19T08:21:22.369957336Z" level=info msg="CreateContainer within sandbox \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 19 08:21:22.388752 containerd[1729]: time="2025-08-19T08:21:22.388728406Z" level=info msg="Container aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:21:22.398471 containerd[1729]: time="2025-08-19T08:21:22.398446780Z" level=info msg="CreateContainer within sandbox \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6\"" Aug 19 08:21:22.398853 containerd[1729]: time="2025-08-19T08:21:22.398831930Z" level=info msg="StartContainer for \"aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6\"" Aug 19 08:21:22.399454 containerd[1729]: time="2025-08-19T08:21:22.399430924Z" level=info msg="connecting to shim aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6" address="unix:///run/containerd/s/793951d3575518a945a220019e0c6d72dcaf8cb9007247e7e7e7e489bd6b0205" protocol=ttrpc version=3 Aug 19 08:21:22.415824 systemd[1]: Started cri-containerd-aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6.scope - libcontainer container aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6. 
Aug 19 08:21:22.437656 containerd[1729]: time="2025-08-19T08:21:22.437620491Z" level=info msg="StartContainer for \"aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6\" returns successfully" Aug 19 08:21:22.441121 systemd[1]: cri-containerd-aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6.scope: Deactivated successfully. Aug 19 08:21:22.443507 containerd[1729]: time="2025-08-19T08:21:22.443420693Z" level=info msg="received exit event container_id:\"aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6\" id:\"aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6\" pid:4944 exited_at:{seconds:1755591682 nanos:443166088}" Aug 19 08:21:22.443507 containerd[1729]: time="2025-08-19T08:21:22.443477462Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6\" id:\"aba8080333c5a1d0cc5ad8e41d395a57d647fdf09bf43db549a2d272df4d7af6\" pid:4944 exited_at:{seconds:1755591682 nanos:443166088}" Aug 19 08:21:22.731320 containerd[1729]: time="2025-08-19T08:21:22.731201856Z" level=info msg="CreateContainer within sandbox \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 19 08:21:22.744817 containerd[1729]: time="2025-08-19T08:21:22.744782898Z" level=info msg="Container 4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:21:22.756044 containerd[1729]: time="2025-08-19T08:21:22.756011269Z" level=info msg="CreateContainer within sandbox \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b\"" Aug 19 08:21:22.756476 containerd[1729]: time="2025-08-19T08:21:22.756457341Z" level=info msg="StartContainer for 
\"4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b\"" Aug 19 08:21:22.757456 containerd[1729]: time="2025-08-19T08:21:22.757418631Z" level=info msg="connecting to shim 4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b" address="unix:///run/containerd/s/793951d3575518a945a220019e0c6d72dcaf8cb9007247e7e7e7e489bd6b0205" protocol=ttrpc version=3 Aug 19 08:21:22.776861 systemd[1]: Started cri-containerd-4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b.scope - libcontainer container 4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b. Aug 19 08:21:22.801779 containerd[1729]: time="2025-08-19T08:21:22.801751316Z" level=info msg="StartContainer for \"4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b\" returns successfully" Aug 19 08:21:22.806063 systemd[1]: cri-containerd-4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b.scope: Deactivated successfully. Aug 19 08:21:22.807050 containerd[1729]: time="2025-08-19T08:21:22.806985608Z" level=info msg="received exit event container_id:\"4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b\" id:\"4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b\" pid:4986 exited_at:{seconds:1755591682 nanos:806675427}" Aug 19 08:21:22.807178 containerd[1729]: time="2025-08-19T08:21:22.807155872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b\" id:\"4d98958c32926982d24f5b4d654c3c8ce5f272e1b278fe01a869195d9cd6db7b\" pid:4986 exited_at:{seconds:1755591682 nanos:806675427}" Aug 19 08:21:23.039125 sshd[4914]: Accepted publickey for core from 10.200.16.10 port 47786 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:21:23.040253 sshd-session[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:21:23.044114 systemd-logind[1709]: New session 26 of user core. 
Aug 19 08:21:23.046851 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 19 08:21:23.486646 kubelet[3160]: E0819 08:21:23.486592 3160 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 19 08:21:23.601615 sshd[5019]: Connection closed by 10.200.16.10 port 47786 Aug 19 08:21:23.602178 sshd-session[4914]: pam_unix(sshd:session): session closed for user core Aug 19 08:21:23.604829 systemd[1]: sshd@23-10.200.8.11:22-10.200.16.10:47786.service: Deactivated successfully. Aug 19 08:21:23.606658 systemd[1]: session-26.scope: Deactivated successfully. Aug 19 08:21:23.608508 systemd-logind[1709]: Session 26 logged out. Waiting for processes to exit. Aug 19 08:21:23.610213 systemd-logind[1709]: Removed session 26. Aug 19 08:21:23.732510 systemd[1]: Started sshd@24-10.200.8.11:22-10.200.16.10:47792.service - OpenSSH per-connection server daemon (10.200.16.10:47792). Aug 19 08:21:23.737974 containerd[1729]: time="2025-08-19T08:21:23.737885409Z" level=info msg="CreateContainer within sandbox \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 19 08:21:23.763615 containerd[1729]: time="2025-08-19T08:21:23.760279297Z" level=info msg="Container 6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:21:23.765296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946491456.mount: Deactivated successfully. 
Aug 19 08:21:23.776462 containerd[1729]: time="2025-08-19T08:21:23.776434917Z" level=info msg="CreateContainer within sandbox \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b\"" Aug 19 08:21:23.776926 containerd[1729]: time="2025-08-19T08:21:23.776899875Z" level=info msg="StartContainer for \"6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b\"" Aug 19 08:21:23.778343 containerd[1729]: time="2025-08-19T08:21:23.778279756Z" level=info msg="connecting to shim 6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b" address="unix:///run/containerd/s/793951d3575518a945a220019e0c6d72dcaf8cb9007247e7e7e7e489bd6b0205" protocol=ttrpc version=3 Aug 19 08:21:23.801828 systemd[1]: Started cri-containerd-6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b.scope - libcontainer container 6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b. Aug 19 08:21:23.873839 systemd[1]: cri-containerd-6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b.scope: Deactivated successfully. 
Aug 19 08:21:23.876657 containerd[1729]: time="2025-08-19T08:21:23.876620453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b\" id:\"6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b\" pid:5042 exited_at:{seconds:1755591683 nanos:876349054}" Aug 19 08:21:23.876866 containerd[1729]: time="2025-08-19T08:21:23.876753875Z" level=info msg="received exit event container_id:\"6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b\" id:\"6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b\" pid:5042 exited_at:{seconds:1755591683 nanos:876349054}" Aug 19 08:21:23.878648 containerd[1729]: time="2025-08-19T08:21:23.878572493Z" level=info msg="StartContainer for \"6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b\" returns successfully" Aug 19 08:21:23.894091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d0b86d4143b7116868fcab9e23592b7fffa0fcc4cb8ef2f94e533c6a8e89b2b-rootfs.mount: Deactivated successfully. Aug 19 08:21:24.442150 sshd[5026]: Accepted publickey for core from 10.200.16.10 port 47792 ssh2: RSA SHA256:rwMkWSJkyuzYLtFMdKgQVeSj1yB6THB2GYbfaCk21Xc Aug 19 08:21:24.443391 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 08:21:24.447523 systemd-logind[1709]: New session 27 of user core. Aug 19 08:21:24.455811 systemd[1]: Started session-27.scope - Session 27 of User core. 
Aug 19 08:21:24.739056 containerd[1729]: time="2025-08-19T08:21:24.739019972Z" level=info msg="CreateContainer within sandbox \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 19 08:21:24.761700 containerd[1729]: time="2025-08-19T08:21:24.761635376Z" level=info msg="Container 803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:21:24.776013 containerd[1729]: time="2025-08-19T08:21:24.775985924Z" level=info msg="CreateContainer within sandbox \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70\"" Aug 19 08:21:24.776329 containerd[1729]: time="2025-08-19T08:21:24.776306612Z" level=info msg="StartContainer for \"803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70\"" Aug 19 08:21:24.776927 containerd[1729]: time="2025-08-19T08:21:24.776907148Z" level=info msg="connecting to shim 803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70" address="unix:///run/containerd/s/793951d3575518a945a220019e0c6d72dcaf8cb9007247e7e7e7e489bd6b0205" protocol=ttrpc version=3 Aug 19 08:21:24.794924 systemd[1]: Started cri-containerd-803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70.scope - libcontainer container 803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70. Aug 19 08:21:24.814100 systemd[1]: cri-containerd-803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70.scope: Deactivated successfully. 
Aug 19 08:21:24.817002 containerd[1729]: time="2025-08-19T08:21:24.816803121Z" level=info msg="TaskExit event in podsandbox handler container_id:\"803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70\" id:\"803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70\" pid:5083 exited_at:{seconds:1755591684 nanos:816047361}" Aug 19 08:21:24.817854 containerd[1729]: time="2025-08-19T08:21:24.817819756Z" level=info msg="received exit event container_id:\"803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70\" id:\"803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70\" pid:5083 exited_at:{seconds:1755591684 nanos:816047361}" Aug 19 08:21:24.827077 containerd[1729]: time="2025-08-19T08:21:24.827041078Z" level=info msg="StartContainer for \"803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70\" returns successfully" Aug 19 08:21:25.744818 containerd[1729]: time="2025-08-19T08:21:25.744779163Z" level=info msg="CreateContainer within sandbox \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 19 08:21:25.752999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-803b1d60a00ffb9af891d4bb25471af3b4d9b729b7514ce8b6912bd24da59b70-rootfs.mount: Deactivated successfully. 
Aug 19 08:21:25.763061 containerd[1729]: time="2025-08-19T08:21:25.761263820Z" level=info msg="Container e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b: CDI devices from CRI Config.CDIDevices: []" Aug 19 08:21:25.775835 containerd[1729]: time="2025-08-19T08:21:25.775808049Z" level=info msg="CreateContainer within sandbox \"066b49752c9320efcb90278ab08ff6d3d8c626a1486e1fa237ce6e80a9b952f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b\"" Aug 19 08:21:25.776198 containerd[1729]: time="2025-08-19T08:21:25.776177472Z" level=info msg="StartContainer for \"e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b\"" Aug 19 08:21:25.776903 containerd[1729]: time="2025-08-19T08:21:25.776862775Z" level=info msg="connecting to shim e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b" address="unix:///run/containerd/s/793951d3575518a945a220019e0c6d72dcaf8cb9007247e7e7e7e489bd6b0205" protocol=ttrpc version=3 Aug 19 08:21:25.794824 systemd[1]: Started cri-containerd-e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b.scope - libcontainer container e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b. 
Aug 19 08:21:25.821235 containerd[1729]: time="2025-08-19T08:21:25.821196250Z" level=info msg="StartContainer for \"e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b\" returns successfully"
Aug 19 08:21:25.884982 containerd[1729]: time="2025-08-19T08:21:25.884954543Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b\" id:\"f3559f6caeda4ac5a205836f2dd7243537f408e4f910a1057f3641b3b4b2c042\" pid:5160 exited_at:{seconds:1755591685 nanos:884576282}"
Aug 19 08:21:26.137745 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512))
Aug 19 08:21:27.261070 kubelet[3160]: I0819 08:21:27.260920 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k5hd9" podStartSLOduration=6.260861027 podStartE2EDuration="6.260861027s" podCreationTimestamp="2025-08-19 08:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 08:21:27.258549009 +0000 UTC m=+168.947781067" watchObservedRunningTime="2025-08-19 08:21:27.260861027 +0000 UTC m=+168.950093078"
Aug 19 08:21:29.362254 containerd[1729]: time="2025-08-19T08:21:29.362203300Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b\" id:\"5802f72889c85fe8036731536686da271764e26ac4d8682a3b357406ff95b011\" pid:5564 exit_status:1 exited_at:{seconds:1755591689 nanos:361592254}"
Aug 19 08:21:29.609948 systemd-networkd[1507]: lxc_health: Link UP
Aug 19 08:21:29.610367 systemd-networkd[1507]: lxc_health: Gained carrier
Aug 19 08:21:31.480646 containerd[1729]: time="2025-08-19T08:21:31.480602483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b\" id:\"e5b5c9ae8007eaa7ee502dbe2d94a35f4b0afb72561c8e9e0a8f35fbed7aa6b1\" pid:5684 exited_at:{seconds:1755591691 nanos:480245604}"
Aug 19 08:21:31.668936 systemd-networkd[1507]: lxc_health: Gained IPv6LL
Aug 19 08:21:33.659183 containerd[1729]: time="2025-08-19T08:21:33.659126270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b\" id:\"b11086ec0cf80b3d4ce648ee4199550ea157e1e945a55f10cf2cf96f278adf42\" pid:5727 exited_at:{seconds:1755591693 nanos:658585907}"
Aug 19 08:21:35.733627 containerd[1729]: time="2025-08-19T08:21:35.733581896Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b\" id:\"cb0e8dece5f72115edc86304f3c8519f447a68b1973c1c791c9044406acff141\" pid:5750 exited_at:{seconds:1755591695 nanos:733179320}"
Aug 19 08:21:37.813287 containerd[1729]: time="2025-08-19T08:21:37.813234137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e24093c8c35a999e1e2d5dd6dc06be058b2bbf36ed8d490ddb02e783be4af87b\" id:\"851be03130ef74356c91ee1c76b44544cec742c74b9e1490de38316af33fe7c9\" pid:5773 exited_at:{seconds:1755591697 nanos:812827941}"
Aug 19 08:21:37.939232 sshd[5070]: Connection closed by 10.200.16.10 port 47792
Aug 19 08:21:37.939837 sshd-session[5026]: pam_unix(sshd:session): session closed for user core
Aug 19 08:21:37.943081 systemd[1]: sshd@24-10.200.8.11:22-10.200.16.10:47792.service: Deactivated successfully.
Aug 19 08:21:37.944886 systemd[1]: session-27.scope: Deactivated successfully.
Aug 19 08:21:37.945753 systemd-logind[1709]: Session 27 logged out. Waiting for processes to exit.
Aug 19 08:21:37.946860 systemd-logind[1709]: Removed session 27.
Aug 19 08:21:38.405165 containerd[1729]: time="2025-08-19T08:21:38.405129476Z" level=info msg="StopPodSandbox for \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\""
Aug 19 08:21:38.405340 containerd[1729]: time="2025-08-19T08:21:38.405265336Z" level=info msg="TearDown network for sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" successfully"
Aug 19 08:21:38.405340 containerd[1729]: time="2025-08-19T08:21:38.405279685Z" level=info msg="StopPodSandbox for \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" returns successfully"
Aug 19 08:21:38.405643 containerd[1729]: time="2025-08-19T08:21:38.405624696Z" level=info msg="RemovePodSandbox for \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\""
Aug 19 08:21:38.405700 containerd[1729]: time="2025-08-19T08:21:38.405648459Z" level=info msg="Forcibly stopping sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\""
Aug 19 08:21:38.405771 containerd[1729]: time="2025-08-19T08:21:38.405758002Z" level=info msg="TearDown network for sandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" successfully"
Aug 19 08:21:38.406790 containerd[1729]: time="2025-08-19T08:21:38.406756259Z" level=info msg="Ensure that sandbox 07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228 in task-service has been cleanup successfully"
Aug 19 08:21:38.418506 containerd[1729]: time="2025-08-19T08:21:38.418471056Z" level=info msg="RemovePodSandbox \"07f5015964658d02bd32c39525d9108beee42f3395f7cc7824a1ab385d285228\" returns successfully"
Aug 19 08:21:38.418948 containerd[1729]: time="2025-08-19T08:21:38.418914153Z" level=info msg="StopPodSandbox for \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\""
Aug 19 08:21:38.419047 containerd[1729]: time="2025-08-19T08:21:38.419019820Z" level=info msg="TearDown network for sandbox \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" successfully"
Aug 19 08:21:38.419085 containerd[1729]: time="2025-08-19T08:21:38.419045914Z" level=info msg="StopPodSandbox for \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" returns successfully"
Aug 19 08:21:38.419439 containerd[1729]: time="2025-08-19T08:21:38.419419507Z" level=info msg="RemovePodSandbox for \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\""
Aug 19 08:21:38.419706 containerd[1729]: time="2025-08-19T08:21:38.419523540Z" level=info msg="Forcibly stopping sandbox \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\""
Aug 19 08:21:38.419706 containerd[1729]: time="2025-08-19T08:21:38.419604785Z" level=info msg="TearDown network for sandbox \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" successfully"
Aug 19 08:21:38.420847 containerd[1729]: time="2025-08-19T08:21:38.420818799Z" level=info msg="Ensure that sandbox 8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c in task-service has been cleanup successfully"
Aug 19 08:21:38.427314 containerd[1729]: time="2025-08-19T08:21:38.427279377Z" level=info msg="RemovePodSandbox \"8fc34a369ed15bcc9b65f0da0cf7170aa38e46679fae4781099af96b0e730c0c\" returns successfully"