Sep 12 17:47:33.008264 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 15:34:39 -00 2025
Sep 12 17:47:33.008292 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:47:33.008304 kernel: BIOS-provided physical RAM map:
Sep 12 17:47:33.008311 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:47:33.008317 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Sep 12 17:47:33.008324 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Sep 12 17:47:33.008333 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Sep 12 17:47:33.008342 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Sep 12 17:47:33.008349 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Sep 12 17:47:33.008356 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Sep 12 17:47:33.008363 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Sep 12 17:47:33.008370 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Sep 12 17:47:33.008377 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Sep 12 17:47:33.008384 kernel: printk: legacy bootconsole [earlyser0] enabled
Sep 12 17:47:33.008395 kernel: NX (Execute Disable) protection: active
Sep 12 17:47:33.008402 kernel: APIC: Static calls initialized
Sep 12 17:47:33.008410 kernel: efi: EFI v2.7 by Microsoft
Sep 12 17:47:33.008418 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5518 RNG=0x3ffd2018
Sep 12 17:47:33.008426 kernel: random: crng init done
Sep 12 17:47:33.008433 kernel: secureboot: Secure boot disabled
Sep 12 17:47:33.008441 kernel: SMBIOS 3.1.0 present.
Sep 12 17:47:33.008448 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Sep 12 17:47:33.008456 kernel: DMI: Memory slots populated: 2/2
Sep 12 17:47:33.008465 kernel: Hypervisor detected: Microsoft Hyper-V
Sep 12 17:47:33.008473 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Sep 12 17:47:33.008480 kernel: Hyper-V: Nested features: 0x3e0101
Sep 12 17:47:33.008488 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Sep 12 17:47:33.008495 kernel: Hyper-V: Using hypercall for remote TLB flush
Sep 12 17:47:33.008503 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 12 17:47:33.008511 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Sep 12 17:47:33.008519 kernel: tsc: Detected 2299.999 MHz processor
Sep 12 17:47:33.008526 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:47:33.008535 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:47:33.008543 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Sep 12 17:47:33.008553 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:47:33.008561 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:47:33.008569 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Sep 12 17:47:33.008577 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Sep 12 17:47:33.008584 kernel: Using GB pages for direct mapping
Sep 12 17:47:33.008593 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:47:33.008604 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Sep 12 17:47:33.008613 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:33.008622 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:33.008630 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Sep 12 17:47:33.008638 kernel: ACPI: FACS 0x000000003FFFE000 000040
Sep 12 17:47:33.008647 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:33.008655 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:33.008665 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:33.008674 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 12 17:47:33.008682 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Sep 12 17:47:33.008690 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 12 17:47:33.008699 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Sep 12 17:47:33.008707 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Sep 12 17:47:33.008715 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Sep 12 17:47:33.008724 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Sep 12 17:47:33.008732 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Sep 12 17:47:33.008742 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Sep 12 17:47:33.008750 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Sep 12 17:47:33.008758 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Sep 12 17:47:33.008766 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Sep 12 17:47:33.008775 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 12 17:47:33.008783 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Sep 12 17:47:33.008791 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Sep 12 17:47:33.008800 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Sep 12 17:47:33.008808 kernel: Zone ranges:
Sep 12 17:47:33.008818 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:47:33.008826 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Sep 12 17:47:33.008834 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Sep 12 17:47:33.008842 kernel: Device empty
Sep 12 17:47:33.008851 kernel: Movable zone start for each node
Sep 12 17:47:33.008859 kernel: Early memory node ranges
Sep 12 17:47:33.008867 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:47:33.008876 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Sep 12 17:47:33.008884 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Sep 12 17:47:33.008893 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Sep 12 17:47:33.008902 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Sep 12 17:47:33.008910 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Sep 12 17:47:33.008919 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:47:33.008927 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:47:33.008935 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Sep 12 17:47:33.008944 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Sep 12 17:47:33.008952 kernel: ACPI: PM-Timer IO Port: 0x408
Sep 12 17:47:33.008960 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:47:33.008970 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:47:33.008978 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:47:33.008986 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Sep 12 17:47:33.008994 kernel: TSC deadline timer available
Sep 12 17:47:33.009002 kernel: CPU topo: Max. logical packages: 1
Sep 12 17:47:33.009011 kernel: CPU topo: Max. logical dies: 1
Sep 12 17:47:33.009019 kernel: CPU topo: Max. dies per package: 1
Sep 12 17:47:33.009027 kernel: CPU topo: Max. threads per core: 2
Sep 12 17:47:33.009035 kernel: CPU topo: Num. cores per package: 1
Sep 12 17:47:33.009044 kernel: CPU topo: Num. threads per package: 2
Sep 12 17:47:33.009052 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Sep 12 17:47:33.009060 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Sep 12 17:47:33.009068 kernel: Booting paravirtualized kernel on Hyper-V
Sep 12 17:47:33.009077 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:47:33.009085 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:47:33.009093 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Sep 12 17:47:33.009112 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Sep 12 17:47:33.009120 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:47:33.009130 kernel: Hyper-V: PV spinlocks enabled
Sep 12 17:47:33.009138 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:47:33.009148 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:47:33.009157 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:47:33.009165 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Sep 12 17:47:33.009174 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:47:33.009182 kernel: Fallback order for Node 0: 0
Sep 12 17:47:33.009191 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Sep 12 17:47:33.009201 kernel: Policy zone: Normal
Sep 12 17:47:33.009209 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:47:33.009217 kernel: software IO TLB: area num 2.
Sep 12 17:47:33.009225 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:47:33.009234 kernel: ftrace: allocating 40125 entries in 157 pages
Sep 12 17:47:33.009242 kernel: ftrace: allocated 157 pages with 5 groups
Sep 12 17:47:33.009250 kernel: Dynamic Preempt: voluntary
Sep 12 17:47:33.009259 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:47:33.009271 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:47:33.009287 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:47:33.009296 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:47:33.009305 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:47:33.009315 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:47:33.009325 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:47:33.009333 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:47:33.009342 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:47:33.009352 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:47:33.009361 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:47:33.009370 kernel: Using NULL legacy PIC
Sep 12 17:47:33.009381 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Sep 12 17:47:33.009390 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:47:33.009399 kernel: Console: colour dummy device 80x25
Sep 12 17:47:33.009407 kernel: printk: legacy console [tty1] enabled
Sep 12 17:47:33.009417 kernel: printk: legacy console [ttyS0] enabled
Sep 12 17:47:33.009426 kernel: printk: legacy bootconsole [earlyser0] disabled
Sep 12 17:47:33.009435 kernel: ACPI: Core revision 20240827
Sep 12 17:47:33.009445 kernel: Failed to register legacy timer interrupt
Sep 12 17:47:33.009454 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:47:33.009463 kernel: x2apic enabled
Sep 12 17:47:33.009472 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:47:33.009481 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0
Sep 12 17:47:33.009490 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 12 17:47:33.009499 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Sep 12 17:47:33.009509 kernel: Hyper-V: Using IPI hypercalls
Sep 12 17:47:33.009518 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Sep 12 17:47:33.009528 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Sep 12 17:47:33.009538 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Sep 12 17:47:33.009547 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Sep 12 17:47:33.009556 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Sep 12 17:47:33.009565 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Sep 12 17:47:33.009574 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Sep 12 17:47:33.009582 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Sep 12 17:47:33.009591 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:47:33.009600 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 12 17:47:33.009610 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 12 17:47:33.009618 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:47:33.009626 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:47:33.009639 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:47:33.009653 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Sep 12 17:47:33.009666 kernel: RETBleed: Vulnerable
Sep 12 17:47:33.009674 kernel: Speculative Store Bypass: Vulnerable
Sep 12 17:47:33.009681 kernel: active return thunk: its_return_thunk
Sep 12 17:47:33.009690 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:47:33.009698 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:47:33.009707 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:47:33.009717 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:47:33.009726 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Sep 12 17:47:33.009734 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Sep 12 17:47:33.009743 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Sep 12 17:47:33.009751 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Sep 12 17:47:33.009760 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Sep 12 17:47:33.009768 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Sep 12 17:47:33.009777 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:47:33.009785 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Sep 12 17:47:33.009794 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Sep 12 17:47:33.009804 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Sep 12 17:47:33.009812 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Sep 12 17:47:33.009821 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Sep 12 17:47:33.009829 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Sep 12 17:47:33.009838 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Sep 12 17:47:33.009846 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:47:33.009855 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:47:33.009863 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:47:33.009872 kernel: landlock: Up and running.
Sep 12 17:47:33.009880 kernel: SELinux: Initializing.
Sep 12 17:47:33.009889 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:47:33.009897 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:47:33.009907 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Sep 12 17:47:33.009916 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Sep 12 17:47:33.009925 kernel: signal: max sigframe size: 11952
Sep 12 17:47:33.009933 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:47:33.009942 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:47:33.009951 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:47:33.009960 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:47:33.009969 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:47:33.009978 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:47:33.009988 kernel: .... node #0, CPUs: #1
Sep 12 17:47:33.009996 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:47:33.010005 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Sep 12 17:47:33.010014 kernel: Memory: 8077032K/8383228K available (14336K kernel code, 2432K rwdata, 9960K rodata, 54040K init, 2924K bss, 299988K reserved, 0K cma-reserved)
Sep 12 17:47:33.010023 kernel: devtmpfs: initialized
Sep 12 17:47:33.010032 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:47:33.010041 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Sep 12 17:47:33.010050 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:47:33.010059 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:47:33.010069 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:47:33.010078 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:47:33.010087 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:47:33.010115 kernel: audit: type=2000 audit(1757699250.029:1): state=initialized audit_enabled=0 res=1
Sep 12 17:47:33.010124 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:47:33.010133 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:47:33.010142 kernel: cpuidle: using governor menu
Sep 12 17:47:33.010150 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:47:33.010159 kernel: dca service started, version 1.12.1
Sep 12 17:47:33.010169 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Sep 12 17:47:33.010178 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Sep 12 17:47:33.010187 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:47:33.010196 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:47:33.010205 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:47:33.010213 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:47:33.010222 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:47:33.010231 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:47:33.010239 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:47:33.010250 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:47:33.010258 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:47:33.010267 kernel: ACPI: Interpreter enabled
Sep 12 17:47:33.010276 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:47:33.010284 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:47:33.010293 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:47:33.010302 kernel: PCI: Ignoring E820 reservations for host bridge windows
Sep 12 17:47:33.010311 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Sep 12 17:47:33.010320 kernel: iommu: Default domain type: Translated
Sep 12 17:47:33.010330 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:47:33.010338 kernel: efivars: Registered efivars operations
Sep 12 17:47:33.010347 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:47:33.010356 kernel: PCI: System does not support PCI
Sep 12 17:47:33.010364 kernel: vgaarb: loaded
Sep 12 17:47:33.010373 kernel: clocksource: Switched to clocksource tsc-early
Sep 12 17:47:33.010382 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:47:33.010391 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:47:33.010399 kernel: pnp: PnP ACPI init
Sep 12 17:47:33.010409 kernel: pnp: PnP ACPI: found 3 devices
Sep 12 17:47:33.010418 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:47:33.010427 kernel: NET: Registered PF_INET protocol family
Sep 12 17:47:33.010436 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:47:33.010445 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Sep 12 17:47:33.010454 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:47:33.010463 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:47:33.010472 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Sep 12 17:47:33.010481 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Sep 12 17:47:33.010491 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:47:33.010499 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Sep 12 17:47:33.010508 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:47:33.010517 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:47:33.010526 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:47:33.010534 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 12 17:47:33.010543 kernel: software IO TLB: mapped [mem 0x000000003a9d3000-0x000000003e9d3000] (64MB)
Sep 12 17:47:33.010552 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:47:33.010561 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Sep 12 17:47:33.010571 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Sep 12 17:47:33.010580 kernel: clocksource: Switched to clocksource tsc
Sep 12 17:47:33.010589 kernel: Initialise system trusted keyrings
Sep 12 17:47:33.010597 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Sep 12 17:47:33.010606 kernel: Key type asymmetric registered
Sep 12 17:47:33.010615 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:47:33.010624 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:47:33.010633 kernel: io scheduler mq-deadline registered
Sep 12 17:47:33.010641 kernel: io scheduler kyber registered
Sep 12 17:47:33.010651 kernel: io scheduler bfq registered
Sep 12 17:47:33.010660 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:47:33.010669 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:47:33.010678 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:47:33.010687 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Sep 12 17:47:33.010696 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:47:33.010705 kernel: i8042: PNP: No PS/2 controller found.
Sep 12 17:47:33.010843 kernel: rtc_cmos 00:02: registered as rtc0
Sep 12 17:47:33.010922 kernel: rtc_cmos 00:02: setting system clock to 2025-09-12T17:47:32 UTC (1757699252)
Sep 12 17:47:33.010993 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Sep 12 17:47:33.011004 kernel: intel_pstate: Intel P-state driver initializing
Sep 12 17:47:33.011012 kernel: efifb: probing for efifb
Sep 12 17:47:33.011021 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 12 17:47:33.011030 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 12 17:47:33.011039 kernel: efifb: scrolling: redraw
Sep 12 17:47:33.011048 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 12 17:47:33.011058 kernel: Console: switching to colour frame buffer device 128x48
Sep 12 17:47:33.011067 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:47:33.011077 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:47:33.011086 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:47:33.011113 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:47:33.011122 kernel: Segment Routing with IPv6
Sep 12 17:47:33.011131 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:47:33.011140 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:47:33.011148 kernel: Key type dns_resolver registered
Sep 12 17:47:33.011157 kernel: IPI shorthand broadcast: enabled
Sep 12 17:47:33.011168 kernel: sched_clock: Marking stable (3135188400, 108768838)->(3591119685, -347162447)
Sep 12 17:47:33.011177 kernel: registered taskstats version 1
Sep 12 17:47:33.011185 kernel: Loading compiled-in X.509 certificates
Sep 12 17:47:33.011194 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: f1ae8d6e9bfae84d90f4136cf098b0465b2a5bd7'
Sep 12 17:47:33.011203 kernel: Demotion targets for Node 0: null
Sep 12 17:47:33.011212 kernel: Key type .fscrypt registered
Sep 12 17:47:33.011220 kernel: Key type fscrypt-provisioning registered
Sep 12 17:47:33.011229 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:47:33.011240 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:47:33.011249 kernel: ima: No architecture policies found
Sep 12 17:47:33.011257 kernel: clk: Disabling unused clocks
Sep 12 17:47:33.011266 kernel: Warning: unable to open an initial console.
Sep 12 17:47:33.011274 kernel: Freeing unused kernel image (initmem) memory: 54040K
Sep 12 17:47:33.011283 kernel: Write protecting the kernel read-only data: 24576k
Sep 12 17:47:33.011292 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Sep 12 17:47:33.011301 kernel: Run /init as init process
Sep 12 17:47:33.011310 kernel: with arguments:
Sep 12 17:47:33.011320 kernel: /init
Sep 12 17:47:33.011329 kernel: with environment:
Sep 12 17:47:33.011338 kernel: HOME=/
Sep 12 17:47:33.011346 kernel: TERM=linux
Sep 12 17:47:33.011355 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:47:33.011365 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:47:33.011378 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:47:33.011388 systemd[1]: Detected virtualization microsoft.
Sep 12 17:47:33.011399 systemd[1]: Detected architecture x86-64.
Sep 12 17:47:33.011409 systemd[1]: Running in initrd.
Sep 12 17:47:33.011418 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:47:33.011428 systemd[1]: Hostname set to .
Sep 12 17:47:33.011438 systemd[1]: Initializing machine ID from random generator.
Sep 12 17:47:33.011447 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:47:33.011456 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:47:33.011465 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:47:33.011477 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:47:33.011487 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:47:33.011496 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:47:33.011506 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:47:33.011517 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:47:33.011526 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:47:33.011536 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:47:33.011547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:47:33.011556 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:47:33.011566 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:47:33.011575 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:47:33.011584 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:47:33.011593 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:47:33.011603 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:47:33.011612 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:47:33.011623 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:47:33.011633 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:47:33.011642 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:47:33.011651 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:47:33.011661 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:47:33.011670 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:47:33.011680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:47:33.011689 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:47:33.011699 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 17:47:33.011710 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:47:33.011720 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:47:33.011739 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:47:33.011750 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:33.011774 systemd-journald[205]: Collecting audit messages is disabled.
Sep 12 17:47:33.011800 systemd-journald[205]: Journal started
Sep 12 17:47:33.011824 systemd-journald[205]: Runtime Journal (/run/log/journal/2eadeaebe1b74e7b9e9dba6a5d8a6339) is 8M, max 158.9M, 150.9M free.
Sep 12 17:47:33.014600 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:47:33.021128 systemd-modules-load[207]: Inserted module 'overlay'
Sep 12 17:47:33.023244 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:47:33.029462 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:47:33.029931 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:47:33.042000 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:47:33.050199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:47:33.058711 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:33.066723 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:47:33.073938 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:47:33.073959 kernel: Bridge firewalling registered
Sep 12 17:47:33.070508 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 17:47:33.075684 systemd-modules-load[207]: Inserted module 'br_netfilter'
Sep 12 17:47:33.082183 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:47:33.082534 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:47:33.082728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:47:33.086222 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:47:33.086795 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:47:33.101171 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:47:33.103038 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:47:33.112445 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:47:33.125523 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:47:33.128924 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:47:33.147461 systemd-resolved[233]: Positive Trust Anchors:
Sep 12 17:47:33.147473 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:47:33.147511 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:47:33.166644 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=271a44cc8ea1639cfb6fdf777202a5f025fda0b3ce9b293cc4e0e7047aecb858
Sep 12 17:47:33.181025 systemd-resolved[233]: Defaulting to hostname 'linux'.
Sep 12 17:47:33.184166 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:47:33.189376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:47:33.224116 kernel: SCSI subsystem initialized
Sep 12 17:47:33.232113 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:47:33.241121 kernel: iscsi: registered transport (tcp)
Sep 12 17:47:33.258235 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:47:33.258275 kernel: QLogic iSCSI HBA Driver
Sep 12 17:47:33.271676 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:47:33.285025 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:47:33.286655 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:47:33.318672 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:47:33.323203 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:47:33.379116 kernel: raid6: avx512x4 gen() 40012 MB/s
Sep 12 17:47:33.396110 kernel: raid6: avx512x2 gen() 43530 MB/s
Sep 12 17:47:33.413107 kernel: raid6: avx512x1 gen() 26154 MB/s
Sep 12 17:47:33.431111 kernel: raid6: avx2x4 gen() 34762 MB/s
Sep 12 17:47:33.449108 kernel: raid6: avx2x2 gen() 36987 MB/s
Sep 12 17:47:33.467127 kernel: raid6: avx2x1 gen() 28476 MB/s
Sep 12 17:47:33.467141 kernel: raid6: using algorithm avx512x2 gen() 43530 MB/s
Sep 12 17:47:33.485653 kernel: raid6: .... xor() 30453 MB/s, rmw enabled
Sep 12 17:47:33.485674 kernel: raid6: using avx512x2 recovery algorithm
Sep 12 17:47:33.505120 kernel: xor: automatically using best checksumming function avx
Sep 12 17:47:33.625117 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:47:33.629754 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:47:33.633233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:47:33.659280 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Sep 12 17:47:33.663340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:47:33.669211 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:47:33.694031 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Sep 12 17:47:33.711790 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:47:33.714206 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:47:33.748524 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:47:33.755213 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:47:33.798143 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:47:33.824278 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:47:33.826309 kernel: hv_vmbus: Vmbus version:5.3
Sep 12 17:47:33.830803 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:47:33.839742 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 12 17:47:33.836854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:33.847588 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 12 17:47:33.849379 kernel: hv_vmbus: registering driver hv_netvsc
Sep 12 17:47:33.848398 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:33.858436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:33.867666 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:47:33.871070 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:33.882337 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 12 17:47:33.882377 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2da279 (unnamed net_device) (uninitialized): VF slot 1 added
Sep 12 17:47:33.882540 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 12 17:47:33.883448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:47:33.897194 kernel: PTP clock support registered
Sep 12 17:47:33.897212 kernel: hv_vmbus: registering driver hv_pci
Sep 12 17:47:33.897227 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Sep 12 17:47:33.910559 kernel: hv_vmbus: registering driver hv_storvsc
Sep 12 17:47:33.924322 kernel: hv_utils: Registering HyperV Utility Driver
Sep 12 17:47:33.924351 kernel: hv_vmbus: registering driver hv_utils
Sep 12 17:47:33.932122 kernel: scsi host0: storvsc_host_t
Sep 12 17:47:33.935134 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:47:33.935169 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Sep 12 17:47:33.938454 kernel: hv_utils: Shutdown IC version 3.2
Sep 12 17:47:33.938490 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Sep 12 17:47:33.938628 kernel: hv_utils: Heartbeat IC version 3.0
Sep 12 17:47:33.941555 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Sep 12 17:47:33.941721 kernel: hv_utils: TimeSync IC version 4.0
Sep 12 17:47:34.412205 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 12 17:47:34.412396 systemd-resolved[233]: Clock change detected. Flushing caches.
Sep 12 17:47:34.421286 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Sep 12 17:47:34.421335 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Sep 12 17:47:34.418754 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:47:34.434767 kernel: hv_vmbus: registering driver hid_hyperv
Sep 12 17:47:34.441087 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 12 17:47:34.441122 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Sep 12 17:47:34.441255 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 12 17:47:34.441357 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Sep 12 17:47:34.449802 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 12 17:47:34.450005 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:47:34.452475 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 12 17:47:34.459812 kernel: nvme nvme0: pci function c05b:00:00.0
Sep 12 17:47:34.460020 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Sep 12 17:47:34.471871 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#275 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 12 17:47:34.487785 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#311 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Sep 12 17:47:34.620796 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:47:34.627742 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:47:34.869749 kernel: nvme nvme0: using unchecked data buffer
Sep 12 17:47:35.038651 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Sep 12 17:47:35.066413 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Sep 12 17:47:35.080307 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Sep 12 17:47:35.106183 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Sep 12 17:47:35.109609 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Sep 12 17:47:35.112974 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:47:35.119811 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:47:35.123794 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:47:35.128780 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:47:35.132150 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:47:35.141449 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:47:35.150801 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:47:35.165844 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:47:35.170280 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:47:35.382842 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Sep 12 17:47:35.383062 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Sep 12 17:47:35.385705 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Sep 12 17:47:35.387411 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 12 17:47:35.393008 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Sep 12 17:47:35.396849 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Sep 12 17:47:35.402076 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Sep 12 17:47:35.402101 kernel: pci 7870:00:00.0: enabling Extended Tags
Sep 12 17:47:35.417702 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Sep 12 17:47:35.417931 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Sep 12 17:47:35.421816 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Sep 12 17:47:35.425708 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Sep 12 17:47:35.436750 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Sep 12 17:47:35.439857 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2da279 eth0: VF registering: eth1
Sep 12 17:47:35.440030 kernel: mana 7870:00:00.0 eth1: joined to eth0
Sep 12 17:47:35.444773 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Sep 12 17:47:36.175841 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:47:36.175974 disk-uuid[674]: The operation has completed successfully.
Sep 12 17:47:36.221133 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:47:36.221219 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:47:36.259660 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:47:36.274847 sh[715]: Success
Sep 12 17:47:36.304760 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:47:36.304802 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:47:36.306786 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 17:47:36.314783 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Sep 12 17:47:36.521188 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:47:36.525824 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:47:36.535009 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:47:36.545742 kernel: BTRFS: device fsid 74707491-1b86-4926-8bdb-c533ce2a0c32 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (728)
Sep 12 17:47:36.545776 kernel: BTRFS info (device dm-0): first mount of filesystem 74707491-1b86-4926-8bdb-c533ce2a0c32
Sep 12 17:47:36.547674 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:47:36.786239 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:47:36.786437 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:47:36.786513 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 17:47:36.818435 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:47:36.821445 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:47:36.824984 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:47:36.825616 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:47:36.837242 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:47:36.856898 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:7) scanned by mount (751)
Sep 12 17:47:36.859530 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 17:47:36.859562 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:47:36.878866 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:47:36.878913 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 12 17:47:36.880287 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 12 17:47:36.884778 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 17:47:36.885013 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:47:36.889218 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:47:36.926929 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:47:36.930857 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:47:36.959100 systemd-networkd[897]: lo: Link UP
Sep 12 17:47:36.959108 systemd-networkd[897]: lo: Gained carrier
Sep 12 17:47:36.965487 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Sep 12 17:47:36.960134 systemd-networkd[897]: Enumeration completed
Sep 12 17:47:36.976421 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Sep 12 17:47:36.960511 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:47:36.960514 systemd-networkd[897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:47:36.960791 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:47:36.965570 systemd[1]: Reached target network.target - Network.
Sep 12 17:47:36.979743 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2da279 eth0: Data path switched to VF: enP30832s1
Sep 12 17:47:36.979935 systemd-networkd[897]: enP30832s1: Link UP
Sep 12 17:47:36.980132 systemd-networkd[897]: eth0: Link UP
Sep 12 17:47:36.980601 systemd-networkd[897]: eth0: Gained carrier
Sep 12 17:47:36.980613 systemd-networkd[897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:47:36.992089 systemd-networkd[897]: enP30832s1: Gained carrier
Sep 12 17:47:37.001754 systemd-networkd[897]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16
Sep 12 17:47:37.867540 ignition[828]: Ignition 2.21.0
Sep 12 17:47:37.867558 ignition[828]: Stage: fetch-offline
Sep 12 17:47:37.871122 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:47:37.867702 ignition[828]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:47:37.876787 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:47:37.867710 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:47:37.869420 ignition[828]: parsed url from cmdline: ""
Sep 12 17:47:37.869426 ignition[828]: no config URL provided
Sep 12 17:47:37.869433 ignition[828]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:47:37.869444 ignition[828]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:47:37.869450 ignition[828]: failed to fetch config: resource requires networking
Sep 12 17:47:37.869742 ignition[828]: Ignition finished successfully
Sep 12 17:47:37.901806 ignition[915]: Ignition 2.21.0
Sep 12 17:47:37.901817 ignition[915]: Stage: fetch
Sep 12 17:47:37.902031 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:47:37.902040 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:47:37.902125 ignition[915]: parsed url from cmdline: ""
Sep 12 17:47:37.902128 ignition[915]: no config URL provided
Sep 12 17:47:37.902133 ignition[915]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:47:37.902139 ignition[915]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:47:37.902183 ignition[915]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 12 17:47:37.965983 ignition[915]: GET result: OK
Sep 12 17:47:37.966044 ignition[915]: config has been read from IMDS userdata
Sep 12 17:47:37.966070 ignition[915]: parsing config with SHA512: 2969110978724189f06d242aee2be560b381b3482746c14980cdc58b0a8913337a66be8023847d823a53ef5cfd2cf6c6c3729136eb630b2eb8b6d3bb28e4ee5c
Sep 12 17:47:37.972634 unknown[915]: fetched base config from "system"
Sep 12 17:47:37.972645 unknown[915]: fetched base config from "system"
Sep 12 17:47:37.973042 ignition[915]: fetch: fetch complete
Sep 12 17:47:37.972649 unknown[915]: fetched user config from "azure"
Sep 12 17:47:37.973047 ignition[915]: fetch: fetch passed
Sep 12 17:47:37.975245 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:47:37.973089 ignition[915]: Ignition finished successfully
Sep 12 17:47:37.984892 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:47:38.011609 ignition[921]: Ignition 2.21.0
Sep 12 17:47:38.011619 ignition[921]: Stage: kargs
Sep 12 17:47:38.011820 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:47:38.015108 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:47:38.011828 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:47:38.015798 systemd-networkd[897]: eth0: Gained IPv6LL
Sep 12 17:47:38.012576 ignition[921]: kargs: kargs passed
Sep 12 17:47:38.018675 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:47:38.012610 ignition[921]: Ignition finished successfully
Sep 12 17:47:38.037692 ignition[927]: Ignition 2.21.0
Sep 12 17:47:38.038033 ignition[927]: Stage: disks
Sep 12 17:47:38.038236 ignition[927]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:47:38.038245 ignition[927]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 12 17:47:38.039015 ignition[927]: disks: disks passed
Sep 12 17:47:38.043034 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:47:38.039047 ignition[927]: Ignition finished successfully
Sep 12 17:47:38.046143 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:47:38.053468 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:47:38.057840 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:47:38.061398 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:47:38.067598 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:47:38.073004 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:47:38.129126 systemd-fsck[936]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Sep 12 17:47:38.133297 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:47:38.139288 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:47:38.417785 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 26739aba-b0be-4ce3-bfbd-ca4dbcbe2426 r/w with ordered data mode. Quota mode: none.
Sep 12 17:47:38.417526 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:47:38.419148 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:47:38.452448 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:47:38.470797 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:47:38.479746 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:7) scanned by mount (945)
Sep 12 17:47:38.479783 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761
Sep 12 17:47:38.481907 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:47:38.485229 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:47:38.487495 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:47:38.487528 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:47:38.496079 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:47:38.501613 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:47:38.501634 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Sep 12 17:47:38.501650 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Sep 12 17:47:38.503914 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:47:38.507938 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:47:38.887925 coreos-metadata[947]: Sep 12 17:47:38.887 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 17:47:38.892158 coreos-metadata[947]: Sep 12 17:47:38.892 INFO Fetch successful Sep 12 17:47:38.892158 coreos-metadata[947]: Sep 12 17:47:38.892 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 12 17:47:38.903668 coreos-metadata[947]: Sep 12 17:47:38.903 INFO Fetch successful Sep 12 17:47:38.917431 coreos-metadata[947]: Sep 12 17:47:38.917 INFO wrote hostname ci-4426.1.0-a-9facd647c1 to /sysroot/etc/hostname Sep 12 17:47:38.919233 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:47:39.027136 initrd-setup-root[975]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:47:39.060250 initrd-setup-root[982]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:47:39.064476 initrd-setup-root[989]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:47:39.084335 initrd-setup-root[996]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:47:39.917126 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:47:39.918964 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:47:39.927289 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:47:39.936265 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:47:39.939796 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:47:39.961040 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:47:39.963430 ignition[1063]: INFO : Ignition 2.21.0 Sep 12 17:47:39.963430 ignition[1063]: INFO : Stage: mount Sep 12 17:47:39.963430 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:39.963430 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:47:39.963430 ignition[1063]: INFO : mount: mount passed Sep 12 17:47:39.963430 ignition[1063]: INFO : Ignition finished successfully Sep 12 17:47:39.972298 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:47:39.977389 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:47:39.989871 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:47:40.009743 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:7) scanned by mount (1075) Sep 12 17:47:40.012809 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5410dae6-8d31-4ea4-a4b4-868064445761 Sep 12 17:47:40.012846 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:47:40.017094 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 12 17:47:40.017200 kernel: BTRFS info (device nvme0n1p6): turning on async discard Sep 12 17:47:40.018063 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Sep 12 17:47:40.019696 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
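The flatcar-metadata-hostname step above resolves the instance name from IMDS and persists it under /sysroot so the real root boots with the right hostname. A rough Python equivalent follows; the endpoint and target path are taken from the log, but the code itself is a sketch, not the coreos-metadata source.

    # Sketch of the hostname agent step logged above.
    import urllib.request

    COMPUTE_NAME = ("http://169.254.169.254/metadata/instance/compute/name"
                    "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(COMPUTE_NAME, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        hostname = resp.read().decode().strip()

    # Persist into the mounted root so the real system boots with this hostname.
    with open("/sysroot/etc/hostname", "w", encoding="utf-8") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to /sysroot/etc/hostname")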
Sep 12 17:47:40.043351 ignition[1092]: INFO : Ignition 2.21.0 Sep 12 17:47:40.043351 ignition[1092]: INFO : Stage: files Sep 12 17:47:40.048129 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:40.048129 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:47:40.048129 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:47:40.048129 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:47:40.048129 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:47:40.100213 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:47:40.102274 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:47:40.102274 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:47:40.100544 unknown[1092]: wrote ssh authorized keys file for user: core Sep 12 17:47:40.115624 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:47:40.119803 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 17:47:40.394459 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:47:40.442898 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:47:40.446809 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:47:40.446809 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 12 17:47:40.626125 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 12 17:47:40.807421 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 12 17:47:40.807421 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:47:40.816836 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:47:40.816836 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:47:40.816836 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:47:40.816836 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:47:40.816836 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:47:40.816836 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:47:40.816836 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 
17:47:40.816836 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:47:40.816836 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:47:40.816836 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:40.846756 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:40.846756 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:40.846756 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 17:47:41.280681 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 12 17:47:41.837704 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:47:41.837704 ignition[1092]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 12 17:47:41.869178 ignition[1092]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:47:41.877441 ignition[1092]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:47:41.877441 ignition[1092]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 12 17:47:41.884799 ignition[1092]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:47:41.884799 ignition[1092]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:47:41.884799 ignition[1092]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:47:41.884799 ignition[1092]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:47:41.884799 ignition[1092]: INFO : files: files passed Sep 12 17:47:41.884799 ignition[1092]: INFO : Ignition finished successfully Sep 12 17:47:41.881738 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:47:41.887464 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:47:41.896215 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:47:41.900106 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:47:41.900195 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
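The files stage above is driven entirely by the Ignition config fetched earlier: files are downloaded into /sysroot, a sysext link is created under /etc/extensions, and prepare-helm.service is written and preset to enabled. The sketch below builds a config of roughly that shape; field names follow the Ignition v3 spec, but the spec version, the unit body, and the exact file set are illustrative, since the real config delivered to this node is not reproduced in the log.

    # Hedged sketch of an Ignition v3 config that would drive a files stage like the one above.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
                },
            ],
        },
        "systemd": {
            "units": [
                {
                    # "enabled": True corresponds to the "setting preset to enabled" entry above;
                    # the unit body here is illustrative, the actual contents are not in the log.
                    "name": "prepare-helm.service",
                    "enabled": True,
                    "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n\n"
                                "[Service]\nType=oneshot\n"
                                "ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 "
                                "-xf /opt/helm-v3.13.2-linux-amd64.tar.gz linux-amd64/helm\n\n"
                                "[Install]\nWantedBy=multi-user.target\n",
                },
            ],
        },
    }

    print(json.dumps(config, indent=2))

A config of this shape, delivered as instance userdata, would produce files-stage operations like the op(3) through op(f) entries above.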
Sep 12 17:47:41.940064 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:47:41.940064 initrd-setup-root-after-ignition[1122]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:47:41.944714 initrd-setup-root-after-ignition[1126]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:47:41.944114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:47:41.951679 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:47:41.952687 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:47:42.001232 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:47:42.001319 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:47:42.005955 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:47:42.006103 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:47:42.006166 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:47:42.006805 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:47:42.031623 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:47:42.035807 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:47:42.048390 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:47:42.049147 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:47:42.049399 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:47:42.049699 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:47:42.049863 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:47:42.050349 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:47:42.050669 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:47:42.059895 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:47:42.063879 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:47:42.067890 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:47:42.068745 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:47:42.069052 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:47:42.069360 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:47:42.077855 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:47:42.081390 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:47:42.086940 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:47:42.092465 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:47:42.092571 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:47:42.101334 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:47:42.105561 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:47:42.109697 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 12 17:47:42.109808 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:47:42.113503 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:47:42.113609 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:47:42.126830 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:47:42.126958 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:47:42.127187 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:47:42.127268 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:47:42.127428 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:47:42.127503 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:47:42.129916 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:47:42.131831 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:47:42.142533 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:47:42.142672 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:47:42.147389 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:47:42.147509 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:47:42.172778 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:47:42.174091 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:47:42.179636 ignition[1146]: INFO : Ignition 2.21.0 Sep 12 17:47:42.179636 ignition[1146]: INFO : Stage: umount Sep 12 17:47:42.185535 ignition[1146]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:47:42.185535 ignition[1146]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 12 17:47:42.185535 ignition[1146]: INFO : umount: umount passed Sep 12 17:47:42.185535 ignition[1146]: INFO : Ignition finished successfully Sep 12 17:47:42.182121 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:47:42.182192 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:47:42.193570 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:47:42.193938 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:47:42.193965 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:47:42.197775 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:47:42.197809 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:47:42.203719 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:47:42.203774 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:47:42.207207 systemd[1]: Stopped target network.target - Network. Sep 12 17:47:42.211424 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:47:42.211503 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:47:42.215521 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:47:42.217674 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:47:42.221846 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:47:42.225460 systemd[1]: Stopped target slices.target - Slice Units. 
Sep 12 17:47:42.234340 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:47:42.236795 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:47:42.236826 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:47:42.240789 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:47:42.240817 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:47:42.241657 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:47:42.241699 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:47:42.242051 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:47:42.242083 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:47:42.242233 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:47:42.242592 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:47:42.251413 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:47:42.251493 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:47:42.260844 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:47:42.261068 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:47:42.261163 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:47:42.265920 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:47:42.266375 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 17:47:42.268066 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:47:42.268098 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:47:42.270231 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:47:42.270260 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:47:42.270301 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:47:42.338294 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2da279 eth0: Data path switched from VF: enP30832s1 Sep 12 17:47:42.270529 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:47:42.270556 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:47:42.347791 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 12 17:47:42.272112 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:47:42.272158 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:47:42.282730 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:47:42.282783 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:47:42.286484 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:47:42.301675 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:47:42.301751 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:47:42.302018 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:47:42.302135 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:47:42.305603 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Sep 12 17:47:42.305660 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:47:42.309574 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:47:42.309607 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:47:42.312832 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:47:42.312873 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:47:42.315145 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:47:42.315189 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:47:42.315446 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:47:42.315480 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:47:42.317108 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:47:42.325544 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 17:47:42.325600 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:47:42.326401 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:47:42.326439 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:47:42.339020 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:47:42.339065 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:47:42.345406 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:47:42.345453 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:47:42.350034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:47:42.350067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:47:42.361491 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 12 17:47:42.361528 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 12 17:47:42.361550 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 12 17:47:42.361573 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:47:42.361856 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:47:42.361929 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:47:42.364617 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:47:42.364686 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:47:42.365047 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:47:42.365106 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:47:42.366627 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:47:42.376092 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:47:42.376155 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:47:42.459776 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). Sep 12 17:47:42.381840 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:47:42.392235 systemd[1]: Switching root. 
Sep 12 17:47:42.463624 systemd-journald[205]: Journal stopped Sep 12 17:47:48.381156 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:47:48.381192 kernel: SELinux: policy capability open_perms=1 Sep 12 17:47:48.381205 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:47:48.381214 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:47:48.381222 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:47:48.381231 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:47:48.381241 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:47:48.381251 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:47:48.381260 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 17:47:48.381270 kernel: audit: type=1403 audit(1757699263.936:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:47:48.381281 systemd[1]: Successfully loaded SELinux policy in 140.624ms. Sep 12 17:47:48.381294 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 3.999ms. Sep 12 17:47:48.381306 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:47:48.381316 systemd[1]: Detected virtualization microsoft. Sep 12 17:47:48.381329 systemd[1]: Detected architecture x86-64. Sep 12 17:47:48.381338 systemd[1]: Detected first boot. Sep 12 17:47:48.381349 systemd[1]: Hostname set to <ci-4426.1.0-a-9facd647c1>. Sep 12 17:47:48.381358 systemd[1]: Initializing machine ID from random generator. Sep 12 17:47:48.383763 zram_generator::config[1190]: No configuration found. Sep 12 17:47:48.383789 kernel: Guest personality initialized and is inactive Sep 12 17:47:48.383801 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Sep 12 17:47:48.383811 kernel: Initialized host personality Sep 12 17:47:48.383820 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:47:48.383831 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:47:48.383843 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:47:48.383853 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:47:48.383865 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:47:48.383875 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:47:48.383885 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:47:48.383895 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:47:48.383904 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:47:48.383914 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:47:48.383923 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:47:48.383934 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:47:48.383944 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:47:48.383954 systemd[1]: Created slice user.slice - User and Session Slice.

Sep 12 17:47:48.383964 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:47:48.383974 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:47:48.383983 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:47:48.383993 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:47:48.384006 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:47:48.384017 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:47:48.384028 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:47:48.384038 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:47:48.384048 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:47:48.384058 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:47:48.384068 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:47:48.384078 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:47:48.384088 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:47:48.384100 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:47:48.384110 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:47:48.384120 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:47:48.384130 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:47:48.384140 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:47:48.384149 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:47:48.384162 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:47:48.384172 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:47:48.384182 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:47:48.384192 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:47:48.384202 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:47:48.384212 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:47:48.384222 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:47:48.384234 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:47:48.384245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:48.384255 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:47:48.384265 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:47:48.384275 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:47:48.384285 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:47:48.384296 systemd[1]: Reached target machines.target - Containers. 
Sep 12 17:47:48.384306 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:47:48.384318 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:47:48.384328 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:47:48.384338 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:47:48.384348 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:47:48.384358 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:47:48.384368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:47:48.384378 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:47:48.384388 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:47:48.384400 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:47:48.384411 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:47:48.384421 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:47:48.384431 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:47:48.384441 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:47:48.384452 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:47:48.384462 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:47:48.384472 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:47:48.384482 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:47:48.384494 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:47:48.384504 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:47:48.384514 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:47:48.384525 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:47:48.384535 systemd[1]: Stopped verity-setup.service. Sep 12 17:47:48.384545 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:48.384556 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:47:48.384566 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:47:48.384577 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:47:48.384588 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:47:48.384598 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:47:48.384608 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:47:48.384618 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:47:48.384628 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 12 17:47:48.384638 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:47:48.384648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:47:48.384660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:47:48.384670 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:47:48.384680 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:47:48.384690 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:47:48.384700 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:47:48.387815 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:47:48.387855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:47:48.387870 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:47:48.387880 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:47:48.387894 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:47:48.387904 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:47:48.387914 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:47:48.387924 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:47:48.387934 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:47:48.387967 systemd-journald[1276]: Collecting audit messages is disabled. Sep 12 17:47:48.387998 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:47:48.388011 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:47:48.388022 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:47:48.388033 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:47:48.388045 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:47:48.388056 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:47:48.388067 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:47:48.388078 systemd-journald[1276]: Journal started Sep 12 17:47:48.388102 systemd-journald[1276]: Runtime Journal (/run/log/journal/e6b72704948b4e44b600206a2f596e7a) is 8M, max 158.9M, 150.9M free. Sep 12 17:47:47.859142 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:47:47.870303 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 17:47:48.392120 kernel: fuse: init (API version 7.41) Sep 12 17:47:47.870655 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:47:48.404263 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:47:48.404423 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:47:48.404926 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Sep 12 17:47:48.409595 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:47:48.428027 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:47:48.433809 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:47:48.440870 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:47:48.445218 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:47:48.447431 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:47:48.450552 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:47:48.466908 kernel: loop: module loaded Sep 12 17:47:48.467343 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:47:48.467493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:47:48.468935 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:47:48.613712 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Sep 12 17:47:48.613753 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Sep 12 17:47:48.616779 systemd-journald[1276]: Time spent on flushing to /var/log/journal/e6b72704948b4e44b600206a2f596e7a is 24.852ms for 1000 entries. Sep 12 17:47:48.616779 systemd-journald[1276]: System Journal (/var/log/journal/e6b72704948b4e44b600206a2f596e7a) is 8M, max 2.6G, 2.6G free. Sep 12 17:47:49.294092 systemd-journald[1276]: Received client request to flush runtime journal. Sep 12 17:47:49.294157 kernel: loop0: detected capacity change from 0 to 128016 Sep 12 17:47:49.294175 kernel: ACPI: bus type drm_connector registered Sep 12 17:47:48.620780 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:47:48.748016 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:47:48.748215 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:47:48.755796 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:47:48.761875 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:47:48.767526 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:47:48.771090 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:47:48.780871 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:47:49.295161 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:47:49.851967 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:47:49.855936 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:47:49.875518 systemd-tmpfiles[1350]: ACLs are not supported, ignoring. Sep 12 17:47:49.875536 systemd-tmpfiles[1350]: ACLs are not supported, ignoring. Sep 12 17:47:49.877907 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 17:47:50.242790 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:47:50.309762 kernel: loop1: detected capacity change from 0 to 29272 Sep 12 17:47:51.892750 kernel: loop2: detected capacity change from 0 to 221472 Sep 12 17:47:52.806073 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:47:52.810172 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:47:52.836453 systemd-udevd[1358]: Using default interface naming scheme 'v255'. Sep 12 17:47:52.873744 kernel: loop3: detected capacity change from 0 to 111000 Sep 12 17:47:53.562464 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:47:53.569391 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:47:53.592121 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:47:53.641748 kernel: hv_vmbus: registering driver hv_balloon Sep 12 17:47:53.644010 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 12 17:47:53.649753 kernel: hv_vmbus: registering driver hyperv_fb Sep 12 17:47:53.652812 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 12 17:47:53.654836 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 12 17:47:53.655749 kernel: Console: switching to colour dummy device 80x25 Sep 12 17:47:53.659874 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:47:53.677132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:47:53.685055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:47:53.685228 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:47:53.690101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:47:53.814758 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:47:53.826540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:47:53.827029 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:47:53.829581 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:47:53.833869 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:47:53.933748 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#296 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Sep 12 17:47:54.029245 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:47:54.065855 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:47:54.557470 systemd-networkd[1366]: lo: Link UP Sep 12 17:47:54.557817 systemd-networkd[1366]: lo: Gained carrier Sep 12 17:47:54.559170 systemd-networkd[1366]: Enumeration completed Sep 12 17:47:54.559286 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:47:54.559473 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:47:54.559476 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:47:54.561664 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Sep 12 17:47:54.563760 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Sep 12 17:47:54.567207 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:47:54.571746 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Sep 12 17:47:54.571986 kernel: hv_netvsc f8615163-0000-1000-2000-7ced8d2da279 eth0: Data path switched to VF: enP30832s1 Sep 12 17:47:54.575115 systemd-networkd[1366]: enP30832s1: Link UP Sep 12 17:47:54.575199 systemd-networkd[1366]: eth0: Link UP Sep 12 17:47:54.575202 systemd-networkd[1366]: eth0: Gained carrier Sep 12 17:47:54.575217 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:47:54.583952 systemd-networkd[1366]: enP30832s1: Gained carrier Sep 12 17:47:54.593773 systemd-networkd[1366]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 12 17:47:54.623744 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Sep 12 17:47:54.671037 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:47:54.724747 kernel: loop4: detected capacity change from 0 to 128016 Sep 12 17:47:54.733741 kernel: loop5: detected capacity change from 0 to 29272 Sep 12 17:47:54.742754 kernel: loop6: detected capacity change from 0 to 221472 Sep 12 17:47:55.114096 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Sep 12 17:47:55.117846 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:47:55.364405 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:47:55.554768 kernel: loop7: detected capacity change from 0 to 111000 Sep 12 17:47:55.562394 (sd-merge)[1436]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 12 17:47:55.562781 (sd-merge)[1436]: Merged extensions into '/usr'. Sep 12 17:47:55.742893 systemd-networkd[1366]: eth0: Gained IPv6LL Sep 12 17:47:55.744882 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:47:55.839658 systemd[1]: Reload requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:47:55.839672 systemd[1]: Reloading... Sep 12 17:47:55.891804 zram_generator::config[1481]: No configuration found. Sep 12 17:47:56.072760 systemd[1]: Reloading finished in 232 ms. Sep 12 17:47:56.102580 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:47:56.104544 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:47:56.115584 systemd[1]: Starting ensure-sysext.service... Sep 12 17:47:56.118864 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:47:56.138587 systemd[1]: Reload requested from client PID 1540 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:47:56.138607 systemd[1]: Reloading... Sep 12 17:47:56.166988 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 17:47:56.167017 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
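As logged above, eth0 is matched by the generic /usr/lib/systemd/network/zz-default.network unit and obtains its DHCPv4 lease from the platform DHCP server. The shipped unit is not reproduced in the log; the sketch below writes a catch-all DHCP unit of roughly that shape as a local override, purely as an assumption for illustration.

    # Sketch only: a catch-all DHCP .network unit that would explain the match and
    # the DHCPv4 lease logged above. The real zz-default.network may differ.
    from pathlib import Path

    UNIT = (
        "[Match]\n"
        "Name=*\n"
        "\n"
        "[Network]\n"
        "DHCP=yes\n"
    )

    # Files in /etc/systemd/network take precedence over /usr/lib/systemd/network,
    # so this acts as a local override without touching the read-only /usr tree.
    target = Path("/etc/systemd/network/zz-default.network")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(UNIT)

Writing such an override to /etc is the usual way to pin interface behavior on Flatcar, since /usr is mounted read-only.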
Sep 12 17:47:56.167268 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:47:56.167489 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:47:56.168177 systemd-tmpfiles[1541]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:47:56.168417 systemd-tmpfiles[1541]: ACLs are not supported, ignoring. Sep 12 17:47:56.168463 systemd-tmpfiles[1541]: ACLs are not supported, ignoring. Sep 12 17:47:56.171835 systemd-tmpfiles[1541]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:47:56.171847 systemd-tmpfiles[1541]: Skipping /boot Sep 12 17:47:56.179877 systemd-tmpfiles[1541]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:47:56.179892 systemd-tmpfiles[1541]: Skipping /boot Sep 12 17:47:56.192750 zram_generator::config[1571]: No configuration found. Sep 12 17:47:56.369225 systemd[1]: Reloading finished in 230 ms. Sep 12 17:47:56.398330 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:47:56.407371 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:47:56.411422 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:47:56.417577 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:47:56.422253 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:47:56.427676 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:47:56.433834 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:56.433980 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:47:56.438384 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:47:56.442463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:47:56.447713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:47:56.449876 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:47:56.450001 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:47:56.450102 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:56.461898 systemd[1]: Finished ensure-sysext.service. Sep 12 17:47:56.464372 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:47:56.466855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:47:56.466991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:47:56.471973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:47:56.472108 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:47:56.474257 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 12 17:47:56.474390 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:47:56.479656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:56.480299 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:47:56.481206 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:47:56.484919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:47:56.484957 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:47:56.484993 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:47:56.485031 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:47:56.485064 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:47:56.488818 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:47:56.496935 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:47:56.497098 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:47:56.709284 systemd-resolved[1634]: Positive Trust Anchors: Sep 12 17:47:56.709295 systemd-resolved[1634]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:47:56.709327 systemd-resolved[1634]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:47:56.712420 systemd-resolved[1634]: Using system hostname 'ci-4426.1.0-a-9facd647c1'. Sep 12 17:47:56.713817 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:47:56.715617 systemd[1]: Reached target network.target - Network. Sep 12 17:47:56.716844 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:47:56.719763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:47:57.020162 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:47:57.801876 augenrules[1667]: No rules Sep 12 17:47:57.803012 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:47:57.803231 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:47:58.506879 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Sep 12 17:47:58.509971 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:48:02.726478 ldconfig[1285]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:48:02.736357 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:48:02.739137 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:48:02.753199 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:48:02.755954 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:48:02.757421 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:48:02.760799 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:48:02.763788 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 12 17:48:02.765994 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:48:02.767241 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:48:02.770770 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:48:02.773774 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:48:02.773806 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:48:02.776765 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:48:02.778655 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:48:02.782602 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:48:02.785315 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:48:02.788900 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:48:02.791778 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:48:02.801198 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:48:02.804014 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:48:02.807284 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:48:02.809391 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:48:02.812775 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:48:02.815817 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:48:02.815840 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:48:02.817934 systemd[1]: Starting chronyd.service - NTP client/server... Sep 12 17:48:02.821827 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:48:02.825040 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:48:02.832848 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:48:02.836543 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 12 17:48:02.840294 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:48:02.856765 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:48:02.859137 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:48:02.860884 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 12 17:48:02.863323 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Sep 12 17:48:02.865071 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Sep 12 17:48:02.870871 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Sep 12 17:48:02.873780 jq[1687]: false Sep 12 17:48:02.875841 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:02.879096 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:48:02.884033 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:48:02.886563 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:48:02.891865 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:48:02.896371 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Refreshing passwd entry cache Sep 12 17:48:02.895175 oslogin_cache_refresh[1689]: Refreshing passwd entry cache Sep 12 17:48:02.900967 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:48:02.906003 chronyd[1679]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Sep 12 17:48:02.907613 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:48:02.910203 chronyd[1679]: Timezone right/UTC failed leap second check, ignoring Sep 12 17:48:02.910504 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:48:02.910346 chronyd[1679]: Loaded seccomp filter (level 2) Sep 12 17:48:02.910940 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:48:02.912913 KVP[1690]: KVP starting; pid is:1690 Sep 12 17:48:02.913351 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:48:02.921110 KVP[1690]: KVP LIC Version: 3.1 Sep 12 17:48:02.921376 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:48:02.921809 kernel: hv_utils: KVP IC version 4.0 Sep 12 17:48:02.926686 extend-filesystems[1688]: Found /dev/nvme0n1p6 Sep 12 17:48:02.929865 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Failure getting users, quitting Sep 12 17:48:02.929865 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 17:48:02.929865 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Refreshing group entry cache Sep 12 17:48:02.928635 systemd[1]: Started chronyd.service - NTP client/server. 
Sep 12 17:48:02.928175 oslogin_cache_refresh[1689]: Failure getting users, quitting Sep 12 17:48:02.928257 oslogin_cache_refresh[1689]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 12 17:48:02.928297 oslogin_cache_refresh[1689]: Refreshing group entry cache Sep 12 17:48:02.934782 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:48:02.935973 jq[1703]: true Sep 12 17:48:02.938057 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:48:02.938780 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:48:02.942847 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Failure getting groups, quitting Sep 12 17:48:02.942847 google_oslogin_nss_cache[1689]: oslogin_cache_refresh[1689]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 17:48:02.942839 oslogin_cache_refresh[1689]: Failure getting groups, quitting Sep 12 17:48:02.942848 oslogin_cache_refresh[1689]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 12 17:48:02.944160 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 12 17:48:02.944356 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 12 17:48:02.946993 extend-filesystems[1688]: Found /dev/nvme0n1p9 Sep 12 17:48:02.951518 extend-filesystems[1688]: Checking size of /dev/nvme0n1p9 Sep 12 17:48:02.965547 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:48:02.965764 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:48:02.983987 (ntainerd)[1729]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:48:02.998325 extend-filesystems[1688]: Old size kept for /dev/nvme0n1p9 Sep 12 17:48:03.000204 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:48:03.002817 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:48:03.005648 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:48:03.005850 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:48:03.011239 update_engine[1701]: I20250912 17:48:03.011169 1701 main.cc:92] Flatcar Update Engine starting Sep 12 17:48:03.021614 jq[1712]: true Sep 12 17:48:03.020445 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:48:03.025612 tar[1709]: linux-amd64/helm Sep 12 17:48:03.040976 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:48:03.040870 dbus-daemon[1682]: [system] SELinux support is enabled Sep 12 17:48:03.047700 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:48:03.048775 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:48:03.051066 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:48:03.051089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:48:03.067338 systemd[1]: Started update-engine.service - Update Engine. 
Sep 12 17:48:03.068318 update_engine[1701]: I20250912 17:48:03.068092 1701 update_check_scheduler.cc:74] Next update check in 6m34s Sep 12 17:48:03.075166 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:48:03.132472 systemd-logind[1699]: New seat seat0. Sep 12 17:48:03.135547 systemd-logind[1699]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:48:03.135678 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:48:03.166083 bash[1769]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:48:03.168117 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:48:03.171425 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:48:03.190806 coreos-metadata[1681]: Sep 12 17:48:03.190 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 12 17:48:03.209017 coreos-metadata[1681]: Sep 12 17:48:03.208 INFO Fetch successful Sep 12 17:48:03.209017 coreos-metadata[1681]: Sep 12 17:48:03.208 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 12 17:48:03.223021 coreos-metadata[1681]: Sep 12 17:48:03.222 INFO Fetch successful Sep 12 17:48:03.224856 coreos-metadata[1681]: Sep 12 17:48:03.223 INFO Fetching http://168.63.129.16/machine/b2fb49e0-d27b-4918-9fb2-37ae08372894/bdd84fc1%2D94f0%2D4454%2Db74d%2Dd5baa18b546b.%5Fci%2D4426.1.0%2Da%2D9facd647c1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 12 17:48:03.231128 coreos-metadata[1681]: Sep 12 17:48:03.230 INFO Fetch successful Sep 12 17:48:03.231128 coreos-metadata[1681]: Sep 12 17:48:03.231 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 12 17:48:03.244829 coreos-metadata[1681]: Sep 12 17:48:03.244 INFO Fetch successful Sep 12 17:48:03.302241 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:48:03.305339 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:48:03.401120 locksmithd[1756]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:48:03.493387 sshd_keygen[1724]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:48:03.542493 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:48:03.546936 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:48:03.550741 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 12 17:48:03.576655 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:48:03.576859 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:48:03.584330 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:48:03.599868 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 12 17:48:03.615128 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:48:03.621009 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:48:03.625744 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:48:03.630302 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:48:03.680971 tar[1709]: linux-amd64/LICENSE Sep 12 17:48:03.681104 tar[1709]: linux-amd64/README.md Sep 12 17:48:03.695301 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
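The coreos-metadata entries above pull instance data from two Azure endpoints that appear verbatim in the log: the WireServer at 168.63.129.16 and the Instance Metadata Service (IMDS) at 169.254.169.254, ending with a vmSize lookup. Purely as an illustration of that last request (this is not the agent's own code), a minimal Go sketch against the same IMDS URL, assuming the usual IMDS requirement that requests carry a "Metadata: true" header:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // Same endpoint the metadata agent queries in the log above.
        url := "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"

        req, err := http.NewRequest(http.MethodGet, url, nil)
        if err != nil {
            log.Fatal(err)
        }
        // IMDS rejects requests without this header (assumption based on Azure IMDS conventions,
        // not something shown in the log itself).
        req.Header.Set("Metadata", "true")

        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        body, err := io.ReadAll(resp.Body)
        if err != nil {
            log.Fatal(err)
        }
        // For the fetch logged above, this prints the VM size as plain text.
        fmt.Printf("vmSize: %s\n", body)
    }

The WireServer fetches earlier in the same sequence (?comp=versions, ?comp=goalstate) follow the same HTTP pattern but speak the wire protocol that waagent negotiates later in the log.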
Sep 12 17:48:03.822329 containerd[1729]: time="2025-09-12T17:48:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:48:03.822329 containerd[1729]: time="2025-09-12T17:48:03.821201152Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832247042Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.444µs" Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832274400Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832292733Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832423756Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832436403Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832457042Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832503222Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832516310Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832748766Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832763572Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832774679Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:48:03.832972 containerd[1729]: time="2025-09-12T17:48:03.832782962Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:48:03.833268 containerd[1729]: time="2025-09-12T17:48:03.832857485Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:48:03.833268 containerd[1729]: time="2025-09-12T17:48:03.833015322Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:48:03.833268 containerd[1729]: time="2025-09-12T17:48:03.833036119Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Sep 12 17:48:03.833268 containerd[1729]: time="2025-09-12T17:48:03.833060967Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:48:03.833268 containerd[1729]: time="2025-09-12T17:48:03.833084222Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:48:03.833366 containerd[1729]: time="2025-09-12T17:48:03.833293153Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:48:03.833366 containerd[1729]: time="2025-09-12T17:48:03.833335162Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845784190Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845834060Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845851877Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845865503Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845878703Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845889839Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845904114Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845916722Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845927617Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845943745Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845954417Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.845966754Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.846067610Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:48:03.846608 containerd[1729]: time="2025-09-12T17:48:03.846083045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846102102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846118462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846131139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846143292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846154132Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846165182Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846180770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846193596Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846204065Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846272512Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846289043Z" level=info msg="Start snapshots syncer" Sep 12 17:48:03.846961 containerd[1729]: time="2025-09-12T17:48:03.846308079Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:48:03.847196 containerd[1729]: time="2025-09-12T17:48:03.846540666Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:48:03.847196 containerd[1729]: time="2025-09-12T17:48:03.846586659Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:48:03.847329 containerd[1729]: time="2025-09-12T17:48:03.846648894Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:48:03.847458 containerd[1729]: time="2025-09-12T17:48:03.847387816Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:48:03.847458 containerd[1729]: time="2025-09-12T17:48:03.847422796Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:48:03.847458 containerd[1729]: time="2025-09-12T17:48:03.847437143Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:48:03.847458 containerd[1729]: time="2025-09-12T17:48:03.847452595Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:48:03.847578 containerd[1729]: time="2025-09-12T17:48:03.847466985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:48:03.847578 containerd[1729]: time="2025-09-12T17:48:03.847478175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:48:03.847578 containerd[1729]: time="2025-09-12T17:48:03.847490135Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:48:03.847578 containerd[1729]: time="2025-09-12T17:48:03.847513263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:48:03.847578 containerd[1729]: 
time="2025-09-12T17:48:03.847523929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:48:03.847578 containerd[1729]: time="2025-09-12T17:48:03.847534939Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:48:03.847578 containerd[1729]: time="2025-09-12T17:48:03.847574424Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847589306Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847598577Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847608586Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847629211Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847638939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847648405Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847663590Z" level=info msg="runtime interface created" Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847669041Z" level=info msg="created NRI interface" Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847677246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847689498Z" level=info msg="Connect containerd service" Sep 12 17:48:03.847985 containerd[1729]: time="2025-09-12T17:48:03.847715371Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:48:03.848755 containerd[1729]: time="2025-09-12T17:48:03.848481283Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:48:04.210260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:04.214300 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:48:04.384122 containerd[1729]: time="2025-09-12T17:48:04.384052137Z" level=info msg="Start subscribing containerd event" Sep 12 17:48:04.384212 containerd[1729]: time="2025-09-12T17:48:04.384168172Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:48:04.384240 containerd[1729]: time="2025-09-12T17:48:04.384216825Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 12 17:48:04.384430 containerd[1729]: time="2025-09-12T17:48:04.384284196Z" level=info msg="Start recovering state" Sep 12 17:48:04.384430 containerd[1729]: time="2025-09-12T17:48:04.384385309Z" level=info msg="Start event monitor" Sep 12 17:48:04.384430 containerd[1729]: time="2025-09-12T17:48:04.384396115Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:48:04.384430 containerd[1729]: time="2025-09-12T17:48:04.384405030Z" level=info msg="Start streaming server" Sep 12 17:48:04.384430 containerd[1729]: time="2025-09-12T17:48:04.384415611Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:48:04.384626 containerd[1729]: time="2025-09-12T17:48:04.384549797Z" level=info msg="runtime interface starting up..." Sep 12 17:48:04.384626 containerd[1729]: time="2025-09-12T17:48:04.384558659Z" level=info msg="starting plugins..." Sep 12 17:48:04.384626 containerd[1729]: time="2025-09-12T17:48:04.384570766Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:48:04.384975 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:48:04.388375 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:48:04.390574 containerd[1729]: time="2025-09-12T17:48:04.384900156Z" level=info msg="containerd successfully booted in 0.564632s" Sep 12 17:48:04.391239 systemd[1]: Startup finished in 3.266s (kernel) + 10.611s (initrd) + 20.593s (userspace) = 34.471s. Sep 12 17:48:04.669066 login[1825]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 17:48:04.670347 login[1826]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 17:48:04.685707 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:48:04.689137 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:48:04.695478 systemd-logind[1699]: New session 1 of user core. Sep 12 17:48:04.702491 systemd-logind[1699]: New session 2 of user core. Sep 12 17:48:04.711444 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:48:04.714062 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:48:04.725320 (systemd)[1861]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:48:04.729224 systemd-logind[1699]: New session c1 of user core. Sep 12 17:48:04.751189 kubelet[1845]: E0912 17:48:04.751161 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:48:04.754146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:48:04.754378 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:48:04.754903 systemd[1]: kubelet.service: Consumed 932ms CPU time, 265.7M memory peak. Sep 12 17:48:04.886896 systemd[1861]: Queued start job for default target default.target. Sep 12 17:48:04.896477 systemd[1861]: Created slice app.slice - User Application Slice. Sep 12 17:48:04.896508 systemd[1861]: Reached target paths.target - Paths. Sep 12 17:48:04.896538 systemd[1861]: Reached target timers.target - Timers. 
Sep 12 17:48:04.898835 systemd[1861]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:48:04.913062 systemd[1861]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:48:04.913161 systemd[1861]: Reached target sockets.target - Sockets. Sep 12 17:48:04.913243 systemd[1861]: Reached target basic.target - Basic System. Sep 12 17:48:04.913293 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:48:04.913837 systemd[1861]: Reached target default.target - Main User Target. Sep 12 17:48:04.913871 systemd[1861]: Startup finished in 175ms. Sep 12 17:48:04.920859 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:48:04.921376 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:48:05.026304 waagent[1823]: 2025-09-12T17:48:05.026234Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.027218Z INFO Daemon Daemon OS: flatcar 4426.1.0 Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.027551Z INFO Daemon Daemon Python: 3.11.13 Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.028315Z INFO Daemon Daemon Run daemon Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.028656Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4426.1.0' Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.029142Z INFO Daemon Daemon Using waagent for provisioning Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.029305Z INFO Daemon Daemon Activate resource disk Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.030004Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.031498Z INFO Daemon Daemon Found device: None Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.031643Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.031700Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.032380Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:48:05.040176 waagent[1823]: 2025-09-12T17:48:05.032629Z INFO Daemon Daemon Running default provisioning handler Sep 12 17:48:05.056930 waagent[1823]: 2025-09-12T17:48:05.047351Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Sep 12 17:48:05.056930 waagent[1823]: 2025-09-12T17:48:05.047909Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 12 17:48:05.056930 waagent[1823]: 2025-09-12T17:48:05.048281Z INFO Daemon Daemon cloud-init is enabled: False Sep 12 17:48:05.056930 waagent[1823]: 2025-09-12T17:48:05.048567Z INFO Daemon Daemon Copying ovf-env.xml Sep 12 17:48:05.124640 waagent[1823]: 2025-09-12T17:48:05.124592Z INFO Daemon Daemon Successfully mounted dvd Sep 12 17:48:05.150629 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Sep 12 17:48:05.151188 waagent[1823]: 2025-09-12T17:48:05.151149Z INFO Daemon Daemon Detect protocol endpoint Sep 12 17:48:05.152239 waagent[1823]: 2025-09-12T17:48:05.152203Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 12 17:48:05.152652 waagent[1823]: 2025-09-12T17:48:05.152628Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 12 17:48:05.152812 waagent[1823]: 2025-09-12T17:48:05.152793Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 12 17:48:05.153178 waagent[1823]: 2025-09-12T17:48:05.153153Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 12 17:48:05.153421 waagent[1823]: 2025-09-12T17:48:05.153403Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 12 17:48:05.165221 waagent[1823]: 2025-09-12T17:48:05.165190Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 12 17:48:05.167107 waagent[1823]: 2025-09-12T17:48:05.167090Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 12 17:48:05.168786 waagent[1823]: 2025-09-12T17:48:05.168208Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 12 17:48:05.284918 waagent[1823]: 2025-09-12T17:48:05.284865Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 12 17:48:05.286603 waagent[1823]: 2025-09-12T17:48:05.286521Z INFO Daemon Daemon Forcing an update of the goal state. Sep 12 17:48:05.291323 waagent[1823]: 2025-09-12T17:48:05.291287Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:48:05.307024 waagent[1823]: 2025-09-12T17:48:05.306994Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Sep 12 17:48:05.308585 waagent[1823]: 2025-09-12T17:48:05.308550Z INFO Daemon Sep 12 17:48:05.309480 waagent[1823]: 2025-09-12T17:48:05.309409Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f1ca8b46-1877-4784-abfa-622d555404bd eTag: 16923399743758274578 source: Fabric] Sep 12 17:48:05.312014 waagent[1823]: 2025-09-12T17:48:05.311988Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 12 17:48:05.313584 waagent[1823]: 2025-09-12T17:48:05.313559Z INFO Daemon Sep 12 17:48:05.314265 waagent[1823]: 2025-09-12T17:48:05.314199Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:48:05.319420 waagent[1823]: 2025-09-12T17:48:05.319392Z INFO Daemon Daemon Downloading artifacts profile blob Sep 12 17:48:05.380239 waagent[1823]: 2025-09-12T17:48:05.380192Z INFO Daemon Downloaded certificate {'thumbprint': '67FD50F5F5CB7F66216840DBB2EBD7669C375D9C', 'hasPrivateKey': True} Sep 12 17:48:05.382774 waagent[1823]: 2025-09-12T17:48:05.382699Z INFO Daemon Fetch goal state completed Sep 12 17:48:05.389084 waagent[1823]: 2025-09-12T17:48:05.389028Z INFO Daemon Daemon Starting provisioning Sep 12 17:48:05.389677 waagent[1823]: 2025-09-12T17:48:05.389389Z INFO Daemon Daemon Handle ovf-env.xml. Sep 12 17:48:05.391078 waagent[1823]: 2025-09-12T17:48:05.390620Z INFO Daemon Daemon Set hostname [ci-4426.1.0-a-9facd647c1] Sep 12 17:48:05.411988 waagent[1823]: 2025-09-12T17:48:05.411951Z INFO Daemon Daemon Publish hostname [ci-4426.1.0-a-9facd647c1] Sep 12 17:48:05.418074 waagent[1823]: 2025-09-12T17:48:05.412625Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 12 17:48:05.418074 waagent[1823]: 2025-09-12T17:48:05.412928Z INFO Daemon Daemon Primary interface is [eth0] Sep 12 17:48:05.420205 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 12 17:48:05.420212 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:48:05.420234 systemd-networkd[1366]: eth0: DHCP lease lost Sep 12 17:48:05.421045 waagent[1823]: 2025-09-12T17:48:05.421007Z INFO Daemon Daemon Create user account if not exists Sep 12 17:48:05.422073 waagent[1823]: 2025-09-12T17:48:05.421557Z INFO Daemon Daemon User core already exists, skip useradd Sep 12 17:48:05.422073 waagent[1823]: 2025-09-12T17:48:05.421636Z INFO Daemon Daemon Configure sudoer Sep 12 17:48:05.433639 waagent[1823]: 2025-09-12T17:48:05.433592Z INFO Daemon Daemon Configure sshd Sep 12 17:48:05.438144 waagent[1823]: 2025-09-12T17:48:05.438105Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 12 17:48:05.440589 waagent[1823]: 2025-09-12T17:48:05.438393Z INFO Daemon Daemon Deploy ssh public key. Sep 12 17:48:05.440851 systemd-networkd[1366]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16 Sep 12 17:48:06.509014 waagent[1823]: 2025-09-12T17:48:06.508951Z INFO Daemon Daemon Provisioning complete Sep 12 17:48:06.522719 waagent[1823]: 2025-09-12T17:48:06.522685Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 12 17:48:06.528387 waagent[1823]: 2025-09-12T17:48:06.523233Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 12 17:48:06.528387 waagent[1823]: 2025-09-12T17:48:06.523536Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Sep 12 17:48:06.626802 waagent[1916]: 2025-09-12T17:48:06.626720Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Sep 12 17:48:06.627044 waagent[1916]: 2025-09-12T17:48:06.626824Z INFO ExtHandler ExtHandler OS: flatcar 4426.1.0 Sep 12 17:48:06.627044 waagent[1916]: 2025-09-12T17:48:06.626865Z INFO ExtHandler ExtHandler Python: 3.11.13 Sep 12 17:48:06.627044 waagent[1916]: 2025-09-12T17:48:06.626902Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Sep 12 17:48:06.660825 waagent[1916]: 2025-09-12T17:48:06.660777Z INFO ExtHandler ExtHandler Distro: flatcar-4426.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Sep 12 17:48:06.660952 waagent[1916]: 2025-09-12T17:48:06.660926Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:48:06.661003 waagent[1916]: 2025-09-12T17:48:06.660980Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:48:06.675104 waagent[1916]: 2025-09-12T17:48:06.675046Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 12 17:48:06.684824 waagent[1916]: 2025-09-12T17:48:06.684795Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Sep 12 17:48:06.685146 waagent[1916]: 2025-09-12T17:48:06.685118Z INFO ExtHandler Sep 12 17:48:06.685188 waagent[1916]: 2025-09-12T17:48:06.685168Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 7820b4f6-18a0-4d0b-8b94-967d56942fd4 eTag: 16923399743758274578 source: Fabric] Sep 12 17:48:06.685381 waagent[1916]: 2025-09-12T17:48:06.685357Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 12 17:48:06.685699 waagent[1916]: 2025-09-12T17:48:06.685671Z INFO ExtHandler Sep 12 17:48:06.685757 waagent[1916]: 2025-09-12T17:48:06.685711Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 12 17:48:06.691522 waagent[1916]: 2025-09-12T17:48:06.691493Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 12 17:48:06.764508 waagent[1916]: 2025-09-12T17:48:06.764458Z INFO ExtHandler Downloaded certificate {'thumbprint': '67FD50F5F5CB7F66216840DBB2EBD7669C375D9C', 'hasPrivateKey': True} Sep 12 17:48:06.764871 waagent[1916]: 2025-09-12T17:48:06.764843Z INFO ExtHandler Fetch goal state completed Sep 12 17:48:06.783185 waagent[1916]: 2025-09-12T17:48:06.783141Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.1 11 Feb 2025 (Library: OpenSSL 3.4.1 11 Feb 2025) Sep 12 17:48:06.787161 waagent[1916]: 2025-09-12T17:48:06.787111Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1916 Sep 12 17:48:06.787265 waagent[1916]: 2025-09-12T17:48:06.787240Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 12 17:48:06.787494 waagent[1916]: 2025-09-12T17:48:06.787472Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Sep 12 17:48:06.788538 waagent[1916]: 2025-09-12T17:48:06.788503Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4426.1.0', '', 'Flatcar Container Linux by Kinvolk'] Sep 12 17:48:06.788847 waagent[1916]: 2025-09-12T17:48:06.788816Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4426.1.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Sep 12 17:48:06.788956 waagent[1916]: 2025-09-12T17:48:06.788933Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Sep 12 17:48:06.789335 waagent[1916]: 2025-09-12T17:48:06.789307Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 12 17:48:06.808402 waagent[1916]: 2025-09-12T17:48:06.808377Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 12 17:48:06.808526 waagent[1916]: 2025-09-12T17:48:06.808506Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 12 17:48:06.813744 waagent[1916]: 2025-09-12T17:48:06.813471Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 12 17:48:06.818610 systemd[1]: Reload requested from client PID 1931 ('systemctl') (unit waagent.service)... Sep 12 17:48:06.818625 systemd[1]: Reloading... Sep 12 17:48:06.898806 zram_generator::config[1969]: No configuration found. Sep 12 17:48:07.077664 systemd[1]: Reloading finished in 258 ms. Sep 12 17:48:07.096285 waagent[1916]: 2025-09-12T17:48:07.096230Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 12 17:48:07.096348 waagent[1916]: 2025-09-12T17:48:07.096329Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 12 17:48:07.268866 waagent[1916]: 2025-09-12T17:48:07.268809Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 12 17:48:07.269097 waagent[1916]: 2025-09-12T17:48:07.269072Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Sep 12 17:48:07.269786 waagent[1916]: 2025-09-12T17:48:07.269753Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 12 17:48:07.270037 waagent[1916]: 2025-09-12T17:48:07.270009Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 12 17:48:07.270331 waagent[1916]: 2025-09-12T17:48:07.270301Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 12 17:48:07.270371 waagent[1916]: 2025-09-12T17:48:07.270351Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:48:07.270422 waagent[1916]: 2025-09-12T17:48:07.270405Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:48:07.270548 waagent[1916]: 2025-09-12T17:48:07.270526Z INFO EnvHandler ExtHandler Configure routes Sep 12 17:48:07.270693 waagent[1916]: 2025-09-12T17:48:07.270661Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 12 17:48:07.270872 waagent[1916]: 2025-09-12T17:48:07.270851Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 12 17:48:07.271088 waagent[1916]: 2025-09-12T17:48:07.271040Z INFO EnvHandler ExtHandler Gateway:None Sep 12 17:48:07.271158 waagent[1916]: 2025-09-12T17:48:07.271135Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 12 17:48:07.271234 waagent[1916]: 2025-09-12T17:48:07.271215Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Sep 12 17:48:07.271333 waagent[1916]: 2025-09-12T17:48:07.271314Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 12 17:48:07.271388 waagent[1916]: 2025-09-12T17:48:07.271368Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 12 17:48:07.271542 waagent[1916]: 2025-09-12T17:48:07.271507Z INFO EnvHandler ExtHandler Routes:None Sep 12 17:48:07.271711 waagent[1916]: 2025-09-12T17:48:07.271686Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 12 17:48:07.273133 waagent[1916]: 2025-09-12T17:48:07.273101Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 12 17:48:07.273133 waagent[1916]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 12 17:48:07.273133 waagent[1916]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Sep 12 17:48:07.273133 waagent[1916]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 12 17:48:07.273133 waagent[1916]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:48:07.273133 waagent[1916]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:48:07.273133 waagent[1916]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 12 17:48:07.282746 waagent[1916]: 2025-09-12T17:48:07.282190Z INFO ExtHandler ExtHandler Sep 12 17:48:07.282746 waagent[1916]: 2025-09-12T17:48:07.282244Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 66139fab-9531-4847-9e64-dcd3277e8cac correlation 26670b66-82b0-4cef-a530-c209617df64e created: 2025-09-12T17:47:04.139648Z] Sep 12 17:48:07.283315 waagent[1916]: 2025-09-12T17:48:07.283273Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
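The MonitorHandler dump above prints the raw routing table from /proc/net/route, where the Destination and Gateway columns are IPv4 addresses in host (little-endian) byte order written as hexadecimal: 0108C80A is 10.200.8.1 (the default gateway acquired via DHCP earlier in the log), 10813FA8 is 168.63.129.16 (the WireServer), and FEA9FEA9 is 169.254.169.254. A small decoding helper, shown here only as a sketch of the format rather than anything the agent ships:

    package main

    import (
        "encoding/hex"
        "fmt"
        "log"
        "net"
    )

    // decodeRouteAddr converts one hex field from /proc/net/route
    // (IPv4 in little-endian host byte order, e.g. "0108C80A")
    // into a dotted-quad address.
    func decodeRouteAddr(field string) (net.IP, error) {
        raw, err := hex.DecodeString(field)
        if err != nil || len(raw) != 4 {
            return nil, fmt.Errorf("bad route field %q: %v", field, err)
        }
        // Reverse the byte order to get network order.
        return net.IPv4(raw[3], raw[2], raw[1], raw[0]), nil
    }

    func main() {
        // Fields taken from the routing table dump in the log above.
        for _, f := range []string{"00000000", "0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"} {
            ip, err := decodeRouteAddr(f)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s -> %s\n", f, ip)
        }
    }

Run against the fields in the dump, this reproduces the default gateway, the 10.200.8.0/24 subnet route, and the two Azure service addresses listed above.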
Sep 12 17:48:07.283697 waagent[1916]: 2025-09-12T17:48:07.283669Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Sep 12 17:48:07.315768 waagent[1916]: 2025-09-12T17:48:07.314909Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Sep 12 17:48:07.315768 waagent[1916]: Try `iptables -h' or 'iptables --help' for more information.) Sep 12 17:48:07.315768 waagent[1916]: 2025-09-12T17:48:07.315247Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 94A07D8B-320B-40E4-93AD-8468050D3FD2;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Sep 12 17:48:07.316823 waagent[1916]: 2025-09-12T17:48:07.316776Z INFO MonitorHandler ExtHandler Network interfaces: Sep 12 17:48:07.316823 waagent[1916]: Executing ['ip', '-a', '-o', 'link']: Sep 12 17:48:07.316823 waagent[1916]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 12 17:48:07.316823 waagent[1916]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:a2:79 brd ff:ff:ff:ff:ff:ff\ alias Network Device Sep 12 17:48:07.316823 waagent[1916]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:2d:a2:79 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Sep 12 17:48:07.316823 waagent[1916]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 12 17:48:07.316823 waagent[1916]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 12 17:48:07.316823 waagent[1916]: 2: eth0 inet 10.200.8.41/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 12 17:48:07.316823 waagent[1916]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 12 17:48:07.316823 waagent[1916]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 12 17:48:07.316823 waagent[1916]: 2: eth0 inet6 fe80::7eed:8dff:fe2d:a279/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 12 17:48:07.343845 waagent[1916]: 2025-09-12T17:48:07.343768Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Sep 12 17:48:07.343845 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:48:07.343845 waagent[1916]: pkts bytes target prot opt in out source destination Sep 12 17:48:07.343845 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:48:07.343845 waagent[1916]: pkts bytes target prot opt in out source destination Sep 12 17:48:07.343845 waagent[1916]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:48:07.343845 waagent[1916]: pkts bytes target prot opt in out source destination Sep 12 17:48:07.343845 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:48:07.343845 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:48:07.343845 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:48:07.346823 waagent[1916]: 2025-09-12T17:48:07.346791Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 12 17:48:07.346823 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:48:07.346823 waagent[1916]: pkts bytes target prot opt in out 
source destination Sep 12 17:48:07.346823 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:48:07.346823 waagent[1916]: pkts bytes target prot opt in out source destination Sep 12 17:48:07.346823 waagent[1916]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Sep 12 17:48:07.346823 waagent[1916]: pkts bytes target prot opt in out source destination Sep 12 17:48:07.346823 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 12 17:48:07.346823 waagent[1916]: 2 304 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 12 17:48:07.346823 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 12 17:48:14.762535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:48:14.764008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:15.452827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:15.460942 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:48:15.498106 kubelet[2068]: E0912 17:48:15.498067 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:48:15.501014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:48:15.501137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:48:15.501439 systemd[1]: kubelet.service: Consumed 132ms CPU time, 108.4M memory peak. Sep 12 17:48:21.035068 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:48:21.036134 systemd[1]: Started sshd@0-10.200.8.41:22-10.200.16.10:50002.service - OpenSSH per-connection server daemon (10.200.16.10:50002). Sep 12 17:48:21.727379 sshd[2076]: Accepted publickey for core from 10.200.16.10 port 50002 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:21.728535 sshd-session[2076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:21.732739 systemd-logind[1699]: New session 3 of user core. Sep 12 17:48:21.737856 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:48:22.285409 systemd[1]: Started sshd@1-10.200.8.41:22-10.200.16.10:50006.service - OpenSSH per-connection server daemon (10.200.16.10:50006). Sep 12 17:48:22.917025 sshd[2082]: Accepted publickey for core from 10.200.16.10 port 50006 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:22.918232 sshd-session[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:22.922565 systemd-logind[1699]: New session 4 of user core. Sep 12 17:48:22.929872 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:48:23.360830 sshd[2085]: Connection closed by 10.200.16.10 port 50006 Sep 12 17:48:23.361352 sshd-session[2082]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:23.364537 systemd[1]: sshd@1-10.200.8.41:22-10.200.16.10:50006.service: Deactivated successfully. Sep 12 17:48:23.366069 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:48:23.366691 systemd-logind[1699]: Session 4 logged out. Waiting for processes to exit. 
Sep 12 17:48:23.367819 systemd-logind[1699]: Removed session 4. Sep 12 17:48:23.485283 systemd[1]: Started sshd@2-10.200.8.41:22-10.200.16.10:50014.service - OpenSSH per-connection server daemon (10.200.16.10:50014). Sep 12 17:48:24.109763 sshd[2091]: Accepted publickey for core from 10.200.16.10 port 50014 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:24.110950 sshd-session[2091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:24.115291 systemd-logind[1699]: New session 5 of user core. Sep 12 17:48:24.121880 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:48:24.547638 sshd[2094]: Connection closed by 10.200.16.10 port 50014 Sep 12 17:48:24.548173 sshd-session[2091]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:24.551469 systemd[1]: sshd@2-10.200.8.41:22-10.200.16.10:50014.service: Deactivated successfully. Sep 12 17:48:24.552948 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:48:24.553583 systemd-logind[1699]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:48:24.554624 systemd-logind[1699]: Removed session 5. Sep 12 17:48:24.657196 systemd[1]: Started sshd@3-10.200.8.41:22-10.200.16.10:50016.service - OpenSSH per-connection server daemon (10.200.16.10:50016). Sep 12 17:48:25.285306 sshd[2100]: Accepted publickey for core from 10.200.16.10 port 50016 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:25.286434 sshd-session[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:25.290797 systemd-logind[1699]: New session 6 of user core. Sep 12 17:48:25.301863 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:48:25.512457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:48:25.513947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:25.724806 sshd[2103]: Connection closed by 10.200.16.10 port 50016 Sep 12 17:48:25.725418 sshd-session[2100]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:25.729321 systemd-logind[1699]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:48:25.731131 systemd[1]: sshd@3-10.200.8.41:22-10.200.16.10:50016.service: Deactivated successfully. Sep 12 17:48:25.733600 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:48:25.736197 systemd-logind[1699]: Removed session 6. Sep 12 17:48:25.838256 systemd[1]: Started sshd@4-10.200.8.41:22-10.200.16.10:50030.service - OpenSSH per-connection server daemon (10.200.16.10:50030). Sep 12 17:48:25.938643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:25.947954 (kubelet)[2120]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:48:25.981084 kubelet[2120]: E0912 17:48:25.981008 2120 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:48:25.982888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:48:25.983009 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 17:48:25.983277 systemd[1]: kubelet.service: Consumed 126ms CPU time, 108.3M memory peak. Sep 12 17:48:26.462156 sshd[2112]: Accepted publickey for core from 10.200.16.10 port 50030 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:26.463389 sshd-session[2112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:26.468123 systemd-logind[1699]: New session 7 of user core. Sep 12 17:48:26.473881 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:48:26.697183 chronyd[1679]: Selected source PHC0 Sep 12 17:48:26.897860 sudo[2128]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:48:26.898081 sudo[2128]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:48:26.925423 sudo[2128]: pam_unix(sudo:session): session closed for user root Sep 12 17:48:27.037114 sshd[2127]: Connection closed by 10.200.16.10 port 50030 Sep 12 17:48:27.037841 sshd-session[2112]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:27.041471 systemd[1]: sshd@4-10.200.8.41:22-10.200.16.10:50030.service: Deactivated successfully. Sep 12 17:48:27.043030 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:48:27.043709 systemd-logind[1699]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:48:27.044865 systemd-logind[1699]: Removed session 7. Sep 12 17:48:27.151451 systemd[1]: Started sshd@5-10.200.8.41:22-10.200.16.10:50046.service - OpenSSH per-connection server daemon (10.200.16.10:50046). Sep 12 17:48:27.776973 sshd[2134]: Accepted publickey for core from 10.200.16.10 port 50046 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:27.778134 sshd-session[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:27.782272 systemd-logind[1699]: New session 8 of user core. Sep 12 17:48:27.786878 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:48:28.119024 sudo[2139]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:48:28.119244 sudo[2139]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:48:28.125614 sudo[2139]: pam_unix(sudo:session): session closed for user root Sep 12 17:48:28.129708 sudo[2138]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:48:28.129944 sudo[2138]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:48:28.137446 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:48:28.168856 augenrules[2161]: No rules Sep 12 17:48:28.169712 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:48:28.170046 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:48:28.171097 sudo[2138]: pam_unix(sudo:session): session closed for user root Sep 12 17:48:28.271827 sshd[2137]: Connection closed by 10.200.16.10 port 50046 Sep 12 17:48:28.272210 sshd-session[2134]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:28.275366 systemd[1]: sshd@5-10.200.8.41:22-10.200.16.10:50046.service: Deactivated successfully. Sep 12 17:48:28.276814 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:48:28.277490 systemd-logind[1699]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:48:28.278530 systemd-logind[1699]: Removed session 8. 
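The kubelet restarts recorded above (and again further down) all fail for the same reason: /var/lib/kubelet/config.yaml is not on disk yet, so the unit exits with status=1 and systemd schedules the next restart. That file is typically written later by the node bootstrap (kubeadm or an equivalent provisioning step), after which the restart loop resolves on its own. A minimal Python sketch of the failing pre-flight check, for illustration only (not kubelet source):

import os
import sys

# Path taken from the error message logged above.
KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def preflight(path):
    """Mimic the failure mode in the log: exit non-zero if the config file is absent."""
    if not os.path.isfile(path):
        print(f"failed to load kubelet config file {path}: no such file or directory",
              file=sys.stderr)
        return 1  # systemd records status=1/FAILURE and schedules another restart
    return 0

if __name__ == "__main__":
    sys.exit(preflight(KUBELET_CONFIG))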
Sep 12 17:48:28.389170 systemd[1]: Started sshd@6-10.200.8.41:22-10.200.16.10:50056.service - OpenSSH per-connection server daemon (10.200.16.10:50056). Sep 12 17:48:29.013110 sshd[2170]: Accepted publickey for core from 10.200.16.10 port 50056 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:48:29.014246 sshd-session[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:48:29.018634 systemd-logind[1699]: New session 9 of user core. Sep 12 17:48:29.025875 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:48:29.353797 sudo[2174]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:48:29.354028 sudo[2174]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:48:30.326214 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:48:30.342008 (dockerd)[2192]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:48:31.368954 dockerd[2192]: time="2025-09-12T17:48:31.368893784Z" level=info msg="Starting up" Sep 12 17:48:31.369993 dockerd[2192]: time="2025-09-12T17:48:31.369945666Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:48:31.382154 dockerd[2192]: time="2025-09-12T17:48:31.382120942Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:48:31.450625 dockerd[2192]: time="2025-09-12T17:48:31.450599326Z" level=info msg="Loading containers: start." Sep 12 17:48:31.475745 kernel: Initializing XFRM netlink socket Sep 12 17:48:31.762462 systemd-networkd[1366]: docker0: Link UP Sep 12 17:48:31.774480 dockerd[2192]: time="2025-09-12T17:48:31.774447815Z" level=info msg="Loading containers: done." Sep 12 17:48:31.786584 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3304790391-merged.mount: Deactivated successfully. Sep 12 17:48:31.797825 dockerd[2192]: time="2025-09-12T17:48:31.797797181Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:48:31.797927 dockerd[2192]: time="2025-09-12T17:48:31.797860037Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:48:31.797952 dockerd[2192]: time="2025-09-12T17:48:31.797927348Z" level=info msg="Initializing buildkit" Sep 12 17:48:31.836328 dockerd[2192]: time="2025-09-12T17:48:31.836300655Z" level=info msg="Completed buildkit initialization" Sep 12 17:48:31.842311 dockerd[2192]: time="2025-09-12T17:48:31.842281129Z" level=info msg="Daemon has completed initialization" Sep 12 17:48:31.842482 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:48:31.842716 dockerd[2192]: time="2025-09-12T17:48:31.842676339Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:48:33.192177 containerd[1729]: time="2025-09-12T17:48:33.192139364Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:48:33.880212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088537690.mount: Deactivated successfully. 
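The dockerd overlay2 warning above means the running kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, so Docker falls back to its slower naive diff path when building images. A small sketch for checking that option on a live system, assuming the kernel exposes /proc/config.gz (IKCONFIG_PROC enabled; on other builds the config may instead live at /boot/config-<release>):

import gzip

def kernel_option(name, path="/proc/config.gz"):
    """Return the value of a kernel config option (e.g. 'y'), or None if not set."""
    with gzip.open(path, "rt") as cfg:
        for line in cfg:
            if line.startswith(name + "="):
                return line.rstrip().split("=", 1)[1]
    return None

print(kernel_option("CONFIG_OVERLAY_FS_REDIRECT_DIR"))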
Sep 12 17:48:34.932555 containerd[1729]: time="2025-09-12T17:48:34.932506491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:34.934633 containerd[1729]: time="2025-09-12T17:48:34.934598162Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117132" Sep 12 17:48:34.937765 containerd[1729]: time="2025-09-12T17:48:34.937710472Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:34.941678 containerd[1729]: time="2025-09-12T17:48:34.941633603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:34.942502 containerd[1729]: time="2025-09-12T17:48:34.942348315Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 1.75017053s" Sep 12 17:48:34.942502 containerd[1729]: time="2025-09-12T17:48:34.942383567Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:48:34.942932 containerd[1729]: time="2025-09-12T17:48:34.942912451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:48:36.012479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:48:36.015105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:36.528646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 17:48:36.540962 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:48:36.551744 containerd[1729]: time="2025-09-12T17:48:36.551685296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:36.558733 containerd[1729]: time="2025-09-12T17:48:36.558691343Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716640" Sep 12 17:48:36.563287 containerd[1729]: time="2025-09-12T17:48:36.562072165Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:36.566293 containerd[1729]: time="2025-09-12T17:48:36.566260686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:36.567740 containerd[1729]: time="2025-09-12T17:48:36.567688315Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.624688889s" Sep 12 17:48:36.568300 containerd[1729]: time="2025-09-12T17:48:36.567720822Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 17:48:36.569208 containerd[1729]: time="2025-09-12T17:48:36.569188566Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:48:36.582001 kubelet[2469]: E0912 17:48:36.581960 2469 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:48:36.583496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:48:36.583619 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:48:36.583928 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107.9M memory peak. 
Sep 12 17:48:37.610189 containerd[1729]: time="2025-09-12T17:48:37.610140973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:37.612611 containerd[1729]: time="2025-09-12T17:48:37.612576513Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787706" Sep 12 17:48:37.615066 containerd[1729]: time="2025-09-12T17:48:37.615027644Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:37.619288 containerd[1729]: time="2025-09-12T17:48:37.618570243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:37.619288 containerd[1729]: time="2025-09-12T17:48:37.619113586Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.049897277s" Sep 12 17:48:37.619288 containerd[1729]: time="2025-09-12T17:48:37.619140080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:48:37.619797 containerd[1729]: time="2025-09-12T17:48:37.619777444Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:48:38.454609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980322788.mount: Deactivated successfully. 
Sep 12 17:48:38.803278 containerd[1729]: time="2025-09-12T17:48:38.803233753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:38.805391 containerd[1729]: time="2025-09-12T17:48:38.805356226Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410260" Sep 12 17:48:38.808137 containerd[1729]: time="2025-09-12T17:48:38.808106459Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:38.812841 containerd[1729]: time="2025-09-12T17:48:38.812322696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:38.812841 containerd[1729]: time="2025-09-12T17:48:38.812714983Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 1.192911921s" Sep 12 17:48:38.812841 containerd[1729]: time="2025-09-12T17:48:38.812749228Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:48:38.813284 containerd[1729]: time="2025-09-12T17:48:38.813208434Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:48:39.382011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171963523.mount: Deactivated successfully. 
Sep 12 17:48:40.212703 containerd[1729]: time="2025-09-12T17:48:40.212652642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:40.216386 containerd[1729]: time="2025-09-12T17:48:40.216218838Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 12 17:48:40.219718 containerd[1729]: time="2025-09-12T17:48:40.219693482Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:40.223390 containerd[1729]: time="2025-09-12T17:48:40.223358773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:40.224461 containerd[1729]: time="2025-09-12T17:48:40.224102503Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.410750776s" Sep 12 17:48:40.224461 containerd[1729]: time="2025-09-12T17:48:40.224132769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:48:40.224963 containerd[1729]: time="2025-09-12T17:48:40.224931866Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:48:40.726415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418322324.mount: Deactivated successfully. 
Sep 12 17:48:40.748716 containerd[1729]: time="2025-09-12T17:48:40.748671934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:48:40.751387 containerd[1729]: time="2025-09-12T17:48:40.751354511Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 12 17:48:40.754254 containerd[1729]: time="2025-09-12T17:48:40.754209533Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:48:40.758571 containerd[1729]: time="2025-09-12T17:48:40.758530555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:48:40.759048 containerd[1729]: time="2025-09-12T17:48:40.758923761Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 533.964524ms" Sep 12 17:48:40.759048 containerd[1729]: time="2025-09-12T17:48:40.758952998Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:48:40.759621 containerd[1729]: time="2025-09-12T17:48:40.759452874Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:48:41.299843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3391906878.mount: Deactivated successfully. Sep 12 17:48:41.737756 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Sep 12 17:48:42.890559 containerd[1729]: time="2025-09-12T17:48:42.890510757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:42.892979 containerd[1729]: time="2025-09-12T17:48:42.892943676Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910717" Sep 12 17:48:42.896508 containerd[1729]: time="2025-09-12T17:48:42.896466843Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:42.900597 containerd[1729]: time="2025-09-12T17:48:42.900550894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:48:42.901478 containerd[1729]: time="2025-09-12T17:48:42.901241794Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.141761516s" Sep 12 17:48:42.901478 containerd[1729]: time="2025-09-12T17:48:42.901274380Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:48:45.210428 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:45.210586 systemd[1]: kubelet.service: Consumed 130ms CPU time, 107.9M memory peak. Sep 12 17:48:45.212603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:45.234816 systemd[1]: Reload requested from client PID 2627 ('systemctl') (unit session-9.scope)... Sep 12 17:48:45.234830 systemd[1]: Reloading... Sep 12 17:48:45.315775 zram_generator::config[2674]: No configuration found. Sep 12 17:48:45.513245 systemd[1]: Reloading finished in 278 ms. Sep 12 17:48:45.543429 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:48:45.543505 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:48:45.543802 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:45.543851 systemd[1]: kubelet.service: Consumed 67ms CPU time, 69.9M memory peak. Sep 12 17:48:45.545129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:46.085773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:46.089258 (kubelet)[2741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:48:46.125429 kubelet[2741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:48:46.125429 kubelet[2741]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
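For a rough sense of scale, the image sizes and pull durations reported above correspond to effective pull rates ranging from well under 1 MB/s (the tiny pause image, dominated by request overhead) up to roughly 27 MB/s (etcd). A quick arithmetic sketch using the numbers from the log:

# Sizes in bytes and durations in seconds, copied from the pull messages above.
pulls = {
    "kube-apiserver:v1.31.13":          (28_113_723, 1.75017053),
    "kube-controller-manager:v1.31.13": (26_351_311, 1.624688889),
    "kube-scheduler:v1.31.13":          (20_422_395, 1.049897277),
    "kube-proxy:v1.31.13":              (30_409_271, 1.192911921),
    "coredns:v1.11.3":                  (18_562_039, 1.410750776),
    "pause:3.10":                       (320_368,    0.533964524),
    "etcd:3.5.15-0":                    (56_909_194, 2.141761516),
}

for image, (size_bytes, seconds) in pulls.items():
    print(f"{image:<36} {size_bytes / seconds / 1e6:6.1f} MB/s")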
Sep 12 17:48:46.125429 kubelet[2741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:48:46.125677 kubelet[2741]: I0912 17:48:46.125481 2741 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:48:46.339834 kubelet[2741]: I0912 17:48:46.339292 2741 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:48:46.339834 kubelet[2741]: I0912 17:48:46.339320 2741 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:48:46.339834 kubelet[2741]: I0912 17:48:46.339792 2741 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:48:46.364032 kubelet[2741]: E0912 17:48:46.364007 2741 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:46.364524 kubelet[2741]: I0912 17:48:46.364508 2741 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:48:46.369878 kubelet[2741]: I0912 17:48:46.369864 2741 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:48:46.372942 kubelet[2741]: I0912 17:48:46.372924 2741 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:48:46.373021 kubelet[2741]: I0912 17:48:46.373009 2741 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:48:46.373133 kubelet[2741]: I0912 17:48:46.373107 2741 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:48:46.373263 kubelet[2741]: I0912 17:48:46.373132 2741 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4426.1.0-a-9facd647c1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:48:46.373371 kubelet[2741]: I0912 17:48:46.373273 2741 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:48:46.373371 kubelet[2741]: I0912 17:48:46.373281 2741 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:48:46.373412 kubelet[2741]: I0912 17:48:46.373371 2741 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:46.376024 kubelet[2741]: I0912 17:48:46.375530 2741 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:48:46.376024 kubelet[2741]: I0912 17:48:46.375550 2741 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:48:46.376024 kubelet[2741]: I0912 17:48:46.375585 2741 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:48:46.376024 kubelet[2741]: I0912 17:48:46.375602 2741 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:48:46.380648 kubelet[2741]: W0912 17:48:46.380594 2741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-a-9facd647c1&limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Sep 12 17:48:46.380777 kubelet[2741]: E0912 17:48:46.380762 2741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4426.1.0-a-9facd647c1&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:46.380918 kubelet[2741]: I0912 17:48:46.380910 2741 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:48:46.381368 kubelet[2741]: I0912 17:48:46.381359 2741 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:48:46.383097 kubelet[2741]: W0912 17:48:46.382248 2741 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:48:46.384332 kubelet[2741]: I0912 17:48:46.384318 2741 server.go:1274] "Started kubelet" Sep 12 17:48:46.386919 kubelet[2741]: W0912 17:48:46.386883 2741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Sep 12 17:48:46.387327 kubelet[2741]: E0912 17:48:46.387312 2741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:46.388879 kubelet[2741]: I0912 17:48:46.387934 2741 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:48:46.388879 kubelet[2741]: I0912 17:48:46.387963 2741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:48:46.388879 kubelet[2741]: I0912 17:48:46.388427 2741 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:48:46.388879 kubelet[2741]: I0912 17:48:46.388640 2741 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:48:46.390303 kubelet[2741]: I0912 17:48:46.390287 2741 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:48:46.395005 kubelet[2741]: I0912 17:48:46.394993 2741 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:48:46.395245 kubelet[2741]: I0912 17:48:46.395214 2741 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:48:46.396144 kubelet[2741]: I0912 17:48:46.396130 2741 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:48:46.396191 kubelet[2741]: I0912 17:48:46.396168 2741 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:48:46.397294 kubelet[2741]: W0912 17:48:46.397254 2741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Sep 12 17:48:46.397397 kubelet[2741]: E0912 17:48:46.397295 2741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: 
connection refused" logger="UnhandledError" Sep 12 17:48:46.397865 kubelet[2741]: E0912 17:48:46.397844 2741 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-9facd647c1\" not found" Sep 12 17:48:46.398113 kubelet[2741]: E0912 17:48:46.398093 2741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-9facd647c1?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="200ms" Sep 12 17:48:46.398249 kubelet[2741]: I0912 17:48:46.398236 2741 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:48:46.399302 kubelet[2741]: I0912 17:48:46.398352 2741 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:48:46.399302 kubelet[2741]: I0912 17:48:46.399208 2741 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:48:46.416380 kubelet[2741]: E0912 17:48:46.415108 2741 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4426.1.0-a-9facd647c1.18649a3983144f9f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4426.1.0-a-9facd647c1,UID:ci-4426.1.0-a-9facd647c1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4426.1.0-a-9facd647c1,},FirstTimestamp:2025-09-12 17:48:46.384295839 +0000 UTC m=+0.291933519,LastTimestamp:2025-09-12 17:48:46.384295839 +0000 UTC m=+0.291933519,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4426.1.0-a-9facd647c1,}" Sep 12 17:48:46.417387 kubelet[2741]: E0912 17:48:46.417370 2741 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:48:46.420399 kubelet[2741]: I0912 17:48:46.420385 2741 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:48:46.420399 kubelet[2741]: I0912 17:48:46.420397 2741 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:48:46.420496 kubelet[2741]: I0912 17:48:46.420410 2741 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:46.425457 kubelet[2741]: I0912 17:48:46.425440 2741 policy_none.go:49] "None policy: Start" Sep 12 17:48:46.425881 kubelet[2741]: I0912 17:48:46.425869 2741 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:48:46.425934 kubelet[2741]: I0912 17:48:46.425885 2741 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:48:46.431148 kubelet[2741]: I0912 17:48:46.431116 2741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:48:46.432582 kubelet[2741]: I0912 17:48:46.432523 2741 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:48:46.432582 kubelet[2741]: I0912 17:48:46.432544 2741 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:48:46.432582 kubelet[2741]: I0912 17:48:46.432559 2741 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:48:46.432838 kubelet[2741]: E0912 17:48:46.432824 2741 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:48:46.441644 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:48:46.442637 kubelet[2741]: W0912 17:48:46.442609 2741 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Sep 12 17:48:46.442939 kubelet[2741]: E0912 17:48:46.442926 2741 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:48:46.451523 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:48:46.455208 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:48:46.472254 kubelet[2741]: I0912 17:48:46.472222 2741 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:48:46.472379 kubelet[2741]: I0912 17:48:46.472368 2741 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:48:46.472496 kubelet[2741]: I0912 17:48:46.472383 2741 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:48:46.472974 kubelet[2741]: I0912 17:48:46.472962 2741 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:48:46.474821 kubelet[2741]: E0912 17:48:46.474803 2741 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4426.1.0-a-9facd647c1\" not found" Sep 12 17:48:46.541759 systemd[1]: Created slice kubepods-burstable-pod4ee3a16343ae3f7628548083bd33f71a.slice - libcontainer container kubepods-burstable-pod4ee3a16343ae3f7628548083bd33f71a.slice. Sep 12 17:48:46.554902 systemd[1]: Created slice kubepods-burstable-pod690eeb899c1c57132e45f4e7a83dfa52.slice - libcontainer container kubepods-burstable-pod690eeb899c1c57132e45f4e7a83dfa52.slice. Sep 12 17:48:46.565182 systemd[1]: Created slice kubepods-burstable-pod62ac4bd6e6875b795bc5d336f88a8de4.slice - libcontainer container kubepods-burstable-pod62ac4bd6e6875b795bc5d336f88a8de4.slice. 
Sep 12 17:48:46.574012 kubelet[2741]: I0912 17:48:46.573982 2741 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.574300 kubelet[2741]: E0912 17:48:46.574281 2741 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.598896 kubelet[2741]: E0912 17:48:46.598819 2741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-9facd647c1?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="400ms" Sep 12 17:48:46.696784 kubelet[2741]: I0912 17:48:46.696757 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ee3a16343ae3f7628548083bd33f71a-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-a-9facd647c1\" (UID: \"4ee3a16343ae3f7628548083bd33f71a\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.696867 kubelet[2741]: I0912 17:48:46.696790 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ee3a16343ae3f7628548083bd33f71a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-a-9facd647c1\" (UID: \"4ee3a16343ae3f7628548083bd33f71a\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.696867 kubelet[2741]: I0912 17:48:46.696809 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62ac4bd6e6875b795bc5d336f88a8de4-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-a-9facd647c1\" (UID: \"62ac4bd6e6875b795bc5d336f88a8de4\") " pod="kube-system/kube-scheduler-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.696867 kubelet[2741]: I0912 17:48:46.696823 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ee3a16343ae3f7628548083bd33f71a-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-a-9facd647c1\" (UID: \"4ee3a16343ae3f7628548083bd33f71a\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.696867 kubelet[2741]: I0912 17:48:46.696840 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/690eeb899c1c57132e45f4e7a83dfa52-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" (UID: \"690eeb899c1c57132e45f4e7a83dfa52\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.696867 kubelet[2741]: I0912 17:48:46.696856 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/690eeb899c1c57132e45f4e7a83dfa52-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" (UID: \"690eeb899c1c57132e45f4e7a83dfa52\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.696976 kubelet[2741]: I0912 17:48:46.696870 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/690eeb899c1c57132e45f4e7a83dfa52-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" (UID: \"690eeb899c1c57132e45f4e7a83dfa52\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.696976 kubelet[2741]: I0912 17:48:46.696887 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/690eeb899c1c57132e45f4e7a83dfa52-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" (UID: \"690eeb899c1c57132e45f4e7a83dfa52\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.696976 kubelet[2741]: I0912 17:48:46.696906 2741 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/690eeb899c1c57132e45f4e7a83dfa52-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" (UID: \"690eeb899c1c57132e45f4e7a83dfa52\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.776061 kubelet[2741]: I0912 17:48:46.776041 2741 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.776368 kubelet[2741]: E0912 17:48:46.776338 2741 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:46.853684 containerd[1729]: time="2025-09-12T17:48:46.853379528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-a-9facd647c1,Uid:4ee3a16343ae3f7628548083bd33f71a,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:46.857784 containerd[1729]: time="2025-09-12T17:48:46.857758263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-a-9facd647c1,Uid:690eeb899c1c57132e45f4e7a83dfa52,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:46.867577 containerd[1729]: time="2025-09-12T17:48:46.867552458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-a-9facd647c1,Uid:62ac4bd6e6875b795bc5d336f88a8de4,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:46.915231 containerd[1729]: time="2025-09-12T17:48:46.915193051Z" level=info msg="connecting to shim b5024359f178b67935da38da1ee0615e31fdd6cf1bb6df29bd8bd5d36bd7a795" address="unix:///run/containerd/s/66b4fe116e29737604432ab82394ad7b2c604b8b1c54203ab36818ee52449c23" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:46.940013 systemd[1]: Started cri-containerd-b5024359f178b67935da38da1ee0615e31fdd6cf1bb6df29bd8bd5d36bd7a795.scope - libcontainer container b5024359f178b67935da38da1ee0615e31fdd6cf1bb6df29bd8bd5d36bd7a795. 
Sep 12 17:48:46.955751 containerd[1729]: time="2025-09-12T17:48:46.955377732Z" level=info msg="connecting to shim 4cd9d28cd712bf41aada910d55431761b08bdf2b50a4424910f1c6b8b62e98dd" address="unix:///run/containerd/s/af75c505f640e0fb0b619f8453c290d3a3beb47e07a6339c617812aa3453f685" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:46.973937 containerd[1729]: time="2025-09-12T17:48:46.973902147Z" level=info msg="connecting to shim f2334b49c65c009d1ae13a6b0c35162bd098269f0668ee70a66f928fa0d10410" address="unix:///run/containerd/s/020050ccbda9e256affa9cbbd7b079474fe81d60463cb7f524071d24574a8370" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:46.987183 systemd[1]: Started cri-containerd-4cd9d28cd712bf41aada910d55431761b08bdf2b50a4424910f1c6b8b62e98dd.scope - libcontainer container 4cd9d28cd712bf41aada910d55431761b08bdf2b50a4424910f1c6b8b62e98dd. Sep 12 17:48:46.993457 systemd[1]: Started cri-containerd-f2334b49c65c009d1ae13a6b0c35162bd098269f0668ee70a66f928fa0d10410.scope - libcontainer container f2334b49c65c009d1ae13a6b0c35162bd098269f0668ee70a66f928fa0d10410. Sep 12 17:48:47.000045 kubelet[2741]: E0912 17:48:47.000014 2741 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4426.1.0-a-9facd647c1?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="800ms" Sep 12 17:48:47.021538 containerd[1729]: time="2025-09-12T17:48:47.021516666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4426.1.0-a-9facd647c1,Uid:4ee3a16343ae3f7628548083bd33f71a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5024359f178b67935da38da1ee0615e31fdd6cf1bb6df29bd8bd5d36bd7a795\"" Sep 12 17:48:47.024850 containerd[1729]: time="2025-09-12T17:48:47.024630543Z" level=info msg="CreateContainer within sandbox \"b5024359f178b67935da38da1ee0615e31fdd6cf1bb6df29bd8bd5d36bd7a795\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:48:47.047095 containerd[1729]: time="2025-09-12T17:48:47.047071126Z" level=info msg="Container 50a875a6430ad0d784d8ec742533c1ee1c13f4b343a3c135e6afc8f3a8915daf: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:47.079136 containerd[1729]: time="2025-09-12T17:48:47.079110875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4426.1.0-a-9facd647c1,Uid:690eeb899c1c57132e45f4e7a83dfa52,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cd9d28cd712bf41aada910d55431761b08bdf2b50a4424910f1c6b8b62e98dd\"" Sep 12 17:48:47.081676 containerd[1729]: time="2025-09-12T17:48:47.081650457Z" level=info msg="CreateContainer within sandbox \"4cd9d28cd712bf41aada910d55431761b08bdf2b50a4424910f1c6b8b62e98dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:48:47.083968 containerd[1729]: time="2025-09-12T17:48:47.083901675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4426.1.0-a-9facd647c1,Uid:62ac4bd6e6875b795bc5d336f88a8de4,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2334b49c65c009d1ae13a6b0c35162bd098269f0668ee70a66f928fa0d10410\"" Sep 12 17:48:47.085519 containerd[1729]: time="2025-09-12T17:48:47.085463219Z" level=info msg="CreateContainer within sandbox \"f2334b49c65c009d1ae13a6b0c35162bd098269f0668ee70a66f928fa0d10410\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:48:47.090819 containerd[1729]: time="2025-09-12T17:48:47.090792356Z" level=info 
msg="CreateContainer within sandbox \"b5024359f178b67935da38da1ee0615e31fdd6cf1bb6df29bd8bd5d36bd7a795\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"50a875a6430ad0d784d8ec742533c1ee1c13f4b343a3c135e6afc8f3a8915daf\"" Sep 12 17:48:47.091277 containerd[1729]: time="2025-09-12T17:48:47.091258041Z" level=info msg="StartContainer for \"50a875a6430ad0d784d8ec742533c1ee1c13f4b343a3c135e6afc8f3a8915daf\"" Sep 12 17:48:47.092199 containerd[1729]: time="2025-09-12T17:48:47.092153876Z" level=info msg="connecting to shim 50a875a6430ad0d784d8ec742533c1ee1c13f4b343a3c135e6afc8f3a8915daf" address="unix:///run/containerd/s/66b4fe116e29737604432ab82394ad7b2c604b8b1c54203ab36818ee52449c23" protocol=ttrpc version=3 Sep 12 17:48:47.101807 containerd[1729]: time="2025-09-12T17:48:47.101773586Z" level=info msg="Container e867ef6de357e11d96ca9e72b57c7fe50c574edfa6be127b1d41a080fb3e4d7a: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:47.105078 containerd[1729]: time="2025-09-12T17:48:47.105020121Z" level=info msg="Container 1cf714cb8b1caa05fc8f21ae48bff33afa8c22718dd75fdb0150e9e557328baa: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:47.106834 systemd[1]: Started cri-containerd-50a875a6430ad0d784d8ec742533c1ee1c13f4b343a3c135e6afc8f3a8915daf.scope - libcontainer container 50a875a6430ad0d784d8ec742533c1ee1c13f4b343a3c135e6afc8f3a8915daf. Sep 12 17:48:47.124180 containerd[1729]: time="2025-09-12T17:48:47.124117585Z" level=info msg="CreateContainer within sandbox \"f2334b49c65c009d1ae13a6b0c35162bd098269f0668ee70a66f928fa0d10410\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1cf714cb8b1caa05fc8f21ae48bff33afa8c22718dd75fdb0150e9e557328baa\"" Sep 12 17:48:47.124527 containerd[1729]: time="2025-09-12T17:48:47.124510699Z" level=info msg="StartContainer for \"1cf714cb8b1caa05fc8f21ae48bff33afa8c22718dd75fdb0150e9e557328baa\"" Sep 12 17:48:47.125682 containerd[1729]: time="2025-09-12T17:48:47.125659027Z" level=info msg="CreateContainer within sandbox \"4cd9d28cd712bf41aada910d55431761b08bdf2b50a4424910f1c6b8b62e98dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e867ef6de357e11d96ca9e72b57c7fe50c574edfa6be127b1d41a080fb3e4d7a\"" Sep 12 17:48:47.126161 containerd[1729]: time="2025-09-12T17:48:47.126142282Z" level=info msg="StartContainer for \"e867ef6de357e11d96ca9e72b57c7fe50c574edfa6be127b1d41a080fb3e4d7a\"" Sep 12 17:48:47.126746 containerd[1729]: time="2025-09-12T17:48:47.126545379Z" level=info msg="connecting to shim 1cf714cb8b1caa05fc8f21ae48bff33afa8c22718dd75fdb0150e9e557328baa" address="unix:///run/containerd/s/020050ccbda9e256affa9cbbd7b079474fe81d60463cb7f524071d24574a8370" protocol=ttrpc version=3 Sep 12 17:48:47.128204 containerd[1729]: time="2025-09-12T17:48:47.128172924Z" level=info msg="connecting to shim e867ef6de357e11d96ca9e72b57c7fe50c574edfa6be127b1d41a080fb3e4d7a" address="unix:///run/containerd/s/af75c505f640e0fb0b619f8453c290d3a3beb47e07a6339c617812aa3453f685" protocol=ttrpc version=3 Sep 12 17:48:47.147880 systemd[1]: Started cri-containerd-e867ef6de357e11d96ca9e72b57c7fe50c574edfa6be127b1d41a080fb3e4d7a.scope - libcontainer container e867ef6de357e11d96ca9e72b57c7fe50c574edfa6be127b1d41a080fb3e4d7a. Sep 12 17:48:47.156042 systemd[1]: Started cri-containerd-1cf714cb8b1caa05fc8f21ae48bff33afa8c22718dd75fdb0150e9e557328baa.scope - libcontainer container 1cf714cb8b1caa05fc8f21ae48bff33afa8c22718dd75fdb0150e9e557328baa. 
Sep 12 17:48:47.173175 containerd[1729]: time="2025-09-12T17:48:47.173150020Z" level=info msg="StartContainer for \"50a875a6430ad0d784d8ec742533c1ee1c13f4b343a3c135e6afc8f3a8915daf\" returns successfully" Sep 12 17:48:47.180070 kubelet[2741]: I0912 17:48:47.180049 2741 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:47.180390 kubelet[2741]: E0912 17:48:47.180369 2741 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:47.213298 containerd[1729]: time="2025-09-12T17:48:47.213256183Z" level=info msg="StartContainer for \"e867ef6de357e11d96ca9e72b57c7fe50c574edfa6be127b1d41a080fb3e4d7a\" returns successfully" Sep 12 17:48:47.303376 containerd[1729]: time="2025-09-12T17:48:47.303306132Z" level=info msg="StartContainer for \"1cf714cb8b1caa05fc8f21ae48bff33afa8c22718dd75fdb0150e9e557328baa\" returns successfully" Sep 12 17:48:47.921111 update_engine[1701]: I20250912 17:48:47.920567 1701 update_attempter.cc:509] Updating boot flags... Sep 12 17:48:47.987334 kubelet[2741]: I0912 17:48:47.985138 2741 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:48.964102 kubelet[2741]: E0912 17:48:48.964053 2741 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4426.1.0-a-9facd647c1\" not found" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:49.151996 kubelet[2741]: I0912 17:48:49.151798 2741 kubelet_node_status.go:75] "Successfully registered node" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:49.387167 kubelet[2741]: I0912 17:48:49.386240 2741 apiserver.go:52] "Watching apiserver" Sep 12 17:48:49.397065 kubelet[2741]: I0912 17:48:49.397041 2741 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:48:51.032098 systemd[1]: Reload requested from client PID 3032 ('systemctl') (unit session-9.scope)... Sep 12 17:48:51.032113 systemd[1]: Reloading... Sep 12 17:48:51.120836 zram_generator::config[3079]: No configuration found. Sep 12 17:48:51.305765 systemd[1]: Reloading finished in 273 ms. Sep 12 17:48:51.333748 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:51.346583 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:48:51.346846 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:51.346896 systemd[1]: kubelet.service: Consumed 592ms CPU time, 128.6M memory peak. Sep 12 17:48:51.348261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:48:51.776781 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:48:51.783986 (kubelet)[3146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:48:51.823779 kubelet[3146]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:48:51.823779 kubelet[3146]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 12 17:48:51.823779 kubelet[3146]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:48:51.823779 kubelet[3146]: I0912 17:48:51.823609 3146 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:48:51.832530 kubelet[3146]: I0912 17:48:51.832506 3146 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:48:51.832696 kubelet[3146]: I0912 17:48:51.832681 3146 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:48:51.832909 kubelet[3146]: I0912 17:48:51.832897 3146 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:48:51.834373 kubelet[3146]: I0912 17:48:51.834356 3146 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:48:51.837800 kubelet[3146]: I0912 17:48:51.837769 3146 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:48:51.842105 kubelet[3146]: I0912 17:48:51.842078 3146 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:48:51.844251 kubelet[3146]: I0912 17:48:51.844234 3146 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:48:51.844341 kubelet[3146]: I0912 17:48:51.844312 3146 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:48:51.844432 kubelet[3146]: I0912 17:48:51.844407 3146 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:48:51.844569 kubelet[3146]: I0912 17:48:51.844432 3146 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4426.1.0-a-9facd647c1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:48:51.844658 kubelet[3146]: I0912 17:48:51.844577 3146 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:48:51.844658 kubelet[3146]: I0912 17:48:51.844588 3146 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:48:51.844658 kubelet[3146]: I0912 17:48:51.844613 3146 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:51.844739 kubelet[3146]: I0912 17:48:51.844697 3146 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:48:51.844739 kubelet[3146]: I0912 17:48:51.844707 3146 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:48:51.844785 kubelet[3146]: I0912 17:48:51.844750 3146 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:48:51.844785 kubelet[3146]: I0912 17:48:51.844760 3146 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:48:51.847767 kubelet[3146]: I0912 17:48:51.845482 3146 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:48:51.847767 kubelet[3146]: I0912 17:48:51.845862 3146 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:48:51.847767 kubelet[3146]: I0912 17:48:51.846243 3146 server.go:1274] "Started kubelet" Sep 12 17:48:51.848523 kubelet[3146]: I0912 17:48:51.848488 3146 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:48:51.851507 kubelet[3146]: I0912 17:48:51.851473 3146 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:48:51.855016 kubelet[3146]: I0912 17:48:51.854986 3146 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:48:51.855298 kubelet[3146]: I0912 17:48:51.855288 3146 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:48:51.855536 kubelet[3146]: I0912 17:48:51.855528 3146 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:48:51.858462 kubelet[3146]: I0912 17:48:51.858446 3146 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:48:51.858766 kubelet[3146]: E0912 17:48:51.858753 3146 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4426.1.0-a-9facd647c1\" not found" Sep 12 17:48:51.860218 kubelet[3146]: I0912 17:48:51.860203 3146 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:48:51.861992 kubelet[3146]: I0912 17:48:51.861978 3146 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:48:51.864073 kubelet[3146]: I0912 17:48:51.862362 3146 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:48:51.864940 kubelet[3146]: I0912 17:48:51.864921 3146 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:48:51.866051 kubelet[3146]: I0912 17:48:51.861435 3146 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:48:51.873084 kubelet[3146]: I0912 17:48:51.872975 3146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:48:51.874458 kubelet[3146]: I0912 17:48:51.874441 3146 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:48:51.874796 kubelet[3146]: I0912 17:48:51.874532 3146 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:48:51.874796 kubelet[3146]: I0912 17:48:51.874548 3146 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:48:51.874796 kubelet[3146]: E0912 17:48:51.874580 3146 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:48:51.878927 kubelet[3146]: I0912 17:48:51.878909 3146 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:48:51.896817 kubelet[3146]: E0912 17:48:51.896793 3146 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:48:51.922204 kubelet[3146]: I0912 17:48:51.922187 3146 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:48:51.922204 kubelet[3146]: I0912 17:48:51.922199 3146 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:48:51.922318 kubelet[3146]: I0912 17:48:51.922215 3146 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:48:51.922354 kubelet[3146]: I0912 17:48:51.922342 3146 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:48:51.922390 kubelet[3146]: I0912 17:48:51.922354 3146 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:48:51.922390 kubelet[3146]: I0912 17:48:51.922371 3146 policy_none.go:49] "None policy: Start" Sep 12 17:48:51.922918 kubelet[3146]: I0912 17:48:51.922909 3146 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:48:51.923008 kubelet[3146]: I0912 17:48:51.923002 3146 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:48:51.923149 kubelet[3146]: I0912 17:48:51.923145 3146 state_mem.go:75] "Updated machine memory state" Sep 12 17:48:51.926408 kubelet[3146]: I0912 17:48:51.926388 3146 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:48:51.926520 kubelet[3146]: I0912 17:48:51.926509 3146 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:48:51.926556 kubelet[3146]: I0912 17:48:51.926520 3146 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:48:51.927930 kubelet[3146]: I0912 17:48:51.927751 3146 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:48:51.983133 kubelet[3146]: W0912 17:48:51.983080 3146 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:48:51.986999 kubelet[3146]: W0912 17:48:51.986983 3146 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:48:51.987289 kubelet[3146]: W0912 17:48:51.987132 3146 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:48:52.029499 kubelet[3146]: I0912 17:48:52.029108 3146 kubelet_node_status.go:72] "Attempting to register node" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.038288 kubelet[3146]: I0912 17:48:52.038255 3146 kubelet_node_status.go:111] "Node was previously registered" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.038357 kubelet[3146]: I0912 17:48:52.038322 3146 kubelet_node_status.go:75] "Successfully registered node" node="ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.047093 sudo[3178]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:48:52.047325 sudo[3178]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:48:52.066961 kubelet[3146]: I0912 17:48:52.066916 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/690eeb899c1c57132e45f4e7a83dfa52-flexvolume-dir\") pod \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" (UID: \"690eeb899c1c57132e45f4e7a83dfa52\") " 
pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.067119 kubelet[3146]: I0912 17:48:52.067068 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/690eeb899c1c57132e45f4e7a83dfa52-k8s-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" (UID: \"690eeb899c1c57132e45f4e7a83dfa52\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.067119 kubelet[3146]: I0912 17:48:52.067094 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/690eeb899c1c57132e45f4e7a83dfa52-kubeconfig\") pod \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" (UID: \"690eeb899c1c57132e45f4e7a83dfa52\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.067214 kubelet[3146]: I0912 17:48:52.067204 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4ee3a16343ae3f7628548083bd33f71a-k8s-certs\") pod \"kube-apiserver-ci-4426.1.0-a-9facd647c1\" (UID: \"4ee3a16343ae3f7628548083bd33f71a\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.067288 kubelet[3146]: I0912 17:48:52.067279 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/690eeb899c1c57132e45f4e7a83dfa52-ca-certs\") pod \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" (UID: \"690eeb899c1c57132e45f4e7a83dfa52\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.067373 kubelet[3146]: I0912 17:48:52.067364 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/690eeb899c1c57132e45f4e7a83dfa52-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" (UID: \"690eeb899c1c57132e45f4e7a83dfa52\") " pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.067454 kubelet[3146]: I0912 17:48:52.067445 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62ac4bd6e6875b795bc5d336f88a8de4-kubeconfig\") pod \"kube-scheduler-ci-4426.1.0-a-9facd647c1\" (UID: \"62ac4bd6e6875b795bc5d336f88a8de4\") " pod="kube-system/kube-scheduler-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.067529 kubelet[3146]: I0912 17:48:52.067521 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4ee3a16343ae3f7628548083bd33f71a-ca-certs\") pod \"kube-apiserver-ci-4426.1.0-a-9facd647c1\" (UID: \"4ee3a16343ae3f7628548083bd33f71a\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.067605 kubelet[3146]: I0912 17:48:52.067596 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4ee3a16343ae3f7628548083bd33f71a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4426.1.0-a-9facd647c1\" (UID: \"4ee3a16343ae3f7628548083bd33f71a\") " pod="kube-system/kube-apiserver-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.378839 sudo[3178]: pam_unix(sudo:session): 
session closed for user root Sep 12 17:48:52.845686 kubelet[3146]: I0912 17:48:52.845650 3146 apiserver.go:52] "Watching apiserver" Sep 12 17:48:52.862489 kubelet[3146]: I0912 17:48:52.862440 3146 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:48:52.917843 kubelet[3146]: W0912 17:48:52.917820 3146 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:48:52.917934 kubelet[3146]: E0912 17:48:52.917874 3146 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4426.1.0-a-9facd647c1\" already exists" pod="kube-system/kube-scheduler-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.922391 kubelet[3146]: W0912 17:48:52.922369 3146 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:48:52.922670 kubelet[3146]: W0912 17:48:52.922366 3146 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:48:52.922670 kubelet[3146]: E0912 17:48:52.922529 3146 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4426.1.0-a-9facd647c1\" already exists" pod="kube-system/kube-apiserver-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.922670 kubelet[3146]: E0912 17:48:52.922566 3146 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4426.1.0-a-9facd647c1\" already exists" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" Sep 12 17:48:52.926496 kubelet[3146]: I0912 17:48:52.926019 3146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4426.1.0-a-9facd647c1" podStartSLOduration=1.925988464 podStartE2EDuration="1.925988464s" podCreationTimestamp="2025-09-12 17:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:52.925781562 +0000 UTC m=+1.138450552" watchObservedRunningTime="2025-09-12 17:48:52.925988464 +0000 UTC m=+1.138657453" Sep 12 17:48:52.943462 kubelet[3146]: I0912 17:48:52.943384 3146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4426.1.0-a-9facd647c1" podStartSLOduration=1.943369837 podStartE2EDuration="1.943369837s" podCreationTimestamp="2025-09-12 17:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:52.935668389 +0000 UTC m=+1.148337366" watchObservedRunningTime="2025-09-12 17:48:52.943369837 +0000 UTC m=+1.156038827" Sep 12 17:48:52.952619 kubelet[3146]: I0912 17:48:52.952555 3146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4426.1.0-a-9facd647c1" podStartSLOduration=1.952525317 podStartE2EDuration="1.952525317s" podCreationTimestamp="2025-09-12 17:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:52.943652728 +0000 UTC m=+1.156321717" watchObservedRunningTime="2025-09-12 17:48:52.952525317 +0000 UTC m=+1.165194306" Sep 12 17:48:53.634502 sudo[2174]: pam_unix(sudo:session): session closed for user root Sep 12 
17:48:53.733601 sshd[2173]: Connection closed by 10.200.16.10 port 50056 Sep 12 17:48:53.734029 sshd-session[2170]: pam_unix(sshd:session): session closed for user core Sep 12 17:48:53.737322 systemd[1]: sshd@6-10.200.8.41:22-10.200.16.10:50056.service: Deactivated successfully. Sep 12 17:48:53.739442 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:48:53.739718 systemd[1]: session-9.scope: Consumed 3.289s CPU time, 265.6M memory peak. Sep 12 17:48:53.741046 systemd-logind[1699]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:48:53.742502 systemd-logind[1699]: Removed session 9. Sep 12 17:48:57.464440 kubelet[3146]: I0912 17:48:57.464410 3146 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:48:57.464845 containerd[1729]: time="2025-09-12T17:48:57.464775074Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:48:57.465086 kubelet[3146]: I0912 17:48:57.465068 3146 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:48:58.416506 systemd[1]: Created slice kubepods-besteffort-pod54865b92_045b_41fa_97b7_c24d270cad0d.slice - libcontainer container kubepods-besteffort-pod54865b92_045b_41fa_97b7_c24d270cad0d.slice. Sep 12 17:48:58.437331 systemd[1]: Created slice kubepods-burstable-pod19d72a5b_a8f4_4176_9cce_f1ec5ded664b.slice - libcontainer container kubepods-burstable-pod19d72a5b_a8f4_4176_9cce_f1ec5ded664b.slice. Sep 12 17:48:58.512522 kubelet[3146]: I0912 17:48:58.512455 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-host-proc-sys-net\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.512522 kubelet[3146]: I0912 17:48:58.512518 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-hostproc\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.512884 kubelet[3146]: I0912 17:48:58.512536 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-lib-modules\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.512884 kubelet[3146]: I0912 17:48:58.512552 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-host-proc-sys-kernel\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.512884 kubelet[3146]: I0912 17:48:58.512591 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-hubble-tls\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.512884 kubelet[3146]: I0912 17:48:58.512610 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-config-path\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.512884 kubelet[3146]: I0912 17:48:58.512626 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw4zz\" (UniqueName: \"kubernetes.io/projected/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-kube-api-access-gw4zz\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.512998 kubelet[3146]: I0912 17:48:58.512645 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54865b92-045b-41fa-97b7-c24d270cad0d-kube-proxy\") pod \"kube-proxy-9gcr6\" (UID: \"54865b92-045b-41fa-97b7-c24d270cad0d\") " pod="kube-system/kube-proxy-9gcr6" Sep 12 17:48:58.512998 kubelet[3146]: I0912 17:48:58.512663 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzntx\" (UniqueName: \"kubernetes.io/projected/54865b92-045b-41fa-97b7-c24d270cad0d-kube-api-access-kzntx\") pod \"kube-proxy-9gcr6\" (UID: \"54865b92-045b-41fa-97b7-c24d270cad0d\") " pod="kube-system/kube-proxy-9gcr6" Sep 12 17:48:58.512998 kubelet[3146]: I0912 17:48:58.512679 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cni-path\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.512998 kubelet[3146]: I0912 17:48:58.512696 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-etc-cni-netd\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.512998 kubelet[3146]: I0912 17:48:58.512716 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-clustermesh-secrets\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.512998 kubelet[3146]: I0912 17:48:58.512760 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54865b92-045b-41fa-97b7-c24d270cad0d-lib-modules\") pod \"kube-proxy-9gcr6\" (UID: \"54865b92-045b-41fa-97b7-c24d270cad0d\") " pod="kube-system/kube-proxy-9gcr6" Sep 12 17:48:58.513091 kubelet[3146]: I0912 17:48:58.512775 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-run\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.513091 kubelet[3146]: I0912 17:48:58.512790 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-bpf-maps\") pod \"cilium-tn7f5\" (UID: 
\"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.513091 kubelet[3146]: I0912 17:48:58.512806 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-xtables-lock\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.513091 kubelet[3146]: I0912 17:48:58.512820 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-cgroup\") pod \"cilium-tn7f5\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " pod="kube-system/cilium-tn7f5" Sep 12 17:48:58.513091 kubelet[3146]: I0912 17:48:58.512837 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54865b92-045b-41fa-97b7-c24d270cad0d-xtables-lock\") pod \"kube-proxy-9gcr6\" (UID: \"54865b92-045b-41fa-97b7-c24d270cad0d\") " pod="kube-system/kube-proxy-9gcr6" Sep 12 17:48:58.551298 systemd[1]: Created slice kubepods-besteffort-pod3f9bf460_0b2f_4d12_8ee3_f34536ef51d3.slice - libcontainer container kubepods-besteffort-pod3f9bf460_0b2f_4d12_8ee3_f34536ef51d3.slice. Sep 12 17:48:58.613532 kubelet[3146]: I0912 17:48:58.613506 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pzb2\" (UniqueName: \"kubernetes.io/projected/3f9bf460-0b2f-4d12-8ee3-f34536ef51d3-kube-api-access-7pzb2\") pod \"cilium-operator-5d85765b45-2s92c\" (UID: \"3f9bf460-0b2f-4d12-8ee3-f34536ef51d3\") " pod="kube-system/cilium-operator-5d85765b45-2s92c" Sep 12 17:48:58.613737 kubelet[3146]: I0912 17:48:58.613626 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f9bf460-0b2f-4d12-8ee3-f34536ef51d3-cilium-config-path\") pod \"cilium-operator-5d85765b45-2s92c\" (UID: \"3f9bf460-0b2f-4d12-8ee3-f34536ef51d3\") " pod="kube-system/cilium-operator-5d85765b45-2s92c" Sep 12 17:48:58.744285 containerd[1729]: time="2025-09-12T17:48:58.743743246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tn7f5,Uid:19d72a5b-a8f4-4176-9cce-f1ec5ded664b,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:58.744285 containerd[1729]: time="2025-09-12T17:48:58.743763016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9gcr6,Uid:54865b92-045b-41fa-97b7-c24d270cad0d,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:58.802325 containerd[1729]: time="2025-09-12T17:48:58.802286698Z" level=info msg="connecting to shim 24103e0a1c16fe37b0b5a057f8526c4c4ad2188597083125d8fc3be05be0de9f" address="unix:///run/containerd/s/d4b9efc4b637392fba7efd17be2d45bfd64b9c41c763586a68ba0e8443068f23" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:58.802859 containerd[1729]: time="2025-09-12T17:48:58.802823961Z" level=info msg="connecting to shim 60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989" address="unix:///run/containerd/s/62ba7df13973e43b2c98fe0f795d7bca4985ba9807d17dd94cf91fda6209474d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:58.821980 systemd[1]: Started cri-containerd-60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989.scope - libcontainer container 
60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989. Sep 12 17:48:58.826410 systemd[1]: Started cri-containerd-24103e0a1c16fe37b0b5a057f8526c4c4ad2188597083125d8fc3be05be0de9f.scope - libcontainer container 24103e0a1c16fe37b0b5a057f8526c4c4ad2188597083125d8fc3be05be0de9f. Sep 12 17:48:58.856477 containerd[1729]: time="2025-09-12T17:48:58.856428826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2s92c,Uid:3f9bf460-0b2f-4d12-8ee3-f34536ef51d3,Namespace:kube-system,Attempt:0,}" Sep 12 17:48:58.856649 containerd[1729]: time="2025-09-12T17:48:58.856625382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tn7f5,Uid:19d72a5b-a8f4-4176-9cce-f1ec5ded664b,Namespace:kube-system,Attempt:0,} returns sandbox id \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\"" Sep 12 17:48:58.859959 containerd[1729]: time="2025-09-12T17:48:58.859654960Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:48:58.867291 containerd[1729]: time="2025-09-12T17:48:58.867267424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9gcr6,Uid:54865b92-045b-41fa-97b7-c24d270cad0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24103e0a1c16fe37b0b5a057f8526c4c4ad2188597083125d8fc3be05be0de9f\"" Sep 12 17:48:58.869403 containerd[1729]: time="2025-09-12T17:48:58.869372703Z" level=info msg="CreateContainer within sandbox \"24103e0a1c16fe37b0b5a057f8526c4c4ad2188597083125d8fc3be05be0de9f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:48:58.896637 containerd[1729]: time="2025-09-12T17:48:58.896612686Z" level=info msg="Container 9e0eaa4b7b7baa356f0c79c9b3f69b387311701725816a1d908b655060e70d5b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:48:58.915517 containerd[1729]: time="2025-09-12T17:48:58.915457734Z" level=info msg="connecting to shim db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f" address="unix:///run/containerd/s/9a9417313ec2c0c5c7de471b997b8f78ed20422558dc53186568e10f7f6422f9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:48:58.918913 containerd[1729]: time="2025-09-12T17:48:58.918888639Z" level=info msg="CreateContainer within sandbox \"24103e0a1c16fe37b0b5a057f8526c4c4ad2188597083125d8fc3be05be0de9f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9e0eaa4b7b7baa356f0c79c9b3f69b387311701725816a1d908b655060e70d5b\"" Sep 12 17:48:58.919707 containerd[1729]: time="2025-09-12T17:48:58.919678332Z" level=info msg="StartContainer for \"9e0eaa4b7b7baa356f0c79c9b3f69b387311701725816a1d908b655060e70d5b\"" Sep 12 17:48:58.922757 containerd[1729]: time="2025-09-12T17:48:58.922555532Z" level=info msg="connecting to shim 9e0eaa4b7b7baa356f0c79c9b3f69b387311701725816a1d908b655060e70d5b" address="unix:///run/containerd/s/d4b9efc4b637392fba7efd17be2d45bfd64b9c41c763586a68ba0e8443068f23" protocol=ttrpc version=3 Sep 12 17:48:58.940885 systemd[1]: Started cri-containerd-db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f.scope - libcontainer container db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f. Sep 12 17:48:58.943833 systemd[1]: Started cri-containerd-9e0eaa4b7b7baa356f0c79c9b3f69b387311701725816a1d908b655060e70d5b.scope - libcontainer container 9e0eaa4b7b7baa356f0c79c9b3f69b387311701725816a1d908b655060e70d5b. 
Sep 12 17:48:59.000333 containerd[1729]: time="2025-09-12T17:48:58.999633754Z" level=info msg="StartContainer for \"9e0eaa4b7b7baa356f0c79c9b3f69b387311701725816a1d908b655060e70d5b\" returns successfully" Sep 12 17:48:59.011544 containerd[1729]: time="2025-09-12T17:48:59.011519526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2s92c,Uid:3f9bf460-0b2f-4d12-8ee3-f34536ef51d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f\"" Sep 12 17:48:59.935107 kubelet[3146]: I0912 17:48:59.934203 3146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9gcr6" podStartSLOduration=1.934188986 podStartE2EDuration="1.934188986s" podCreationTimestamp="2025-09-12 17:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:48:59.93368773 +0000 UTC m=+8.146362950" watchObservedRunningTime="2025-09-12 17:48:59.934188986 +0000 UTC m=+8.146857969" Sep 12 17:49:04.164496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1821335825.mount: Deactivated successfully. Sep 12 17:49:05.645912 containerd[1729]: time="2025-09-12T17:49:05.645867071Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:05.648269 containerd[1729]: time="2025-09-12T17:49:05.648230946Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 12 17:49:05.651019 containerd[1729]: time="2025-09-12T17:49:05.650978883Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:05.651974 containerd[1729]: time="2025-09-12T17:49:05.651944480Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.791278959s" Sep 12 17:49:05.651974 containerd[1729]: time="2025-09-12T17:49:05.651973272Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 12 17:49:05.653244 containerd[1729]: time="2025-09-12T17:49:05.652796314Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:49:05.654578 containerd[1729]: time="2025-09-12T17:49:05.654547699Z" level=info msg="CreateContainer within sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:49:05.682576 containerd[1729]: time="2025-09-12T17:49:05.680342875Z" level=info msg="Container ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:05.698572 containerd[1729]: time="2025-09-12T17:49:05.698546514Z" 
level=info msg="CreateContainer within sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\"" Sep 12 17:49:05.699859 containerd[1729]: time="2025-09-12T17:49:05.698929232Z" level=info msg="StartContainer for \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\"" Sep 12 17:49:05.699859 containerd[1729]: time="2025-09-12T17:49:05.699748485Z" level=info msg="connecting to shim ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04" address="unix:///run/containerd/s/62ba7df13973e43b2c98fe0f795d7bca4985ba9807d17dd94cf91fda6209474d" protocol=ttrpc version=3 Sep 12 17:49:05.718882 systemd[1]: Started cri-containerd-ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04.scope - libcontainer container ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04. Sep 12 17:49:05.747618 containerd[1729]: time="2025-09-12T17:49:05.747584992Z" level=info msg="StartContainer for \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\" returns successfully" Sep 12 17:49:05.754013 systemd[1]: cri-containerd-ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04.scope: Deactivated successfully. Sep 12 17:49:05.755963 containerd[1729]: time="2025-09-12T17:49:05.755938102Z" level=info msg="received exit event container_id:\"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\" id:\"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\" pid:3557 exited_at:{seconds:1757699345 nanos:755081047}" Sep 12 17:49:05.756155 containerd[1729]: time="2025-09-12T17:49:05.756134210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\" id:\"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\" pid:3557 exited_at:{seconds:1757699345 nanos:755081047}" Sep 12 17:49:05.771054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04-rootfs.mount: Deactivated successfully. Sep 12 17:49:14.947109 containerd[1729]: time="2025-09-12T17:49:14.947067856Z" level=info msg="CreateContainer within sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:49:14.967829 containerd[1729]: time="2025-09-12T17:49:14.967574875Z" level=info msg="Container db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:14.972676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541108699.mount: Deactivated successfully. 
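A back-of-the-envelope check on the cilium image pull logged just above, using only the two numbers containerd reported (bytes read and elapsed time). Whether "bytes read" covers every layer is not stated in the log, so the result is approximate.

```python
# Numbers taken verbatim from the containerd entries above (cilium v1.12.5 pull).
bytes_read = 166_730_503          # "active requests=0, bytes read=166730503"
pull_seconds = 6.791_278_959      # "... in 6.791278959s"

mib_per_s = bytes_read / pull_seconds / (1024 * 1024)
print(f"~{mib_per_s:.1f} MiB/s effective pull rate")   # roughly 23.4 MiB/s
```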
Sep 12 17:49:14.982283 containerd[1729]: time="2025-09-12T17:49:14.982255792Z" level=info msg="CreateContainer within sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\"" Sep 12 17:49:14.982749 containerd[1729]: time="2025-09-12T17:49:14.982708001Z" level=info msg="StartContainer for \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\"" Sep 12 17:49:14.983571 containerd[1729]: time="2025-09-12T17:49:14.983545022Z" level=info msg="connecting to shim db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a" address="unix:///run/containerd/s/62ba7df13973e43b2c98fe0f795d7bca4985ba9807d17dd94cf91fda6209474d" protocol=ttrpc version=3 Sep 12 17:49:15.004872 systemd[1]: Started cri-containerd-db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a.scope - libcontainer container db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a. Sep 12 17:49:15.033043 containerd[1729]: time="2025-09-12T17:49:15.033014342Z" level=info msg="StartContainer for \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\" returns successfully" Sep 12 17:49:15.041651 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:49:15.042422 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:49:15.043827 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:49:15.046010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:49:15.047661 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:49:15.049203 containerd[1729]: time="2025-09-12T17:49:15.049088330Z" level=info msg="received exit event container_id:\"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\" id:\"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\" pid:3601 exited_at:{seconds:1757699355 nanos:48937944}" Sep 12 17:49:15.049563 systemd[1]: cri-containerd-db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a.scope: Deactivated successfully. Sep 12 17:49:15.051212 containerd[1729]: time="2025-09-12T17:49:15.050262327Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\" id:\"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\" pid:3601 exited_at:{seconds:1757699355 nanos:48937944}" Sep 12 17:49:15.071449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:49:15.951092 containerd[1729]: time="2025-09-12T17:49:15.950786961Z" level=info msg="CreateContainer within sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:49:15.967222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a-rootfs.mount: Deactivated successfully. Sep 12 17:49:17.096351 containerd[1729]: time="2025-09-12T17:49:17.096312221Z" level=info msg="Container 32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:17.100976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1708922484.mount: Deactivated successfully. 
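The `exited_at:{seconds:... nanos:...}` fields in the exit events above are Unix-epoch timestamps. A quick conversion (editor's check, not part of the captured log) shows they line up with the surrounding entry times:

```python
from datetime import datetime, timezone

# exited_at as logged by containerd above for the apply-sysctl-overwrites container.
exited_at = {"seconds": 1757699355, "nanos": 48_937_944}

ts = datetime.fromtimestamp(
    exited_at["seconds"] + exited_at["nanos"] / 1e9, tz=timezone.utc
)
print(ts.isoformat())
# 2025-09-12T17:49:15.048938+00:00 — consistent with the 17:49:15.049 entries above
```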
Sep 12 17:49:17.142803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3348948683.mount: Deactivated successfully. Sep 12 17:49:17.691644 containerd[1729]: time="2025-09-12T17:49:17.691609085Z" level=info msg="CreateContainer within sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\"" Sep 12 17:49:17.692238 containerd[1729]: time="2025-09-12T17:49:17.692211694Z" level=info msg="StartContainer for \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\"" Sep 12 17:49:17.693897 containerd[1729]: time="2025-09-12T17:49:17.693862538Z" level=info msg="connecting to shim 32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d" address="unix:///run/containerd/s/62ba7df13973e43b2c98fe0f795d7bca4985ba9807d17dd94cf91fda6209474d" protocol=ttrpc version=3 Sep 12 17:49:17.714873 systemd[1]: Started cri-containerd-32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d.scope - libcontainer container 32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d. Sep 12 17:49:17.742486 systemd[1]: cri-containerd-32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d.scope: Deactivated successfully. Sep 12 17:49:17.744651 containerd[1729]: time="2025-09-12T17:49:17.744626183Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\" id:\"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\" pid:3654 exited_at:{seconds:1757699357 nanos:744346613}" Sep 12 17:49:17.845888 containerd[1729]: time="2025-09-12T17:49:17.845860077Z" level=info msg="received exit event container_id:\"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\" id:\"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\" pid:3654 exited_at:{seconds:1757699357 nanos:744346613}" Sep 12 17:49:17.848215 containerd[1729]: time="2025-09-12T17:49:17.848191769Z" level=info msg="StartContainer for \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\" returns successfully" Sep 12 17:49:18.991760 containerd[1729]: time="2025-09-12T17:49:18.991549126Z" level=info msg="CreateContainer within sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:49:19.448674 containerd[1729]: time="2025-09-12T17:49:19.448562474Z" level=info msg="Container 151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:19.702188 containerd[1729]: time="2025-09-12T17:49:19.702101584Z" level=info msg="CreateContainer within sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\"" Sep 12 17:49:19.704002 containerd[1729]: time="2025-09-12T17:49:19.703260056Z" level=info msg="StartContainer for \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\"" Sep 12 17:49:19.705465 containerd[1729]: time="2025-09-12T17:49:19.705046639Z" level=info msg="connecting to shim 151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251" address="unix:///run/containerd/s/62ba7df13973e43b2c98fe0f795d7bca4985ba9807d17dd94cf91fda6209474d" protocol=ttrpc version=3 Sep 12 17:49:19.728970 systemd[1]: Started 
cri-containerd-151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251.scope - libcontainer container 151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251. Sep 12 17:49:19.765207 systemd[1]: cri-containerd-151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251.scope: Deactivated successfully. Sep 12 17:49:19.766309 containerd[1729]: time="2025-09-12T17:49:19.766285966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\" id:\"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\" pid:3700 exited_at:{seconds:1757699359 nanos:765604890}" Sep 12 17:49:19.771902 containerd[1729]: time="2025-09-12T17:49:19.771865282Z" level=info msg="received exit event container_id:\"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\" id:\"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\" pid:3700 exited_at:{seconds:1757699359 nanos:765604890}" Sep 12 17:49:19.782486 containerd[1729]: time="2025-09-12T17:49:19.782457174Z" level=info msg="StartContainer for \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\" returns successfully" Sep 12 17:49:19.799367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251-rootfs.mount: Deactivated successfully. Sep 12 17:49:23.001817 containerd[1729]: time="2025-09-12T17:49:23.001760255Z" level=info msg="CreateContainer within sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:49:23.390039 containerd[1729]: time="2025-09-12T17:49:23.389987725Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:23.442746 containerd[1729]: time="2025-09-12T17:49:23.440374151Z" level=info msg="Container 398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:23.444058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291472725.mount: Deactivated successfully. 
Sep 12 17:49:23.499008 containerd[1729]: time="2025-09-12T17:49:23.498981415Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 12 17:49:23.594747 containerd[1729]: time="2025-09-12T17:49:23.594674889Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:49:23.643019 containerd[1729]: time="2025-09-12T17:49:23.642932210Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 17.990100615s" Sep 12 17:49:23.643019 containerd[1729]: time="2025-09-12T17:49:23.642962305Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 12 17:49:23.646440 containerd[1729]: time="2025-09-12T17:49:23.646417117Z" level=info msg="CreateContainer within sandbox \"db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:49:23.649668 containerd[1729]: time="2025-09-12T17:49:23.649620712Z" level=info msg="CreateContainer within sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\"" Sep 12 17:49:23.650280 containerd[1729]: time="2025-09-12T17:49:23.650260316Z" level=info msg="StartContainer for \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\"" Sep 12 17:49:23.652069 containerd[1729]: time="2025-09-12T17:49:23.652040964Z" level=info msg="connecting to shim 398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd" address="unix:///run/containerd/s/62ba7df13973e43b2c98fe0f795d7bca4985ba9807d17dd94cf91fda6209474d" protocol=ttrpc version=3 Sep 12 17:49:23.673862 systemd[1]: Started cri-containerd-398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd.scope - libcontainer container 398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd. 
Sep 12 17:49:23.801535 containerd[1729]: time="2025-09-12T17:49:23.801505936Z" level=info msg="StartContainer for \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" returns successfully" Sep 12 17:49:23.903580 containerd[1729]: time="2025-09-12T17:49:23.903501830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" id:\"4c4d9e52c280ff1c7849b0014a261af041adf94e8102d7d6b4d9baa1f4644e49\" pid:3785 exited_at:{seconds:1757699363 nanos:903145726}" Sep 12 17:49:23.935008 kubelet[3146]: I0912 17:49:23.934953 3146 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:49:23.944558 containerd[1729]: time="2025-09-12T17:49:23.944528270Z" level=info msg="Container a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:23.979070 systemd[1]: Created slice kubepods-burstable-pod9a2bdcab_c93e_44f3_bb9d_d8e4f37bbd1e.slice - libcontainer container kubepods-burstable-pod9a2bdcab_c93e_44f3_bb9d_d8e4f37bbd1e.slice. Sep 12 17:49:23.983118 kubelet[3146]: I0912 17:49:23.983074 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a2bdcab-c93e-44f3-bb9d-d8e4f37bbd1e-config-volume\") pod \"coredns-7c65d6cfc9-t6qts\" (UID: \"9a2bdcab-c93e-44f3-bb9d-d8e4f37bbd1e\") " pod="kube-system/coredns-7c65d6cfc9-t6qts" Sep 12 17:49:23.983815 kubelet[3146]: I0912 17:49:23.983105 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8pc7\" (UniqueName: \"kubernetes.io/projected/9a2bdcab-c93e-44f3-bb9d-d8e4f37bbd1e-kube-api-access-s8pc7\") pod \"coredns-7c65d6cfc9-t6qts\" (UID: \"9a2bdcab-c93e-44f3-bb9d-d8e4f37bbd1e\") " pod="kube-system/coredns-7c65d6cfc9-t6qts" Sep 12 17:49:23.983815 kubelet[3146]: I0912 17:49:23.983770 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gqk9\" (UniqueName: \"kubernetes.io/projected/7ab6397b-03e5-4635-a05e-1d80e2ebec76-kube-api-access-8gqk9\") pod \"coredns-7c65d6cfc9-7zn8g\" (UID: \"7ab6397b-03e5-4635-a05e-1d80e2ebec76\") " pod="kube-system/coredns-7c65d6cfc9-7zn8g" Sep 12 17:49:23.983815 kubelet[3146]: I0912 17:49:23.983797 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ab6397b-03e5-4635-a05e-1d80e2ebec76-config-volume\") pod \"coredns-7c65d6cfc9-7zn8g\" (UID: \"7ab6397b-03e5-4635-a05e-1d80e2ebec76\") " pod="kube-system/coredns-7c65d6cfc9-7zn8g" Sep 12 17:49:23.987768 systemd[1]: Created slice kubepods-burstable-pod7ab6397b_03e5_4635_a05e_1d80e2ebec76.slice - libcontainer container kubepods-burstable-pod7ab6397b_03e5_4635_a05e_1d80e2ebec76.slice. 
Sep 12 17:49:24.027921 kubelet[3146]: I0912 17:49:24.027884 3146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tn7f5" podStartSLOduration=19.233635479 podStartE2EDuration="26.027871631s" podCreationTimestamp="2025-09-12 17:48:58 +0000 UTC" firstStartedPulling="2025-09-12 17:48:58.858443873 +0000 UTC m=+7.071112857" lastFinishedPulling="2025-09-12 17:49:05.652680028 +0000 UTC m=+13.865349009" observedRunningTime="2025-09-12 17:49:24.024375742 +0000 UTC m=+32.237044728" watchObservedRunningTime="2025-09-12 17:49:24.027871631 +0000 UTC m=+32.240540682" Sep 12 17:49:24.040913 containerd[1729]: time="2025-09-12T17:49:24.040887736Z" level=info msg="CreateContainer within sandbox \"db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\"" Sep 12 17:49:24.042090 containerd[1729]: time="2025-09-12T17:49:24.042069477Z" level=info msg="StartContainer for \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\"" Sep 12 17:49:24.043823 containerd[1729]: time="2025-09-12T17:49:24.043796181Z" level=info msg="connecting to shim a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7" address="unix:///run/containerd/s/9a9417313ec2c0c5c7de471b997b8f78ed20422558dc53186568e10f7f6422f9" protocol=ttrpc version=3 Sep 12 17:49:24.061868 systemd[1]: Started cri-containerd-a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7.scope - libcontainer container a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7. Sep 12 17:49:24.126737 containerd[1729]: time="2025-09-12T17:49:24.126639012Z" level=info msg="StartContainer for \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" returns successfully" Sep 12 17:49:24.284195 containerd[1729]: time="2025-09-12T17:49:24.283468260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t6qts,Uid:9a2bdcab-c93e-44f3-bb9d-d8e4f37bbd1e,Namespace:kube-system,Attempt:0,}" Sep 12 17:49:24.295232 containerd[1729]: time="2025-09-12T17:49:24.295208768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7zn8g,Uid:7ab6397b-03e5-4635-a05e-1d80e2ebec76,Namespace:kube-system,Attempt:0,}" Sep 12 17:49:27.734333 systemd-networkd[1366]: cilium_host: Link UP Sep 12 17:49:27.734428 systemd-networkd[1366]: cilium_net: Link UP Sep 12 17:49:27.734528 systemd-networkd[1366]: cilium_net: Gained carrier Sep 12 17:49:27.734615 systemd-networkd[1366]: cilium_host: Gained carrier Sep 12 17:49:27.878242 systemd-networkd[1366]: cilium_vxlan: Link UP Sep 12 17:49:27.878248 systemd-networkd[1366]: cilium_vxlan: Gained carrier Sep 12 17:49:27.910828 systemd-networkd[1366]: cilium_host: Gained IPv6LL Sep 12 17:49:28.070750 kernel: NET: Registered PF_ALG protocol family Sep 12 17:49:28.287049 systemd-networkd[1366]: cilium_net: Gained IPv6LL Sep 12 17:49:28.594008 systemd-networkd[1366]: lxc_health: Link UP Sep 12 17:49:28.609849 systemd-networkd[1366]: lxc_health: Gained carrier Sep 12 17:49:28.767707 kubelet[3146]: I0912 17:49:28.767639 3146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2s92c" podStartSLOduration=6.137197854 podStartE2EDuration="30.767620338s" podCreationTimestamp="2025-09-12 17:48:58 +0000 UTC" firstStartedPulling="2025-09-12 17:48:59.013049145 +0000 UTC m=+7.225718123" lastFinishedPulling="2025-09-12 
17:49:23.643471628 +0000 UTC m=+31.856140607" observedRunningTime="2025-09-12 17:49:25.020885889 +0000 UTC m=+33.233554879" watchObservedRunningTime="2025-09-12 17:49:28.767620338 +0000 UTC m=+36.980289323" Sep 12 17:49:28.828756 kernel: eth0: renamed from tmpb9523 Sep 12 17:49:28.830379 systemd-networkd[1366]: lxc05d6787d0b4f: Link UP Sep 12 17:49:28.830576 systemd-networkd[1366]: lxc05d6787d0b4f: Gained carrier Sep 12 17:49:28.855652 systemd-networkd[1366]: lxc601679d0a467: Link UP Sep 12 17:49:28.857877 kernel: eth0: renamed from tmp483e7 Sep 12 17:49:28.858881 systemd-networkd[1366]: lxc601679d0a467: Gained carrier Sep 12 17:49:29.566844 systemd-networkd[1366]: cilium_vxlan: Gained IPv6LL Sep 12 17:49:29.950862 systemd-networkd[1366]: lxc_health: Gained IPv6LL Sep 12 17:49:30.718863 systemd-networkd[1366]: lxc601679d0a467: Gained IPv6LL Sep 12 17:49:30.782869 systemd-networkd[1366]: lxc05d6787d0b4f: Gained IPv6LL Sep 12 17:49:31.802961 containerd[1729]: time="2025-09-12T17:49:31.802868230Z" level=info msg="connecting to shim b9523d4595b97b811b29a04a7a0130b9af8dc099012b68dbf28931bc63fb5c26" address="unix:///run/containerd/s/c2860231a6a2aa84f7ef7ea0b2af0c61a9b3e244b09659cc8b48f9812ef7ab9d" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:31.820401 containerd[1729]: time="2025-09-12T17:49:31.820322068Z" level=info msg="connecting to shim 483e78baadf7843c137bc3ecbff5bbef7051b8b2d5677bc34634a337d4013813" address="unix:///run/containerd/s/cc2c07b9915582af118b92dd659beb6a686c58c3b49cafcd2921c26b13103b0c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:49:31.847245 systemd[1]: Started cri-containerd-b9523d4595b97b811b29a04a7a0130b9af8dc099012b68dbf28931bc63fb5c26.scope - libcontainer container b9523d4595b97b811b29a04a7a0130b9af8dc099012b68dbf28931bc63fb5c26. Sep 12 17:49:31.856678 systemd[1]: Started cri-containerd-483e78baadf7843c137bc3ecbff5bbef7051b8b2d5677bc34634a337d4013813.scope - libcontainer container 483e78baadf7843c137bc3ecbff5bbef7051b8b2d5677bc34634a337d4013813. 
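The cilium-tn7f5 pod_startup_latency_tracker entry earlier suggests that podStartSLOduration is the end-to-end duration minus the image-pull window; recomputing from the logged monotonic (m=+) offsets reproduces the value exactly. This is the editor's reading of the logged numbers, not a quoted definition.

```python
# Values copied from the cilium-tn7f5 pod_startup_latency_tracker entry above
# (monotonic m=+ offsets and durations, in seconds).
first_started_pulling = 7.071112857     # m=+7.071112857
last_finished_pulling = 13.865349009    # m=+13.865349009
e2e_duration = 26.027871631             # podStartE2EDuration="26.027871631s"

pull_window = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_window
print(f"{slo_duration:.9f}")   # 19.233635479 — the podStartSLOduration logged above
```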
Sep 12 17:49:31.908048 containerd[1729]: time="2025-09-12T17:49:31.908025085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t6qts,Uid:9a2bdcab-c93e-44f3-bb9d-d8e4f37bbd1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9523d4595b97b811b29a04a7a0130b9af8dc099012b68dbf28931bc63fb5c26\"" Sep 12 17:49:31.914200 containerd[1729]: time="2025-09-12T17:49:31.914071191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-7zn8g,Uid:7ab6397b-03e5-4635-a05e-1d80e2ebec76,Namespace:kube-system,Attempt:0,} returns sandbox id \"483e78baadf7843c137bc3ecbff5bbef7051b8b2d5677bc34634a337d4013813\"" Sep 12 17:49:31.917396 containerd[1729]: time="2025-09-12T17:49:31.915922589Z" level=info msg="CreateContainer within sandbox \"b9523d4595b97b811b29a04a7a0130b9af8dc099012b68dbf28931bc63fb5c26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:49:31.919822 containerd[1729]: time="2025-09-12T17:49:31.919801192Z" level=info msg="CreateContainer within sandbox \"483e78baadf7843c137bc3ecbff5bbef7051b8b2d5677bc34634a337d4013813\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:49:31.933954 containerd[1729]: time="2025-09-12T17:49:31.933916800Z" level=info msg="Container c019790ea584d5343a64ff4fa9394f210591d6b981bb2113437526042ff843e3: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:31.944376 containerd[1729]: time="2025-09-12T17:49:31.944357135Z" level=info msg="Container 9e86eb1cd0c012ba691bb9782f548de49b21edfb94456598297f2b39d3573c1b: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:49:31.950952 containerd[1729]: time="2025-09-12T17:49:31.950934815Z" level=info msg="CreateContainer within sandbox \"b9523d4595b97b811b29a04a7a0130b9af8dc099012b68dbf28931bc63fb5c26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c019790ea584d5343a64ff4fa9394f210591d6b981bb2113437526042ff843e3\"" Sep 12 17:49:31.951309 containerd[1729]: time="2025-09-12T17:49:31.951296412Z" level=info msg="StartContainer for \"c019790ea584d5343a64ff4fa9394f210591d6b981bb2113437526042ff843e3\"" Sep 12 17:49:31.952400 containerd[1729]: time="2025-09-12T17:49:31.952375846Z" level=info msg="connecting to shim c019790ea584d5343a64ff4fa9394f210591d6b981bb2113437526042ff843e3" address="unix:///run/containerd/s/c2860231a6a2aa84f7ef7ea0b2af0c61a9b3e244b09659cc8b48f9812ef7ab9d" protocol=ttrpc version=3 Sep 12 17:49:31.959111 containerd[1729]: time="2025-09-12T17:49:31.959086100Z" level=info msg="CreateContainer within sandbox \"483e78baadf7843c137bc3ecbff5bbef7051b8b2d5677bc34634a337d4013813\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e86eb1cd0c012ba691bb9782f548de49b21edfb94456598297f2b39d3573c1b\"" Sep 12 17:49:31.960150 containerd[1729]: time="2025-09-12T17:49:31.960052210Z" level=info msg="StartContainer for \"9e86eb1cd0c012ba691bb9782f548de49b21edfb94456598297f2b39d3573c1b\"" Sep 12 17:49:31.962821 containerd[1729]: time="2025-09-12T17:49:31.962793180Z" level=info msg="connecting to shim 9e86eb1cd0c012ba691bb9782f548de49b21edfb94456598297f2b39d3573c1b" address="unix:///run/containerd/s/cc2c07b9915582af118b92dd659beb6a686c58c3b49cafcd2921c26b13103b0c" protocol=ttrpc version=3 Sep 12 17:49:31.970017 systemd[1]: Started cri-containerd-c019790ea584d5343a64ff4fa9394f210591d6b981bb2113437526042ff843e3.scope - libcontainer container c019790ea584d5343a64ff4fa9394f210591d6b981bb2113437526042ff843e3. 
Sep 12 17:49:31.980969 systemd[1]: Started cri-containerd-9e86eb1cd0c012ba691bb9782f548de49b21edfb94456598297f2b39d3573c1b.scope - libcontainer container 9e86eb1cd0c012ba691bb9782f548de49b21edfb94456598297f2b39d3573c1b. Sep 12 17:49:32.018406 containerd[1729]: time="2025-09-12T17:49:32.018232533Z" level=info msg="StartContainer for \"c019790ea584d5343a64ff4fa9394f210591d6b981bb2113437526042ff843e3\" returns successfully" Sep 12 17:49:32.029192 containerd[1729]: time="2025-09-12T17:49:32.028891528Z" level=info msg="StartContainer for \"9e86eb1cd0c012ba691bb9782f548de49b21edfb94456598297f2b39d3573c1b\" returns successfully" Sep 12 17:49:32.055479 kubelet[3146]: I0912 17:49:32.054776 3146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-t6qts" podStartSLOduration=34.0547611 podStartE2EDuration="34.0547611s" podCreationTimestamp="2025-09-12 17:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:49:32.052888646 +0000 UTC m=+40.265557631" watchObservedRunningTime="2025-09-12 17:49:32.0547611 +0000 UTC m=+40.267430090" Sep 12 17:49:33.051091 kubelet[3146]: I0912 17:49:33.051041 3146 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-7zn8g" podStartSLOduration=35.051024249 podStartE2EDuration="35.051024249s" podCreationTimestamp="2025-09-12 17:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:49:33.050701353 +0000 UTC m=+41.263370338" watchObservedRunningTime="2025-09-12 17:49:33.051024249 +0000 UTC m=+41.263693234" Sep 12 17:50:29.165773 systemd[1]: Started sshd@7-10.200.8.41:22-10.200.16.10:54704.service - OpenSSH per-connection server daemon (10.200.16.10:54704). Sep 12 17:50:29.788170 sshd[4462]: Accepted publickey for core from 10.200.16.10 port 54704 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:29.789272 sshd-session[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:29.793313 systemd-logind[1699]: New session 10 of user core. Sep 12 17:50:29.799867 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:50:30.296566 sshd[4465]: Connection closed by 10.200.16.10 port 54704 Sep 12 17:50:30.297045 sshd-session[4462]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:30.299659 systemd[1]: sshd@7-10.200.8.41:22-10.200.16.10:54704.service: Deactivated successfully. Sep 12 17:50:30.302583 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:50:30.303739 systemd-logind[1699]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:50:30.304542 systemd-logind[1699]: Removed session 10. Sep 12 17:50:35.408070 systemd[1]: Started sshd@8-10.200.8.41:22-10.200.16.10:33712.service - OpenSSH per-connection server daemon (10.200.16.10:33712). Sep 12 17:50:36.031584 sshd[4480]: Accepted publickey for core from 10.200.16.10 port 33712 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:36.032763 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:36.037106 systemd-logind[1699]: New session 11 of user core. Sep 12 17:50:36.045881 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 12 17:50:36.521978 sshd[4483]: Connection closed by 10.200.16.10 port 33712 Sep 12 17:50:36.522883 sshd-session[4480]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:36.526117 systemd-logind[1699]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:50:36.526257 systemd[1]: sshd@8-10.200.8.41:22-10.200.16.10:33712.service: Deactivated successfully. Sep 12 17:50:36.528388 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:50:36.529926 systemd-logind[1699]: Removed session 11. Sep 12 17:50:41.646862 systemd[1]: Started sshd@9-10.200.8.41:22-10.200.16.10:34784.service - OpenSSH per-connection server daemon (10.200.16.10:34784). Sep 12 17:50:42.280358 sshd[4496]: Accepted publickey for core from 10.200.16.10 port 34784 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:42.281399 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:42.284942 systemd-logind[1699]: New session 12 of user core. Sep 12 17:50:42.289855 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:50:42.768797 sshd[4499]: Connection closed by 10.200.16.10 port 34784 Sep 12 17:50:42.769274 sshd-session[4496]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:42.771927 systemd[1]: sshd@9-10.200.8.41:22-10.200.16.10:34784.service: Deactivated successfully. Sep 12 17:50:42.773525 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:50:42.775104 systemd-logind[1699]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:50:42.776372 systemd-logind[1699]: Removed session 12. Sep 12 17:50:47.883948 systemd[1]: Started sshd@10-10.200.8.41:22-10.200.16.10:34796.service - OpenSSH per-connection server daemon (10.200.16.10:34796). Sep 12 17:50:48.504442 sshd[4512]: Accepted publickey for core from 10.200.16.10 port 34796 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:48.505483 sshd-session[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:48.509887 systemd-logind[1699]: New session 13 of user core. Sep 12 17:50:48.514893 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:50:48.994326 sshd[4515]: Connection closed by 10.200.16.10 port 34796 Sep 12 17:50:48.994764 sshd-session[4512]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:48.997886 systemd-logind[1699]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:50:48.998377 systemd[1]: sshd@10-10.200.8.41:22-10.200.16.10:34796.service: Deactivated successfully. Sep 12 17:50:48.999990 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:50:49.001374 systemd-logind[1699]: Removed session 13. Sep 12 17:50:49.105357 systemd[1]: Started sshd@11-10.200.8.41:22-10.200.16.10:34808.service - OpenSSH per-connection server daemon (10.200.16.10:34808). Sep 12 17:50:49.730785 sshd[4528]: Accepted publickey for core from 10.200.16.10 port 34808 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:49.731835 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:49.735922 systemd-logind[1699]: New session 14 of user core. Sep 12 17:50:49.744836 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 17:50:50.249199 sshd[4531]: Connection closed by 10.200.16.10 port 34808 Sep 12 17:50:50.249673 sshd-session[4528]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:50.252452 systemd[1]: sshd@11-10.200.8.41:22-10.200.16.10:34808.service: Deactivated successfully. Sep 12 17:50:50.254335 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:50:50.255601 systemd-logind[1699]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:50:50.256661 systemd-logind[1699]: Removed session 14. Sep 12 17:50:50.359491 systemd[1]: Started sshd@12-10.200.8.41:22-10.200.16.10:37660.service - OpenSSH per-connection server daemon (10.200.16.10:37660). Sep 12 17:50:50.983582 sshd[4540]: Accepted publickey for core from 10.200.16.10 port 37660 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:50.984668 sshd-session[4540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:50.988771 systemd-logind[1699]: New session 15 of user core. Sep 12 17:50:50.992880 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:50:51.475857 sshd[4543]: Connection closed by 10.200.16.10 port 37660 Sep 12 17:50:51.476360 sshd-session[4540]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:51.479413 systemd[1]: sshd@12-10.200.8.41:22-10.200.16.10:37660.service: Deactivated successfully. Sep 12 17:50:51.481178 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:50:51.482262 systemd-logind[1699]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:50:51.483362 systemd-logind[1699]: Removed session 15. Sep 12 17:50:56.587676 systemd[1]: Started sshd@13-10.200.8.41:22-10.200.16.10:37664.service - OpenSSH per-connection server daemon (10.200.16.10:37664). Sep 12 17:50:57.212400 sshd[4557]: Accepted publickey for core from 10.200.16.10 port 37664 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:57.213504 sshd-session[4557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:57.217752 systemd-logind[1699]: New session 16 of user core. Sep 12 17:50:57.221861 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:50:57.699430 sshd[4560]: Connection closed by 10.200.16.10 port 37664 Sep 12 17:50:57.699922 sshd-session[4557]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:57.702973 systemd[1]: sshd@13-10.200.8.41:22-10.200.16.10:37664.service: Deactivated successfully. Sep 12 17:50:57.704685 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:50:57.705868 systemd-logind[1699]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:50:57.706933 systemd-logind[1699]: Removed session 16. Sep 12 17:50:57.813444 systemd[1]: Started sshd@14-10.200.8.41:22-10.200.16.10:37672.service - OpenSSH per-connection server daemon (10.200.16.10:37672). Sep 12 17:50:58.436980 sshd[4572]: Accepted publickey for core from 10.200.16.10 port 37672 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:58.437953 sshd-session[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:58.441959 systemd-logind[1699]: New session 17 of user core. Sep 12 17:50:58.445860 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 12 17:50:58.974990 sshd[4575]: Connection closed by 10.200.16.10 port 37672 Sep 12 17:50:58.975426 sshd-session[4572]: pam_unix(sshd:session): session closed for user core Sep 12 17:50:58.978061 systemd[1]: sshd@14-10.200.8.41:22-10.200.16.10:37672.service: Deactivated successfully. Sep 12 17:50:58.979665 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:50:58.980929 systemd-logind[1699]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:50:58.982083 systemd-logind[1699]: Removed session 17. Sep 12 17:50:59.085483 systemd[1]: Started sshd@15-10.200.8.41:22-10.200.16.10:37684.service - OpenSSH per-connection server daemon (10.200.16.10:37684). Sep 12 17:50:59.711306 sshd[4585]: Accepted publickey for core from 10.200.16.10 port 37684 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:50:59.712338 sshd-session[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:50:59.716377 systemd-logind[1699]: New session 18 of user core. Sep 12 17:50:59.720861 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:51:01.182672 sshd[4590]: Connection closed by 10.200.16.10 port 37684 Sep 12 17:51:01.183279 sshd-session[4585]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:01.187409 systemd[1]: sshd@15-10.200.8.41:22-10.200.16.10:37684.service: Deactivated successfully. Sep 12 17:51:01.189246 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:51:01.189965 systemd-logind[1699]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:51:01.191520 systemd-logind[1699]: Removed session 18. Sep 12 17:51:01.292587 systemd[1]: Started sshd@16-10.200.8.41:22-10.200.16.10:35420.service - OpenSSH per-connection server daemon (10.200.16.10:35420). Sep 12 17:51:01.915222 sshd[4607]: Accepted publickey for core from 10.200.16.10 port 35420 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:51:01.916221 sshd-session[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:01.920371 systemd-logind[1699]: New session 19 of user core. Sep 12 17:51:01.924858 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:51:02.473631 sshd[4610]: Connection closed by 10.200.16.10 port 35420 Sep 12 17:51:02.474939 sshd-session[4607]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:02.477286 systemd[1]: sshd@16-10.200.8.41:22-10.200.16.10:35420.service: Deactivated successfully. Sep 12 17:51:02.479025 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:51:02.480702 systemd-logind[1699]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:51:02.481556 systemd-logind[1699]: Removed session 19. Sep 12 17:51:02.583432 systemd[1]: Started sshd@17-10.200.8.41:22-10.200.16.10:35430.service - OpenSSH per-connection server daemon (10.200.16.10:35430). Sep 12 17:51:03.206702 sshd[4619]: Accepted publickey for core from 10.200.16.10 port 35430 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:51:03.207835 sshd-session[4619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:03.211939 systemd-logind[1699]: New session 20 of user core. Sep 12 17:51:03.217868 systemd[1]: Started session-20.scope - Session 20 of User core. 
Sep 12 17:51:03.692648 sshd[4622]: Connection closed by 10.200.16.10 port 35430 Sep 12 17:51:03.693126 sshd-session[4619]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:03.696161 systemd[1]: sshd@17-10.200.8.41:22-10.200.16.10:35430.service: Deactivated successfully. Sep 12 17:51:03.697814 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:51:03.699111 systemd-logind[1699]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:51:03.700263 systemd-logind[1699]: Removed session 20. Sep 12 17:51:08.803624 systemd[1]: Started sshd@18-10.200.8.41:22-10.200.16.10:35444.service - OpenSSH per-connection server daemon (10.200.16.10:35444). Sep 12 17:51:09.426073 sshd[4637]: Accepted publickey for core from 10.200.16.10 port 35444 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:51:09.427110 sshd-session[4637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:09.430582 systemd-logind[1699]: New session 21 of user core. Sep 12 17:51:09.438860 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:51:09.911144 sshd[4640]: Connection closed by 10.200.16.10 port 35444 Sep 12 17:51:09.911612 sshd-session[4637]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:09.914286 systemd[1]: sshd@18-10.200.8.41:22-10.200.16.10:35444.service: Deactivated successfully. Sep 12 17:51:09.916200 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:51:09.917057 systemd-logind[1699]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:51:09.918440 systemd-logind[1699]: Removed session 21. Sep 12 17:51:15.022636 systemd[1]: Started sshd@19-10.200.8.41:22-10.200.16.10:50028.service - OpenSSH per-connection server daemon (10.200.16.10:50028). Sep 12 17:51:15.648589 sshd[4652]: Accepted publickey for core from 10.200.16.10 port 50028 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:51:15.649937 sshd-session[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:15.654771 systemd-logind[1699]: New session 22 of user core. Sep 12 17:51:15.664844 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:51:16.138353 sshd[4655]: Connection closed by 10.200.16.10 port 50028 Sep 12 17:51:16.139053 sshd-session[4652]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:16.142320 systemd[1]: sshd@19-10.200.8.41:22-10.200.16.10:50028.service: Deactivated successfully. Sep 12 17:51:16.144269 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:51:16.145133 systemd-logind[1699]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:51:16.146319 systemd-logind[1699]: Removed session 22. Sep 12 17:51:21.255640 systemd[1]: Started sshd@20-10.200.8.41:22-10.200.16.10:47798.service - OpenSSH per-connection server daemon (10.200.16.10:47798). Sep 12 17:51:21.886073 sshd[4667]: Accepted publickey for core from 10.200.16.10 port 47798 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:51:21.887068 sshd-session[4667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:21.891096 systemd-logind[1699]: New session 23 of user core. Sep 12 17:51:21.897859 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 12 17:51:22.369717 sshd[4670]: Connection closed by 10.200.16.10 port 47798 Sep 12 17:51:22.370247 sshd-session[4667]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:22.373207 systemd[1]: sshd@20-10.200.8.41:22-10.200.16.10:47798.service: Deactivated successfully. Sep 12 17:51:22.374802 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:51:22.375510 systemd-logind[1699]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:51:22.376634 systemd-logind[1699]: Removed session 23. Sep 12 17:51:22.479356 systemd[1]: Started sshd@21-10.200.8.41:22-10.200.16.10:47804.service - OpenSSH per-connection server daemon (10.200.16.10:47804). Sep 12 17:51:23.112699 sshd[4682]: Accepted publickey for core from 10.200.16.10 port 47804 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:51:23.113755 sshd-session[4682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:23.117588 systemd-logind[1699]: New session 24 of user core. Sep 12 17:51:23.120847 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:51:24.743074 containerd[1729]: time="2025-09-12T17:51:24.742874154Z" level=info msg="StopContainer for \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" with timeout 30 (s)" Sep 12 17:51:24.746109 containerd[1729]: time="2025-09-12T17:51:24.746024382Z" level=info msg="Stop container \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" with signal terminated" Sep 12 17:51:24.756939 systemd[1]: cri-containerd-a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7.scope: Deactivated successfully. Sep 12 17:51:24.758773 containerd[1729]: time="2025-09-12T17:51:24.758413617Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:51:24.759907 containerd[1729]: time="2025-09-12T17:51:24.759754835Z" level=info msg="received exit event container_id:\"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" id:\"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" pid:3852 exited_at:{seconds:1757699484 nanos:759480447}" Sep 12 17:51:24.760171 containerd[1729]: time="2025-09-12T17:51:24.760057362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" id:\"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" pid:3852 exited_at:{seconds:1757699484 nanos:759480447}" Sep 12 17:51:24.764774 containerd[1729]: time="2025-09-12T17:51:24.764751446Z" level=info msg="TaskExit event in podsandbox handler container_id:\"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" id:\"9ef35fd3c33b0ff4326ba126b187f6f01a5d18def1c4fbb8b00315c21bb899ba\" pid:4704 exited_at:{seconds:1757699484 nanos:764507918}" Sep 12 17:51:24.767964 containerd[1729]: time="2025-09-12T17:51:24.767935057Z" level=info msg="StopContainer for \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" with timeout 2 (s)" Sep 12 17:51:24.768222 containerd[1729]: time="2025-09-12T17:51:24.768199165Z" level=info msg="Stop container \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" with signal terminated" Sep 12 17:51:24.777884 systemd-networkd[1366]: lxc_health: Link DOWN Sep 12 17:51:24.777889 systemd-networkd[1366]: lxc_health: 
Lost carrier Sep 12 17:51:24.791260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7-rootfs.mount: Deactivated successfully. Sep 12 17:51:24.794903 containerd[1729]: time="2025-09-12T17:51:24.794870258Z" level=info msg="received exit event container_id:\"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" id:\"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" pid:3744 exited_at:{seconds:1757699484 nanos:794683358}" Sep 12 17:51:24.795166 containerd[1729]: time="2025-09-12T17:51:24.794929955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" id:\"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" pid:3744 exited_at:{seconds:1757699484 nanos:794683358}" Sep 12 17:51:24.795221 systemd[1]: cri-containerd-398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd.scope: Deactivated successfully. Sep 12 17:51:24.795963 systemd[1]: cri-containerd-398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd.scope: Consumed 5.356s CPU time, 129.7M memory peak, 144K read from disk, 13.3M written to disk. Sep 12 17:51:24.812798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd-rootfs.mount: Deactivated successfully. Sep 12 17:51:24.885696 containerd[1729]: time="2025-09-12T17:51:24.885634777Z" level=info msg="StopContainer for \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" returns successfully" Sep 12 17:51:24.886982 containerd[1729]: time="2025-09-12T17:51:24.886920843Z" level=info msg="StopContainer for \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" returns successfully" Sep 12 17:51:24.887838 containerd[1729]: time="2025-09-12T17:51:24.887718188Z" level=info msg="StopPodSandbox for \"db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f\"" Sep 12 17:51:24.888007 containerd[1729]: time="2025-09-12T17:51:24.887818916Z" level=info msg="Container to stop \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:51:24.889447 containerd[1729]: time="2025-09-12T17:51:24.889136725Z" level=info msg="StopPodSandbox for \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\"" Sep 12 17:51:24.889447 containerd[1729]: time="2025-09-12T17:51:24.889201367Z" level=info msg="Container to stop \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:51:24.889447 containerd[1729]: time="2025-09-12T17:51:24.889213669Z" level=info msg="Container to stop \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:51:24.889447 containerd[1729]: time="2025-09-12T17:51:24.889225847Z" level=info msg="Container to stop \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:51:24.889447 containerd[1729]: time="2025-09-12T17:51:24.889235515Z" level=info msg="Container to stop \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:51:24.889447 containerd[1729]: 
time="2025-09-12T17:51:24.889244123Z" level=info msg="Container to stop \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:51:24.895756 systemd[1]: cri-containerd-60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989.scope: Deactivated successfully. Sep 12 17:51:24.897896 containerd[1729]: time="2025-09-12T17:51:24.897871140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" id:\"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" pid:3287 exit_status:137 exited_at:{seconds:1757699484 nanos:897541782}" Sep 12 17:51:24.898156 systemd[1]: cri-containerd-db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f.scope: Deactivated successfully. Sep 12 17:51:24.925681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f-rootfs.mount: Deactivated successfully. Sep 12 17:51:24.925913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989-rootfs.mount: Deactivated successfully. Sep 12 17:51:24.947392 containerd[1729]: time="2025-09-12T17:51:24.947355873Z" level=info msg="shim disconnected" id=60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989 namespace=k8s.io Sep 12 17:51:24.947392 containerd[1729]: time="2025-09-12T17:51:24.947377337Z" level=warning msg="cleaning up after shim disconnected" id=60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989 namespace=k8s.io Sep 12 17:51:24.947541 containerd[1729]: time="2025-09-12T17:51:24.947385106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:51:24.950486 containerd[1729]: time="2025-09-12T17:51:24.950338443Z" level=info msg="shim disconnected" id=db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f namespace=k8s.io Sep 12 17:51:24.950486 containerd[1729]: time="2025-09-12T17:51:24.950362672Z" level=warning msg="cleaning up after shim disconnected" id=db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f namespace=k8s.io Sep 12 17:51:24.950486 containerd[1729]: time="2025-09-12T17:51:24.950371738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:51:24.961902 containerd[1729]: time="2025-09-12T17:51:24.961866843Z" level=info msg="received exit event sandbox_id:\"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" exit_status:137 exited_at:{seconds:1757699484 nanos:897541782}" Sep 12 17:51:24.964146 containerd[1729]: time="2025-09-12T17:51:24.964049771Z" level=info msg="received exit event sandbox_id:\"db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f\" exit_status:137 exited_at:{seconds:1757699484 nanos:902570788}" Sep 12 17:51:24.967106 containerd[1729]: time="2025-09-12T17:51:24.967066179Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f\" id:\"db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f\" pid:3358 exit_status:137 exited_at:{seconds:1757699484 nanos:902570788}" Sep 12 17:51:24.967874 containerd[1729]: time="2025-09-12T17:51:24.967849046Z" level=info msg="TearDown network for sandbox \"db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f\" successfully" Sep 12 17:51:24.967874 containerd[1729]: time="2025-09-12T17:51:24.967873799Z" level=info msg="StopPodSandbox 
for \"db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f\" returns successfully" Sep 12 17:51:24.967998 containerd[1729]: time="2025-09-12T17:51:24.967983201Z" level=info msg="TearDown network for sandbox \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" successfully" Sep 12 17:51:24.968025 containerd[1729]: time="2025-09-12T17:51:24.967999323Z" level=info msg="StopPodSandbox for \"60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989\" returns successfully" Sep 12 17:51:24.969313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db318b8164083c8ffee4302a8c3ef90654e46384d962cf92bea7dbb5061bd11f-shm.mount: Deactivated successfully. Sep 12 17:51:25.009270 kubelet[3146]: I0912 17:51:25.008881 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cni-path\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.009270 kubelet[3146]: I0912 17:51:25.008909 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-etc-cni-netd\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.009270 kubelet[3146]: I0912 17:51:25.008922 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-xtables-lock\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.009270 kubelet[3146]: I0912 17:51:25.008936 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-cgroup\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.009270 kubelet[3146]: I0912 17:51:25.008950 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-hostproc\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.009270 kubelet[3146]: I0912 17:51:25.008964 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-bpf-maps\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.009639 kubelet[3146]: I0912 17:51:25.008964 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:51:25.009639 kubelet[3146]: I0912 17:51:25.008986 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pzb2\" (UniqueName: \"kubernetes.io/projected/3f9bf460-0b2f-4d12-8ee3-f34536ef51d3-kube-api-access-7pzb2\") pod \"3f9bf460-0b2f-4d12-8ee3-f34536ef51d3\" (UID: \"3f9bf460-0b2f-4d12-8ee3-f34536ef51d3\") " Sep 12 17:51:25.009639 kubelet[3146]: I0912 17:51:25.009000 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-host-proc-sys-net\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.009639 kubelet[3146]: I0912 17:51:25.009017 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-host-proc-sys-kernel\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.009639 kubelet[3146]: I0912 17:51:25.009034 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw4zz\" (UniqueName: \"kubernetes.io/projected/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-kube-api-access-gw4zz\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.009639 kubelet[3146]: I0912 17:51:25.009047 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-run\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.011888 kubelet[3146]: I0912 17:51:25.009065 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-hubble-tls\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.011888 kubelet[3146]: I0912 17:51:25.009083 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-clustermesh-secrets\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.011888 kubelet[3146]: I0912 17:51:25.009099 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-lib-modules\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.011888 kubelet[3146]: I0912 17:51:25.009116 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-config-path\") pod \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\" (UID: \"19d72a5b-a8f4-4176-9cce-f1ec5ded664b\") " Sep 12 17:51:25.011888 kubelet[3146]: I0912 17:51:25.009135 3146 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f9bf460-0b2f-4d12-8ee3-f34536ef51d3-cilium-config-path\") pod 
\"3f9bf460-0b2f-4d12-8ee3-f34536ef51d3\" (UID: \"3f9bf460-0b2f-4d12-8ee3-f34536ef51d3\") " Sep 12 17:51:25.011888 kubelet[3146]: I0912 17:51:25.009163 3146 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-etc-cni-netd\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.012048 kubelet[3146]: I0912 17:51:25.009001 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cni-path" (OuterVolumeSpecName: "cni-path") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:51:25.012048 kubelet[3146]: I0912 17:51:25.009011 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-hostproc" (OuterVolumeSpecName: "hostproc") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:51:25.012048 kubelet[3146]: I0912 17:51:25.009020 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:51:25.012048 kubelet[3146]: I0912 17:51:25.009030 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:51:25.012048 kubelet[3146]: I0912 17:51:25.009041 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:51:25.012175 kubelet[3146]: I0912 17:51:25.009051 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:51:25.012175 kubelet[3146]: I0912 17:51:25.011413 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f9bf460-0b2f-4d12-8ee3-f34536ef51d3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3f9bf460-0b2f-4d12-8ee3-f34536ef51d3" (UID: "3f9bf460-0b2f-4d12-8ee3-f34536ef51d3"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:51:25.012175 kubelet[3146]: I0912 17:51:25.011496 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:51:25.013420 kubelet[3146]: I0912 17:51:25.013156 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:51:25.013420 kubelet[3146]: I0912 17:51:25.013230 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 12 17:51:25.016075 kubelet[3146]: I0912 17:51:25.016048 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9bf460-0b2f-4d12-8ee3-f34536ef51d3-kube-api-access-7pzb2" (OuterVolumeSpecName: "kube-api-access-7pzb2") pod "3f9bf460-0b2f-4d12-8ee3-f34536ef51d3" (UID: "3f9bf460-0b2f-4d12-8ee3-f34536ef51d3"). InnerVolumeSpecName "kube-api-access-7pzb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:51:25.017453 kubelet[3146]: I0912 17:51:25.017429 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-kube-api-access-gw4zz" (OuterVolumeSpecName: "kube-api-access-gw4zz") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "kube-api-access-gw4zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:51:25.017584 kubelet[3146]: I0912 17:51:25.017570 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:51:25.017660 kubelet[3146]: I0912 17:51:25.017630 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:51:25.018003 kubelet[3146]: I0912 17:51:25.017984 3146 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "19d72a5b-a8f4-4176-9cce-f1ec5ded664b" (UID: "19d72a5b-a8f4-4176-9cce-f1ec5ded664b"). 
InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:51:25.109825 kubelet[3146]: I0912 17:51:25.109798 3146 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-host-proc-sys-kernel\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.109825 kubelet[3146]: I0912 17:51:25.109818 3146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw4zz\" (UniqueName: \"kubernetes.io/projected/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-kube-api-access-gw4zz\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.109825 kubelet[3146]: I0912 17:51:25.109828 3146 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-run\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.109967 kubelet[3146]: I0912 17:51:25.109837 3146 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-hubble-tls\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.109967 kubelet[3146]: I0912 17:51:25.109846 3146 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-clustermesh-secrets\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.109967 kubelet[3146]: I0912 17:51:25.109854 3146 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-lib-modules\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.109967 kubelet[3146]: I0912 17:51:25.109863 3146 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-config-path\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.109967 kubelet[3146]: I0912 17:51:25.109871 3146 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f9bf460-0b2f-4d12-8ee3-f34536ef51d3-cilium-config-path\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.109967 kubelet[3146]: I0912 17:51:25.109881 3146 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cni-path\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.109967 kubelet[3146]: I0912 17:51:25.109890 3146 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-hostproc\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.109967 kubelet[3146]: I0912 17:51:25.109899 3146 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-xtables-lock\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.110094 kubelet[3146]: I0912 17:51:25.109910 3146 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-cilium-cgroup\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.110094 
kubelet[3146]: I0912 17:51:25.109920 3146 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-host-proc-sys-net\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.110094 kubelet[3146]: I0912 17:51:25.109928 3146 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19d72a5b-a8f4-4176-9cce-f1ec5ded664b-bpf-maps\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.110094 kubelet[3146]: I0912 17:51:25.109938 3146 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pzb2\" (UniqueName: \"kubernetes.io/projected/3f9bf460-0b2f-4d12-8ee3-f34536ef51d3-kube-api-access-7pzb2\") on node \"ci-4426.1.0-a-9facd647c1\" DevicePath \"\"" Sep 12 17:51:25.227657 kubelet[3146]: I0912 17:51:25.227546 3146 scope.go:117] "RemoveContainer" containerID="398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd" Sep 12 17:51:25.234059 systemd[1]: Removed slice kubepods-burstable-pod19d72a5b_a8f4_4176_9cce_f1ec5ded664b.slice - libcontainer container kubepods-burstable-pod19d72a5b_a8f4_4176_9cce_f1ec5ded664b.slice. Sep 12 17:51:25.234274 systemd[1]: kubepods-burstable-pod19d72a5b_a8f4_4176_9cce_f1ec5ded664b.slice: Consumed 5.424s CPU time, 130.2M memory peak, 144K read from disk, 13.3M written to disk. Sep 12 17:51:25.234681 containerd[1729]: time="2025-09-12T17:51:25.234648029Z" level=info msg="RemoveContainer for \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\"" Sep 12 17:51:25.238670 systemd[1]: Removed slice kubepods-besteffort-pod3f9bf460_0b2f_4d12_8ee3_f34536ef51d3.slice - libcontainer container kubepods-besteffort-pod3f9bf460_0b2f_4d12_8ee3_f34536ef51d3.slice. 
Sep 12 17:51:25.247365 containerd[1729]: time="2025-09-12T17:51:25.247327795Z" level=info msg="RemoveContainer for \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" returns successfully" Sep 12 17:51:25.247579 kubelet[3146]: I0912 17:51:25.247562 3146 scope.go:117] "RemoveContainer" containerID="151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251" Sep 12 17:51:25.249758 containerd[1729]: time="2025-09-12T17:51:25.249114149Z" level=info msg="RemoveContainer for \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\"" Sep 12 17:51:25.257567 containerd[1729]: time="2025-09-12T17:51:25.257541843Z" level=info msg="RemoveContainer for \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\" returns successfully" Sep 12 17:51:25.257756 kubelet[3146]: I0912 17:51:25.257704 3146 scope.go:117] "RemoveContainer" containerID="32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d" Sep 12 17:51:25.259452 containerd[1729]: time="2025-09-12T17:51:25.259390770Z" level=info msg="RemoveContainer for \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\"" Sep 12 17:51:25.266853 containerd[1729]: time="2025-09-12T17:51:25.266824327Z" level=info msg="RemoveContainer for \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\" returns successfully" Sep 12 17:51:25.266991 kubelet[3146]: I0912 17:51:25.266975 3146 scope.go:117] "RemoveContainer" containerID="db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a" Sep 12 17:51:25.268041 containerd[1729]: time="2025-09-12T17:51:25.268018747Z" level=info msg="RemoveContainer for \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\"" Sep 12 17:51:25.274502 containerd[1729]: time="2025-09-12T17:51:25.274465412Z" level=info msg="RemoveContainer for \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\" returns successfully" Sep 12 17:51:25.274639 kubelet[3146]: I0912 17:51:25.274623 3146 scope.go:117] "RemoveContainer" containerID="ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04" Sep 12 17:51:25.275743 containerd[1729]: time="2025-09-12T17:51:25.275714042Z" level=info msg="RemoveContainer for \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\"" Sep 12 17:51:25.281765 containerd[1729]: time="2025-09-12T17:51:25.281717326Z" level=info msg="RemoveContainer for \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\" returns successfully" Sep 12 17:51:25.281910 kubelet[3146]: I0912 17:51:25.281890 3146 scope.go:117] "RemoveContainer" containerID="398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd" Sep 12 17:51:25.282070 containerd[1729]: time="2025-09-12T17:51:25.282026824Z" level=error msg="ContainerStatus for \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\": not found" Sep 12 17:51:25.282171 kubelet[3146]: E0912 17:51:25.282153 3146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\": not found" containerID="398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd" Sep 12 17:51:25.282245 kubelet[3146]: I0912 17:51:25.282177 3146 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd"} err="failed to get container status \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"398f4afa291de6b1069a7c930ada7e67cec47767096d0c7dd75ab3f7e59045fd\": not found" Sep 12 17:51:25.282274 kubelet[3146]: I0912 17:51:25.282246 3146 scope.go:117] "RemoveContainer" containerID="151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251" Sep 12 17:51:25.282463 containerd[1729]: time="2025-09-12T17:51:25.282432509Z" level=error msg="ContainerStatus for \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\": not found" Sep 12 17:51:25.282531 kubelet[3146]: E0912 17:51:25.282512 3146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\": not found" containerID="151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251" Sep 12 17:51:25.282569 kubelet[3146]: I0912 17:51:25.282535 3146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251"} err="failed to get container status \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\": rpc error: code = NotFound desc = an error occurred when try to find container \"151864c843c3db670ffd71c16bd4622f7a9c77747f8317d16c13361e33fef251\": not found" Sep 12 17:51:25.282569 kubelet[3146]: I0912 17:51:25.282550 3146 scope.go:117] "RemoveContainer" containerID="32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d" Sep 12 17:51:25.282691 containerd[1729]: time="2025-09-12T17:51:25.282669432Z" level=error msg="ContainerStatus for \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\": not found" Sep 12 17:51:25.282809 kubelet[3146]: E0912 17:51:25.282792 3146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\": not found" containerID="32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d" Sep 12 17:51:25.282854 kubelet[3146]: I0912 17:51:25.282811 3146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d"} err="failed to get container status \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\": rpc error: code = NotFound desc = an error occurred when try to find container \"32b86742f8c5bbb46b75ab3b904a7e1ceed47b1eedd0186a9b9164d3c329c33d\": not found" Sep 12 17:51:25.282854 kubelet[3146]: I0912 17:51:25.282835 3146 scope.go:117] "RemoveContainer" containerID="db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a" Sep 12 17:51:25.282998 containerd[1729]: time="2025-09-12T17:51:25.282958348Z" level=error msg="ContainerStatus for \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\": not found" Sep 12 17:51:25.283127 kubelet[3146]: E0912 17:51:25.283097 3146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\": not found" containerID="db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a" Sep 12 17:51:25.283165 kubelet[3146]: I0912 17:51:25.283128 3146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a"} err="failed to get container status \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\": rpc error: code = NotFound desc = an error occurred when try to find container \"db4a6ceedd601b8eee01726c7c477522a4f71297e3b71f3e0801f037fc30e74a\": not found" Sep 12 17:51:25.283165 kubelet[3146]: I0912 17:51:25.283141 3146 scope.go:117] "RemoveContainer" containerID="ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04" Sep 12 17:51:25.283299 containerd[1729]: time="2025-09-12T17:51:25.283277201Z" level=error msg="ContainerStatus for \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\": not found" Sep 12 17:51:25.283379 kubelet[3146]: E0912 17:51:25.283365 3146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\": not found" containerID="ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04" Sep 12 17:51:25.283411 kubelet[3146]: I0912 17:51:25.283383 3146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04"} err="failed to get container status \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca9fe27b24ded9658e6eaa3ebc48c446f0948976a38a760a4f57d6d2cd106e04\": not found" Sep 12 17:51:25.283411 kubelet[3146]: I0912 17:51:25.283397 3146 scope.go:117] "RemoveContainer" containerID="a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7" Sep 12 17:51:25.285337 containerd[1729]: time="2025-09-12T17:51:25.285309482Z" level=info msg="RemoveContainer for \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\"" Sep 12 17:51:25.292447 containerd[1729]: time="2025-09-12T17:51:25.292409958Z" level=info msg="RemoveContainer for \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" returns successfully" Sep 12 17:51:25.292584 kubelet[3146]: I0912 17:51:25.292561 3146 scope.go:117] "RemoveContainer" containerID="a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7" Sep 12 17:51:25.292765 containerd[1729]: time="2025-09-12T17:51:25.292718831Z" level=error msg="ContainerStatus for \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\": not found" Sep 12 17:51:25.292918 
kubelet[3146]: E0912 17:51:25.292888 3146 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\": not found" containerID="a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7" Sep 12 17:51:25.292954 kubelet[3146]: I0912 17:51:25.292922 3146 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7"} err="failed to get container status \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\": rpc error: code = NotFound desc = an error occurred when try to find container \"a26e64224435ed5efee6d457fa179b8f2b846b840a7466d1ed71372892208ba7\": not found" Sep 12 17:51:25.791155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60b31d8b103a9701bb96376cae71e703b32d8447ce6afc1a6ae5c9e90629c989-shm.mount: Deactivated successfully. Sep 12 17:51:25.791258 systemd[1]: var-lib-kubelet-pods-3f9bf460\x2d0b2f\x2d4d12\x2d8ee3\x2df34536ef51d3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7pzb2.mount: Deactivated successfully. Sep 12 17:51:25.791321 systemd[1]: var-lib-kubelet-pods-19d72a5b\x2da8f4\x2d4176\x2d9cce\x2df1ec5ded664b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgw4zz.mount: Deactivated successfully. Sep 12 17:51:25.791375 systemd[1]: var-lib-kubelet-pods-19d72a5b\x2da8f4\x2d4176\x2d9cce\x2df1ec5ded664b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:51:25.791426 systemd[1]: var-lib-kubelet-pods-19d72a5b\x2da8f4\x2d4176\x2d9cce\x2df1ec5ded664b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:51:25.877549 kubelet[3146]: I0912 17:51:25.877516 3146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19d72a5b-a8f4-4176-9cce-f1ec5ded664b" path="/var/lib/kubelet/pods/19d72a5b-a8f4-4176-9cce-f1ec5ded664b/volumes" Sep 12 17:51:25.878112 kubelet[3146]: I0912 17:51:25.878095 3146 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9bf460-0b2f-4d12-8ee3-f34536ef51d3" path="/var/lib/kubelet/pods/3f9bf460-0b2f-4d12-8ee3-f34536ef51d3/volumes" Sep 12 17:51:26.790055 sshd[4685]: Connection closed by 10.200.16.10 port 47804 Sep 12 17:51:26.790647 sshd-session[4682]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:26.794228 systemd[1]: sshd@21-10.200.8.41:22-10.200.16.10:47804.service: Deactivated successfully. Sep 12 17:51:26.795817 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:51:26.796457 systemd-logind[1699]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:51:26.797716 systemd-logind[1699]: Removed session 24. Sep 12 17:51:26.899533 systemd[1]: Started sshd@22-10.200.8.41:22-10.200.16.10:47812.service - OpenSSH per-connection server daemon (10.200.16.10:47812). 
Sep 12 17:51:26.956111 kubelet[3146]: E0912 17:51:26.956080 3146 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:51:27.525613 sshd[4843]: Accepted publickey for core from 10.200.16.10 port 47812 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:51:27.526693 sshd-session[4843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:27.530935 systemd-logind[1699]: New session 25 of user core. Sep 12 17:51:27.536855 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:51:28.345056 kubelet[3146]: E0912 17:51:28.345015 3146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19d72a5b-a8f4-4176-9cce-f1ec5ded664b" containerName="mount-cgroup" Sep 12 17:51:28.345056 kubelet[3146]: E0912 17:51:28.345041 3146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3f9bf460-0b2f-4d12-8ee3-f34536ef51d3" containerName="cilium-operator" Sep 12 17:51:28.345056 kubelet[3146]: E0912 17:51:28.345050 3146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19d72a5b-a8f4-4176-9cce-f1ec5ded664b" containerName="clean-cilium-state" Sep 12 17:51:28.345056 kubelet[3146]: E0912 17:51:28.345056 3146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19d72a5b-a8f4-4176-9cce-f1ec5ded664b" containerName="cilium-agent" Sep 12 17:51:28.345056 kubelet[3146]: E0912 17:51:28.345062 3146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19d72a5b-a8f4-4176-9cce-f1ec5ded664b" containerName="apply-sysctl-overwrites" Sep 12 17:51:28.346882 kubelet[3146]: E0912 17:51:28.345069 3146 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19d72a5b-a8f4-4176-9cce-f1ec5ded664b" containerName="mount-bpf-fs" Sep 12 17:51:28.346882 kubelet[3146]: I0912 17:51:28.345090 3146 memory_manager.go:354] "RemoveStaleState removing state" podUID="19d72a5b-a8f4-4176-9cce-f1ec5ded664b" containerName="cilium-agent" Sep 12 17:51:28.346882 kubelet[3146]: I0912 17:51:28.345096 3146 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9bf460-0b2f-4d12-8ee3-f34536ef51d3" containerName="cilium-operator" Sep 12 17:51:28.354319 systemd[1]: Created slice kubepods-burstable-podad2c1bcb_0bbf_459e_944c_0767f9b34d2c.slice - libcontainer container kubepods-burstable-podad2c1bcb_0bbf_459e_944c_0767f9b34d2c.slice. 
Sep 12 17:51:28.425285 kubelet[3146]: I0912 17:51:28.425258 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-bpf-maps\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425479 kubelet[3146]: I0912 17:51:28.425289 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-cilium-run\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425479 kubelet[3146]: I0912 17:51:28.425311 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-cilium-cgroup\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425479 kubelet[3146]: I0912 17:51:28.425327 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-cni-path\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425479 kubelet[3146]: I0912 17:51:28.425352 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-cilium-config-path\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425479 kubelet[3146]: I0912 17:51:28.425369 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-hostproc\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425479 kubelet[3146]: I0912 17:51:28.425381 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-etc-cni-netd\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425621 kubelet[3146]: I0912 17:51:28.425400 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-cilium-ipsec-secrets\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425621 kubelet[3146]: I0912 17:51:28.425421 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-hubble-tls\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425621 kubelet[3146]: I0912 17:51:28.425450 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-xtables-lock\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425621 kubelet[3146]: I0912 17:51:28.425469 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-host-proc-sys-kernel\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425621 kubelet[3146]: I0912 17:51:28.425489 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc46s\" (UniqueName: \"kubernetes.io/projected/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-kube-api-access-bc46s\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425691 kubelet[3146]: I0912 17:51:28.425533 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-lib-modules\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425691 kubelet[3146]: I0912 17:51:28.425551 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-clustermesh-secrets\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.425691 kubelet[3146]: I0912 17:51:28.425572 3146 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ad2c1bcb-0bbf-459e-944c-0767f9b34d2c-host-proc-sys-net\") pod \"cilium-6gdx2\" (UID: \"ad2c1bcb-0bbf-459e-944c-0767f9b34d2c\") " pod="kube-system/cilium-6gdx2" Sep 12 17:51:28.447278 sshd[4846]: Connection closed by 10.200.16.10 port 47812 Sep 12 17:51:28.447682 sshd-session[4843]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:28.450917 systemd[1]: sshd@22-10.200.8.41:22-10.200.16.10:47812.service: Deactivated successfully. Sep 12 17:51:28.452604 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:51:28.454149 systemd-logind[1699]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:51:28.455284 systemd-logind[1699]: Removed session 25. Sep 12 17:51:28.566322 systemd[1]: Started sshd@23-10.200.8.41:22-10.200.16.10:47828.service - OpenSSH per-connection server daemon (10.200.16.10:47828). Sep 12 17:51:28.658649 containerd[1729]: time="2025-09-12T17:51:28.658223781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gdx2,Uid:ad2c1bcb-0bbf-459e-944c-0767f9b34d2c,Namespace:kube-system,Attempt:0,}" Sep 12 17:51:28.686093 containerd[1729]: time="2025-09-12T17:51:28.686060332Z" level=info msg="connecting to shim e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363" address="unix:///run/containerd/s/1b833c9035dde410e8d2f3545ae6398d8b2f7125f6377ccdbe4022a440af5b29" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:51:28.710854 systemd[1]: Started cri-containerd-e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363.scope - libcontainer container e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363. 
Sep 12 17:51:28.731628 containerd[1729]: time="2025-09-12T17:51:28.731599133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6gdx2,Uid:ad2c1bcb-0bbf-459e-944c-0767f9b34d2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\"" Sep 12 17:51:28.733958 containerd[1729]: time="2025-09-12T17:51:28.733919179Z" level=info msg="CreateContainer within sandbox \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:51:28.746449 containerd[1729]: time="2025-09-12T17:51:28.746422228Z" level=info msg="Container 3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:51:28.759540 containerd[1729]: time="2025-09-12T17:51:28.759515062Z" level=info msg="CreateContainer within sandbox \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5\"" Sep 12 17:51:28.759884 containerd[1729]: time="2025-09-12T17:51:28.759862867Z" level=info msg="StartContainer for \"3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5\"" Sep 12 17:51:28.760809 containerd[1729]: time="2025-09-12T17:51:28.760756329Z" level=info msg="connecting to shim 3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5" address="unix:///run/containerd/s/1b833c9035dde410e8d2f3545ae6398d8b2f7125f6377ccdbe4022a440af5b29" protocol=ttrpc version=3 Sep 12 17:51:28.775860 systemd[1]: Started cri-containerd-3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5.scope - libcontainer container 3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5. Sep 12 17:51:28.800771 containerd[1729]: time="2025-09-12T17:51:28.800717648Z" level=info msg="StartContainer for \"3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5\" returns successfully" Sep 12 17:51:28.803472 systemd[1]: cri-containerd-3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5.scope: Deactivated successfully. Sep 12 17:51:28.806046 containerd[1729]: time="2025-09-12T17:51:28.805814757Z" level=info msg="received exit event container_id:\"3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5\" id:\"3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5\" pid:4922 exited_at:{seconds:1757699488 nanos:805610231}" Sep 12 17:51:28.806316 containerd[1729]: time="2025-09-12T17:51:28.806291898Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5\" id:\"3cd35cb6c5c0cf3f5b06323b53ef3c09d2154fe1d64380a59c2d3caac3ff3ed5\" pid:4922 exited_at:{seconds:1757699488 nanos:805610231}" Sep 12 17:51:29.190893 sshd[4861]: Accepted publickey for core from 10.200.16.10 port 47828 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:51:29.191895 sshd-session[4861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:29.195586 systemd-logind[1699]: New session 26 of user core. Sep 12 17:51:29.204856 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 12 17:51:29.240985 containerd[1729]: time="2025-09-12T17:51:29.240939456Z" level=info msg="CreateContainer within sandbox \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:51:29.254354 containerd[1729]: time="2025-09-12T17:51:29.254327930Z" level=info msg="Container 8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:51:29.268546 containerd[1729]: time="2025-09-12T17:51:29.268518004Z" level=info msg="CreateContainer within sandbox \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb\"" Sep 12 17:51:29.269068 containerd[1729]: time="2025-09-12T17:51:29.268990928Z" level=info msg="StartContainer for \"8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb\"" Sep 12 17:51:29.270165 containerd[1729]: time="2025-09-12T17:51:29.270113775Z" level=info msg="connecting to shim 8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb" address="unix:///run/containerd/s/1b833c9035dde410e8d2f3545ae6398d8b2f7125f6377ccdbe4022a440af5b29" protocol=ttrpc version=3 Sep 12 17:51:29.287864 systemd[1]: Started cri-containerd-8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb.scope - libcontainer container 8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb. Sep 12 17:51:29.313223 containerd[1729]: time="2025-09-12T17:51:29.313151077Z" level=info msg="StartContainer for \"8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb\" returns successfully" Sep 12 17:51:29.314653 systemd[1]: cri-containerd-8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb.scope: Deactivated successfully. Sep 12 17:51:29.317336 containerd[1729]: time="2025-09-12T17:51:29.317292096Z" level=info msg="received exit event container_id:\"8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb\" id:\"8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb\" pid:4966 exited_at:{seconds:1757699489 nanos:316894591}" Sep 12 17:51:29.317507 containerd[1729]: time="2025-09-12T17:51:29.317392743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb\" id:\"8eec6ff2bd497d1f7f2db94ca9ad4a5d81d1e55152a04411cc5b2e99287045eb\" pid:4966 exited_at:{seconds:1757699489 nanos:316894591}" Sep 12 17:51:29.629975 sshd[4953]: Connection closed by 10.200.16.10 port 47828 Sep 12 17:51:29.630421 sshd-session[4861]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:29.632877 systemd[1]: sshd@23-10.200.8.41:22-10.200.16.10:47828.service: Deactivated successfully. Sep 12 17:51:29.634992 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:51:29.636616 systemd-logind[1699]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:51:29.637383 systemd-logind[1699]: Removed session 26. Sep 12 17:51:29.746297 systemd[1]: Started sshd@24-10.200.8.41:22-10.200.16.10:47842.service - OpenSSH per-connection server daemon (10.200.16.10:47842). 
Sep 12 17:51:30.245816 containerd[1729]: time="2025-09-12T17:51:30.244934908Z" level=info msg="CreateContainer within sandbox \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:51:30.266066 containerd[1729]: time="2025-09-12T17:51:30.265258173Z" level=info msg="Container d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:51:30.281898 containerd[1729]: time="2025-09-12T17:51:30.281871953Z" level=info msg="CreateContainer within sandbox \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40\"" Sep 12 17:51:30.282421 containerd[1729]: time="2025-09-12T17:51:30.282397869Z" level=info msg="StartContainer for \"d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40\"" Sep 12 17:51:30.283850 containerd[1729]: time="2025-09-12T17:51:30.283821755Z" level=info msg="connecting to shim d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40" address="unix:///run/containerd/s/1b833c9035dde410e8d2f3545ae6398d8b2f7125f6377ccdbe4022a440af5b29" protocol=ttrpc version=3 Sep 12 17:51:30.307870 systemd[1]: Started cri-containerd-d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40.scope - libcontainer container d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40. Sep 12 17:51:30.335737 systemd[1]: cri-containerd-d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40.scope: Deactivated successfully. Sep 12 17:51:30.337474 containerd[1729]: time="2025-09-12T17:51:30.337452344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40\" id:\"d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40\" pid:5022 exited_at:{seconds:1757699490 nanos:337215041}" Sep 12 17:51:30.338928 containerd[1729]: time="2025-09-12T17:51:30.338901331Z" level=info msg="received exit event container_id:\"d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40\" id:\"d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40\" pid:5022 exited_at:{seconds:1757699490 nanos:337215041}" Sep 12 17:51:30.345786 containerd[1729]: time="2025-09-12T17:51:30.345745899Z" level=info msg="StartContainer for \"d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40\" returns successfully" Sep 12 17:51:30.355180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0ce45cebe00572039e20779cb3f753b6e2468eeeb60f7ccdbe1023bcee0ee40-rootfs.mount: Deactivated successfully. Sep 12 17:51:30.375746 sshd[5005]: Accepted publickey for core from 10.200.16.10 port 47842 ssh2: RSA SHA256:ZNiJkqu8Loo123AdfZic/f9v0/MsiWfNTs209WSupSg Sep 12 17:51:30.376949 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:51:30.381967 systemd-logind[1699]: New session 27 of user core. Sep 12 17:51:30.385876 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 12 17:51:31.248375 containerd[1729]: time="2025-09-12T17:51:31.248331385Z" level=info msg="CreateContainer within sandbox \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:51:31.265697 containerd[1729]: time="2025-09-12T17:51:31.265544860Z" level=info msg="Container a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:51:31.286755 containerd[1729]: time="2025-09-12T17:51:31.286349213Z" level=info msg="CreateContainer within sandbox \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862\"" Sep 12 17:51:31.288155 containerd[1729]: time="2025-09-12T17:51:31.288131462Z" level=info msg="StartContainer for \"a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862\"" Sep 12 17:51:31.289816 containerd[1729]: time="2025-09-12T17:51:31.289791014Z" level=info msg="connecting to shim a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862" address="unix:///run/containerd/s/1b833c9035dde410e8d2f3545ae6398d8b2f7125f6377ccdbe4022a440af5b29" protocol=ttrpc version=3 Sep 12 17:51:31.310865 systemd[1]: Started cri-containerd-a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862.scope - libcontainer container a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862. Sep 12 17:51:31.329978 systemd[1]: cri-containerd-a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862.scope: Deactivated successfully. Sep 12 17:51:31.331336 containerd[1729]: time="2025-09-12T17:51:31.331288521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862\" id:\"a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862\" pid:5072 exited_at:{seconds:1757699491 nanos:331014047}" Sep 12 17:51:31.335211 containerd[1729]: time="2025-09-12T17:51:31.335142373Z" level=info msg="received exit event container_id:\"a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862\" id:\"a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862\" pid:5072 exited_at:{seconds:1757699491 nanos:331014047}" Sep 12 17:51:31.335864 containerd[1729]: time="2025-09-12T17:51:31.335804083Z" level=info msg="StartContainer for \"a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862\" returns successfully" Sep 12 17:51:31.350468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0e24be19f2464c81faa88856bb72455557b0e51d003bb899904bf79c1128862-rootfs.mount: Deactivated successfully. 
Sep 12 17:51:31.957226 kubelet[3146]: E0912 17:51:31.957197 3146 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:51:32.253529 containerd[1729]: time="2025-09-12T17:51:32.253094402Z" level=info msg="CreateContainer within sandbox \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:51:32.270745 containerd[1729]: time="2025-09-12T17:51:32.270258098Z" level=info msg="Container f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:51:32.288104 containerd[1729]: time="2025-09-12T17:51:32.288078774Z" level=info msg="CreateContainer within sandbox \"e029a9b74e72259e4f9c2fbe2f82547ecc8e100e67d49da4a5783341e7770363\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458\"" Sep 12 17:51:32.288443 containerd[1729]: time="2025-09-12T17:51:32.288421107Z" level=info msg="StartContainer for \"f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458\"" Sep 12 17:51:32.289536 containerd[1729]: time="2025-09-12T17:51:32.289451517Z" level=info msg="connecting to shim f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458" address="unix:///run/containerd/s/1b833c9035dde410e8d2f3545ae6398d8b2f7125f6377ccdbe4022a440af5b29" protocol=ttrpc version=3 Sep 12 17:51:32.310860 systemd[1]: Started cri-containerd-f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458.scope - libcontainer container f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458. Sep 12 17:51:32.344222 containerd[1729]: time="2025-09-12T17:51:32.344203065Z" level=info msg="StartContainer for \"f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458\" returns successfully" Sep 12 17:51:32.410087 containerd[1729]: time="2025-09-12T17:51:32.410059841Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458\" id:\"4d3bc4cc541fa3a36009e166affc56d70be042250fac99ebdeae094b7b0885f9\" pid:5140 exited_at:{seconds:1757699492 nanos:409567491}" Sep 12 17:51:32.576992 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Sep 12 17:51:34.883713 containerd[1729]: time="2025-09-12T17:51:34.883647119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458\" id:\"70460c42fac5c39afd12cfd5d19394f690edf341fc20d65306a8734de60ec876\" pid:5566 exit_status:1 exited_at:{seconds:1757699494 nanos:883307362}" Sep 12 17:51:35.006532 systemd-networkd[1366]: lxc_health: Link UP Sep 12 17:51:35.012095 systemd-networkd[1366]: lxc_health: Gained carrier Sep 12 17:51:35.236578 kubelet[3146]: I0912 17:51:35.236468 3146 setters.go:600] "Node became not ready" node="ci-4426.1.0-a-9facd647c1" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-12T17:51:35Z","lastTransitionTime":"2025-09-12T17:51:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 12 17:51:36.158825 systemd-networkd[1366]: lxc_health: Gained IPv6LL Sep 12 17:51:36.681615 kubelet[3146]: I0912 17:51:36.681552 3146 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6gdx2" podStartSLOduration=8.68153404 podStartE2EDuration="8.68153404s" podCreationTimestamp="2025-09-12 17:51:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:51:33.271255588 +0000 UTC m=+161.483924575" watchObservedRunningTime="2025-09-12 17:51:36.68153404 +0000 UTC m=+164.894203029" Sep 12 17:51:37.013872 containerd[1729]: time="2025-09-12T17:51:37.013833090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458\" id:\"c86b72917ae90e4af777c1c89311a97d8a060062dc0e3ecbccc556686e63c336\" pid:5670 exited_at:{seconds:1757699497 nanos:13584028}" Sep 12 17:51:39.090957 containerd[1729]: time="2025-09-12T17:51:39.090914324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458\" id:\"e253e2a57b2d3a65f3803d62c725177564a2b8132d84646716e0756eef354c1b\" pid:5702 exited_at:{seconds:1757699499 nanos:90616855}" Sep 12 17:51:41.182760 containerd[1729]: time="2025-09-12T17:51:41.182693156Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458\" id:\"9f6844d4217aacc95bcc50449bdb4294fbe26ff24c9e8758604a52314387bbbb\" pid:5726 exited_at:{seconds:1757699501 nanos:182430798}" Sep 12 17:51:43.261583 containerd[1729]: time="2025-09-12T17:51:43.261517771Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f493a635a6bb2bf1ff14ce8d5ac7a882ebb850924e26877f61247327b12af458\" id:\"356f7f0e9b528304a6343bc559c948a206a916652a87b7654afcb533d698d0fd\" pid:5749 exited_at:{seconds:1757699503 nanos:261227129}" Sep 12 17:51:43.368089 sshd[5053]: Connection closed by 10.200.16.10 port 47842 Sep 12 17:51:43.368632 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Sep 12 17:51:43.372044 systemd[1]: sshd@24-10.200.8.41:22-10.200.16.10:47842.service: Deactivated successfully. Sep 12 17:51:43.373753 systemd[1]: session-27.scope: Deactivated successfully. Sep 12 17:51:43.374490 systemd-logind[1699]: Session 27 logged out. Waiting for processes to exit. Sep 12 17:51:43.376164 systemd-logind[1699]: Removed session 27.