Jul 7 06:13:10.991628 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:13:10.991650 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:13:10.991658 kernel: BIOS-provided physical RAM map:
Jul 7 06:13:10.991663 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 06:13:10.991668 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 7 06:13:10.991672 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jul 7 06:13:10.991677 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jul 7 06:13:10.991683 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jul 7 06:13:10.991688 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jul 7 06:13:10.991692 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 7 06:13:10.991697 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 7 06:13:10.991701 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 7 06:13:10.991721 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 7 06:13:10.991726 kernel: printk: legacy bootconsole [earlyser0] enabled
Jul 7 06:13:10.991733 kernel: NX (Execute Disable) protection: active
Jul 7 06:13:10.991738 kernel: APIC: Static calls initialized
Jul 7 06:13:10.991742 kernel: efi: EFI v2.7 by Microsoft
Jul 7 06:13:10.991747 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ead5718 RNG=0x3ffd2018
Jul 7 06:13:10.991752 kernel: random: crng init done
Jul 7 06:13:10.991757 kernel: secureboot: Secure boot disabled
Jul 7 06:13:10.991761 kernel: SMBIOS 3.1.0 present.
Jul 7 06:13:10.991766 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 01/28/2025
Jul 7 06:13:10.991771 kernel: DMI: Memory slots populated: 2/2
Jul 7 06:13:10.991777 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 7 06:13:10.991781 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jul 7 06:13:10.991786 kernel: Hyper-V: Nested features: 0x3e0101
Jul 7 06:13:10.991790 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 7 06:13:10.991795 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 7 06:13:10.991799 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 7 06:13:10.991804 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 7 06:13:10.991808 kernel: tsc: Detected 2300.000 MHz processor
Jul 7 06:13:10.991813 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:13:10.991819 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:13:10.991825 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jul 7 06:13:10.991830 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 7 06:13:10.991835 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:13:10.991840 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jul 7 06:13:10.991845 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jul 7 06:13:10.991849 kernel: Using GB pages for direct mapping
Jul 7 06:13:10.991854 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:13:10.991862 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 7 06:13:10.991868 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:10.991873 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:10.991878 kernel: ACPI: DSDT 0x000000003FFD6000 01E27A (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jul 7 06:13:10.991883 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 7 06:13:10.991888 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:10.991893 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:10.991899 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:10.991904 kernel: ACPI: APIC 0x000000003FFD5000 000052 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 7 06:13:10.991909 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jul 7 06:13:10.991914 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 7 06:13:10.991919 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 7 06:13:10.991924 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4279]
Jul 7 06:13:10.991929 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 7 06:13:10.991934 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 7 06:13:10.991939 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 7 06:13:10.991945 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 7 06:13:10.991950 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5051]
Jul 7 06:13:10.991955 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jul 7 06:13:10.991960 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 7 06:13:10.991965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 7 06:13:10.991970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jul 7 06:13:10.991976 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jul 7 06:13:10.991981 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jul 7 06:13:10.991986 kernel: Zone ranges:
Jul 7 06:13:10.991992 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:13:10.991997 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 7 06:13:10.992001 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 7 06:13:10.992006 kernel: Device empty
Jul 7 06:13:10.992010 kernel: Movable zone start for each node
Jul 7 06:13:10.992015 kernel: Early memory node ranges
Jul 7 06:13:10.992020 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 7 06:13:10.992024 kernel: node 0: [mem 0x0000000000100000-0x00000000044fdfff]
Jul 7 06:13:10.992029 kernel: node 0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jul 7 06:13:10.992035 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 7 06:13:10.992040 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 7 06:13:10.992045 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 7 06:13:10.992049 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:13:10.992054 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 7 06:13:10.992059 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jul 7 06:13:10.992063 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jul 7 06:13:10.992068 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 7 06:13:10.992073 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:13:10.992079 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:13:10.992083 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:13:10.992088 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 7 06:13:10.992093 kernel: TSC deadline timer available
Jul 7 06:13:10.992098 kernel: CPU topo: Max. logical packages: 1
Jul 7 06:13:10.992102 kernel: CPU topo: Max. logical dies: 1
Jul 7 06:13:10.992107 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:13:10.992111 kernel: CPU topo: Max. threads per core: 2
Jul 7 06:13:10.992116 kernel: CPU topo: Num. cores per package: 1
Jul 7 06:13:10.992122 kernel: CPU topo: Num. threads per package: 2
Jul 7 06:13:10.992127 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 7 06:13:10.992132 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 7 06:13:10.992136 kernel: Booting paravirtualized kernel on Hyper-V
Jul 7 06:13:10.992141 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:13:10.992146 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 7 06:13:10.992151 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 7 06:13:10.992155 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 7 06:13:10.992160 kernel: pcpu-alloc: [0] 0 1
Jul 7 06:13:10.992166 kernel: Hyper-V: PV spinlocks enabled
Jul 7 06:13:10.992171 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 06:13:10.992177 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:13:10.992182 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:13:10.992186 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 7 06:13:10.992191 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:13:10.992196 kernel: Fallback order for Node 0: 0
Jul 7 06:13:10.992201 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jul 7 06:13:10.992207 kernel: Policy zone: Normal
Jul 7 06:13:10.992212 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:13:10.992217 kernel: software IO TLB: area num 2.
Jul 7 06:13:10.992221 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 7 06:13:10.992226 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:13:10.992231 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:13:10.992235 kernel: Dynamic Preempt: voluntary
Jul 7 06:13:10.992240 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:13:10.992246 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:13:10.992258 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 7 06:13:10.992263 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:13:10.992268 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:13:10.992275 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:13:10.992280 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:13:10.992285 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 7 06:13:10.992291 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:13:10.992296 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:13:10.992301 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 7 06:13:10.992306 kernel: Using NULL legacy PIC
Jul 7 06:13:10.992313 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 7 06:13:10.992318 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:13:10.992323 kernel: Console: colour dummy device 80x25
Jul 7 06:13:10.992328 kernel: printk: legacy console [tty1] enabled
Jul 7 06:13:10.992333 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:13:10.992338 kernel: printk: legacy bootconsole [earlyser0] disabled
Jul 7 06:13:10.992343 kernel: ACPI: Core revision 20240827
Jul 7 06:13:10.992350 kernel: Failed to register legacy timer interrupt
Jul 7 06:13:10.992354 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:13:10.992360 kernel: x2apic enabled
Jul 7 06:13:10.992365 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:13:10.992370 kernel: Hyper-V: Host Build 10.0.26100.1261-1-0
Jul 7 06:13:10.992375 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 7 06:13:10.992380 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jul 7 06:13:10.992385 kernel: Hyper-V: Using IPI hypercalls
Jul 7 06:13:10.992390 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jul 7 06:13:10.992396 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jul 7 06:13:10.992402 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jul 7 06:13:10.992407 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jul 7 06:13:10.992412 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jul 7 06:13:10.992417 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jul 7 06:13:10.992422 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jul 7 06:13:10.992427 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4600.00 BogoMIPS (lpj=2300000)
Jul 7 06:13:10.992433 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 06:13:10.992439 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 7 06:13:10.992444 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 7 06:13:10.992449 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:13:10.992454 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:13:10.992459 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:13:10.992464 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 7 06:13:10.992470 kernel: RETBleed: Vulnerable
Jul 7 06:13:10.992474 kernel: Speculative Store Bypass: Vulnerable
Jul 7 06:13:10.992479 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 7 06:13:10.992484 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:13:10.992489 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:13:10.992496 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:13:10.992501 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 7 06:13:10.992506 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 7 06:13:10.992511 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 7 06:13:10.992516 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jul 7 06:13:10.992521 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jul 7 06:13:10.992526 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jul 7 06:13:10.992531 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:13:10.992536 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 7 06:13:10.992541 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 7 06:13:10.992546 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 7 06:13:10.992552 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]: 16
Jul 7 06:13:10.992557 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]: 64
Jul 7 06:13:10.992562 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jul 7 06:13:10.992567 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jul 7 06:13:10.992572 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:13:10.992577 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:13:10.992582 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:13:10.992587 kernel: landlock: Up and running.
Jul 7 06:13:10.992592 kernel: SELinux: Initializing.
Jul 7 06:13:10.992597 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 06:13:10.992602 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 06:13:10.992607 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jul 7 06:13:10.992614 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jul 7 06:13:10.992619 kernel: signal: max sigframe size: 11952
Jul 7 06:13:10.992624 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:13:10.992629 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:13:10.992634 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:13:10.992640 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 7 06:13:10.992645 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:13:10.992650 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:13:10.992655 kernel: .... node #0, CPUs: #1
Jul 7 06:13:10.992661 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 06:13:10.992666 kernel: smpboot: Total of 2 processors activated (9200.00 BogoMIPS)
Jul 7 06:13:10.992672 kernel: Memory: 8077024K/8383228K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 299988K reserved, 0K cma-reserved)
Jul 7 06:13:10.992677 kernel: devtmpfs: initialized
Jul 7 06:13:10.992683 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:13:10.992688 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 7 06:13:10.992693 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:13:10.992698 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 7 06:13:10.992711 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:13:10.992720 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:13:10.992725 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:13:10.992731 kernel: audit: type=2000 audit(1751868787.029:1): state=initialized audit_enabled=0 res=1
Jul 7 06:13:10.992736 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:13:10.992741 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:13:10.992746 kernel: cpuidle: using governor menu
Jul 7 06:13:10.992751 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:13:10.992756 kernel: dca service started, version 1.12.1
Jul 7 06:13:10.992762 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jul 7 06:13:10.992768 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jul 7 06:13:10.992773 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:13:10.992778 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:13:10.992784 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:13:10.992789 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:13:10.992794 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:13:10.992799 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:13:10.992804 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:13:10.992811 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:13:10.992816 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:13:10.992822 kernel: ACPI: Interpreter enabled
Jul 7 06:13:10.992827 kernel: ACPI: PM: (supports S0 S5)
Jul 7 06:13:10.992832 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:13:10.992837 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:13:10.992842 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jul 7 06:13:10.992848 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 7 06:13:10.992853 kernel: iommu: Default domain type: Translated
Jul 7 06:13:10.992858 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:13:10.992865 kernel: efivars: Registered efivars operations
Jul 7 06:13:10.992870 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:13:10.992875 kernel: PCI: System does not support PCI
Jul 7 06:13:10.992880 kernel: vgaarb: loaded
Jul 7 06:13:10.992885 kernel: clocksource: Switched to clocksource tsc-early
Jul 7 06:13:10.992890 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:13:10.992895 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:13:10.992900 kernel: pnp: PnP ACPI init
Jul 7 06:13:10.992906 kernel: pnp: PnP ACPI: found 3 devices
Jul 7 06:13:10.992912 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:13:10.992918 kernel: NET: Registered PF_INET protocol family
Jul 7 06:13:10.992923 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 7 06:13:10.992928 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 7 06:13:10.992933 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:13:10.992938 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:13:10.992944 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jul 7 06:13:10.992949 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 7 06:13:10.992955 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 7 06:13:10.992961 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 7 06:13:10.992966 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:13:10.992971 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:13:10.992976 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:13:10.992981 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 7 06:13:10.992986 kernel: software IO TLB: mapped [mem 0x000000003a9c3000-0x000000003e9c3000] (64MB)
Jul 7 06:13:10.992992 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jul 7 06:13:10.992997 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jul 7 06:13:10.993003 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212735223b2, max_idle_ns: 440795277976 ns
Jul 7 06:13:10.993008 kernel: clocksource: Switched to clocksource tsc
Jul 7 06:13:10.993014 kernel: Initialise system trusted keyrings
Jul 7 06:13:10.993019 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 7 06:13:10.993024 kernel: Key type asymmetric registered
Jul 7 06:13:10.993029 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:13:10.993034 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:13:10.993039 kernel: io scheduler mq-deadline registered
Jul 7 06:13:10.993044 kernel: io scheduler kyber registered
Jul 7 06:13:10.993051 kernel: io scheduler bfq registered
Jul 7 06:13:10.993056 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:13:10.993061 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:13:10.993066 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:13:10.993072 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 7 06:13:10.993077 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:13:10.993082 kernel: i8042: PNP: No PS/2 controller found.
Jul 7 06:13:10.993186 kernel: rtc_cmos 00:02: registered as rtc0
Jul 7 06:13:10.993239 kernel: rtc_cmos 00:02: setting system clock to 2025-07-07T06:13:10 UTC (1751868790)
Jul 7 06:13:10.993285 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 7 06:13:10.993291 kernel: intel_pstate: Intel P-state driver initializing
Jul 7 06:13:10.993296 kernel: efifb: probing for efifb
Jul 7 06:13:10.993302 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 7 06:13:10.993307 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 7 06:13:10.993312 kernel: efifb: scrolling: redraw
Jul 7 06:13:10.993317 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 06:13:10.993322 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 06:13:10.993329 kernel: fb0: EFI VGA frame buffer device
Jul 7 06:13:10.993335 kernel: pstore: Using crash dump compression: deflate
Jul 7 06:13:10.993340 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 7 06:13:10.993345 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:13:10.993350 kernel: Segment Routing with IPv6
Jul 7 06:13:10.993355 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:13:10.993360 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:13:10.993366 kernel: Key type dns_resolver registered
Jul 7 06:13:10.993371 kernel: IPI shorthand broadcast: enabled
Jul 7 06:13:10.993377 kernel: sched_clock: Marking stable (3095100350, 91650977)->(3498623289, -311871962)
Jul 7 06:13:10.993383 kernel: registered taskstats version 1
Jul 7 06:13:10.993388 kernel: Loading compiled-in X.509 certificates
Jul 7 06:13:10.993393 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:13:10.993398 kernel: Demotion targets for Node 0: null
Jul 7 06:13:10.993403 kernel: Key type .fscrypt registered
Jul 7 06:13:10.993409 kernel: Key type fscrypt-provisioning registered
Jul 7 06:13:10.993414 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:13:10.993419 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:13:10.993426 kernel: ima: No architecture policies found
Jul 7 06:13:10.993431 kernel: clk: Disabling unused clocks
Jul 7 06:13:10.993436 kernel: Warning: unable to open an initial console.
Jul 7 06:13:10.993441 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:13:10.993447 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:13:10.993452 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:13:10.993457 kernel: Run /init as init process
Jul 7 06:13:10.993462 kernel: with arguments:
Jul 7 06:13:10.993467 kernel: /init
Jul 7 06:13:10.993474 kernel: with environment:
Jul 7 06:13:10.993478 kernel: HOME=/
Jul 7 06:13:10.993484 kernel: TERM=linux
Jul 7 06:13:10.993489 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:13:10.993495 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:13:10.993503 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:13:10.993509 systemd[1]: Detected virtualization microsoft.
Jul 7 06:13:10.993516 systemd[1]: Detected architecture x86-64.
Jul 7 06:13:10.993521 systemd[1]: Running in initrd.
Jul 7 06:13:10.993527 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:13:10.993533 systemd[1]: Hostname set to .
Jul 7 06:13:10.993538 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:13:10.993544 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:13:10.993549 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:13:10.993555 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:13:10.993563 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:13:10.993568 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:13:10.993574 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:13:10.993580 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:13:10.993587 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:13:10.993592 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:13:10.993598 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:13:10.993605 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:13:10.993611 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:13:10.993616 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:13:10.993622 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:13:10.993627 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:13:10.993633 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:13:10.993638 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:13:10.993644 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:13:10.993649 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:13:10.993656 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:13:10.993662 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:13:10.993667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:13:10.993673 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:13:10.993679 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:13:10.993685 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:13:10.993690 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:13:10.993696 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:13:10.993702 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:13:10.993727 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:13:10.993735 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:13:10.993748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:13:10.993765 systemd-journald[205]: Collecting audit messages is disabled.
Jul 7 06:13:10.993783 systemd-journald[205]: Journal started
Jul 7 06:13:10.993800 systemd-journald[205]: Runtime Journal (/run/log/journal/409f7eafdd024106904ba6c9ab1f268d) is 8M, max 158.9M, 150.9M free.
Jul 7 06:13:10.999315 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:13:11.008475 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:13:11.010114 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:13:11.014908 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:13:11.016945 systemd-modules-load[207]: Inserted module 'overlay'
Jul 7 06:13:11.020451 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:13:11.025363 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:13:11.036468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:11.043845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:13:11.047372 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 06:13:11.048476 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:13:11.057136 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:13:11.061525 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:13:11.071791 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:13:11.079726 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:13:11.082644 systemd-modules-load[207]: Inserted module 'br_netfilter'
Jul 7 06:13:11.084349 kernel: Bridge firewalling registered
Jul 7 06:13:11.084401 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:13:11.087908 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:13:11.089653 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:13:11.092808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:13:11.106872 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:13:11.111341 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:13:11.118528 dracut-cmdline[240]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:13:11.157434 systemd-resolved[253]: Positive Trust Anchors:
Jul 7 06:13:11.157449 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:13:11.157494 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:13:11.177358 systemd-resolved[253]: Defaulting to hostname 'linux'.
Jul 7 06:13:11.179904 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:13:11.185262 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:13:11.193722 kernel: SCSI subsystem initialized
Jul 7 06:13:11.200721 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:13:11.210720 kernel: iscsi: registered transport (tcp)
Jul 7 06:13:11.228119 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:13:11.228165 kernel: QLogic iSCSI HBA Driver
Jul 7 06:13:11.241787 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:13:11.258790 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:13:11.263579 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:13:11.294915 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:13:11.297820 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:13:11.342724 kernel: raid6: avx512x4 gen() 33323 MB/s Jul 7 06:13:11.360716 kernel: raid6: avx512x2 gen() 31322 MB/s Jul 7 06:13:11.377718 kernel: raid6: avx512x1 gen() 24895 MB/s Jul 7 06:13:11.395716 kernel: raid6: avx2x4 gen() 28480 MB/s Jul 7 06:13:11.412716 kernel: raid6: avx2x2 gen() 30858 MB/s Jul 7 06:13:11.430414 kernel: raid6: avx2x1 gen() 20166 MB/s Jul 7 06:13:11.430435 kernel: raid6: using algorithm avx512x4 gen() 33323 MB/s Jul 7 06:13:11.450082 kernel: raid6: .... xor() 5006 MB/s, rmw enabled Jul 7 06:13:11.450105 kernel: raid6: using avx512x2 recovery algorithm Jul 7 06:13:11.468727 kernel: xor: automatically using best checksumming function avx Jul 7 06:13:11.581726 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 06:13:11.586237 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:13:11.590009 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:13:11.615160 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jul 7 06:13:11.619734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:13:11.627399 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 06:13:11.649992 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Jul 7 06:13:11.668944 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:13:11.671825 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:13:11.706285 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:13:11.711840 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 06:13:11.748719 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 06:13:11.756739 kernel: AES CTR mode by8 optimization enabled Jul 7 06:13:11.788109 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 7 06:13:11.788272 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:13:11.795412 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:13:11.800780 kernel: hv_vmbus: Vmbus version:5.3 Jul 7 06:13:11.805834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:13:11.812247 kernel: hv_vmbus: registering driver hv_pci Jul 7 06:13:11.814735 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004 Jul 7 06:13:11.827964 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00 Jul 7 06:13:11.833723 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 7 06:13:11.833760 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 7 06:13:11.838753 kernel: hv_vmbus: registering driver hv_netvsc Jul 7 06:13:11.840786 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window] Jul 7 06:13:11.841073 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff] Jul 7 06:13:11.843632 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 7 06:13:11.844769 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint Jul 7 06:13:11.846773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 06:13:11.854439 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit] Jul 7 06:13:11.863962 kernel: pci c05b:00:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s PCIe x16 link at c05b:00:00.0 (capable of 1024.000 Gb/s with 64.0 GT/s PCIe x16 link) Jul 7 06:13:11.868272 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00 Jul 7 06:13:11.868487 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned Jul 7 06:13:11.875614 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 7 06:13:11.877747 kernel: PTP clock support registered Jul 7 06:13:11.889480 kernel: hv_utils: Registering HyperV Utility Driver Jul 7 06:13:11.893465 kernel: hv_vmbus: registering driver hv_utils Jul 7 06:13:11.893497 kernel: hv_utils: Shutdown IC version 3.2 Jul 7 06:13:11.893508 kernel: hv_utils: Heartbeat IC version 3.0 Jul 7 06:13:11.893517 kernel: hv_utils: TimeSync IC version 4.0 Jul 7 06:13:11.433032 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 06:13:11.441752 systemd-journald[205]: Time jumped backwards, rotating. Jul 7 06:13:11.441799 kernel: hv_netvsc f8615163-0000-1000-2000-00224842f791 (unnamed net_device) (uninitialized): VF slot 1 added Jul 7 06:13:11.433440 systemd-resolved[253]: Clock change detected. Flushing caches. 
Jul 7 06:13:11.445837 kernel: hv_vmbus: registering driver hv_storvsc Jul 7 06:13:11.447792 kernel: scsi host0: storvsc_host_t Jul 7 06:13:11.450722 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Jul 7 06:13:11.457747 kernel: nvme nvme0: pci function c05b:00:00.0 Jul 7 06:13:11.461173 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002) Jul 7 06:13:11.711781 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 7 06:13:11.717719 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:13:11.723741 kernel: hv_vmbus: registering driver hid_hyperv Jul 7 06:13:11.728725 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 7 06:13:11.733063 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 7 06:13:11.736991 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 7 06:13:11.737240 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 7 06:13:11.738722 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 7 06:13:11.752725 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#233 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 7 06:13:11.767780 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#298 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jul 7 06:13:11.987723 kernel: nvme nvme0: using unchecked data buffer Jul 7 06:13:12.180828 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM. Jul 7 06:13:12.192694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jul 7 06:13:12.225651 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 7 06:13:12.228987 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A. Jul 7 06:13:12.237987 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jul 7 06:13:12.247653 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT. Jul 7 06:13:12.255832 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:13:12.256295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:13:12.263750 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:13:12.268612 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 06:13:12.271773 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 06:13:12.290470 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:13:12.297382 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:13:12.303719 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:13:12.466789 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004 Jul 7 06:13:12.466987 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00 Jul 7 06:13:12.469541 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window] Jul 7 06:13:12.471155 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff] Jul 7 06:13:12.475767 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint Jul 7 06:13:12.478860 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref] Jul 7 06:13:12.483858 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref] Jul 7 06:13:12.485911 kernel: pci 7870:00:00.0: enabling Extended Tags Jul 7 06:13:12.503782 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00 Jul 7 06:13:12.503955 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned Jul 7 06:13:12.504111 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned Jul 7 06:13:12.510756 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002) 
Jul 7 06:13:12.524724 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1 Jul 7 06:13:12.528077 kernel: hv_netvsc f8615163-0000-1000-2000-00224842f791 eth0: VF registering: eth1 Jul 7 06:13:12.528244 kernel: mana 7870:00:00.0 eth1: joined to eth0 Jul 7 06:13:12.531731 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1 Jul 7 06:13:13.310785 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 7 06:13:13.311169 disk-uuid[671]: The operation has completed successfully. Jul 7 06:13:13.370436 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 06:13:13.370533 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 06:13:13.398033 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 06:13:13.414977 sh[710]: Success Jul 7 06:13:13.448110 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 06:13:13.448152 kernel: device-mapper: uevent: version 1.0.3 Jul 7 06:13:13.453472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 06:13:13.462721 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jul 7 06:13:13.694541 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 06:13:13.699363 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 06:13:13.712094 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 7 06:13:13.723214 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 06:13:13.723277 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (723) Jul 7 06:13:13.725373 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac Jul 7 06:13:13.726720 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:13:13.727760 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 06:13:13.993557 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 06:13:13.995911 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:13:13.996576 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 06:13:13.998830 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 06:13:14.000862 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 06:13:14.030725 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (756) Jul 7 06:13:14.037732 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:13:14.037769 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:13:14.037779 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:13:14.073732 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:13:14.077399 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:13:14.090721 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:13:14.091310 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jul 7 06:13:14.097056 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 06:13:14.112844 systemd-networkd[886]: lo: Link UP Jul 7 06:13:14.117207 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jul 7 06:13:14.112852 systemd-networkd[886]: lo: Gained carrier Jul 7 06:13:14.120990 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jul 7 06:13:14.114576 systemd-networkd[886]: Enumeration completed Jul 7 06:13:14.114641 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:13:14.115096 systemd-networkd[886]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:13:14.115100 systemd-networkd[886]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:13:14.120435 systemd[1]: Reached target network.target - Network. Jul 7 06:13:14.133771 kernel: hv_netvsc f8615163-0000-1000-2000-00224842f791 eth0: Data path switched to VF: enP30832s1 Jul 7 06:13:14.126220 systemd-networkd[886]: enP30832s1: Link UP Jul 7 06:13:14.126283 systemd-networkd[886]: eth0: Link UP Jul 7 06:13:14.126436 systemd-networkd[886]: eth0: Gained carrier Jul 7 06:13:14.126445 systemd-networkd[886]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:13:14.131027 systemd-networkd[886]: enP30832s1: Gained carrier Jul 7 06:13:14.140739 systemd-networkd[886]: eth0: DHCPv4 address 10.200.4.32/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jul 7 06:13:14.973826 ignition[893]: Ignition 2.21.0 Jul 7 06:13:14.973840 ignition[893]: Stage: fetch-offline Jul 7 06:13:14.973946 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:14.976450 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 7 06:13:14.973953 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:14.980484 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 7 06:13:14.974045 ignition[893]: parsed url from cmdline: "" Jul 7 06:13:14.974048 ignition[893]: no config URL provided Jul 7 06:13:14.974054 ignition[893]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:13:14.974059 ignition[893]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:13:14.974064 ignition[893]: failed to fetch config: resource requires networking Jul 7 06:13:14.974234 ignition[893]: Ignition finished successfully Jul 7 06:13:15.011299 ignition[903]: Ignition 2.21.0 Jul 7 06:13:15.011309 ignition[903]: Stage: fetch Jul 7 06:13:15.011505 ignition[903]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:15.011514 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:15.011589 ignition[903]: parsed url from cmdline: "" Jul 7 06:13:15.011592 ignition[903]: no config URL provided Jul 7 06:13:15.011596 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 06:13:15.011602 ignition[903]: no config at "/usr/lib/ignition/user.ign" Jul 7 06:13:15.011651 ignition[903]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 7 06:13:15.113892 ignition[903]: GET result: OK Jul 7 06:13:15.113997 ignition[903]: config has been read from IMDS userdata Jul 7 06:13:15.114032 ignition[903]: parsing config with SHA512: 0686641c060f134c49839a42b3435c3e0564911c8d0a30c21545ea11f2fe5c8c0a59af78dbaaa1f5642db637f262745f91340c50af607aedfe7d3edceffd4ca2 Jul 7 06:13:15.120758 unknown[903]: fetched base config from "system" Jul 7 06:13:15.120777 unknown[903]: fetched base config from "system" Jul 7 06:13:15.122118 ignition[903]: fetch: fetch complete Jul 7 06:13:15.120782 unknown[903]: fetched user config from "azure" Jul 7 06:13:15.122123 ignition[903]: fetch: fetch 
passed Jul 7 06:13:15.124134 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 06:13:15.122178 ignition[903]: Ignition finished successfully Jul 7 06:13:15.128010 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 06:13:15.152316 ignition[909]: Ignition 2.21.0 Jul 7 06:13:15.152327 ignition[909]: Stage: kargs Jul 7 06:13:15.154688 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 06:13:15.152525 ignition[909]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:15.158899 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 06:13:15.152532 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:15.153359 ignition[909]: kargs: kargs passed Jul 7 06:13:15.153395 ignition[909]: Ignition finished successfully Jul 7 06:13:15.177540 ignition[916]: Ignition 2.21.0 Jul 7 06:13:15.177551 ignition[916]: Stage: disks Jul 7 06:13:15.177738 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:15.179230 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 06:13:15.177746 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:15.182153 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:13:15.178422 ignition[916]: disks: disks passed Jul 7 06:13:15.186791 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:13:15.178451 ignition[916]: Ignition finished successfully Jul 7 06:13:15.189353 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:13:15.192751 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:13:15.193429 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:13:15.194150 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 7 06:13:15.248855 systemd-networkd[886]: eth0: Gained IPv6LL Jul 7 06:13:15.258842 systemd-fsck[925]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks Jul 7 06:13:15.265402 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:13:15.267863 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:13:15.504917 systemd-networkd[886]: enP30832s1: Gained IPv6LL Jul 7 06:13:15.510810 kernel: EXT4-fs (nvme0n1p9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none. Jul 7 06:13:15.511820 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:13:15.514302 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:13:15.533605 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:13:15.537566 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:13:15.549821 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 7 06:13:15.554094 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:13:15.570360 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (934) Jul 7 06:13:15.570391 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:13:15.570407 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:13:15.570419 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:13:15.555738 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:13:15.556817 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:13:15.575798 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:13:15.580442 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 06:13:16.062918 coreos-metadata[936]: Jul 07 06:13:16.062 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 7 06:13:16.067318 coreos-metadata[936]: Jul 07 06:13:16.067 INFO Fetch successful Jul 7 06:13:16.068595 coreos-metadata[936]: Jul 07 06:13:16.068 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 7 06:13:16.080962 coreos-metadata[936]: Jul 07 06:13:16.080 INFO Fetch successful Jul 7 06:13:16.095449 coreos-metadata[936]: Jul 07 06:13:16.095 INFO wrote hostname ci-4372.0.1-a-6edf51656b to /sysroot/etc/hostname Jul 7 06:13:16.099625 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 7 06:13:16.277381 initrd-setup-root[964]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:13:16.325165 initrd-setup-root[971]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:13:16.330173 initrd-setup-root[978]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:13:16.334855 initrd-setup-root[985]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:13:17.114175 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:13:17.120128 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 06:13:17.124792 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 06:13:17.137530 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 06:13:17.143016 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:13:17.159332 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 7 06:13:17.170599 ignition[1053]: INFO : Ignition 2.21.0 Jul 7 06:13:17.170599 ignition[1053]: INFO : Stage: mount Jul 7 06:13:17.170599 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:17.170599 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:17.173951 ignition[1053]: INFO : mount: mount passed Jul 7 06:13:17.173951 ignition[1053]: INFO : Ignition finished successfully Jul 7 06:13:17.172311 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:13:17.180802 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:13:17.190641 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:13:17.219719 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1064) Jul 7 06:13:17.221980 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:13:17.222009 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:13:17.222023 kernel: BTRFS info (device nvme0n1p6): using free-space-tree Jul 7 06:13:17.226768 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 7 06:13:17.250327 ignition[1080]: INFO : Ignition 2.21.0 Jul 7 06:13:17.250327 ignition[1080]: INFO : Stage: files Jul 7 06:13:17.254608 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:13:17.254608 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 7 06:13:17.254608 ignition[1080]: DEBUG : files: compiled without relabeling support, skipping Jul 7 06:13:17.266091 ignition[1080]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 06:13:17.266091 ignition[1080]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 06:13:17.297667 ignition[1080]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 06:13:17.299735 ignition[1080]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 06:13:17.299735 ignition[1080]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 06:13:17.297984 unknown[1080]: wrote ssh authorized keys file for user: core Jul 7 06:13:17.315134 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 06:13:17.318753 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 7 06:13:17.593385 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 06:13:17.671439 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 06:13:17.673660 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 06:13:17.673660 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 7 06:13:18.256654 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 06:13:18.451873 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 06:13:18.455397 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 7 06:13:18.455397 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 06:13:18.455397 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:13:18.455397 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:13:18.455397 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:13:18.455397 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:13:18.455397 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:13:18.455397 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:13:18.478691 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:13:18.478691 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:13:18.478691 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 06:13:18.478691 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 06:13:18.478691 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 06:13:18.478691 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 7 06:13:19.122781 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 06:13:19.321249 ignition[1080]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 06:13:19.321249 ignition[1080]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 06:13:19.365235 ignition[1080]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:13:19.383009 ignition[1080]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:13:19.383009 ignition[1080]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 06:13:19.383009 ignition[1080]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:13:19.392118 ignition[1080]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:13:19.392118 ignition[1080]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:13:19.392118 ignition[1080]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:13:19.392118 ignition[1080]: INFO : files: files passed
Jul 7 06:13:19.392118 ignition[1080]: INFO : Ignition finished successfully
Jul 7 06:13:19.387084 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:13:19.394749 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:13:19.407450 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:13:19.411599 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:13:19.414833 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:13:19.423658 initrd-setup-root-after-ignition[1111]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:13:19.423658 initrd-setup-root-after-ignition[1111]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:13:19.429437 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:13:19.431620 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:13:19.435395 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:13:19.437607 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:13:19.464972 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:13:19.465060 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:13:19.471025 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:13:19.472378 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:13:19.472669 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:13:19.473375 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:13:19.498701 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:13:19.501874 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:13:19.515420 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:13:19.519845 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:13:19.521499 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:13:19.524009 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:13:19.524120 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:13:19.526898 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:13:19.532823 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:13:19.534348 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:13:19.538411 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:13:19.542660 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:13:19.545414 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:13:19.550521 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:13:19.551654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:13:19.551977 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:13:19.552269 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:13:19.552617 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:13:19.558807 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:13:19.558930 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:13:19.561744 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:13:19.564039 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:13:19.566452 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:13:19.567375 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:13:19.569936 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:13:19.570053 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:13:19.576370 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:13:19.576485 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:13:19.579871 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:13:19.579993 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:13:19.594866 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 7 06:13:19.595001 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 7 06:13:19.597898 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:13:19.599817 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:13:19.602308 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:13:19.602469 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:13:19.607593 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:13:19.607723 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:13:19.626024 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:13:19.626105 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:13:19.641916 ignition[1135]: INFO : Ignition 2.21.0
Jul 7 06:13:19.641916 ignition[1135]: INFO : Stage: umount
Jul 7 06:13:19.645800 ignition[1135]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:13:19.645800 ignition[1135]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 7 06:13:19.645800 ignition[1135]: INFO : umount: umount passed
Jul 7 06:13:19.645800 ignition[1135]: INFO : Ignition finished successfully
Jul 7 06:13:19.645042 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:13:19.645615 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:13:19.645701 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:13:19.649307 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:13:19.649377 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:13:19.659559 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:13:19.659615 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:13:19.662146 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 06:13:19.662189 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 06:13:19.666790 systemd[1]: Stopped target network.target - Network.
Jul 7 06:13:19.666924 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:13:19.666970 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:13:19.667134 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:13:19.667156 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:13:19.667421 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:13:19.667605 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:13:19.674432 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:13:19.679997 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:13:19.680042 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:13:19.682367 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:13:19.682397 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:13:19.686777 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:13:19.686832 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:13:19.690770 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:13:19.690807 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:13:19.694917 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:13:19.698813 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:13:19.707142 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:13:19.707272 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:13:19.714250 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 06:13:19.714407 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:13:19.714479 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:13:19.718400 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 06:13:19.718809 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 06:13:19.721830 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:13:19.721861 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:13:19.725314 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:13:19.728116 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:13:19.728174 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:13:19.728462 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:13:19.728499 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:13:19.731344 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:13:19.731396 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:13:19.734829 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:13:19.734881 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:13:19.735482 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:13:19.736620 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 06:13:19.736672 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:13:19.768208 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:13:19.770821 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:13:19.774938 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:13:19.775007 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:13:19.779940 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:13:19.780370 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:13:19.785168 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:13:19.785221 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:13:19.788167 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:13:19.790527 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:13:19.792409 kernel: hv_netvsc f8615163-0000-1000-2000-00224842f791 eth0: Data path switched from VF: enP30832s1
Jul 7 06:13:19.792575 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 7 06:13:19.795083 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:13:19.795128 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:13:19.800391 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:13:19.804754 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 06:13:19.804811 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:13:19.809476 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:13:19.809520 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:13:19.815086 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 06:13:19.815133 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:13:19.822796 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:13:19.822854 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:13:19.823103 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:13:19.823138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:19.824445 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 06:13:19.824490 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 7 06:13:19.824520 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 06:13:19.824553 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:13:19.824848 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:13:19.824931 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:13:19.825180 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:13:19.825242 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:13:19.860044 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:13:19.860136 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:13:19.861386 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:13:19.861454 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:13:19.861491 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:13:19.862187 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:13:19.873391 systemd[1]: Switching root.
Jul 7 06:13:19.934953 systemd-journald[205]: Journal stopped
Jul 7 06:13:27.977223 systemd-journald[205]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:13:27.977247 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:13:27.977256 kernel: SELinux: policy capability open_perms=1
Jul 7 06:13:27.977262 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:13:27.977267 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:13:27.977273 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:13:27.977281 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:13:27.977287 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:13:27.977292 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:13:27.977298 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 06:13:27.977303 kernel: audit: type=1403 audit(1751868802.925:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:13:27.977310 systemd[1]: Successfully loaded SELinux policy in 105.363ms.
Jul 7 06:13:27.977317 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.522ms.
Jul 7 06:13:27.977327 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:13:27.977333 systemd[1]: Detected virtualization microsoft.
Jul 7 06:13:27.977339 systemd[1]: Detected architecture x86-64.
Jul 7 06:13:27.977347 systemd[1]: Detected first boot.
Jul 7 06:13:27.977354 systemd[1]: Hostname set to .
Jul 7 06:13:27.977362 systemd[1]: Initializing machine ID from random generator.
Jul 7 06:13:27.977368 zram_generator::config[1178]: No configuration found.
Jul 7 06:13:27.977375 kernel: Guest personality initialized and is inactive
Jul 7 06:13:27.977380 kernel: VMCI host device registered (name=vmci, major=10, minor=124)
Jul 7 06:13:27.977386 kernel: Initialized host personality
Jul 7 06:13:27.977392 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 06:13:27.977398 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:13:27.977406 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 06:13:27.977413 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:13:27.977419 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:13:27.977425 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:13:27.977431 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:13:27.977437 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:13:27.977444 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:13:27.977451 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:13:27.977457 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:13:27.977463 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:13:27.977470 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:13:27.977476 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:13:27.977483 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:13:27.977490 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:13:27.977496 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:13:27.977505 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:13:27.977513 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:13:27.977520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:13:27.977527 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 06:13:27.977533 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:13:27.977539 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:13:27.977545 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:13:27.977552 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:13:27.977560 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:13:27.977566 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:13:27.977573 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:13:27.977579 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:13:27.977585 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:13:27.977592 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:13:27.977598 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:13:27.977604 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:13:27.977612 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 06:13:27.977619 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:13:27.977626 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:13:27.977633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:13:27.977639 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:13:27.977647 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:13:27.977653 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:13:27.977660 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:13:27.977666 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:27.977672 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:13:27.977678 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:13:27.977685 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:13:27.977691 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:13:27.977699 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:13:27.977745 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:13:27.977753 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:13:27.977759 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:13:27.977766 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:13:27.977772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:13:27.977778 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:13:27.977784 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:13:27.977791 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:13:27.977799 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:13:27.977807 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:13:27.977814 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:13:27.977820 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:13:27.977827 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:13:27.977833 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:13:27.977840 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:13:27.977847 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:13:27.977854 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:13:27.977861 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:13:27.977867 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:13:27.977874 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 06:13:27.977880 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:13:27.977887 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:13:27.977893 systemd[1]: Stopped verity-setup.service.
Jul 7 06:13:27.977899 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:27.977907 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:13:27.977914 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:13:27.977920 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:13:27.977926 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:13:27.977933 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:13:27.977939 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:13:27.977946 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:13:27.977953 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:13:27.977959 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:13:27.977967 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:13:27.977973 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:13:27.977980 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:13:27.977986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:13:27.977993 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:13:27.977999 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:13:27.978006 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:13:27.978012 kernel: loop: module loaded
Jul 7 06:13:27.978019 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:13:27.978026 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:13:27.978032 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:13:27.978039 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:13:27.978045 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:13:27.978051 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:13:27.978058 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:13:27.978069 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:13:27.978076 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 06:13:27.978083 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:13:27.978093 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:13:27.978100 kernel: fuse: init (API version 7.41)
Jul 7 06:13:27.978116 systemd-journald[1262]: Collecting audit messages is disabled.
Jul 7 06:13:27.978134 systemd-journald[1262]: Journal started
Jul 7 06:13:27.978150 systemd-journald[1262]: Runtime Journal (/run/log/journal/f3ab4097716940e2b4770f4908d11672) is 8M, max 158.9M, 150.9M free.
Jul 7 06:13:27.371978 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:13:27.383322 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 7 06:13:27.383813 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:13:28.136757 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:13:28.145718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:13:28.154849 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:13:28.160799 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:13:28.173016 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:13:28.173059 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:13:28.179550 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:13:28.182954 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:13:28.187088 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 06:13:28.189968 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:13:28.192850 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:13:28.202718 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:13:28.206313 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:13:28.212392 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:13:28.356721 kernel: loop0: detected capacity change from 0 to 224512
Jul 7 06:13:28.416159 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Jul 7 06:13:28.416174 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Jul 7 06:13:28.419183 systemd-journald[1262]: Time spent on flushing to /var/log/journal/f3ab4097716940e2b4770f4908d11672 is 2.173453s for 990 entries.
Jul 7 06:13:28.419183 systemd-journald[1262]: System Journal (/var/log/journal/f3ab4097716940e2b4770f4908d11672) is 11.8M, max 2.6G, 2.6G free.
Jul 7 06:13:32.701532 systemd-journald[1262]: Received client request to flush runtime journal.
Jul 7 06:13:32.701610 kernel: ACPI: bus type drm_connector registered
Jul 7 06:13:32.701638 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:13:32.701664 kernel: loop1: detected capacity change from 0 to 28496
Jul 7 06:13:32.701686 systemd-journald[1262]: /var/log/journal/f3ab4097716940e2b4770f4908d11672/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jul 7 06:13:32.701754 systemd-journald[1262]: Rotating system journal.
Jul 7 06:13:32.701795 kernel: loop2: detected capacity change from 0 to 146240
Jul 7 06:13:28.420413 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:13:28.466284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:13:28.555451 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:13:28.555598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:13:28.907036 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:13:28.910018 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:13:28.913854 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 06:13:30.503619 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:13:30.506611 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:13:31.514075 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:13:31.517931 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:13:31.542648 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Jul 7 06:13:31.542658 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Jul 7 06:13:31.545053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:13:32.704108 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:13:32.707270 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:13:32.711394 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 06:13:33.716755 kernel: loop3: detected capacity change from 0 to 113872
Jul 7 06:13:33.750839 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:13:33.755297 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:13:33.790975 systemd-udevd[1348]: Using default interface naming scheme 'v255'.
Jul 7 06:13:34.523963 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:13:34.529868 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:13:34.571689 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 06:13:34.624728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#271 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jul 7 06:13:34.682733 kernel: hv_vmbus: registering driver hyperv_fb
Jul 7 06:13:34.686738 kernel: hv_vmbus: registering driver hv_balloon
Jul 7 06:13:34.694800 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 7 06:13:34.694843 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 7 06:13:34.696782 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 7 06:13:34.698112 kernel: Console: switching to colour dummy device 80x25
Jul 7 06:13:34.702050 kernel: Console: switching to colour frame buffer device 128x48
Jul 7 06:13:34.730731 kernel: mousedev: PS/2 mouse device common for all mice
Jul 7 06:13:35.123941 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:13:35.156779 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:13:35.317754 kernel: loop4: detected capacity change from 0 to 224512
Jul 7 06:13:35.334721 kernel: loop5: detected capacity change from 0 to 28496
Jul 7 06:13:35.346722 kernel: loop6: detected capacity change from 0 to 146240
Jul 7 06:13:35.359732 kernel: loop7: detected capacity change from 0 to 113872
Jul 7 06:13:35.368898 (sd-merge)[1421]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jul 7 06:13:35.369273 (sd-merge)[1421]: Merged extensions into '/usr'.
Jul 7 06:13:35.373114 systemd[1]: Reload requested from client PID 1282 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:13:35.373129 systemd[1]: Reloading...
Jul 7 06:13:35.413748 zram_generator::config[1446]: No configuration found.
Jul 7 06:13:35.443331 systemd-networkd[1352]: lo: Link UP
Jul 7 06:13:35.443340 systemd-networkd[1352]: lo: Gained carrier
Jul 7 06:13:35.445438 systemd-networkd[1352]: Enumeration completed
Jul 7 06:13:35.445802 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:13:35.445811 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:13:35.447729 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jul 7 06:13:35.453918 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jul 7 06:13:35.455300 kernel: hv_netvsc f8615163-0000-1000-2000-00224842f791 eth0: Data path switched to VF: enP30832s1
Jul 7 06:13:35.454993 systemd-networkd[1352]: enP30832s1: Link UP
Jul 7 06:13:35.455056 systemd-networkd[1352]: eth0: Link UP
Jul 7 06:13:35.455059 systemd-networkd[1352]: eth0: Gained carrier
Jul 7 06:13:35.455074 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:13:35.459957 systemd-networkd[1352]: enP30832s1: Gained carrier
Jul 7 06:13:35.470809 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.4.32/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jul 7 06:13:35.548093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:13:35.664169 systemd[1]: Reloading finished in 290 ms.
Jul 7 06:13:35.686873 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:13:35.689140 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:13:35.694727 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jul 7 06:13:35.726280 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jul 7 06:13:35.736659 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:13:35.740826 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:13:35.744361 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 06:13:35.749900 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:13:35.761906 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:13:35.766829 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:13:35.780180 systemd[1]: Reload requested from client PID 1519 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:13:35.780193 systemd[1]: Reloading...
Jul 7 06:13:35.780473 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 06:13:35.780696 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 06:13:35.781022 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:13:35.781294 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:13:35.782051 systemd-tmpfiles[1523]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:13:35.782351 systemd-tmpfiles[1523]: ACLs are not supported, ignoring.
Jul 7 06:13:35.782456 systemd-tmpfiles[1523]: ACLs are not supported, ignoring.
Jul 7 06:13:35.786423 systemd-tmpfiles[1523]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:13:35.786429 systemd-tmpfiles[1523]: Skipping /boot
Jul 7 06:13:35.799840 systemd-tmpfiles[1523]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:13:35.799933 systemd-tmpfiles[1523]: Skipping /boot
Jul 7 06:13:35.848825 zram_generator::config[1559]: No configuration found.
Jul 7 06:13:35.935026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:13:36.022264 systemd[1]: Reloading finished in 241 ms.
Jul 7 06:13:36.060341 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:13:36.060946 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 06:13:36.061361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:13:36.068126 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 06:13:36.074581 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:13:36.077822 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:13:36.084202 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:13:36.089223 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:13:36.100865 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.101117 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:13:36.107925 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:13:36.111819 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:13:36.116377 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:13:36.118751 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:13:36.118880 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:13:36.118976 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.125800 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:13:36.130573 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:13:36.131326 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:13:36.134497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:13:36.134753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:13:36.137969 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:13:36.138215 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:13:36.147065 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.147422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:13:36.149914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:13:36.157799 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:13:36.162744 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:13:36.164733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:13:36.164864 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:13:36.164950 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.166205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:13:36.170939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:13:36.173317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:13:36.173457 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:13:36.174245 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:13:36.174380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:13:36.180388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.180612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:13:36.181566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:13:36.184905 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:13:36.187077 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:13:36.189839 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:13:36.190441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:13:36.190556 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 06:13:36.190725 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:13:36.190820 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 06:13:36.195893 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:13:36.200687 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:13:36.212340 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:13:36.213933 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:13:36.214070 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:13:36.214827 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:13:36.216016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:13:36.216154 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:13:36.218439 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:13:36.218577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:13:36.219483 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:13:36.369312 systemd-resolved[1625]: Positive Trust Anchors:
Jul 7 06:13:36.369333 systemd-resolved[1625]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:13:36.369369 systemd-resolved[1625]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:13:36.374894 systemd-resolved[1625]: Using system hostname 'ci-4372.0.1-a-6edf51656b'.
Jul 7 06:13:36.377349 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:13:36.378015 systemd[1]: Reached target network.target - Network.
Jul 7 06:13:36.378266 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:13:36.624907 systemd-networkd[1352]: eth0: Gained IPv6LL
Jul 7 06:13:36.627376 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 7 06:13:36.628228 systemd[1]: Reached target network-online.target - Network is Online.
Jul 7 06:13:36.880888 systemd-networkd[1352]: enP30832s1: Gained IPv6LL
Jul 7 06:13:36.907457 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:13:36.935894 augenrules[1670]: No rules
Jul 7 06:13:36.936797 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 06:13:36.936992 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 06:13:37.684594 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:13:38.105672 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:13:38.108987 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:13:43.526974 ldconfig[1279]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:13:43.544536 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:13:43.548031 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:13:43.566892 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:13:43.569975 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:13:43.572935 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:13:43.575785 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:13:43.578769 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 06:13:43.581883 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:13:43.584819 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:13:43.586195 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:13:43.588764 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:13:43.588806 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:13:43.590764 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:13:43.593364 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:13:43.596772 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:13:43.599923 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 06:13:43.602921 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 06:13:43.604433 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 06:13:43.612201 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:13:43.615077 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 06:13:43.618239 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:13:43.621434 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:13:43.623752 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:13:43.624663 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:13:43.624681 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:13:43.626641 systemd[1]: Starting chronyd.service - NTP client/server...
Jul 7 06:13:43.629784 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:13:43.634881 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 06:13:43.639899 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:13:43.644758 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:13:43.649808 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:13:43.656118 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:13:43.658220 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:13:43.663952 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 06:13:43.667878 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio).
Jul 7 06:13:43.670876 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jul 7 06:13:43.673847 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jul 7 06:13:43.675259 jq[1690]: false
Jul 7 06:13:43.676195 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:13:43.680805 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:13:43.683290 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 7 06:13:43.686021 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:13:43.691429 KVP[1696]: KVP starting; pid is:1696
Jul 7 06:13:43.691925 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:13:43.697882 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:13:43.700485 kernel: hv_utils: KVP IC version 4.0
Jul 7 06:13:43.701813 KVP[1696]: KVP LIC Version: 3.1
Jul 7 06:13:43.707304 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Refreshing passwd entry cache
Jul 7 06:13:43.710896 extend-filesystems[1691]: Found /dev/nvme0n1p6
Jul 7 06:13:43.712315 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:13:43.718520 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:13:43.718770 oslogin_cache_refresh[1695]: Refreshing passwd entry cache
Jul 7 06:13:43.723837 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:13:43.728808 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:13:43.731619 (chronyd)[1685]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jul 7 06:13:43.733004 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:13:43.736573 extend-filesystems[1691]: Found /dev/nvme0n1p9
Jul 7 06:13:43.739562 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:13:43.743038 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:13:43.743220 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:13:43.748060 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:13:43.748295 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:13:43.749637 extend-filesystems[1691]: Checking size of /dev/nvme0n1p9
Jul 7 06:13:43.761472 chronyd[1726]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jul 7 06:13:43.766750 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Failure getting users, quitting
Jul 7 06:13:43.766750 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:13:43.765627 oslogin_cache_refresh[1695]: Failure getting users, quitting
Jul 7 06:13:43.765646 oslogin_cache_refresh[1695]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 06:13:43.769475 oslogin_cache_refresh[1695]: Refreshing group entry cache
Jul 7 06:13:43.770136 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Refreshing group entry cache
Jul 7 06:13:43.788751 jq[1711]: true
Jul 7 06:13:43.783678 systemd[1]: Started chronyd.service - NTP client/server.
Jul 7 06:13:43.781726 chronyd[1726]: Timezone right/UTC failed leap second check, ignoring
Jul 7 06:13:43.787078 (ntainerd)[1732]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:13:43.781894 chronyd[1726]: Loaded seccomp filter (level 2)
Jul 7 06:13:43.793772 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:13:43.794056 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:13:43.797743 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Failure getting groups, quitting
Jul 7 06:13:43.797743 google_oslogin_nss_cache[1695]: oslogin_cache_refresh[1695]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:13:43.797613 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jul 7 06:13:43.796359 oslogin_cache_refresh[1695]: Failure getting groups, quitting
Jul 7 06:13:43.796369 oslogin_cache_refresh[1695]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 06:13:43.798171 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jul 7 06:13:43.808330 extend-filesystems[1691]: Old size kept for /dev/nvme0n1p9
Jul 7 06:13:43.813204 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:13:43.813417 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:13:43.816131 jq[1738]: true
Jul 7 06:13:43.820166 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 7 06:13:43.841212 update_engine[1708]: I20250707 06:13:43.841135 1708 main.cc:92] Flatcar Update Engine starting
Jul 7 06:13:43.847850 tar[1717]: linux-amd64/LICENSE
Jul 7 06:13:43.848035 tar[1717]: linux-amd64/helm
Jul 7 06:13:43.911540 dbus-daemon[1688]: [system] SELinux support is enabled
Jul 7 06:13:43.911691 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:13:43.917883 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:13:43.917918 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:13:43.921437 systemd-logind[1706]: New seat seat0.
Jul 7 06:13:43.922436 update_engine[1708]: I20250707 06:13:43.922389 1708 update_check_scheduler.cc:74] Next update check in 11m45s
Jul 7 06:13:43.924848 systemd-logind[1706]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 7 06:13:43.925835 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:13:43.925861 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:13:43.928803 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:13:43.943209 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:13:43.955825 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:13:43.987215 bash[1779]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:13:43.989587 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:13:43.992602 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 7 06:13:44.020237 coreos-metadata[1687]: Jul 07 06:13:44.015 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 7 06:13:44.027300 coreos-metadata[1687]: Jul 07 06:13:44.027 INFO Fetch successful
Jul 7 06:13:44.031310 coreos-metadata[1687]: Jul 07 06:13:44.029 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jul 7 06:13:44.034740 coreos-metadata[1687]: Jul 07 06:13:44.034 INFO Fetch successful
Jul 7 06:13:44.034740 coreos-metadata[1687]: Jul 07 06:13:44.034 INFO Fetching http://168.63.129.16/machine/07979598-f5ba-48b6-9774-b91fe655703b/2de4be1f%2D97d3%2D4641%2Da724%2Dbe1a9fba5e23.%5Fci%2D4372.0.1%2Da%2D6edf51656b?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jul 7 06:13:44.038438 coreos-metadata[1687]: Jul 07 06:13:44.038 INFO Fetch successful
Jul 7 06:13:44.038438 coreos-metadata[1687]: Jul 07 06:13:44.038 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jul 7 06:13:44.050927 coreos-metadata[1687]: Jul 07 06:13:44.050 INFO Fetch successful
Jul 7 06:13:44.103144 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 06:13:44.105210 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 06:13:44.214134 sshd_keygen[1735]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 7 06:13:44.242212 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 7 06:13:44.247022 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 7 06:13:44.252343 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 7 06:13:44.281147 systemd[1]: issuegen.service: Deactivated successfully.
Jul 7 06:13:44.281530 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 7 06:13:44.285938 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 7 06:13:44.302354 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 7 06:13:44.331726 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 7 06:13:44.338042 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 7 06:13:44.342993 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 7 06:13:44.345324 systemd[1]: Reached target getty.target - Login Prompts.
Jul 7 06:13:44.377154 locksmithd[1789]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:13:44.703785 tar[1717]: linux-amd64/README.md
Jul 7 06:13:44.717064 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 7 06:13:44.751873 containerd[1732]: time="2025-07-07T06:13:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 7 06:13:44.752724 containerd[1732]: time="2025-07-07T06:13:44.752483383Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 7 06:13:44.763325 containerd[1732]: time="2025-07-07T06:13:44.763284651Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.073µs"
Jul 7 06:13:44.763485 containerd[1732]: time="2025-07-07T06:13:44.763461831Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 7 06:13:44.763541 containerd[1732]: time="2025-07-07T06:13:44.763530730Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 7 06:13:44.763723 containerd[1732]: time="2025-07-07T06:13:44.763697139Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 7 06:13:44.763774 containerd[1732]: time="2025-07-07T06:13:44.763764568Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 7 06:13:44.763826 containerd[1732]: time="2025-07-07T06:13:44.763817825Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:13:44.763921 containerd[1732]: time="2025-07-07T06:13:44.763909872Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 7 06:13:44.763958 containerd[1732]: time="2025-07-07T06:13:44.763950104Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:13:44.764314 containerd[1732]: time="2025-07-07T06:13:44.764298606Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 7 06:13:44.764353 containerd[1732]: time="2025-07-07T06:13:44.764345444Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:13:44.764396 containerd[1732]: time="2025-07-07T06:13:44.764386698Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 7 06:13:44.764435 containerd[1732]: time="2025-07-07T06:13:44.764427404Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 7 06:13:44.764528 containerd[1732]: time="2025-07-07T06:13:44.764519655Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 7 06:13:44.764752 containerd[1732]: time="2025-07-07T06:13:44.764729111Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:13:44.764803 containerd[1732]: time="2025-07-07T06:13:44.764761717Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 7 06:13:44.764803 containerd[1732]: time="2025-07-07T06:13:44.764773611Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 7 06:13:44.764857 containerd[1732]: time="2025-07-07T06:13:44.764809484Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 7 06:13:44.765391 containerd[1732]: time="2025-07-07T06:13:44.765057447Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 7 06:13:44.765391 containerd[1732]: time="2025-07-07T06:13:44.765110841Z" level=info msg="metadata content store policy set" policy=shared
Jul 7 06:13:44.798059 containerd[1732]: time="2025-07-07T06:13:44.798030569Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 7 06:13:44.798121 containerd[1732]: time="2025-07-07T06:13:44.798091198Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 7 06:13:44.798121 containerd[1732]: time="2025-07-07T06:13:44.798108346Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 7 06:13:44.798158 containerd[1732]: time="2025-07-07T06:13:44.798121505Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798228343Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798247191Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798266819Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798278308Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798289793Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798299138Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798309211Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798325553Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798437350Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798454531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798467836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798478318Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798488581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 7 06:13:44.799094 containerd[1732]: time="2025-07-07T06:13:44.798498979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 7 06:13:44.799361 containerd[1732]: time="2025-07-07T06:13:44.798510631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 7 06:13:44.799361 containerd[1732]: time="2025-07-07T06:13:44.798520874Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 7 06:13:44.799361 containerd[1732]: time="2025-07-07T06:13:44.798533876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 7
06:13:44.799361 containerd[1732]: time="2025-07-07T06:13:44.798544530Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 06:13:44.799361 containerd[1732]: time="2025-07-07T06:13:44.798561156Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 06:13:44.799361 containerd[1732]: time="2025-07-07T06:13:44.798635024Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 06:13:44.799361 containerd[1732]: time="2025-07-07T06:13:44.798649712Z" level=info msg="Start snapshots syncer" Jul 7 06:13:44.799361 containerd[1732]: time="2025-07-07T06:13:44.798676999Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 06:13:44.799497 containerd[1732]: time="2025-07-07T06:13:44.798961768Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":fals
e,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 06:13:44.799497 containerd[1732]: time="2025-07-07T06:13:44.799004445Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799092613Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799203920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799224322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799235211Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799247954Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799262217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 06:13:44.799623 containerd[1732]: 
time="2025-07-07T06:13:44.799273445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799285922Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799315169Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799327086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799336951Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799368333Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799383220Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:13:44.799623 containerd[1732]: time="2025-07-07T06:13:44.799391782Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:13:44.799878 containerd[1732]: time="2025-07-07T06:13:44.799400078Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:13:44.799878 containerd[1732]: time="2025-07-07T06:13:44.799407288Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 06:13:44.799878 containerd[1732]: time="2025-07-07T06:13:44.799452367Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 06:13:44.799878 containerd[1732]: time="2025-07-07T06:13:44.799469306Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 06:13:44.799878 containerd[1732]: time="2025-07-07T06:13:44.799485659Z" level=info msg="runtime interface created" Jul 7 06:13:44.799878 containerd[1732]: time="2025-07-07T06:13:44.799490616Z" level=info msg="created NRI interface" Jul 7 06:13:44.799878 containerd[1732]: time="2025-07-07T06:13:44.799499855Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 06:13:44.799878 containerd[1732]: time="2025-07-07T06:13:44.799511886Z" level=info msg="Connect containerd service" Jul 7 06:13:44.799878 containerd[1732]: time="2025-07-07T06:13:44.799537148Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:13:44.800321 containerd[1732]: time="2025-07-07T06:13:44.800290857Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:13:45.073386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:13:45.085038 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 06:13:45.587924 kubelet[1847]: E0707 06:13:45.587882 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 06:13:45.589732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 06:13:45.589873 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 06:13:45.590217 systemd[1]: kubelet.service: Consumed 951ms CPU time, 266.3M memory peak.
Jul 7 06:13:45.680800 containerd[1732]: time="2025-07-07T06:13:45.680674097Z" level=info msg="Start subscribing containerd event"
Jul 7 06:13:45.680974 containerd[1732]: time="2025-07-07T06:13:45.680771457Z" level=info msg="Start recovering state"
Jul 7 06:13:45.681006 containerd[1732]: time="2025-07-07T06:13:45.680970201Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 7 06:13:45.681031 containerd[1732]: time="2025-07-07T06:13:45.681016856Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 7 06:13:45.681131 containerd[1732]: time="2025-07-07T06:13:45.681115111Z" level=info msg="Start event monitor"
Jul 7 06:13:45.681179 containerd[1732]: time="2025-07-07T06:13:45.681171136Z" level=info msg="Start cni network conf syncer for default"
Jul 7 06:13:45.681237 containerd[1732]: time="2025-07-07T06:13:45.681209803Z" level=info msg="Start streaming server"
Jul 7 06:13:45.681237 containerd[1732]: time="2025-07-07T06:13:45.681224599Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 7 06:13:45.681374 containerd[1732]: time="2025-07-07T06:13:45.681305239Z" level=info msg="runtime interface starting up..."
Jul 7 06:13:45.681374 containerd[1732]: time="2025-07-07T06:13:45.681314674Z" level=info msg="starting plugins..."
Jul 7 06:13:45.681374 containerd[1732]: time="2025-07-07T06:13:45.681328830Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 7 06:13:45.681598 containerd[1732]: time="2025-07-07T06:13:45.681537168Z" level=info msg="containerd successfully booted in 0.930099s"
Jul 7 06:13:45.681612 systemd[1]: Started containerd.service - containerd container runtime.
Jul 7 06:13:45.685101 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 06:13:45.687464 systemd[1]: Startup finished in 3.241s (kernel) + 12.551s (initrd) + 22.865s (userspace) = 38.658s.
Jul 7 06:13:46.005131 login[1828]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 7 06:13:46.005673 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 7 06:13:46.013699 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 7 06:13:46.014793 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 7 06:13:46.023531 systemd-logind[1706]: New session 2 of user core.
Jul 7 06:13:46.028990 systemd-logind[1706]: New session 1 of user core.
Jul 7 06:13:46.037927 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 7 06:13:46.041943 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 7 06:13:46.052872 (systemd)[1872]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 7 06:13:46.055342 systemd-logind[1706]: New session c1 of user core.
Jul 7 06:13:46.135574 waagent[1825]: 2025-07-07T06:13:46.135501Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4
Jul 7 06:13:46.136222 waagent[1825]: 2025-07-07T06:13:46.136177Z INFO Daemon Daemon OS: flatcar 4372.0.1
Jul 7 06:13:46.136312 waagent[1825]: 2025-07-07T06:13:46.136289Z INFO Daemon Daemon Python: 3.11.12
Jul 7 06:13:46.141736 waagent[1825]: 2025-07-07T06:13:46.140568Z INFO Daemon Daemon Run daemon
Jul 7 06:13:46.142288 waagent[1825]: 2025-07-07T06:13:46.142255Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4372.0.1'
Jul 7 06:13:46.145894 waagent[1825]: 2025-07-07T06:13:46.145682Z INFO Daemon Daemon Using waagent for provisioning
Jul 7 06:13:46.147521 waagent[1825]: 2025-07-07T06:13:46.147479Z INFO Daemon Daemon Activate resource disk
Jul 7 06:13:46.149309 waagent[1825]: 2025-07-07T06:13:46.149278Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jul 7 06:13:46.154157 waagent[1825]: 2025-07-07T06:13:46.154118Z INFO Daemon Daemon Found device: None
Jul 7 06:13:46.156208 waagent[1825]: 2025-07-07T06:13:46.156173Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jul 7 06:13:46.159345 waagent[1825]: 2025-07-07T06:13:46.159314Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jul 7 06:13:46.163935 waagent[1825]: 2025-07-07T06:13:46.163899Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 7 06:13:46.166234 waagent[1825]: 2025-07-07T06:13:46.166202Z INFO Daemon Daemon Running default provisioning handler
Jul 7 06:13:46.175108 waagent[1825]: 2025-07-07T06:13:46.174334Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jul 7 06:13:46.175625 waagent[1825]: 2025-07-07T06:13:46.175590Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 7 06:13:46.175833 waagent[1825]: 2025-07-07T06:13:46.175813Z INFO Daemon Daemon cloud-init is enabled: False
Jul 7 06:13:46.176372 waagent[1825]: 2025-07-07T06:13:46.176353Z INFO Daemon Daemon Copying ovf-env.xml
Jul 7 06:13:46.224193 systemd[1872]: Queued start job for default target default.target.
Jul 7 06:13:46.236625 systemd[1872]: Created slice app.slice - User Application Slice.
Jul 7 06:13:46.237076 systemd[1872]: Reached target paths.target - Paths.
Jul 7 06:13:46.237229 systemd[1872]: Reached target timers.target - Timers.
Jul 7 06:13:46.240840 systemd[1872]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 06:13:46.246690 waagent[1825]: 2025-07-07T06:13:46.246628Z INFO Daemon Daemon Successfully mounted dvd
Jul 7 06:13:46.253935 systemd[1872]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 06:13:46.254028 systemd[1872]: Reached target sockets.target - Sockets.
Jul 7 06:13:46.254113 systemd[1872]: Reached target basic.target - Basic System.
Jul 7 06:13:46.254143 systemd[1872]: Reached target default.target - Main User Target.
Jul 7 06:13:46.254166 systemd[1872]: Startup finished in 191ms.
Jul 7 06:13:46.254638 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 06:13:46.264951 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 06:13:46.265723 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 06:13:46.275895 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jul 7 06:13:46.278061 waagent[1825]: 2025-07-07T06:13:46.278018Z INFO Daemon Daemon Detect protocol endpoint
Jul 7 06:13:46.279902 waagent[1825]: 2025-07-07T06:13:46.278542Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 7 06:13:46.279902 waagent[1825]: 2025-07-07T06:13:46.278909Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jul 7 06:13:46.279902 waagent[1825]: 2025-07-07T06:13:46.279179Z INFO Daemon Daemon Test for route to 168.63.129.16
Jul 7 06:13:46.279902 waagent[1825]: 2025-07-07T06:13:46.279336Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jul 7 06:13:46.279902 waagent[1825]: 2025-07-07T06:13:46.279498Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jul 7 06:13:46.293278 waagent[1825]: 2025-07-07T06:13:46.293215Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jul 7 06:13:46.294207 waagent[1825]: 2025-07-07T06:13:46.293832Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jul 7 06:13:46.294207 waagent[1825]: 2025-07-07T06:13:46.293988Z INFO Daemon Daemon Server preferred version:2015-04-05
Jul 7 06:13:46.373364 waagent[1825]: 2025-07-07T06:13:46.372126Z INFO Daemon Daemon Initializing goal state during protocol detection
Jul 7 06:13:46.373364 waagent[1825]: 2025-07-07T06:13:46.372409Z INFO Daemon Daemon Forcing an update of the goal state.
Jul 7 06:13:46.383804 waagent[1825]: 2025-07-07T06:13:46.383772Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 7 06:13:46.398721 waagent[1825]: 2025-07-07T06:13:46.398687Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175
Jul 7 06:13:46.401252 waagent[1825]: 2025-07-07T06:13:46.399316Z INFO Daemon
Jul 7 06:13:46.401252 waagent[1825]: 2025-07-07T06:13:46.399402Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 491c9ea0-7301-4cc1-b570-ad33494e4a63 eTag: 1559446552952181114 source: Fabric]
Jul 7 06:13:46.401252 waagent[1825]: 2025-07-07T06:13:46.399882Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jul 7 06:13:46.401252 waagent[1825]: 2025-07-07T06:13:46.400177Z INFO Daemon
Jul 7 06:13:46.401252 waagent[1825]: 2025-07-07T06:13:46.400325Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jul 7 06:13:46.407200 waagent[1825]: 2025-07-07T06:13:46.407170Z INFO Daemon Daemon Downloading artifacts profile blob
Jul 7 06:13:46.575476 waagent[1825]: 2025-07-07T06:13:46.575388Z INFO Daemon Downloaded certificate {'thumbprint': 'C5FA5AF5F3E64232372811AFA9A94403DC73E963', 'hasPrivateKey': True}
Jul 7 06:13:46.578033 waagent[1825]: 2025-07-07T06:13:46.578000Z INFO Daemon Fetch goal state completed
Jul 7 06:13:46.588060 waagent[1825]: 2025-07-07T06:13:46.588019Z INFO Daemon Daemon Starting provisioning
Jul 7 06:13:46.588789 waagent[1825]: 2025-07-07T06:13:46.588691Z INFO Daemon Daemon Handle ovf-env.xml.
Jul 7 06:13:46.589436 waagent[1825]: 2025-07-07T06:13:46.589209Z INFO Daemon Daemon Set hostname [ci-4372.0.1-a-6edf51656b]
Jul 7 06:13:46.592404 waagent[1825]: 2025-07-07T06:13:46.592365Z INFO Daemon Daemon Publish hostname [ci-4372.0.1-a-6edf51656b]
Jul 7 06:13:46.593771 waagent[1825]: 2025-07-07T06:13:46.593738Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jul 7 06:13:46.595140 waagent[1825]: 2025-07-07T06:13:46.595108Z INFO Daemon Daemon Primary interface is [eth0]
Jul 7 06:13:46.602514 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:13:46.602523 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:13:46.602550 systemd-networkd[1352]: eth0: DHCP lease lost
Jul 7 06:13:46.603459 waagent[1825]: 2025-07-07T06:13:46.603415Z INFO Daemon Daemon Create user account if not exists
Jul 7 06:13:46.604167 waagent[1825]: 2025-07-07T06:13:46.603974Z INFO Daemon Daemon User core already exists, skip useradd
Jul 7 06:13:46.604433 waagent[1825]: 2025-07-07T06:13:46.604411Z INFO Daemon Daemon Configure sudoer
Jul 7 06:13:46.616075 waagent[1825]: 2025-07-07T06:13:46.616033Z INFO Daemon Daemon Configure sshd
Jul 7 06:13:46.621716 waagent[1825]: 2025-07-07T06:13:46.621675Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jul 7 06:13:46.626614 waagent[1825]: 2025-07-07T06:13:46.622177Z INFO Daemon Daemon Deploy ssh public key.
Jul 7 06:13:46.626763 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.4.32/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jul 7 06:13:47.743846 waagent[1825]: 2025-07-07T06:13:47.743789Z INFO Daemon Daemon Provisioning complete
Jul 7 06:13:47.754415 waagent[1825]: 2025-07-07T06:13:47.754375Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jul 7 06:13:47.756267 waagent[1825]: 2025-07-07T06:13:47.755329Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jul 7 06:13:47.756267 waagent[1825]: 2025-07-07T06:13:47.755550Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent
Jul 7 06:13:47.856894 waagent[1921]: 2025-07-07T06:13:47.856845Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4)
Jul 7 06:13:47.857157 waagent[1921]: 2025-07-07T06:13:47.856933Z INFO ExtHandler ExtHandler OS: flatcar 4372.0.1
Jul 7 06:13:47.857157 waagent[1921]: 2025-07-07T06:13:47.856971Z INFO ExtHandler ExtHandler Python: 3.11.12
Jul 7 06:13:47.857157 waagent[1921]: 2025-07-07T06:13:47.857006Z INFO ExtHandler ExtHandler CPU Arch: x86_64
Jul 7 06:13:47.892843 waagent[1921]: 2025-07-07T06:13:47.892794Z INFO ExtHandler ExtHandler Distro: flatcar-4372.0.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.12; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0;
Jul 7 06:13:47.892969 waagent[1921]: 2025-07-07T06:13:47.892944Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 7 06:13:47.893018 waagent[1921]: 2025-07-07T06:13:47.892998Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 7 06:13:47.899239 waagent[1921]: 2025-07-07T06:13:47.899184Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 7 06:13:47.907512 waagent[1921]: 2025-07-07T06:13:47.907481Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175
Jul 7 06:13:47.907884 waagent[1921]: 2025-07-07T06:13:47.907851Z INFO ExtHandler
Jul 7 06:13:47.907927 waagent[1921]: 2025-07-07T06:13:47.907911Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 8287aaea-9896-4d0a-9419-1f650055f114 eTag: 1559446552952181114 source: Fabric]
Jul 7 06:13:47.908128 waagent[1921]: 2025-07-07T06:13:47.908104Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jul 7 06:13:47.908461 waagent[1921]: 2025-07-07T06:13:47.908436Z INFO ExtHandler
Jul 7 06:13:47.908494 waagent[1921]: 2025-07-07T06:13:47.908476Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jul 7 06:13:47.913293 waagent[1921]: 2025-07-07T06:13:47.913261Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jul 7 06:13:47.982155 waagent[1921]: 2025-07-07T06:13:47.982108Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C5FA5AF5F3E64232372811AFA9A94403DC73E963', 'hasPrivateKey': True}
Jul 7 06:13:47.982440 waagent[1921]: 2025-07-07T06:13:47.982413Z INFO ExtHandler Fetch goal state completed
Jul 7 06:13:47.992941 waagent[1921]: 2025-07-07T06:13:47.992895Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025)
Jul 7 06:13:47.997238 waagent[1921]: 2025-07-07T06:13:47.997162Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1921
Jul 7 06:13:47.997298 waagent[1921]: 2025-07-07T06:13:47.997279Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jul 7 06:13:47.997519 waagent[1921]: 2025-07-07T06:13:47.997495Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ********
Jul 7 06:13:47.998486 waagent[1921]: 2025-07-07T06:13:47.998450Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk']
Jul 7 06:13:47.998797 waagent[1921]: 2025-07-07T06:13:47.998772Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4372.0.1', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported
Jul 7 06:13:47.998900 waagent[1921]: 2025-07-07T06:13:47.998879Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False
Jul 7 06:13:47.999267 waagent[1921]: 2025-07-07T06:13:47.999242Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jul 7 06:13:48.279049 waagent[1921]: 2025-07-07T06:13:48.278958Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul 7 06:13:48.279179 waagent[1921]: 2025-07-07T06:13:48.279153Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 7 06:13:48.285731 waagent[1921]: 2025-07-07T06:13:48.285648Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul 7 06:13:48.291982 systemd[1]: Reload requested from client PID 1936 ('systemctl') (unit waagent.service)...
Jul 7 06:13:48.291996 systemd[1]: Reloading...
Jul 7 06:13:48.348723 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#205 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001
Jul 7 06:13:48.371851 zram_generator::config[1977]: No configuration found.
Jul 7 06:13:48.452439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:13:48.543588 systemd[1]: Reloading finished in 251 ms.
Jul 7 06:13:48.566532 waagent[1921]: 2025-07-07T06:13:48.565865Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jul 7 06:13:48.566532 waagent[1921]: 2025-07-07T06:13:48.565978Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jul 7 06:13:49.115693 waagent[1921]: 2025-07-07T06:13:49.115618Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jul 7 06:13:49.116058 waagent[1921]: 2025-07-07T06:13:49.116013Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True]
Jul 7 06:13:49.116824 waagent[1921]: 2025-07-07T06:13:49.116787Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 7 06:13:49.117199 waagent[1921]: 2025-07-07T06:13:49.117157Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 7 06:13:49.117302 waagent[1921]: 2025-07-07T06:13:49.117280Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 7 06:13:49.117440 waagent[1921]: 2025-07-07T06:13:49.117424Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 7 06:13:49.117483 waagent[1921]: 2025-07-07T06:13:49.117460Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 7 06:13:49.117688 waagent[1921]: 2025-07-07T06:13:49.117654Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 7 06:13:49.117907 waagent[1921]: 2025-07-07T06:13:49.117884Z INFO EnvHandler ExtHandler Configure routes
Jul 7 06:13:49.117973 waagent[1921]: 2025-07-07T06:13:49.117942Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 7 06:13:49.118053 waagent[1921]: 2025-07-07T06:13:49.118032Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 7 06:13:49.118259 waagent[1921]: 2025-07-07T06:13:49.118239Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 7 06:13:49.118480 waagent[1921]: 2025-07-07T06:13:49.118463Z INFO EnvHandler ExtHandler Gateway:None
Jul 7 06:13:49.118519 waagent[1921]: 2025-07-07T06:13:49.118504Z INFO EnvHandler ExtHandler Routes:None
Jul 7 06:13:49.118825 waagent[1921]: 2025-07-07T06:13:49.118800Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 7 06:13:49.118867 waagent[1921]: 2025-07-07T06:13:49.118853Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul 7 06:13:49.119166 waagent[1921]: 2025-07-07T06:13:49.119150Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 7 06:13:49.119870 waagent[1921]: 2025-07-07T06:13:49.119838Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 7 06:13:49.119870 waagent[1921]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 7 06:13:49.119870 waagent[1921]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
Jul 7 06:13:49.119870 waagent[1921]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 7 06:13:49.119870 waagent[1921]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 7 06:13:49.119870 waagent[1921]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 7 06:13:49.119870 waagent[1921]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 7 06:13:49.134773 waagent[1921]: 2025-07-07T06:13:49.134079Z INFO ExtHandler ExtHandler
Jul 7 06:13:49.134773 waagent[1921]: 2025-07-07T06:13:49.134128Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2ff0d20e-6b84-4c76-84ef-6ba65f19cf67 correlation f1103ac1-bd2d-4f9e-a3a2-14de4f5ba47e created: 2025-07-07T06:12:40.733475Z]
Jul 7 06:13:49.134773 waagent[1921]: 2025-07-07T06:13:49.134354Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul 7 06:13:49.134773 waagent[1921]: 2025-07-07T06:13:49.134751Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms]
Jul 7 06:13:49.173282 waagent[1921]: 2025-07-07T06:13:49.173238Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command
Jul 7 06:13:49.173282 waagent[1921]: Try `iptables -h' or 'iptables --help' for more information.)
Jul 7 06:13:49.173581 waagent[1921]: 2025-07-07T06:13:49.173556Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4ABF29E0-F8EB-4979-8AF9-CFE12D720639;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;]
Jul 7 06:13:49.231049 waagent[1921]: 2025-07-07T06:13:49.231003Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 7 06:13:49.231049 waagent[1921]: Executing ['ip', '-a', '-o', 'link']:
Jul 7 06:13:49.231049 waagent[1921]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 7 06:13:49.231049 waagent[1921]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:42:f7:91 brd ff:ff:ff:ff:ff:ff\ alias Network Device
Jul 7 06:13:49.231049 waagent[1921]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:42:f7:91 brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0
Jul 7 06:13:49.231049 waagent[1921]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 7 06:13:49.231049 waagent[1921]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 7 06:13:49.231049 waagent[1921]: 2: eth0 inet 10.200.4.32/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 7 06:13:49.231049 waagent[1921]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 7 06:13:49.231049 waagent[1921]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 7 06:13:49.231049 waagent[1921]: 2: eth0 inet6 fe80::222:48ff:fe42:f791/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 7 06:13:49.231049 waagent[1921]: 3: enP30832s1 inet6 fe80::222:48ff:fe42:f791/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 7 06:13:49.287866 waagent[1921]: 2025-07-07T06:13:49.287735Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric:
Jul 7 06:13:49.287866 waagent[1921]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:13:49.287866 waagent[1921]: pkts bytes target prot opt in out source destination
Jul 7 06:13:49.287866 waagent[1921]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:13:49.287866 waagent[1921]: pkts bytes target prot opt in out source destination
Jul 7 06:13:49.287866 waagent[1921]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:13:49.287866 waagent[1921]: pkts bytes target prot opt in out source destination
Jul 7 06:13:49.287866 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 7 06:13:49.287866 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 7 06:13:49.287866 waagent[1921]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 7 06:13:49.290330 waagent[1921]: 2025-07-07T06:13:49.290287Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 7 06:13:49.290330 waagent[1921]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 7 06:13:49.290330 waagent[1921]: pkts bytes target prot opt in out source destination
Jul 7 06:13:49.290330 waagent[1921]: Chain FORWARD (policy ACCEPT
0 packets, 0 bytes) Jul 7 06:13:49.290330 waagent[1921]: pkts bytes target prot opt in out source destination Jul 7 06:13:49.290330 waagent[1921]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 7 06:13:49.290330 waagent[1921]: pkts bytes target prot opt in out source destination Jul 7 06:13:49.290330 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 7 06:13:49.290330 waagent[1921]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 7 06:13:49.290330 waagent[1921]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 7 06:13:55.724379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:13:55.726535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:13:57.641448 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:13:57.642850 systemd[1]: Started sshd@0-10.200.4.32:22-10.200.16.10:55216.service - OpenSSH per-connection server daemon (10.200.16.10:55216). Jul 7 06:13:58.919838 sshd[2069]: Accepted publickey for core from 10.200.16.10 port 55216 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:13:58.921435 sshd-session[2069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:13:58.926112 systemd-logind[1706]: New session 3 of user core. Jul 7 06:13:58.928877 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:13:59.454277 systemd[1]: Started sshd@1-10.200.4.32:22-10.200.16.10:55230.service - OpenSSH per-connection server daemon (10.200.16.10:55230). Jul 7 06:14:00.053530 sshd[2074]: Accepted publickey for core from 10.200.16.10 port 55230 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:14:00.055040 sshd-session[2074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:00.060017 systemd-logind[1706]: New session 4 of user core. 
Jul 7 06:14:00.065878 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:14:00.476122 sshd[2076]: Connection closed by 10.200.16.10 port 55230 Jul 7 06:14:00.476743 sshd-session[2074]: pam_unix(sshd:session): session closed for user core Jul 7 06:14:00.480360 systemd[1]: sshd@1-10.200.4.32:22-10.200.16.10:55230.service: Deactivated successfully. Jul 7 06:14:00.482056 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:14:00.482745 systemd-logind[1706]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:14:00.484030 systemd-logind[1706]: Removed session 4. Jul 7 06:14:00.582754 systemd[1]: Started sshd@2-10.200.4.32:22-10.200.16.10:43948.service - OpenSSH per-connection server daemon (10.200.16.10:43948). Jul 7 06:14:01.181388 sshd[2082]: Accepted publickey for core from 10.200.16.10 port 43948 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:14:01.182908 sshd-session[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:01.188629 systemd-logind[1706]: New session 5 of user core. Jul 7 06:14:01.195900 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:14:01.601933 sshd[2084]: Connection closed by 10.200.16.10 port 43948 Jul 7 06:14:01.602657 sshd-session[2082]: pam_unix(sshd:session): session closed for user core Jul 7 06:14:01.606535 systemd[1]: sshd@2-10.200.4.32:22-10.200.16.10:43948.service: Deactivated successfully. Jul 7 06:14:01.608207 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:14:01.608965 systemd-logind[1706]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:14:01.610384 systemd-logind[1706]: Removed session 5. Jul 7 06:14:01.712005 systemd[1]: Started sshd@3-10.200.4.32:22-10.200.16.10:43950.service - OpenSSH per-connection server daemon (10.200.16.10:43950). 
Jul 7 06:14:02.317846 sshd[2090]: Accepted publickey for core from 10.200.16.10 port 43950 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:14:02.319376 sshd-session[2090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:02.324016 systemd-logind[1706]: New session 6 of user core. Jul 7 06:14:02.331861 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:14:02.740213 sshd[2092]: Connection closed by 10.200.16.10 port 43950 Jul 7 06:14:02.740833 sshd-session[2090]: pam_unix(sshd:session): session closed for user core Jul 7 06:14:02.744358 systemd[1]: sshd@3-10.200.4.32:22-10.200.16.10:43950.service: Deactivated successfully. Jul 7 06:14:02.746026 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:14:02.746756 systemd-logind[1706]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:14:02.747854 systemd-logind[1706]: Removed session 6. Jul 7 06:14:02.845252 systemd[1]: Started sshd@4-10.200.4.32:22-10.200.16.10:43962.service - OpenSSH per-connection server daemon (10.200.16.10:43962). Jul 7 06:14:03.861819 sshd[2098]: Accepted publickey for core from 10.200.16.10 port 43962 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:14:03.862474 sshd-session[2098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:03.869260 systemd-logind[1706]: New session 7 of user core. Jul 7 06:14:03.876049 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:14:04.050036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:14:04.058897 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:14:04.095644 kubelet[2106]: E0707 06:14:04.095597 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:14:04.098166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:14:04.098289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:14:04.098655 systemd[1]: kubelet.service: Consumed 159ms CPU time, 110.5M memory peak. Jul 7 06:14:04.262314 sudo[2112]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:14:04.262564 sudo[2112]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:14:04.293335 sudo[2112]: pam_unix(sudo:session): session closed for user root Jul 7 06:14:04.395281 sshd[2100]: Connection closed by 10.200.16.10 port 43962 Jul 7 06:14:04.396120 sshd-session[2098]: pam_unix(sshd:session): session closed for user core Jul 7 06:14:04.400284 systemd[1]: sshd@4-10.200.4.32:22-10.200.16.10:43962.service: Deactivated successfully. Jul 7 06:14:04.401861 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:14:04.402561 systemd-logind[1706]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:14:04.403907 systemd-logind[1706]: Removed session 7. Jul 7 06:14:04.501023 systemd[1]: Started sshd@5-10.200.4.32:22-10.200.16.10:43978.service - OpenSSH per-connection server daemon (10.200.16.10:43978). 
Jul 7 06:14:05.100419 sshd[2118]: Accepted publickey for core from 10.200.16.10 port 43978 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:14:05.102053 sshd-session[2118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:05.106933 systemd-logind[1706]: New session 8 of user core. Jul 7 06:14:05.111862 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:14:05.429973 sudo[2122]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:14:05.430404 sudo[2122]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:14:05.445949 sudo[2122]: pam_unix(sudo:session): session closed for user root Jul 7 06:14:05.450119 sudo[2121]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 06:14:05.450323 sudo[2121]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:14:05.458191 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:14:05.489298 augenrules[2144]: No rules Jul 7 06:14:05.490395 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:14:05.490640 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:14:05.491836 sudo[2121]: pam_unix(sudo:session): session closed for user root Jul 7 06:14:05.590053 sshd[2120]: Connection closed by 10.200.16.10 port 43978 Jul 7 06:14:05.590585 sshd-session[2118]: pam_unix(sshd:session): session closed for user core Jul 7 06:14:05.594148 systemd[1]: sshd@5-10.200.4.32:22-10.200.16.10:43978.service: Deactivated successfully. Jul 7 06:14:05.595726 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:14:05.596464 systemd-logind[1706]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:14:05.597578 systemd-logind[1706]: Removed session 8. 
Jul 7 06:14:05.702040 systemd[1]: Started sshd@6-10.200.4.32:22-10.200.16.10:43994.service - OpenSSH per-connection server daemon (10.200.16.10:43994). Jul 7 06:14:06.302167 sshd[2153]: Accepted publickey for core from 10.200.16.10 port 43994 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:14:06.303666 sshd-session[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:14:06.308934 systemd-logind[1706]: New session 9 of user core. Jul 7 06:14:06.314879 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:14:06.630692 sudo[2156]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:14:06.630951 sudo[2156]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:14:07.577889 chronyd[1726]: Selected source PHC0 Jul 7 06:14:07.841369 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:14:07.853031 (dockerd)[2175]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:14:09.675568 dockerd[2175]: time="2025-07-07T06:14:09.675494922Z" level=info msg="Starting up" Jul 7 06:14:09.676938 dockerd[2175]: time="2025-07-07T06:14:09.676911468Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 06:14:13.121450 dockerd[2175]: time="2025-07-07T06:14:13.121364732Z" level=info msg="Loading containers: start." Jul 7 06:14:13.478743 kernel: Initializing XFRM netlink socket Jul 7 06:14:13.994124 systemd-networkd[1352]: docker0: Link UP Jul 7 06:14:14.224281 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 06:14:14.226460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:19.158337 dockerd[2175]: time="2025-07-07T06:14:19.157442436Z" level=info msg="Loading containers: done." 
Jul 7 06:14:20.117433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:20.130925 (kubelet)[2350]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:14:20.170749 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:14:20.603132 kubelet[2350]: E0707 06:14:20.169445 2350 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:14:20.170855 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:14:20.171156 systemd[1]: kubelet.service: Consumed 158ms CPU time, 110.2M memory peak. Jul 7 06:14:20.764346 dockerd[2175]: time="2025-07-07T06:14:20.764274773Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:14:20.764808 dockerd[2175]: time="2025-07-07T06:14:20.764443168Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 06:14:20.764808 dockerd[2175]: time="2025-07-07T06:14:20.764621401Z" level=info msg="Initializing buildkit" Jul 7 06:14:21.552898 dockerd[2175]: time="2025-07-07T06:14:21.552831285Z" level=info msg="Completed buildkit initialization" Jul 7 06:14:21.559878 dockerd[2175]: time="2025-07-07T06:14:21.559826233Z" level=info msg="Daemon has completed initialization" Jul 7 06:14:21.560623 dockerd[2175]: time="2025-07-07T06:14:21.559972699Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:14:21.560144 systemd[1]: Started docker.service - Docker Application Container 
Engine. Jul 7 06:14:22.814681 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 7 06:14:22.860889 containerd[1732]: time="2025-07-07T06:14:22.860843009Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 06:14:24.661522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569908826.mount: Deactivated successfully. Jul 7 06:14:26.165755 containerd[1732]: time="2025-07-07T06:14:26.165692524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:26.170811 containerd[1732]: time="2025-07-07T06:14:26.170778732Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053" Jul 7 06:14:26.176661 containerd[1732]: time="2025-07-07T06:14:26.176608330Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:26.184687 containerd[1732]: time="2025-07-07T06:14:26.184618447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:26.185608 containerd[1732]: time="2025-07-07T06:14:26.185420297Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 3.324523231s" Jul 7 06:14:26.185608 containerd[1732]: time="2025-07-07T06:14:26.185459972Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference 
\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 06:14:26.186236 containerd[1732]: time="2025-07-07T06:14:26.186189522Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 06:14:27.741429 containerd[1732]: time="2025-07-07T06:14:27.741353322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:27.747150 containerd[1732]: time="2025-07-07T06:14:27.747106854Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jul 7 06:14:27.754183 containerd[1732]: time="2025-07-07T06:14:27.754138883Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:27.765666 containerd[1732]: time="2025-07-07T06:14:27.765603605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:27.766499 containerd[1732]: time="2025-07-07T06:14:27.766318681Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.58008704s" Jul 7 06:14:27.766499 containerd[1732]: time="2025-07-07T06:14:27.766352426Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 06:14:27.767167 containerd[1732]: 
time="2025-07-07T06:14:27.767133640Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 06:14:29.029720 update_engine[1708]: I20250707 06:14:29.029578 1708 update_attempter.cc:509] Updating boot flags... Jul 7 06:14:29.237863 containerd[1732]: time="2025-07-07T06:14:29.237811980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:29.242396 containerd[1732]: time="2025-07-07T06:14:29.242326101Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jul 7 06:14:29.245987 containerd[1732]: time="2025-07-07T06:14:29.245944123Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:29.251614 containerd[1732]: time="2025-07-07T06:14:29.251570232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:29.252312 containerd[1732]: time="2025-07-07T06:14:29.252144930Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.484970594s" Jul 7 06:14:29.252312 containerd[1732]: time="2025-07-07T06:14:29.252182082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 06:14:29.252844 containerd[1732]: time="2025-07-07T06:14:29.252827565Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 06:14:30.224395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 06:14:30.226384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:30.711762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896868214.mount: Deactivated successfully. Jul 7 06:14:30.784399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:30.793981 (kubelet)[2485]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:14:30.832228 kubelet[2485]: E0707 06:14:30.832191 2485 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:14:30.833738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:14:30.833857 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:14:30.834171 systemd[1]: kubelet.service: Consumed 157ms CPU time, 110.6M memory peak. 
Jul 7 06:14:31.221642 containerd[1732]: time="2025-07-07T06:14:31.221597885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:31.226988 containerd[1732]: time="2025-07-07T06:14:31.226951333Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jul 7 06:14:31.235439 containerd[1732]: time="2025-07-07T06:14:31.235385184Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:31.239848 containerd[1732]: time="2025-07-07T06:14:31.239802618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:31.240348 containerd[1732]: time="2025-07-07T06:14:31.240173166Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 1.987316584s" Jul 7 06:14:31.240348 containerd[1732]: time="2025-07-07T06:14:31.240208123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 06:14:31.240934 containerd[1732]: time="2025-07-07T06:14:31.240900547Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:14:31.991033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165574434.mount: Deactivated successfully. 
Jul 7 06:14:33.388215 containerd[1732]: time="2025-07-07T06:14:33.388154544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:33.392248 containerd[1732]: time="2025-07-07T06:14:33.392214822Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 7 06:14:33.399627 containerd[1732]: time="2025-07-07T06:14:33.399582795Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:33.409664 containerd[1732]: time="2025-07-07T06:14:33.409616688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:33.410550 containerd[1732]: time="2025-07-07T06:14:33.410346483Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.169416663s" Jul 7 06:14:33.410550 containerd[1732]: time="2025-07-07T06:14:33.410383592Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 06:14:33.411173 containerd[1732]: time="2025-07-07T06:14:33.411147801Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:14:34.018357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1106688089.mount: Deactivated successfully. 
Jul 7 06:14:34.053405 containerd[1732]: time="2025-07-07T06:14:34.053362772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:14:34.056962 containerd[1732]: time="2025-07-07T06:14:34.056931492Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 7 06:14:34.061171 containerd[1732]: time="2025-07-07T06:14:34.061131435Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:14:34.065428 containerd[1732]: time="2025-07-07T06:14:34.065385956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:14:34.066060 containerd[1732]: time="2025-07-07T06:14:34.065822805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 654.645639ms" Jul 7 06:14:34.066060 containerd[1732]: time="2025-07-07T06:14:34.065851811Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 06:14:34.066471 containerd[1732]: time="2025-07-07T06:14:34.066454265Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 06:14:34.766495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount7615453.mount: Deactivated 
successfully. Jul 7 06:14:36.573010 containerd[1732]: time="2025-07-07T06:14:36.572949154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:36.577082 containerd[1732]: time="2025-07-07T06:14:36.577034477Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jul 7 06:14:36.580818 containerd[1732]: time="2025-07-07T06:14:36.580773571Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:36.585689 containerd[1732]: time="2025-07-07T06:14:36.585634095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:14:36.586491 containerd[1732]: time="2025-07-07T06:14:36.586297468Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.519819044s" Jul 7 06:14:36.586491 containerd[1732]: time="2025-07-07T06:14:36.586327397Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 06:14:39.902078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:39.902623 systemd[1]: kubelet.service: Consumed 157ms CPU time, 110.6M memory peak. Jul 7 06:14:39.904925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:39.928260 systemd[1]: Reload requested from client PID 2633 ('systemctl') (unit session-9.scope)... 
Jul 7 06:14:39.928287 systemd[1]: Reloading... Jul 7 06:14:40.027730 zram_generator::config[2675]: No configuration found. Jul 7 06:14:40.155469 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:14:40.254129 systemd[1]: Reloading finished in 325 ms. Jul 7 06:14:40.285405 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:14:40.285505 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:14:40.285789 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:40.285841 systemd[1]: kubelet.service: Consumed 88ms CPU time, 78.1M memory peak. Jul 7 06:14:40.287770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:14:40.800006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:14:40.809015 (kubelet)[2746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:14:40.847842 kubelet[2746]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:14:40.847842 kubelet[2746]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:14:40.847842 kubelet[2746]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:14:40.848132 kubelet[2746]: I0707 06:14:40.847911 2746 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:14:41.180063 kubelet[2746]: I0707 06:14:41.179968 2746 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:14:41.180063 kubelet[2746]: I0707 06:14:41.179992 2746 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:14:41.180433 kubelet[2746]: I0707 06:14:41.180276 2746 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:14:41.210416 kubelet[2746]: E0707 06:14:41.210387 2746 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:41.211292 kubelet[2746]: I0707 06:14:41.211183 2746 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:14:41.220856 kubelet[2746]: I0707 06:14:41.220838 2746 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:14:41.223722 kubelet[2746]: I0707 06:14:41.223685 2746 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:14:41.223941 kubelet[2746]: I0707 06:14:41.223912 2746 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:14:41.224117 kubelet[2746]: I0707 06:14:41.223940 2746 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-6edf51656b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:14:41.224249 kubelet[2746]: I0707 06:14:41.224128 2746 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 7 06:14:41.224249 kubelet[2746]: I0707 06:14:41.224138 2746 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:14:41.224291 kubelet[2746]: I0707 06:14:41.224266 2746 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:41.227388 kubelet[2746]: I0707 06:14:41.227372 2746 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:14:41.227451 kubelet[2746]: I0707 06:14:41.227406 2746 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:14:41.227451 kubelet[2746]: I0707 06:14:41.227433 2746 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:14:41.227451 kubelet[2746]: I0707 06:14:41.227448 2746 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:14:41.234058 kubelet[2746]: W0707 06:14:41.233808 2746 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Jul 7 06:14:41.234187 kubelet[2746]: E0707 06:14:41.234174 2746 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:41.234314 kubelet[2746]: I0707 06:14:41.234305 2746 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:14:41.234732 kubelet[2746]: I0707 06:14:41.234720 2746 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:14:41.235936 kubelet[2746]: W0707 06:14:41.235475 2746 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:14:41.236651 kubelet[2746]: W0707 06:14:41.236411 2746 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-6edf51656b&limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Jul 7 06:14:41.236651 kubelet[2746]: E0707 06:14:41.236470 2746 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372.0.1-a-6edf51656b&limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:41.237875 kubelet[2746]: I0707 06:14:41.237856 2746 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:14:41.237932 kubelet[2746]: I0707 06:14:41.237893 2746 server.go:1287] "Started kubelet" Jul 7 06:14:41.238070 kubelet[2746]: I0707 06:14:41.237984 2746 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:14:41.239093 kubelet[2746]: I0707 06:14:41.238830 2746 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:14:41.241277 kubelet[2746]: I0707 06:14:41.241025 2746 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:14:41.244209 kubelet[2746]: I0707 06:14:41.244175 2746 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:14:41.244389 kubelet[2746]: I0707 06:14:41.244375 2746 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:14:41.246145 kubelet[2746]: E0707 06:14:41.244553 2746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.32:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.32:6443: connect: 
connection refused" event="&Event{ObjectMeta:{ci-4372.0.1-a-6edf51656b.184fe377373e89c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372.0.1-a-6edf51656b,UID:ci-4372.0.1-a-6edf51656b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372.0.1-a-6edf51656b,},FirstTimestamp:2025-07-07 06:14:41.23787104 +0000 UTC m=+0.425334333,LastTimestamp:2025-07-07 06:14:41.23787104 +0000 UTC m=+0.425334333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372.0.1-a-6edf51656b,}" Jul 7 06:14:41.246797 kubelet[2746]: I0707 06:14:41.246513 2746 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:14:41.248357 kubelet[2746]: I0707 06:14:41.248164 2746 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:14:41.248357 kubelet[2746]: E0707 06:14:41.248351 2746 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-6edf51656b\" not found" Jul 7 06:14:41.251431 kubelet[2746]: E0707 06:14:41.249691 2746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-6edf51656b?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="200ms" Jul 7 06:14:41.251431 kubelet[2746]: I0707 06:14:41.249919 2746 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:14:41.251431 kubelet[2746]: I0707 06:14:41.249992 2746 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:14:41.251431 
kubelet[2746]: I0707 06:14:41.250301 2746 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:14:41.251854 kubelet[2746]: I0707 06:14:41.251838 2746 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:14:41.252281 kubelet[2746]: W0707 06:14:41.252248 2746 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Jul 7 06:14:41.252365 kubelet[2746]: E0707 06:14:41.252352 2746 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:41.253690 kubelet[2746]: E0707 06:14:41.253674 2746 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:14:41.254134 kubelet[2746]: I0707 06:14:41.254119 2746 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:14:41.261622 kubelet[2746]: I0707 06:14:41.261512 2746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:14:41.264403 kubelet[2746]: I0707 06:14:41.264384 2746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:14:41.264611 kubelet[2746]: I0707 06:14:41.264483 2746 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:14:41.264611 kubelet[2746]: I0707 06:14:41.264503 2746 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:14:41.264611 kubelet[2746]: I0707 06:14:41.264511 2746 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:14:41.264611 kubelet[2746]: E0707 06:14:41.264552 2746 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:14:41.272339 kubelet[2746]: W0707 06:14:41.271543 2746 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Jul 7 06:14:41.272339 kubelet[2746]: E0707 06:14:41.271592 2746 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:41.273576 kubelet[2746]: I0707 06:14:41.273560 2746 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:14:41.273642 kubelet[2746]: I0707 06:14:41.273584 2746 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:14:41.273642 kubelet[2746]: I0707 06:14:41.273599 2746 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:14:41.280137 kubelet[2746]: I0707 06:14:41.280121 2746 policy_none.go:49] "None policy: Start" Jul 7 06:14:41.280137 kubelet[2746]: I0707 06:14:41.280141 2746 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:14:41.280238 kubelet[2746]: I0707 06:14:41.280154 2746 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:14:41.292291 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:14:41.303787 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 7 06:14:41.306600 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:14:41.313364 kubelet[2746]: I0707 06:14:41.313203 2746 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:14:41.313439 kubelet[2746]: I0707 06:14:41.313374 2746 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:14:41.313439 kubelet[2746]: I0707 06:14:41.313384 2746 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:14:41.313685 kubelet[2746]: I0707 06:14:41.313605 2746 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:14:41.315155 kubelet[2746]: E0707 06:14:41.315136 2746 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:14:41.315239 kubelet[2746]: E0707 06:14:41.315177 2746 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372.0.1-a-6edf51656b\" not found" Jul 7 06:14:41.372383 systemd[1]: Created slice kubepods-burstable-pod32dd57fc87de939cd662abd52a06893c.slice - libcontainer container kubepods-burstable-pod32dd57fc87de939cd662abd52a06893c.slice. Jul 7 06:14:41.385738 kubelet[2746]: E0707 06:14:41.385646 2746 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-6edf51656b\" not found" node="ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.388614 systemd[1]: Created slice kubepods-burstable-pod8d85c59c77bed14e7855f2148c4e6ced.slice - libcontainer container kubepods-burstable-pod8d85c59c77bed14e7855f2148c4e6ced.slice. 
Jul 7 06:14:41.390228 kubelet[2746]: E0707 06:14:41.390213 2746 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-6edf51656b\" not found" node="ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.392165 systemd[1]: Created slice kubepods-burstable-pod070719cbb3a0ef5e3d835a49ce33f142.slice - libcontainer container kubepods-burstable-pod070719cbb3a0ef5e3d835a49ce33f142.slice. Jul 7 06:14:41.393642 kubelet[2746]: E0707 06:14:41.393629 2746 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-6edf51656b\" not found" node="ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.414687 kubelet[2746]: I0707 06:14:41.414672 2746 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.415008 kubelet[2746]: E0707 06:14:41.414990 2746 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.450503 kubelet[2746]: E0707 06:14:41.450424 2746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-6edf51656b?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="400ms" Jul 7 06:14:41.551943 kubelet[2746]: I0707 06:14:41.551881 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32dd57fc87de939cd662abd52a06893c-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-6edf51656b\" (UID: \"32dd57fc87de939cd662abd52a06893c\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.551943 kubelet[2746]: I0707 06:14:41.551940 2746 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/070719cbb3a0ef5e3d835a49ce33f142-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" (UID: \"070719cbb3a0ef5e3d835a49ce33f142\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.552079 kubelet[2746]: I0707 06:14:41.551968 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d85c59c77bed14e7855f2148c4e6ced-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-6edf51656b\" (UID: \"8d85c59c77bed14e7855f2148c4e6ced\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.552079 kubelet[2746]: I0707 06:14:41.551989 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d85c59c77bed14e7855f2148c4e6ced-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-6edf51656b\" (UID: \"8d85c59c77bed14e7855f2148c4e6ced\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.552079 kubelet[2746]: I0707 06:14:41.552012 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d85c59c77bed14e7855f2148c4e6ced-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-6edf51656b\" (UID: \"8d85c59c77bed14e7855f2148c4e6ced\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.552079 kubelet[2746]: I0707 06:14:41.552034 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/070719cbb3a0ef5e3d835a49ce33f142-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" (UID: \"070719cbb3a0ef5e3d835a49ce33f142\") " 
pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.552079 kubelet[2746]: I0707 06:14:41.552056 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/070719cbb3a0ef5e3d835a49ce33f142-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" (UID: \"070719cbb3a0ef5e3d835a49ce33f142\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.552236 kubelet[2746]: I0707 06:14:41.552078 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/070719cbb3a0ef5e3d835a49ce33f142-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" (UID: \"070719cbb3a0ef5e3d835a49ce33f142\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.552236 kubelet[2746]: I0707 06:14:41.552115 2746 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/070719cbb3a0ef5e3d835a49ce33f142-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" (UID: \"070719cbb3a0ef5e3d835a49ce33f142\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.616979 kubelet[2746]: I0707 06:14:41.616946 2746 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.617292 kubelet[2746]: E0707 06:14:41.617261 2746 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-4372.0.1-a-6edf51656b" Jul 7 06:14:41.687576 containerd[1732]: time="2025-07-07T06:14:41.687531024Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-6edf51656b,Uid:32dd57fc87de939cd662abd52a06893c,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:41.691015 containerd[1732]: time="2025-07-07T06:14:41.690982608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-6edf51656b,Uid:8d85c59c77bed14e7855f2148c4e6ced,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:41.694593 containerd[1732]: time="2025-07-07T06:14:41.694554828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-6edf51656b,Uid:070719cbb3a0ef5e3d835a49ce33f142,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:41.851853 kubelet[2746]: E0707 06:14:41.851818 2746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372.0.1-a-6edf51656b?timeout=10s\": dial tcp 10.200.4.32:6443: connect: connection refused" interval="800ms" Jul 7 06:14:41.903775 containerd[1732]: time="2025-07-07T06:14:41.903671954Z" level=info msg="connecting to shim d7716aa99d0186c2cb7fecadde2420099188f3908bed1526e2647870f3c05a3e" address="unix:///run/containerd/s/8b472b8829617c41bfcaafb8c0ed4cbc0a90b4c9a604a7b139e2bfb67bc7e7cf" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:41.912200 containerd[1732]: time="2025-07-07T06:14:41.912148733Z" level=info msg="connecting to shim 5d7276373ab82d8e74e24302e749cecf770ac935d76db1a7980fc9a7ec56d3a5" address="unix:///run/containerd/s/8ec2c05356577ed3f3b32469576b4d816a641816d206b3837b5cf89023984711" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:41.946864 systemd[1]: Started cri-containerd-5d7276373ab82d8e74e24302e749cecf770ac935d76db1a7980fc9a7ec56d3a5.scope - libcontainer container 5d7276373ab82d8e74e24302e749cecf770ac935d76db1a7980fc9a7ec56d3a5. 
Jul 7 06:14:41.947763 containerd[1732]: time="2025-07-07T06:14:41.946899842Z" level=info msg="connecting to shim 997315591c20d3a94f6621232d8f6b9e8dc11dce82e2f4bd3d3a1dfcc5f343fd" address="unix:///run/containerd/s/44ea1c5c587021a110aa2d02dea9e4239e3e94fcba9ab0a858358821b6645749" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:41.962006 systemd[1]: Started cri-containerd-d7716aa99d0186c2cb7fecadde2420099188f3908bed1526e2647870f3c05a3e.scope - libcontainer container d7716aa99d0186c2cb7fecadde2420099188f3908bed1526e2647870f3c05a3e. Jul 7 06:14:41.976872 systemd[1]: Started cri-containerd-997315591c20d3a94f6621232d8f6b9e8dc11dce82e2f4bd3d3a1dfcc5f343fd.scope - libcontainer container 997315591c20d3a94f6621232d8f6b9e8dc11dce82e2f4bd3d3a1dfcc5f343fd. Jul 7 06:14:42.019912 kubelet[2746]: I0707 06:14:42.019879 2746 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-6edf51656b" Jul 7 06:14:42.020264 kubelet[2746]: E0707 06:14:42.020225 2746 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.32:6443/api/v1/nodes\": dial tcp 10.200.4.32:6443: connect: connection refused" node="ci-4372.0.1-a-6edf51656b" Jul 7 06:14:42.022723 containerd[1732]: time="2025-07-07T06:14:42.020873371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372.0.1-a-6edf51656b,Uid:8d85c59c77bed14e7855f2148c4e6ced,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d7276373ab82d8e74e24302e749cecf770ac935d76db1a7980fc9a7ec56d3a5\"" Jul 7 06:14:42.026720 containerd[1732]: time="2025-07-07T06:14:42.026320946Z" level=info msg="CreateContainer within sandbox \"5d7276373ab82d8e74e24302e749cecf770ac935d76db1a7980fc9a7ec56d3a5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:14:42.046378 containerd[1732]: time="2025-07-07T06:14:42.046351498Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4372.0.1-a-6edf51656b,Uid:32dd57fc87de939cd662abd52a06893c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7716aa99d0186c2cb7fecadde2420099188f3908bed1526e2647870f3c05a3e\"" Jul 7 06:14:42.047696 containerd[1732]: time="2025-07-07T06:14:42.047675719Z" level=info msg="CreateContainer within sandbox \"d7716aa99d0186c2cb7fecadde2420099188f3908bed1526e2647870f3c05a3e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:14:42.060467 containerd[1732]: time="2025-07-07T06:14:42.060446082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372.0.1-a-6edf51656b,Uid:070719cbb3a0ef5e3d835a49ce33f142,Namespace:kube-system,Attempt:0,} returns sandbox id \"997315591c20d3a94f6621232d8f6b9e8dc11dce82e2f4bd3d3a1dfcc5f343fd\"" Jul 7 06:14:42.061949 containerd[1732]: time="2025-07-07T06:14:42.061932437Z" level=info msg="CreateContainer within sandbox \"997315591c20d3a94f6621232d8f6b9e8dc11dce82e2f4bd3d3a1dfcc5f343fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:14:42.070689 containerd[1732]: time="2025-07-07T06:14:42.070667198Z" level=info msg="Container ea67b72b5841ea7c6c9ea4cae99d1281b87199c6fcc6e1e1c2a8c8ba686d81ba: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:42.123334 containerd[1732]: time="2025-07-07T06:14:42.123263329Z" level=info msg="CreateContainer within sandbox \"5d7276373ab82d8e74e24302e749cecf770ac935d76db1a7980fc9a7ec56d3a5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ea67b72b5841ea7c6c9ea4cae99d1281b87199c6fcc6e1e1c2a8c8ba686d81ba\"" Jul 7 06:14:42.123993 containerd[1732]: time="2025-07-07T06:14:42.123972255Z" level=info msg="StartContainer for \"ea67b72b5841ea7c6c9ea4cae99d1281b87199c6fcc6e1e1c2a8c8ba686d81ba\"" Jul 7 06:14:42.124699 containerd[1732]: time="2025-07-07T06:14:42.124669663Z" level=info msg="connecting to shim 
ea67b72b5841ea7c6c9ea4cae99d1281b87199c6fcc6e1e1c2a8c8ba686d81ba" address="unix:///run/containerd/s/8ec2c05356577ed3f3b32469576b4d816a641816d206b3837b5cf89023984711" protocol=ttrpc version=3 Jul 7 06:14:42.128024 containerd[1732]: time="2025-07-07T06:14:42.127873116Z" level=info msg="Container 78299711c6b6da3e4a145f3f0191389554e55747e0a3f649e4935823d7330e08: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:42.140860 systemd[1]: Started cri-containerd-ea67b72b5841ea7c6c9ea4cae99d1281b87199c6fcc6e1e1c2a8c8ba686d81ba.scope - libcontainer container ea67b72b5841ea7c6c9ea4cae99d1281b87199c6fcc6e1e1c2a8c8ba686d81ba. Jul 7 06:14:42.155376 containerd[1732]: time="2025-07-07T06:14:42.155350317Z" level=info msg="Container 440936c5d17cd4daaf9b8b2c47cde52cc5cc4e413bfb520a42938388a8c418b1: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:42.175135 containerd[1732]: time="2025-07-07T06:14:42.175017886Z" level=info msg="CreateContainer within sandbox \"d7716aa99d0186c2cb7fecadde2420099188f3908bed1526e2647870f3c05a3e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"78299711c6b6da3e4a145f3f0191389554e55747e0a3f649e4935823d7330e08\"" Jul 7 06:14:42.175687 containerd[1732]: time="2025-07-07T06:14:42.175634651Z" level=info msg="StartContainer for \"78299711c6b6da3e4a145f3f0191389554e55747e0a3f649e4935823d7330e08\"" Jul 7 06:14:42.176963 containerd[1732]: time="2025-07-07T06:14:42.176913199Z" level=info msg="connecting to shim 78299711c6b6da3e4a145f3f0191389554e55747e0a3f649e4935823d7330e08" address="unix:///run/containerd/s/8b472b8829617c41bfcaafb8c0ed4cbc0a90b4c9a604a7b139e2bfb67bc7e7cf" protocol=ttrpc version=3 Jul 7 06:14:42.196645 containerd[1732]: time="2025-07-07T06:14:42.196268176Z" level=info msg="StartContainer for \"ea67b72b5841ea7c6c9ea4cae99d1281b87199c6fcc6e1e1c2a8c8ba686d81ba\" returns successfully" Jul 7 06:14:42.197625 containerd[1732]: time="2025-07-07T06:14:42.197600855Z" level=info msg="CreateContainer within sandbox 
\"997315591c20d3a94f6621232d8f6b9e8dc11dce82e2f4bd3d3a1dfcc5f343fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"440936c5d17cd4daaf9b8b2c47cde52cc5cc4e413bfb520a42938388a8c418b1\"" Jul 7 06:14:42.197966 systemd[1]: Started cri-containerd-78299711c6b6da3e4a145f3f0191389554e55747e0a3f649e4935823d7330e08.scope - libcontainer container 78299711c6b6da3e4a145f3f0191389554e55747e0a3f649e4935823d7330e08. Jul 7 06:14:42.198826 containerd[1732]: time="2025-07-07T06:14:42.198661998Z" level=info msg="StartContainer for \"440936c5d17cd4daaf9b8b2c47cde52cc5cc4e413bfb520a42938388a8c418b1\"" Jul 7 06:14:42.202301 containerd[1732]: time="2025-07-07T06:14:42.202250093Z" level=info msg="connecting to shim 440936c5d17cd4daaf9b8b2c47cde52cc5cc4e413bfb520a42938388a8c418b1" address="unix:///run/containerd/s/44ea1c5c587021a110aa2d02dea9e4239e3e94fcba9ab0a858358821b6645749" protocol=ttrpc version=3 Jul 7 06:14:42.220991 kubelet[2746]: W0707 06:14:42.220861 2746 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.32:6443: connect: connection refused Jul 7 06:14:42.220991 kubelet[2746]: E0707 06:14:42.220957 2746 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.32:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:14:42.225494 systemd[1]: Started cri-containerd-440936c5d17cd4daaf9b8b2c47cde52cc5cc4e413bfb520a42938388a8c418b1.scope - libcontainer container 440936c5d17cd4daaf9b8b2c47cde52cc5cc4e413bfb520a42938388a8c418b1. 
Jul 7 06:14:42.281915 containerd[1732]: time="2025-07-07T06:14:42.281894567Z" level=info msg="StartContainer for \"78299711c6b6da3e4a145f3f0191389554e55747e0a3f649e4935823d7330e08\" returns successfully"
Jul 7 06:14:42.305082 kubelet[2746]: E0707 06:14:42.305058 2746 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-6edf51656b\" not found" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:42.305799 kubelet[2746]: E0707 06:14:42.305779 2746 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-6edf51656b\" not found" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:42.351991 containerd[1732]: time="2025-07-07T06:14:42.351947041Z" level=info msg="StartContainer for \"440936c5d17cd4daaf9b8b2c47cde52cc5cc4e413bfb520a42938388a8c418b1\" returns successfully"
Jul 7 06:14:42.824196 kubelet[2746]: I0707 06:14:42.824161 2746 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.312949 kubelet[2746]: E0707 06:14:43.312911 2746 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-6edf51656b\" not found" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.313379 kubelet[2746]: E0707 06:14:43.313282 2746 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-6edf51656b\" not found" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.313592 kubelet[2746]: E0707 06:14:43.313576 2746 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372.0.1-a-6edf51656b\" not found" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.764915 kubelet[2746]: E0707 06:14:43.764873 2746 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372.0.1-a-6edf51656b\" not found" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.832424 kubelet[2746]: I0707 06:14:43.832391 2746 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.848793 kubelet[2746]: I0707 06:14:43.848768 2746 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.857978 kubelet[2746]: E0707 06:14:43.857948 2746 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372.0.1-a-6edf51656b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.857978 kubelet[2746]: I0707 06:14:43.857973 2746 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.860761 kubelet[2746]: E0707 06:14:43.860737 2746 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-6edf51656b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.860761 kubelet[2746]: I0707 06:14:43.860761 2746 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:43.864286 kubelet[2746]: E0707 06:14:43.864266 2746 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:44.230943 kubelet[2746]: I0707 06:14:44.230918 2746 apiserver.go:52] "Watching apiserver"
Jul 7 06:14:44.252457 kubelet[2746]: I0707 06:14:44.252428 2746 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 06:14:44.310650 kubelet[2746]: I0707 06:14:44.310625 2746 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:44.312614 kubelet[2746]: E0707 06:14:44.312581 2746 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:45.760929 systemd[1]: Reload requested from client PID 3019 ('systemctl') (unit session-9.scope)...
Jul 7 06:14:45.760949 systemd[1]: Reloading...
Jul 7 06:14:45.843746 zram_generator::config[3061]: No configuration found.
Jul 7 06:14:45.940217 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:14:46.046433 systemd[1]: Reloading finished in 285 ms.
Jul 7 06:14:46.074992 kubelet[2746]: I0707 06:14:46.074847 2746 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:14:46.075252 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:14:46.095687 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 06:14:46.095978 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:14:46.096043 systemd[1]: kubelet.service: Consumed 748ms CPU time, 130.4M memory peak.
Jul 7 06:14:46.097592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 06:14:46.568880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 06:14:46.575991 (kubelet)[3132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 06:14:46.621415 kubelet[3132]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:14:46.621415 kubelet[3132]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 7 06:14:46.621415 kubelet[3132]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 06:14:46.621795 kubelet[3132]: I0707 06:14:46.621328 3132 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 06:14:46.626118 kubelet[3132]: I0707 06:14:46.626096 3132 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 7 06:14:46.626118 kubelet[3132]: I0707 06:14:46.626115 3132 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 06:14:46.626343 kubelet[3132]: I0707 06:14:46.626330 3132 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 7 06:14:46.628081 kubelet[3132]: I0707 06:14:46.628061 3132 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 7 06:14:46.629869 kubelet[3132]: I0707 06:14:46.629849 3132 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 06:14:46.637191 kubelet[3132]: I0707 06:14:46.637178 3132 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 7 06:14:46.639683 kubelet[3132]: I0707 06:14:46.639665 3132 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 06:14:46.639882 kubelet[3132]: I0707 06:14:46.639853 3132 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 06:14:46.640022 kubelet[3132]: I0707 06:14:46.639876 3132 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372.0.1-a-6edf51656b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 06:14:46.640132 kubelet[3132]: I0707 06:14:46.640032 3132 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 06:14:46.640132 kubelet[3132]: I0707 06:14:46.640040 3132 container_manager_linux.go:304] "Creating device plugin manager"
Jul 7 06:14:46.640132 kubelet[3132]: I0707 06:14:46.640091 3132 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:14:46.640216 kubelet[3132]: I0707 06:14:46.640208 3132 kubelet.go:446] "Attempting to sync node with API server"
Jul 7 06:14:46.640237 kubelet[3132]: I0707 06:14:46.640227 3132 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 06:14:46.640464 kubelet[3132]: I0707 06:14:46.640250 3132 kubelet.go:352] "Adding apiserver pod source"
Jul 7 06:14:46.640464 kubelet[3132]: I0707 06:14:46.640261 3132 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 06:14:46.641153 kubelet[3132]: I0707 06:14:46.641138 3132 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 7 06:14:46.641559 kubelet[3132]: I0707 06:14:46.641550 3132 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 06:14:46.643563 kubelet[3132]: I0707 06:14:46.643494 3132 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 7 06:14:46.643780 kubelet[3132]: I0707 06:14:46.643744 3132 server.go:1287] "Started kubelet"
Jul 7 06:14:46.648951 kubelet[3132]: I0707 06:14:46.648873 3132 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 06:14:46.650231 kubelet[3132]: I0707 06:14:46.650202 3132 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 06:14:46.651603 kubelet[3132]: I0707 06:14:46.651482 3132 server.go:479] "Adding debug handlers to kubelet server"
Jul 7 06:14:46.652633 kubelet[3132]: I0707 06:14:46.652585 3132 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 06:14:46.652812 kubelet[3132]: I0707 06:14:46.652801 3132 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 06:14:46.653129 kubelet[3132]: I0707 06:14:46.653114 3132 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 06:14:46.654335 kubelet[3132]: I0707 06:14:46.654314 3132 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 7 06:14:46.654744 kubelet[3132]: E0707 06:14:46.654512 3132 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372.0.1-a-6edf51656b\" not found"
Jul 7 06:14:46.658550 kubelet[3132]: I0707 06:14:46.658524 3132 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 7 06:14:46.658653 kubelet[3132]: I0707 06:14:46.658642 3132 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 06:14:46.660250 kubelet[3132]: I0707 06:14:46.660163 3132 factory.go:221] Registration of the systemd container factory successfully
Jul 7 06:14:46.660250 kubelet[3132]: I0707 06:14:46.660239 3132 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 06:14:46.667066 kubelet[3132]: I0707 06:14:46.667045 3132 factory.go:221] Registration of the containerd container factory successfully
Jul 7 06:14:46.668019 kubelet[3132]: E0707 06:14:46.667998 3132 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 06:14:46.689108 kubelet[3132]: I0707 06:14:46.689031 3132 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 7 06:14:46.691839 kubelet[3132]: I0707 06:14:46.691821 3132 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 7 06:14:46.691910 kubelet[3132]: I0707 06:14:46.691845 3132 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 7 06:14:46.691910 kubelet[3132]: I0707 06:14:46.691862 3132 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 7 06:14:46.691910 kubelet[3132]: I0707 06:14:46.691869 3132 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 7 06:14:46.691980 kubelet[3132]: E0707 06:14:46.691920 3132 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 7 06:14:46.722131 kubelet[3132]: I0707 06:14:46.722106 3132 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 7 06:14:46.722131 kubelet[3132]: I0707 06:14:46.722127 3132 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 7 06:14:46.722224 kubelet[3132]: I0707 06:14:46.722143 3132 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 06:14:46.722279 kubelet[3132]: I0707 06:14:46.722267 3132 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 7 06:14:46.722302 kubelet[3132]: I0707 06:14:46.722278 3132 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 7 06:14:46.722302 kubelet[3132]: I0707 06:14:46.722294 3132 policy_none.go:49] "None policy: Start"
Jul 7 06:14:46.722344 kubelet[3132]: I0707 06:14:46.722304 3132 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 7 06:14:46.722344 kubelet[3132]: I0707 06:14:46.722313 3132 state_mem.go:35] "Initializing new in-memory state store"
Jul 7 06:14:46.722407 kubelet[3132]: I0707 06:14:46.722398 3132 state_mem.go:75] "Updated machine memory state"
Jul 7 06:14:46.725383 kubelet[3132]: I0707 06:14:46.725367 3132 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 7 06:14:46.725507 kubelet[3132]: I0707 06:14:46.725495 3132 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 7 06:14:46.725539 kubelet[3132]: I0707 06:14:46.725511 3132 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 7 06:14:46.725885 kubelet[3132]: I0707 06:14:46.725767 3132 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 7 06:14:46.728293 kubelet[3132]: E0707 06:14:46.727800 3132 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 7 06:14:46.776785 sudo[3167]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 7 06:14:46.776986 sudo[3167]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 7 06:14:46.793412 kubelet[3132]: I0707 06:14:46.793384 3132 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.794721 kubelet[3132]: I0707 06:14:46.793639 3132 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.794721 kubelet[3132]: I0707 06:14:46.793850 3132 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.802223 kubelet[3132]: W0707 06:14:46.802205 3132 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 06:14:46.806021 kubelet[3132]: W0707 06:14:46.806004 3132 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 06:14:46.806100 kubelet[3132]: W0707 06:14:46.806085 3132 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 06:14:46.831008 kubelet[3132]: I0707 06:14:46.829740 3132 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.845966 kubelet[3132]: I0707 06:14:46.845951 3132 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.846091 kubelet[3132]: I0707 06:14:46.846085 3132 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.859541 kubelet[3132]: I0707 06:14:46.859518 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8d85c59c77bed14e7855f2148c4e6ced-ca-certs\") pod \"kube-apiserver-ci-4372.0.1-a-6edf51656b\" (UID: \"8d85c59c77bed14e7855f2148c4e6ced\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.960194 kubelet[3132]: I0707 06:14:46.959769 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/070719cbb3a0ef5e3d835a49ce33f142-kubeconfig\") pod \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" (UID: \"070719cbb3a0ef5e3d835a49ce33f142\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.960194 kubelet[3132]: I0707 06:14:46.959832 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/070719cbb3a0ef5e3d835a49ce33f142-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" (UID: \"070719cbb3a0ef5e3d835a49ce33f142\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.960194 kubelet[3132]: I0707 06:14:46.959855 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/32dd57fc87de939cd662abd52a06893c-kubeconfig\") pod \"kube-scheduler-ci-4372.0.1-a-6edf51656b\" (UID: \"32dd57fc87de939cd662abd52a06893c\") " pod="kube-system/kube-scheduler-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.960194 kubelet[3132]: I0707 06:14:46.959873 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/070719cbb3a0ef5e3d835a49ce33f142-flexvolume-dir\") pod \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" (UID: \"070719cbb3a0ef5e3d835a49ce33f142\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.960194 kubelet[3132]: I0707 06:14:46.959943 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/070719cbb3a0ef5e3d835a49ce33f142-k8s-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" (UID: \"070719cbb3a0ef5e3d835a49ce33f142\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.960381 kubelet[3132]: I0707 06:14:46.959999 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8d85c59c77bed14e7855f2148c4e6ced-k8s-certs\") pod \"kube-apiserver-ci-4372.0.1-a-6edf51656b\" (UID: \"8d85c59c77bed14e7855f2148c4e6ced\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.960381 kubelet[3132]: I0707 06:14:46.960018 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8d85c59c77bed14e7855f2148c4e6ced-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372.0.1-a-6edf51656b\" (UID: \"8d85c59c77bed14e7855f2148c4e6ced\") " pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:46.960381 kubelet[3132]: I0707 06:14:46.960151 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/070719cbb3a0ef5e3d835a49ce33f142-ca-certs\") pod \"kube-controller-manager-ci-4372.0.1-a-6edf51656b\" (UID: \"070719cbb3a0ef5e3d835a49ce33f142\") " pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:47.288132 sudo[3167]: pam_unix(sudo:session): session closed for user root
Jul 7 06:14:47.645925 kubelet[3132]: I0707 06:14:47.645826 3132 apiserver.go:52] "Watching apiserver"
Jul 7 06:14:47.658672 kubelet[3132]: I0707 06:14:47.658645 3132 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 7 06:14:47.709726 kubelet[3132]: I0707 06:14:47.708950 3132 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:47.722368 kubelet[3132]: W0707 06:14:47.722340 3132 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jul 7 06:14:47.722455 kubelet[3132]: E0707 06:14:47.722407 3132 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372.0.1-a-6edf51656b\" already exists" pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b"
Jul 7 06:14:47.722877 kubelet[3132]: I0707 06:14:47.722824 3132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372.0.1-a-6edf51656b" podStartSLOduration=1.72281087 podStartE2EDuration="1.72281087s" podCreationTimestamp="2025-07-07 06:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:47.704455647 +0000 UTC m=+1.124656917" watchObservedRunningTime="2025-07-07 06:14:47.72281087 +0000 UTC m=+1.143012141"
Jul 7 06:14:47.734127 kubelet[3132]: I0707 06:14:47.734081 3132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372.0.1-a-6edf51656b" podStartSLOduration=1.734067034 podStartE2EDuration="1.734067034s" podCreationTimestamp="2025-07-07 06:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:47.723277558 +0000 UTC m=+1.143478889" watchObservedRunningTime="2025-07-07 06:14:47.734067034 +0000 UTC m=+1.154268366"
Jul 7 06:14:47.745175 kubelet[3132]: I0707 06:14:47.745124 3132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372.0.1-a-6edf51656b" podStartSLOduration=1.745110571 podStartE2EDuration="1.745110571s" podCreationTimestamp="2025-07-07 06:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:47.73460641 +0000 UTC m=+1.154807757" watchObservedRunningTime="2025-07-07 06:14:47.745110571 +0000 UTC m=+1.165311841"
Jul 7 06:14:48.673196 sudo[2156]: pam_unix(sudo:session): session closed for user root
Jul 7 06:14:48.776885 sshd[2155]: Connection closed by 10.200.16.10 port 43994
Jul 7 06:14:48.777462 sshd-session[2153]: pam_unix(sshd:session): session closed for user core
Jul 7 06:14:48.781384 systemd[1]: sshd@6-10.200.4.32:22-10.200.16.10:43994.service: Deactivated successfully.
Jul 7 06:14:48.783529 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 06:14:48.783724 systemd[1]: session-9.scope: Consumed 4.316s CPU time, 270.1M memory peak.
Jul 7 06:14:48.784941 systemd-logind[1706]: Session 9 logged out. Waiting for processes to exit.
Jul 7 06:14:48.786446 systemd-logind[1706]: Removed session 9.
Jul 7 06:14:51.053352 kubelet[3132]: I0707 06:14:51.053259 3132 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 7 06:14:51.053847 containerd[1732]: time="2025-07-07T06:14:51.053733565Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 7 06:14:51.054065 kubelet[3132]: I0707 06:14:51.053988 3132 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 7 06:14:51.356356 systemd[1]: Created slice kubepods-besteffort-pod98c3e2d2_ba94_4d04_996e_5a626c3e3a06.slice - libcontainer container kubepods-besteffort-pod98c3e2d2_ba94_4d04_996e_5a626c3e3a06.slice.
Jul 7 06:14:51.372177 systemd[1]: Created slice kubepods-burstable-pode36f7f0a_096c_41c4_849d_fc3730f6dd90.slice - libcontainer container kubepods-burstable-pode36f7f0a_096c_41c4_849d_fc3730f6dd90.slice.
Jul 7 06:14:51.391940 kubelet[3132]: I0707 06:14:51.391914 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-lib-modules\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392058 kubelet[3132]: I0707 06:14:51.391950 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e36f7f0a-096c-41c4-849d-fc3730f6dd90-clustermesh-secrets\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392058 kubelet[3132]: I0707 06:14:51.391973 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-config-path\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392058 kubelet[3132]: I0707 06:14:51.391991 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhmxf\" (UniqueName: \"kubernetes.io/projected/e36f7f0a-096c-41c4-849d-fc3730f6dd90-kube-api-access-dhmxf\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392058 kubelet[3132]: I0707 06:14:51.392041 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-etc-cni-netd\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392165 kubelet[3132]: I0707 06:14:51.392060 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hd9x\" (UniqueName: \"kubernetes.io/projected/98c3e2d2-ba94-4d04-996e-5a626c3e3a06-kube-api-access-7hd9x\") pod \"kube-proxy-pnjzm\" (UID: \"98c3e2d2-ba94-4d04-996e-5a626c3e3a06\") " pod="kube-system/kube-proxy-pnjzm"
Jul 7 06:14:51.392165 kubelet[3132]: I0707 06:14:51.392080 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-host-proc-sys-net\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392165 kubelet[3132]: I0707 06:14:51.392102 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-host-proc-sys-kernel\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392165 kubelet[3132]: I0707 06:14:51.392125 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e36f7f0a-096c-41c4-849d-fc3730f6dd90-hubble-tls\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392165 kubelet[3132]: I0707 06:14:51.392143 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-run\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392279 kubelet[3132]: I0707 06:14:51.392166 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98c3e2d2-ba94-4d04-996e-5a626c3e3a06-xtables-lock\") pod \"kube-proxy-pnjzm\" (UID: \"98c3e2d2-ba94-4d04-996e-5a626c3e3a06\") " pod="kube-system/kube-proxy-pnjzm"
Jul 7 06:14:51.392279 kubelet[3132]: I0707 06:14:51.392186 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-bpf-maps\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392279 kubelet[3132]: I0707 06:14:51.392208 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-hostproc\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392279 kubelet[3132]: I0707 06:14:51.392226 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-cgroup\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392279 kubelet[3132]: I0707 06:14:51.392245 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98c3e2d2-ba94-4d04-996e-5a626c3e3a06-kube-proxy\") pod \"kube-proxy-pnjzm\" (UID: \"98c3e2d2-ba94-4d04-996e-5a626c3e3a06\") " pod="kube-system/kube-proxy-pnjzm"
Jul 7 06:14:51.392279 kubelet[3132]: I0707 06:14:51.392261 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cni-path\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392420 kubelet[3132]: I0707 06:14:51.392276 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-xtables-lock\") pod \"cilium-jkqmr\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") " pod="kube-system/cilium-jkqmr"
Jul 7 06:14:51.392420 kubelet[3132]: I0707 06:14:51.392297 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98c3e2d2-ba94-4d04-996e-5a626c3e3a06-lib-modules\") pod \"kube-proxy-pnjzm\" (UID: \"98c3e2d2-ba94-4d04-996e-5a626c3e3a06\") " pod="kube-system/kube-proxy-pnjzm"
Jul 7 06:14:51.508730 kubelet[3132]: E0707 06:14:51.508442 3132 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 7 06:14:51.508730 kubelet[3132]: E0707 06:14:51.508471 3132 projected.go:194] Error preparing data for projected volume kube-api-access-dhmxf for pod kube-system/cilium-jkqmr: configmap "kube-root-ca.crt" not found
Jul 7 06:14:51.508730 kubelet[3132]: E0707 06:14:51.508534 3132 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e36f7f0a-096c-41c4-849d-fc3730f6dd90-kube-api-access-dhmxf podName:e36f7f0a-096c-41c4-849d-fc3730f6dd90 nodeName:}" failed. No retries permitted until 2025-07-07 06:14:52.008509051 +0000 UTC m=+5.428710313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dhmxf" (UniqueName: "kubernetes.io/projected/e36f7f0a-096c-41c4-849d-fc3730f6dd90-kube-api-access-dhmxf") pod "cilium-jkqmr" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90") : configmap "kube-root-ca.crt" not found
Jul 7 06:14:51.512318 kubelet[3132]: E0707 06:14:51.512052 3132 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 7 06:14:51.512318 kubelet[3132]: E0707 06:14:51.512074 3132 projected.go:194] Error preparing data for projected volume kube-api-access-7hd9x for pod kube-system/kube-proxy-pnjzm: configmap "kube-root-ca.crt" not found
Jul 7 06:14:51.512318 kubelet[3132]: E0707 06:14:51.512118 3132 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/98c3e2d2-ba94-4d04-996e-5a626c3e3a06-kube-api-access-7hd9x podName:98c3e2d2-ba94-4d04-996e-5a626c3e3a06 nodeName:}" failed. No retries permitted until 2025-07-07 06:14:52.012098914 +0000 UTC m=+5.432300175 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7hd9x" (UniqueName: "kubernetes.io/projected/98c3e2d2-ba94-4d04-996e-5a626c3e3a06-kube-api-access-7hd9x") pod "kube-proxy-pnjzm" (UID: "98c3e2d2-ba94-4d04-996e-5a626c3e3a06") : configmap "kube-root-ca.crt" not found
Jul 7 06:14:52.119367 systemd[1]: Created slice kubepods-besteffort-pod2a36b5f6_4940_4e9e_95a9_23f797afb918.slice - libcontainer container kubepods-besteffort-pod2a36b5f6_4940_4e9e_95a9_23f797afb918.slice.
Jul 7 06:14:52.197570 kubelet[3132]: I0707 06:14:52.197528 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a36b5f6-4940-4e9e-95a9-23f797afb918-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vqhx7\" (UID: \"2a36b5f6-4940-4e9e-95a9-23f797afb918\") " pod="kube-system/cilium-operator-6c4d7847fc-vqhx7"
Jul 7 06:14:52.197935 kubelet[3132]: I0707 06:14:52.197585 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc2hp\" (UniqueName: \"kubernetes.io/projected/2a36b5f6-4940-4e9e-95a9-23f797afb918-kube-api-access-tc2hp\") pod \"cilium-operator-6c4d7847fc-vqhx7\" (UID: \"2a36b5f6-4940-4e9e-95a9-23f797afb918\") " pod="kube-system/cilium-operator-6c4d7847fc-vqhx7"
Jul 7 06:14:52.270269 containerd[1732]: time="2025-07-07T06:14:52.270212086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pnjzm,Uid:98c3e2d2-ba94-4d04-996e-5a626c3e3a06,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:52.277899 containerd[1732]: time="2025-07-07T06:14:52.277869726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jkqmr,Uid:e36f7f0a-096c-41c4-849d-fc3730f6dd90,Namespace:kube-system,Attempt:0,}"
Jul 7 06:14:52.423217 containerd[1732]: time="2025-07-07T06:14:52.423134451Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vqhx7,Uid:2a36b5f6-4940-4e9e-95a9-23f797afb918,Namespace:kube-system,Attempt:0,}" Jul 7 06:14:53.462360 containerd[1732]: time="2025-07-07T06:14:53.462287481Z" level=info msg="connecting to shim 22bd2fbd747017472b82a8b084aa83faba10bb0b694d6061cd7b5b24de868b24" address="unix:///run/containerd/s/c9442a789b9b8eb676cca47efbd23cdb322e1f41717cc589977780514f06c6ad" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:53.484860 systemd[1]: Started cri-containerd-22bd2fbd747017472b82a8b084aa83faba10bb0b694d6061cd7b5b24de868b24.scope - libcontainer container 22bd2fbd747017472b82a8b084aa83faba10bb0b694d6061cd7b5b24de868b24. Jul 7 06:14:53.677060 containerd[1732]: time="2025-07-07T06:14:53.676917809Z" level=info msg="connecting to shim ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0" address="unix:///run/containerd/s/c2ccbf5fc4ae8edc566e0454e84775f38f91188fe088a7c952c526f5d7e82972" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:53.694870 systemd[1]: Started cri-containerd-ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0.scope - libcontainer container ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0. 
Jul 7 06:14:53.714221 containerd[1732]: time="2025-07-07T06:14:53.713644249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pnjzm,Uid:98c3e2d2-ba94-4d04-996e-5a626c3e3a06,Namespace:kube-system,Attempt:0,} returns sandbox id \"22bd2fbd747017472b82a8b084aa83faba10bb0b694d6061cd7b5b24de868b24\"" Jul 7 06:14:53.718182 containerd[1732]: time="2025-07-07T06:14:53.717667288Z" level=info msg="CreateContainer within sandbox \"22bd2fbd747017472b82a8b084aa83faba10bb0b694d6061cd7b5b24de868b24\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:14:53.851853 containerd[1732]: time="2025-07-07T06:14:53.851827174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jkqmr,Uid:e36f7f0a-096c-41c4-849d-fc3730f6dd90,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\"" Jul 7 06:14:53.853458 containerd[1732]: time="2025-07-07T06:14:53.853377970Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 06:14:53.915001 containerd[1732]: time="2025-07-07T06:14:53.914965961Z" level=info msg="connecting to shim de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156" address="unix:///run/containerd/s/f05f6b2032ba058ebb746b3f87aeed063145a523af16393646c3ca4d06561ae2" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:14:53.935882 systemd[1]: Started cri-containerd-de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156.scope - libcontainer container de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156. 
Jul 7 06:14:54.060879 containerd[1732]: time="2025-07-07T06:14:54.060855756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vqhx7,Uid:2a36b5f6-4940-4e9e-95a9-23f797afb918,Namespace:kube-system,Attempt:0,} returns sandbox id \"de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156\"" Jul 7 06:14:54.156654 containerd[1732]: time="2025-07-07T06:14:54.156628170Z" level=info msg="Container 29a4ea13b9c8af046ff9a12fa197ca0629b2d797e3a5f4b40aafe1bfc042d34e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:14:54.303686 containerd[1732]: time="2025-07-07T06:14:54.303650713Z" level=info msg="CreateContainer within sandbox \"22bd2fbd747017472b82a8b084aa83faba10bb0b694d6061cd7b5b24de868b24\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"29a4ea13b9c8af046ff9a12fa197ca0629b2d797e3a5f4b40aafe1bfc042d34e\"" Jul 7 06:14:54.304210 containerd[1732]: time="2025-07-07T06:14:54.304182806Z" level=info msg="StartContainer for \"29a4ea13b9c8af046ff9a12fa197ca0629b2d797e3a5f4b40aafe1bfc042d34e\"" Jul 7 06:14:54.305776 containerd[1732]: time="2025-07-07T06:14:54.305748952Z" level=info msg="connecting to shim 29a4ea13b9c8af046ff9a12fa197ca0629b2d797e3a5f4b40aafe1bfc042d34e" address="unix:///run/containerd/s/c9442a789b9b8eb676cca47efbd23cdb322e1f41717cc589977780514f06c6ad" protocol=ttrpc version=3 Jul 7 06:14:54.324902 systemd[1]: Started cri-containerd-29a4ea13b9c8af046ff9a12fa197ca0629b2d797e3a5f4b40aafe1bfc042d34e.scope - libcontainer container 29a4ea13b9c8af046ff9a12fa197ca0629b2d797e3a5f4b40aafe1bfc042d34e. 
Jul 7 06:14:54.426250 containerd[1732]: time="2025-07-07T06:14:54.426222725Z" level=info msg="StartContainer for \"29a4ea13b9c8af046ff9a12fa197ca0629b2d797e3a5f4b40aafe1bfc042d34e\" returns successfully" Jul 7 06:14:54.786059 kubelet[3132]: I0707 06:14:54.785974 3132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pnjzm" podStartSLOduration=3.785941851 podStartE2EDuration="3.785941851s" podCreationTimestamp="2025-07-07 06:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:14:54.732306731 +0000 UTC m=+8.152507998" watchObservedRunningTime="2025-07-07 06:14:54.785941851 +0000 UTC m=+8.206143114" Jul 7 06:15:06.565505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095491243.mount: Deactivated successfully. Jul 7 06:15:09.522923 containerd[1732]: time="2025-07-07T06:15:09.522871867Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:09.527452 containerd[1732]: time="2025-07-07T06:15:09.527405150Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 06:15:09.534123 containerd[1732]: time="2025-07-07T06:15:09.534050171Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:09.535246 containerd[1732]: time="2025-07-07T06:15:09.535151066Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.681739138s" Jul 7 06:15:09.535246 containerd[1732]: time="2025-07-07T06:15:09.535182872Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 06:15:09.536196 containerd[1732]: time="2025-07-07T06:15:09.536146917Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 06:15:09.537539 containerd[1732]: time="2025-07-07T06:15:09.537509422Z" level=info msg="CreateContainer within sandbox \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 06:15:09.572892 containerd[1732]: time="2025-07-07T06:15:09.572814899Z" level=info msg="Container 979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:09.619438 containerd[1732]: time="2025-07-07T06:15:09.619413689Z" level=info msg="CreateContainer within sandbox \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\"" Jul 7 06:15:09.619885 containerd[1732]: time="2025-07-07T06:15:09.619838938Z" level=info msg="StartContainer for \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\"" Jul 7 06:15:09.620809 containerd[1732]: time="2025-07-07T06:15:09.620775348Z" level=info msg="connecting to shim 979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962" address="unix:///run/containerd/s/c2ccbf5fc4ae8edc566e0454e84775f38f91188fe088a7c952c526f5d7e82972" protocol=ttrpc version=3 Jul 7 06:15:09.642868 systemd[1]: 
Started cri-containerd-979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962.scope - libcontainer container 979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962. Jul 7 06:15:09.705193 containerd[1732]: time="2025-07-07T06:15:09.705156519Z" level=info msg="StartContainer for \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\" returns successfully" Jul 7 06:15:09.711853 systemd[1]: cri-containerd-979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962.scope: Deactivated successfully. Jul 7 06:15:09.713851 containerd[1732]: time="2025-07-07T06:15:09.713827319Z" level=info msg="received exit event container_id:\"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\" id:\"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\" pid:3547 exited_at:{seconds:1751868909 nanos:713286689}" Jul 7 06:15:09.714001 containerd[1732]: time="2025-07-07T06:15:09.713946626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\" id:\"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\" pid:3547 exited_at:{seconds:1751868909 nanos:713286689}" Jul 7 06:15:09.731048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962-rootfs.mount: Deactivated successfully. Jul 7 06:15:13.703211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1114005726.mount: Deactivated successfully. 
Jul 7 06:15:13.758173 containerd[1732]: time="2025-07-07T06:15:13.757963289Z" level=info msg="CreateContainer within sandbox \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 06:15:13.984885 containerd[1732]: time="2025-07-07T06:15:13.984848949Z" level=info msg="Container 36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:14.126507 containerd[1732]: time="2025-07-07T06:15:14.126477062Z" level=info msg="CreateContainer within sandbox \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\"" Jul 7 06:15:14.126877 containerd[1732]: time="2025-07-07T06:15:14.126819693Z" level=info msg="StartContainer for \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\"" Jul 7 06:15:14.127882 containerd[1732]: time="2025-07-07T06:15:14.127844622Z" level=info msg="connecting to shim 36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b" address="unix:///run/containerd/s/c2ccbf5fc4ae8edc566e0454e84775f38f91188fe088a7c952c526f5d7e82972" protocol=ttrpc version=3 Jul 7 06:15:14.145887 systemd[1]: Started cri-containerd-36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b.scope - libcontainer container 36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b. Jul 7 06:15:14.191855 containerd[1732]: time="2025-07-07T06:15:14.191794308Z" level=info msg="StartContainer for \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\" returns successfully" Jul 7 06:15:14.204845 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:15:14.205070 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 7 06:15:14.205627 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:15:14.207940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:15:14.210400 systemd[1]: cri-containerd-36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b.scope: Deactivated successfully. Jul 7 06:15:14.210765 containerd[1732]: time="2025-07-07T06:15:14.210670845Z" level=info msg="received exit event container_id:\"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\" id:\"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\" pid:3602 exited_at:{seconds:1751868914 nanos:210033633}" Jul 7 06:15:14.211518 containerd[1732]: time="2025-07-07T06:15:14.211177404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\" id:\"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\" pid:3602 exited_at:{seconds:1751868914 nanos:210033633}" Jul 7 06:15:14.241600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:15:14.696071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b-rootfs.mount: Deactivated successfully. 
Jul 7 06:15:14.704268 containerd[1732]: time="2025-07-07T06:15:14.704219971Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:14.711096 containerd[1732]: time="2025-07-07T06:15:14.711056602Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 06:15:14.715314 containerd[1732]: time="2025-07-07T06:15:14.715266604Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:15:14.716412 containerd[1732]: time="2025-07-07T06:15:14.716317563Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.180136783s" Jul 7 06:15:14.716412 containerd[1732]: time="2025-07-07T06:15:14.716350008Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 06:15:14.718268 containerd[1732]: time="2025-07-07T06:15:14.718239931Z" level=info msg="CreateContainer within sandbox \"de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 06:15:14.763280 containerd[1732]: time="2025-07-07T06:15:14.763097499Z" level=info msg="CreateContainer within sandbox 
\"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 06:15:14.784949 containerd[1732]: time="2025-07-07T06:15:14.784910603Z" level=info msg="Container b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:14.825582 containerd[1732]: time="2025-07-07T06:15:14.825309899Z" level=info msg="Container f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:14.863163 containerd[1732]: time="2025-07-07T06:15:14.863137999Z" level=info msg="CreateContainer within sandbox \"de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\"" Jul 7 06:15:14.863587 containerd[1732]: time="2025-07-07T06:15:14.863549132Z" level=info msg="StartContainer for \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\"" Jul 7 06:15:14.864875 containerd[1732]: time="2025-07-07T06:15:14.864844324Z" level=info msg="connecting to shim b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833" address="unix:///run/containerd/s/f05f6b2032ba058ebb746b3f87aeed063145a523af16393646c3ca4d06561ae2" protocol=ttrpc version=3 Jul 7 06:15:14.880825 systemd[1]: Started cri-containerd-b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833.scope - libcontainer container b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833. 
Jul 7 06:15:14.887753 containerd[1732]: time="2025-07-07T06:15:14.887689790Z" level=info msg="CreateContainer within sandbox \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\"" Jul 7 06:15:14.889075 containerd[1732]: time="2025-07-07T06:15:14.888987607Z" level=info msg="StartContainer for \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\"" Jul 7 06:15:14.891361 containerd[1732]: time="2025-07-07T06:15:14.891320129Z" level=info msg="connecting to shim f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354" address="unix:///run/containerd/s/c2ccbf5fc4ae8edc566e0454e84775f38f91188fe088a7c952c526f5d7e82972" protocol=ttrpc version=3 Jul 7 06:15:14.914108 systemd[1]: Started cri-containerd-f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354.scope - libcontainer container f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354. Jul 7 06:15:14.933434 containerd[1732]: time="2025-07-07T06:15:14.933412469Z" level=info msg="StartContainer for \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" returns successfully" Jul 7 06:15:14.950643 systemd[1]: cri-containerd-f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354.scope: Deactivated successfully. 
Jul 7 06:15:14.952558 containerd[1732]: time="2025-07-07T06:15:14.952532728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\" id:\"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\" pid:3683 exited_at:{seconds:1751868914 nanos:952214106}" Jul 7 06:15:14.958720 containerd[1732]: time="2025-07-07T06:15:14.958453165Z" level=info msg="received exit event container_id:\"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\" id:\"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\" pid:3683 exited_at:{seconds:1751868914 nanos:952214106}" Jul 7 06:15:14.978257 containerd[1732]: time="2025-07-07T06:15:14.978235751Z" level=info msg="StartContainer for \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\" returns successfully" Jul 7 06:15:15.773171 containerd[1732]: time="2025-07-07T06:15:15.773111206Z" level=info msg="CreateContainer within sandbox \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 06:15:15.814460 kubelet[3132]: I0707 06:15:15.814381 3132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vqhx7" podStartSLOduration=3.15906556 podStartE2EDuration="23.814267335s" podCreationTimestamp="2025-07-07 06:14:52 +0000 UTC" firstStartedPulling="2025-07-07 06:14:54.06179642 +0000 UTC m=+7.481997695" lastFinishedPulling="2025-07-07 06:15:14.716998198 +0000 UTC m=+28.137199470" observedRunningTime="2025-07-07 06:15:15.788189937 +0000 UTC m=+29.208391205" watchObservedRunningTime="2025-07-07 06:15:15.814267335 +0000 UTC m=+29.234468604" Jul 7 06:15:15.843083 containerd[1732]: time="2025-07-07T06:15:15.843047009Z" level=info msg="Container e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:15.876872 
containerd[1732]: time="2025-07-07T06:15:15.876840872Z" level=info msg="CreateContainer within sandbox \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\"" Jul 7 06:15:15.877268 containerd[1732]: time="2025-07-07T06:15:15.877208405Z" level=info msg="StartContainer for \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\"" Jul 7 06:15:15.878152 containerd[1732]: time="2025-07-07T06:15:15.878104789Z" level=info msg="connecting to shim e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6" address="unix:///run/containerd/s/c2ccbf5fc4ae8edc566e0454e84775f38f91188fe088a7c952c526f5d7e82972" protocol=ttrpc version=3 Jul 7 06:15:15.900858 systemd[1]: Started cri-containerd-e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6.scope - libcontainer container e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6. Jul 7 06:15:15.922132 systemd[1]: cri-containerd-e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6.scope: Deactivated successfully. 
Jul 7 06:15:15.924983 containerd[1732]: time="2025-07-07T06:15:15.924932917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\" id:\"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\" pid:3728 exited_at:{seconds:1751868915 nanos:924578954}" Jul 7 06:15:15.932048 containerd[1732]: time="2025-07-07T06:15:15.931960289Z" level=info msg="received exit event container_id:\"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\" id:\"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\" pid:3728 exited_at:{seconds:1751868915 nanos:924578954}" Jul 7 06:15:15.938108 containerd[1732]: time="2025-07-07T06:15:15.938084344Z" level=info msg="StartContainer for \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\" returns successfully" Jul 7 06:15:15.946911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6-rootfs.mount: Deactivated successfully. 
Jul 7 06:15:16.782984 containerd[1732]: time="2025-07-07T06:15:16.782808138Z" level=info msg="CreateContainer within sandbox \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 06:15:16.819289 containerd[1732]: time="2025-07-07T06:15:16.818833710Z" level=info msg="Container 6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:16.851058 containerd[1732]: time="2025-07-07T06:15:16.851027545Z" level=info msg="CreateContainer within sandbox \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\"" Jul 7 06:15:16.853499 containerd[1732]: time="2025-07-07T06:15:16.853419626Z" level=info msg="StartContainer for \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\"" Jul 7 06:15:16.854352 containerd[1732]: time="2025-07-07T06:15:16.854301828Z" level=info msg="connecting to shim 6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708" address="unix:///run/containerd/s/c2ccbf5fc4ae8edc566e0454e84775f38f91188fe088a7c952c526f5d7e82972" protocol=ttrpc version=3 Jul 7 06:15:16.871855 systemd[1]: Started cri-containerd-6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708.scope - libcontainer container 6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708. 
Jul 7 06:15:16.906216 containerd[1732]: time="2025-07-07T06:15:16.906187772Z" level=info msg="StartContainer for \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" returns successfully" Jul 7 06:15:16.977059 containerd[1732]: time="2025-07-07T06:15:16.977034667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" id:\"d64b7be979db97d73bc6c0cb24714c9f2dfe711f8572ac4c5b8b1d9f42977ed7\" pid:3801 exited_at:{seconds:1751868916 nanos:976044253}" Jul 7 06:15:17.051653 kubelet[3132]: I0707 06:15:17.051579 3132 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 06:15:17.096646 systemd[1]: Created slice kubepods-burstable-pod739dbd52_b626_407f_8506_441fa98cd176.slice - libcontainer container kubepods-burstable-pod739dbd52_b626_407f_8506_441fa98cd176.slice. Jul 7 06:15:17.108943 systemd[1]: Created slice kubepods-burstable-poded7ffc54_e710_4b46_be9b_99021fa568f1.slice - libcontainer container kubepods-burstable-poded7ffc54_e710_4b46_be9b_99021fa568f1.slice. 
Jul 7 06:15:17.153363 kubelet[3132]: I0707 06:15:17.153329 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mdhc\" (UniqueName: \"kubernetes.io/projected/ed7ffc54-e710-4b46-be9b-99021fa568f1-kube-api-access-4mdhc\") pod \"coredns-668d6bf9bc-kn45h\" (UID: \"ed7ffc54-e710-4b46-be9b-99021fa568f1\") " pod="kube-system/coredns-668d6bf9bc-kn45h" Jul 7 06:15:17.153470 kubelet[3132]: I0707 06:15:17.153375 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed7ffc54-e710-4b46-be9b-99021fa568f1-config-volume\") pod \"coredns-668d6bf9bc-kn45h\" (UID: \"ed7ffc54-e710-4b46-be9b-99021fa568f1\") " pod="kube-system/coredns-668d6bf9bc-kn45h" Jul 7 06:15:17.153470 kubelet[3132]: I0707 06:15:17.153398 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l54wc\" (UniqueName: \"kubernetes.io/projected/739dbd52-b626-407f-8506-441fa98cd176-kube-api-access-l54wc\") pod \"coredns-668d6bf9bc-8xblj\" (UID: \"739dbd52-b626-407f-8506-441fa98cd176\") " pod="kube-system/coredns-668d6bf9bc-8xblj" Jul 7 06:15:17.153470 kubelet[3132]: I0707 06:15:17.153421 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/739dbd52-b626-407f-8506-441fa98cd176-config-volume\") pod \"coredns-668d6bf9bc-8xblj\" (UID: \"739dbd52-b626-407f-8506-441fa98cd176\") " pod="kube-system/coredns-668d6bf9bc-8xblj" Jul 7 06:15:17.401248 containerd[1732]: time="2025-07-07T06:15:17.400937930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8xblj,Uid:739dbd52-b626-407f-8506-441fa98cd176,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:17.416523 containerd[1732]: time="2025-07-07T06:15:17.415687879Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-kn45h,Uid:ed7ffc54-e710-4b46-be9b-99021fa568f1,Namespace:kube-system,Attempt:0,}" Jul 7 06:15:17.798843 kubelet[3132]: I0707 06:15:17.797683 3132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jkqmr" podStartSLOduration=11.114481279 podStartE2EDuration="26.797663271s" podCreationTimestamp="2025-07-07 06:14:51 +0000 UTC" firstStartedPulling="2025-07-07 06:14:53.852847699 +0000 UTC m=+7.273048966" lastFinishedPulling="2025-07-07 06:15:09.536029698 +0000 UTC m=+22.956230958" observedRunningTime="2025-07-07 06:15:17.797560544 +0000 UTC m=+31.217761910" watchObservedRunningTime="2025-07-07 06:15:17.797663271 +0000 UTC m=+31.217864537" Jul 7 06:15:18.935113 systemd-networkd[1352]: cilium_host: Link UP Jul 7 06:15:18.935207 systemd-networkd[1352]: cilium_net: Link UP Jul 7 06:15:18.935298 systemd-networkd[1352]: cilium_net: Gained carrier Jul 7 06:15:18.935371 systemd-networkd[1352]: cilium_host: Gained carrier Jul 7 06:15:19.069671 systemd-networkd[1352]: cilium_vxlan: Link UP Jul 7 06:15:19.069685 systemd-networkd[1352]: cilium_vxlan: Gained carrier Jul 7 06:15:19.232821 systemd-networkd[1352]: cilium_host: Gained IPv6LL Jul 7 06:15:19.311765 kernel: NET: Registered PF_ALG protocol family Jul 7 06:15:19.728921 systemd-networkd[1352]: cilium_net: Gained IPv6LL Jul 7 06:15:19.940635 systemd-networkd[1352]: lxc_health: Link UP Jul 7 06:15:19.963509 systemd-networkd[1352]: lxc_health: Gained carrier Jul 7 06:15:20.446751 kernel: eth0: renamed from tmp77b00 Jul 7 06:15:20.447268 systemd-networkd[1352]: lxc82f16156f2ce: Link UP Jul 7 06:15:20.447617 systemd-networkd[1352]: lxc82f16156f2ce: Gained carrier Jul 7 06:15:20.496729 kernel: eth0: renamed from tmp4a5d3 Jul 7 06:15:20.499592 systemd-networkd[1352]: lxc57500b217ee2: Link UP Jul 7 06:15:20.499874 systemd-networkd[1352]: lxc57500b217ee2: Gained carrier Jul 7 06:15:20.880810 systemd-networkd[1352]: cilium_vxlan: Gained IPv6LL Jul 7 
06:15:21.520941 systemd-networkd[1352]: lxc_health: Gained IPv6LL Jul 7 06:15:22.032945 systemd-networkd[1352]: lxc82f16156f2ce: Gained IPv6LL Jul 7 06:15:22.225657 systemd-networkd[1352]: lxc57500b217ee2: Gained IPv6LL Jul 7 06:15:23.667788 containerd[1732]: time="2025-07-07T06:15:23.667730141Z" level=info msg="connecting to shim 4a5d347e52118252d65f527c77325b0214bfd48ec1d33ad681079af593ddb6c2" address="unix:///run/containerd/s/33564835dbc84aa92c7da0b328b29a0fa0bd9a038ea7966d0ad71df124ed861f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:23.690840 systemd[1]: Started cri-containerd-4a5d347e52118252d65f527c77325b0214bfd48ec1d33ad681079af593ddb6c2.scope - libcontainer container 4a5d347e52118252d65f527c77325b0214bfd48ec1d33ad681079af593ddb6c2. Jul 7 06:15:23.845197 containerd[1732]: time="2025-07-07T06:15:23.845156302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kn45h,Uid:ed7ffc54-e710-4b46-be9b-99021fa568f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a5d347e52118252d65f527c77325b0214bfd48ec1d33ad681079af593ddb6c2\"" Jul 7 06:15:23.848876 containerd[1732]: time="2025-07-07T06:15:23.848849739Z" level=info msg="CreateContainer within sandbox \"4a5d347e52118252d65f527c77325b0214bfd48ec1d33ad681079af593ddb6c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:15:23.860700 containerd[1732]: time="2025-07-07T06:15:23.860657920Z" level=info msg="connecting to shim 77b00320ef67e905f18c6736fa31de87e8dc4d74725ad7033e4d7e256d38bd67" address="unix:///run/containerd/s/5802b15d3b0f475007eb70bf5c3f6c593230650103390cd94ca4dbda9d460b43" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:15:23.883886 systemd[1]: Started cri-containerd-77b00320ef67e905f18c6736fa31de87e8dc4d74725ad7033e4d7e256d38bd67.scope - libcontainer container 77b00320ef67e905f18c6736fa31de87e8dc4d74725ad7033e4d7e256d38bd67. 
Jul 7 06:15:24.051116 containerd[1732]: time="2025-07-07T06:15:24.051074477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8xblj,Uid:739dbd52-b626-407f-8506-441fa98cd176,Namespace:kube-system,Attempt:0,} returns sandbox id \"77b00320ef67e905f18c6736fa31de87e8dc4d74725ad7033e4d7e256d38bd67\"" Jul 7 06:15:24.054008 containerd[1732]: time="2025-07-07T06:15:24.053965072Z" level=info msg="CreateContainer within sandbox \"77b00320ef67e905f18c6736fa31de87e8dc4d74725ad7033e4d7e256d38bd67\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:15:24.107847 containerd[1732]: time="2025-07-07T06:15:24.107818692Z" level=info msg="Container 76b5c7a0463b1a5e71c48a94df943160c9d504c902b459b76cf663193383ba8e: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:24.447786 containerd[1732]: time="2025-07-07T06:15:24.447605786Z" level=info msg="Container 555e3cbef7e745a1c1501d74432c124e7613af43744e6f6749a01fabf263b92f: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:15:24.449849 containerd[1732]: time="2025-07-07T06:15:24.449820360Z" level=info msg="CreateContainer within sandbox \"4a5d347e52118252d65f527c77325b0214bfd48ec1d33ad681079af593ddb6c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76b5c7a0463b1a5e71c48a94df943160c9d504c902b459b76cf663193383ba8e\"" Jul 7 06:15:24.450464 containerd[1732]: time="2025-07-07T06:15:24.450374936Z" level=info msg="StartContainer for \"76b5c7a0463b1a5e71c48a94df943160c9d504c902b459b76cf663193383ba8e\"" Jul 7 06:15:24.451409 containerd[1732]: time="2025-07-07T06:15:24.451312981Z" level=info msg="connecting to shim 76b5c7a0463b1a5e71c48a94df943160c9d504c902b459b76cf663193383ba8e" address="unix:///run/containerd/s/33564835dbc84aa92c7da0b328b29a0fa0bd9a038ea7966d0ad71df124ed861f" protocol=ttrpc version=3 Jul 7 06:15:24.467883 systemd[1]: Started cri-containerd-76b5c7a0463b1a5e71c48a94df943160c9d504c902b459b76cf663193383ba8e.scope - libcontainer container 
76b5c7a0463b1a5e71c48a94df943160c9d504c902b459b76cf663193383ba8e. Jul 7 06:15:24.560717 containerd[1732]: time="2025-07-07T06:15:24.560663500Z" level=info msg="StartContainer for \"76b5c7a0463b1a5e71c48a94df943160c9d504c902b459b76cf663193383ba8e\" returns successfully" Jul 7 06:15:24.611851 containerd[1732]: time="2025-07-07T06:15:24.611821300Z" level=info msg="CreateContainer within sandbox \"77b00320ef67e905f18c6736fa31de87e8dc4d74725ad7033e4d7e256d38bd67\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"555e3cbef7e745a1c1501d74432c124e7613af43744e6f6749a01fabf263b92f\"" Jul 7 06:15:24.612364 containerd[1732]: time="2025-07-07T06:15:24.612330848Z" level=info msg="StartContainer for \"555e3cbef7e745a1c1501d74432c124e7613af43744e6f6749a01fabf263b92f\"" Jul 7 06:15:24.613535 containerd[1732]: time="2025-07-07T06:15:24.613465087Z" level=info msg="connecting to shim 555e3cbef7e745a1c1501d74432c124e7613af43744e6f6749a01fabf263b92f" address="unix:///run/containerd/s/5802b15d3b0f475007eb70bf5c3f6c593230650103390cd94ca4dbda9d460b43" protocol=ttrpc version=3 Jul 7 06:15:24.630876 systemd[1]: Started cri-containerd-555e3cbef7e745a1c1501d74432c124e7613af43744e6f6749a01fabf263b92f.scope - libcontainer container 555e3cbef7e745a1c1501d74432c124e7613af43744e6f6749a01fabf263b92f. Jul 7 06:15:24.664517 containerd[1732]: time="2025-07-07T06:15:24.664489767Z" level=info msg="StartContainer for \"555e3cbef7e745a1c1501d74432c124e7613af43744e6f6749a01fabf263b92f\" returns successfully" Jul 7 06:15:24.666332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2808709058.mount: Deactivated successfully. 
Jul 7 06:15:24.842368 kubelet[3132]: I0707 06:15:24.842236 3132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8xblj" podStartSLOduration=32.842215015 podStartE2EDuration="32.842215015s" podCreationTimestamp="2025-07-07 06:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:15:24.81402512 +0000 UTC m=+38.234226383" watchObservedRunningTime="2025-07-07 06:15:24.842215015 +0000 UTC m=+38.262416274" Jul 7 06:15:24.857449 kubelet[3132]: I0707 06:15:24.857402 3132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kn45h" podStartSLOduration=32.857387918 podStartE2EDuration="32.857387918s" podCreationTimestamp="2025-07-07 06:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:15:24.85601245 +0000 UTC m=+38.276213717" watchObservedRunningTime="2025-07-07 06:15:24.857387918 +0000 UTC m=+38.277589190" Jul 7 06:16:24.119715 systemd[1]: Started sshd@7-10.200.4.32:22-10.200.16.10:53474.service - OpenSSH per-connection server daemon (10.200.16.10:53474). Jul 7 06:16:24.721249 sshd[4453]: Accepted publickey for core from 10.200.16.10 port 53474 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:24.722616 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:24.727982 systemd-logind[1706]: New session 10 of user core. Jul 7 06:16:24.733907 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:16:25.212018 sshd[4457]: Connection closed by 10.200.16.10 port 53474 Jul 7 06:16:25.212650 sshd-session[4453]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:25.215818 systemd[1]: sshd@7-10.200.4.32:22-10.200.16.10:53474.service: Deactivated successfully. 
Jul 7 06:16:25.217934 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:16:25.219452 systemd-logind[1706]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:16:25.221004 systemd-logind[1706]: Removed session 10. Jul 7 06:16:30.319886 systemd[1]: Started sshd@8-10.200.4.32:22-10.200.16.10:38560.service - OpenSSH per-connection server daemon (10.200.16.10:38560). Jul 7 06:16:30.921660 sshd[4470]: Accepted publickey for core from 10.200.16.10 port 38560 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:30.922992 sshd-session[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:30.928274 systemd-logind[1706]: New session 11 of user core. Jul 7 06:16:30.936861 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 06:16:31.407803 sshd[4472]: Connection closed by 10.200.16.10 port 38560 Jul 7 06:16:31.408390 sshd-session[4470]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:31.411409 systemd[1]: sshd@8-10.200.4.32:22-10.200.16.10:38560.service: Deactivated successfully. Jul 7 06:16:31.413620 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:16:31.415095 systemd-logind[1706]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:16:31.416693 systemd-logind[1706]: Removed session 11. Jul 7 06:16:36.515751 systemd[1]: Started sshd@9-10.200.4.32:22-10.200.16.10:38572.service - OpenSSH per-connection server daemon (10.200.16.10:38572). Jul 7 06:16:37.117397 sshd[4485]: Accepted publickey for core from 10.200.16.10 port 38572 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:37.118945 sshd-session[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:37.124253 systemd-logind[1706]: New session 12 of user core. Jul 7 06:16:37.128915 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 7 06:16:37.597220 sshd[4487]: Connection closed by 10.200.16.10 port 38572 Jul 7 06:16:37.598044 sshd-session[4485]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:37.602202 systemd[1]: sshd@9-10.200.4.32:22-10.200.16.10:38572.service: Deactivated successfully. Jul 7 06:16:37.604116 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:16:37.604922 systemd-logind[1706]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:16:37.606486 systemd-logind[1706]: Removed session 12. Jul 7 06:16:42.709874 systemd[1]: Started sshd@10-10.200.4.32:22-10.200.16.10:52976.service - OpenSSH per-connection server daemon (10.200.16.10:52976). Jul 7 06:16:43.310407 sshd[4500]: Accepted publickey for core from 10.200.16.10 port 52976 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:43.312029 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:43.317284 systemd-logind[1706]: New session 13 of user core. Jul 7 06:16:43.320959 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:16:43.793613 sshd[4502]: Connection closed by 10.200.16.10 port 52976 Jul 7 06:16:43.794268 sshd-session[4500]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:43.798325 systemd[1]: sshd@10-10.200.4.32:22-10.200.16.10:52976.service: Deactivated successfully. Jul 7 06:16:43.800379 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:16:43.801333 systemd-logind[1706]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:16:43.802868 systemd-logind[1706]: Removed session 13. Jul 7 06:16:43.913852 systemd[1]: Started sshd@11-10.200.4.32:22-10.200.16.10:52992.service - OpenSSH per-connection server daemon (10.200.16.10:52992). 
Jul 7 06:16:44.516833 sshd[4515]: Accepted publickey for core from 10.200.16.10 port 52992 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:44.518180 sshd-session[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:44.523425 systemd-logind[1706]: New session 14 of user core. Jul 7 06:16:44.531863 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:16:45.016074 sshd[4518]: Connection closed by 10.200.16.10 port 52992 Jul 7 06:16:45.016691 sshd-session[4515]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:45.020525 systemd[1]: sshd@11-10.200.4.32:22-10.200.16.10:52992.service: Deactivated successfully. Jul 7 06:16:45.022936 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:16:45.024285 systemd-logind[1706]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:16:45.025545 systemd-logind[1706]: Removed session 14. Jul 7 06:16:45.129945 systemd[1]: Started sshd@12-10.200.4.32:22-10.200.16.10:52998.service - OpenSSH per-connection server daemon (10.200.16.10:52998). Jul 7 06:16:45.728993 sshd[4528]: Accepted publickey for core from 10.200.16.10 port 52998 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:45.730553 sshd-session[4528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:45.735804 systemd-logind[1706]: New session 15 of user core. Jul 7 06:16:45.739880 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:16:46.211198 sshd[4530]: Connection closed by 10.200.16.10 port 52998 Jul 7 06:16:46.212223 sshd-session[4528]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:46.216251 systemd[1]: sshd@12-10.200.4.32:22-10.200.16.10:52998.service: Deactivated successfully. Jul 7 06:16:46.218360 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:16:46.219594 systemd-logind[1706]: Session 15 logged out. Waiting for processes to exit. 
Jul 7 06:16:46.221045 systemd-logind[1706]: Removed session 15. Jul 7 06:16:51.332732 systemd[1]: Started sshd@13-10.200.4.32:22-10.200.16.10:39838.service - OpenSSH per-connection server daemon (10.200.16.10:39838). Jul 7 06:16:51.934925 sshd[4544]: Accepted publickey for core from 10.200.16.10 port 39838 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:51.936081 sshd-session[4544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:51.940170 systemd-logind[1706]: New session 16 of user core. Jul 7 06:16:51.945888 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:16:52.429699 sshd[4546]: Connection closed by 10.200.16.10 port 39838 Jul 7 06:16:52.430747 sshd-session[4544]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:52.434460 systemd[1]: sshd@13-10.200.4.32:22-10.200.16.10:39838.service: Deactivated successfully. Jul 7 06:16:52.436340 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:16:52.437183 systemd-logind[1706]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:16:52.438553 systemd-logind[1706]: Removed session 16. Jul 7 06:16:52.558732 systemd[1]: Started sshd@14-10.200.4.32:22-10.200.16.10:39852.service - OpenSSH per-connection server daemon (10.200.16.10:39852). Jul 7 06:16:53.155749 sshd[4558]: Accepted publickey for core from 10.200.16.10 port 39852 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:53.157088 sshd-session[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:53.161759 systemd-logind[1706]: New session 17 of user core. Jul 7 06:16:53.171870 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 7 06:16:53.678987 sshd[4560]: Connection closed by 10.200.16.10 port 39852 Jul 7 06:16:53.679530 sshd-session[4558]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:53.682498 systemd[1]: sshd@14-10.200.4.32:22-10.200.16.10:39852.service: Deactivated successfully. Jul 7 06:16:53.684448 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:16:53.686455 systemd-logind[1706]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:16:53.688600 systemd-logind[1706]: Removed session 17. Jul 7 06:16:53.786996 systemd[1]: Started sshd@15-10.200.4.32:22-10.200.16.10:39864.service - OpenSSH per-connection server daemon (10.200.16.10:39864). Jul 7 06:16:54.386753 sshd[4570]: Accepted publickey for core from 10.200.16.10 port 39864 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:54.388467 sshd-session[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:54.394844 systemd-logind[1706]: New session 18 of user core. Jul 7 06:16:54.399887 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 06:16:55.628370 sshd[4572]: Connection closed by 10.200.16.10 port 39864 Jul 7 06:16:55.628917 sshd-session[4570]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:55.633165 systemd[1]: sshd@15-10.200.4.32:22-10.200.16.10:39864.service: Deactivated successfully. Jul 7 06:16:55.635327 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 06:16:55.636312 systemd-logind[1706]: Session 18 logged out. Waiting for processes to exit. Jul 7 06:16:55.637811 systemd-logind[1706]: Removed session 18. Jul 7 06:16:55.734859 systemd[1]: Started sshd@16-10.200.4.32:22-10.200.16.10:39876.service - OpenSSH per-connection server daemon (10.200.16.10:39876). 
Jul 7 06:16:56.341922 sshd[4591]: Accepted publickey for core from 10.200.16.10 port 39876 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:56.343567 sshd-session[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:56.348920 systemd-logind[1706]: New session 19 of user core. Jul 7 06:16:56.355897 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 06:16:56.928632 sshd[4593]: Connection closed by 10.200.16.10 port 39876 Jul 7 06:16:56.929324 sshd-session[4591]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:56.932403 systemd[1]: sshd@16-10.200.4.32:22-10.200.16.10:39876.service: Deactivated successfully. Jul 7 06:16:56.934681 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 06:16:56.936569 systemd-logind[1706]: Session 19 logged out. Waiting for processes to exit. Jul 7 06:16:56.938012 systemd-logind[1706]: Removed session 19. Jul 7 06:16:57.035970 systemd[1]: Started sshd@17-10.200.4.32:22-10.200.16.10:39882.service - OpenSSH per-connection server daemon (10.200.16.10:39882). Jul 7 06:16:57.634245 sshd[4603]: Accepted publickey for core from 10.200.16.10 port 39882 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:16:57.635781 sshd-session[4603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:16:57.640867 systemd-logind[1706]: New session 20 of user core. Jul 7 06:16:57.644918 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 06:16:58.119189 sshd[4605]: Connection closed by 10.200.16.10 port 39882 Jul 7 06:16:58.119751 sshd-session[4603]: pam_unix(sshd:session): session closed for user core Jul 7 06:16:58.122622 systemd[1]: sshd@17-10.200.4.32:22-10.200.16.10:39882.service: Deactivated successfully. Jul 7 06:16:58.124883 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 06:16:58.126620 systemd-logind[1706]: Session 20 logged out. Waiting for processes to exit. 
Jul 7 06:16:58.128174 systemd-logind[1706]: Removed session 20. Jul 7 06:17:03.239873 systemd[1]: Started sshd@18-10.200.4.32:22-10.200.16.10:40902.service - OpenSSH per-connection server daemon (10.200.16.10:40902). Jul 7 06:17:03.835379 sshd[4619]: Accepted publickey for core from 10.200.16.10 port 40902 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:03.836835 sshd-session[4619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:03.841368 systemd-logind[1706]: New session 21 of user core. Jul 7 06:17:03.849854 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 06:17:04.313724 sshd[4621]: Connection closed by 10.200.16.10 port 40902 Jul 7 06:17:04.316907 sshd-session[4619]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:04.323322 systemd[1]: sshd@18-10.200.4.32:22-10.200.16.10:40902.service: Deactivated successfully. Jul 7 06:17:04.328461 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 06:17:04.330908 systemd-logind[1706]: Session 21 logged out. Waiting for processes to exit. Jul 7 06:17:04.334838 systemd-logind[1706]: Removed session 21. Jul 7 06:17:09.424557 systemd[1]: Started sshd@19-10.200.4.32:22-10.200.16.10:40908.service - OpenSSH per-connection server daemon (10.200.16.10:40908). Jul 7 06:17:10.019274 sshd[4632]: Accepted publickey for core from 10.200.16.10 port 40908 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:10.020636 sshd-session[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:10.025808 systemd-logind[1706]: New session 22 of user core. Jul 7 06:17:10.032875 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 7 06:17:10.495711 sshd[4634]: Connection closed by 10.200.16.10 port 40908 Jul 7 06:17:10.496313 sshd-session[4632]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:10.499456 systemd[1]: sshd@19-10.200.4.32:22-10.200.16.10:40908.service: Deactivated successfully. Jul 7 06:17:10.501826 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 06:17:10.502766 systemd-logind[1706]: Session 22 logged out. Waiting for processes to exit. Jul 7 06:17:10.504630 systemd-logind[1706]: Removed session 22. Jul 7 06:17:15.605952 systemd[1]: Started sshd@20-10.200.4.32:22-10.200.16.10:55040.service - OpenSSH per-connection server daemon (10.200.16.10:55040). Jul 7 06:17:16.203923 sshd[4646]: Accepted publickey for core from 10.200.16.10 port 55040 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:16.205598 sshd-session[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:16.211039 systemd-logind[1706]: New session 23 of user core. Jul 7 06:17:16.215905 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 06:17:16.691224 sshd[4648]: Connection closed by 10.200.16.10 port 55040 Jul 7 06:17:16.691917 sshd-session[4646]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:16.696622 systemd[1]: sshd@20-10.200.4.32:22-10.200.16.10:55040.service: Deactivated successfully. Jul 7 06:17:16.700926 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 06:17:16.702286 systemd-logind[1706]: Session 23 logged out. Waiting for processes to exit. Jul 7 06:17:16.704174 systemd-logind[1706]: Removed session 23. Jul 7 06:17:16.797727 systemd[1]: Started sshd@21-10.200.4.32:22-10.200.16.10:55054.service - OpenSSH per-connection server daemon (10.200.16.10:55054). 
Jul 7 06:17:17.390969 sshd[4660]: Accepted publickey for core from 10.200.16.10 port 55054 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:17.392405 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:17.397397 systemd-logind[1706]: New session 24 of user core. Jul 7 06:17:17.402905 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 06:17:19.010338 containerd[1732]: time="2025-07-07T06:17:19.010268006Z" level=info msg="StopContainer for \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" with timeout 30 (s)" Jul 7 06:17:19.011785 containerd[1732]: time="2025-07-07T06:17:19.011294771Z" level=info msg="Stop container \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" with signal terminated" Jul 7 06:17:19.025312 systemd[1]: cri-containerd-b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833.scope: Deactivated successfully. Jul 7 06:17:19.028111 containerd[1732]: time="2025-07-07T06:17:19.028067828Z" level=info msg="received exit event container_id:\"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" id:\"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" pid:3655 exited_at:{seconds:1751869039 nanos:27778741}" Jul 7 06:17:19.033210 containerd[1732]: time="2025-07-07T06:17:19.033125724Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:17:19.035374 containerd[1732]: time="2025-07-07T06:17:19.035336612Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" id:\"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" pid:3655 exited_at:{seconds:1751869039 nanos:27778741}" Jul 7 
06:17:19.040024 containerd[1732]: time="2025-07-07T06:17:19.039975322Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" id:\"6983a4a69ed0d03a57ed8096b70dc3286e680c3becfe57c84d79fde8b2c07fd4\" pid:4690 exited_at:{seconds:1751869039 nanos:39729742}" Jul 7 06:17:19.041851 containerd[1732]: time="2025-07-07T06:17:19.041815868Z" level=info msg="StopContainer for \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" with timeout 2 (s)" Jul 7 06:17:19.042362 containerd[1732]: time="2025-07-07T06:17:19.042343838Z" level=info msg="Stop container \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" with signal terminated" Jul 7 06:17:19.051684 systemd-networkd[1352]: lxc_health: Link DOWN Jul 7 06:17:19.051694 systemd-networkd[1352]: lxc_health: Lost carrier Jul 7 06:17:19.060661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833-rootfs.mount: Deactivated successfully. Jul 7 06:17:19.068002 systemd[1]: cri-containerd-6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708.scope: Deactivated successfully. Jul 7 06:17:19.068290 systemd[1]: cri-containerd-6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708.scope: Consumed 5.138s CPU time, 123.1M memory peak, 136K read from disk, 13.3M written to disk. 
Jul 7 06:17:19.069503 containerd[1732]: time="2025-07-07T06:17:19.069480537Z" level=info msg="received exit event container_id:\"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" id:\"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" pid:3766 exited_at:{seconds:1751869039 nanos:69375020}" Jul 7 06:17:19.069687 containerd[1732]: time="2025-07-07T06:17:19.069672838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" id:\"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" pid:3766 exited_at:{seconds:1751869039 nanos:69375020}" Jul 7 06:17:19.084119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708-rootfs.mount: Deactivated successfully. Jul 7 06:17:19.155832 containerd[1732]: time="2025-07-07T06:17:19.155804305Z" level=info msg="StopContainer for \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" returns successfully" Jul 7 06:17:19.156427 containerd[1732]: time="2025-07-07T06:17:19.156405559Z" level=info msg="StopPodSandbox for \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\"" Jul 7 06:17:19.156493 containerd[1732]: time="2025-07-07T06:17:19.156470755Z" level=info msg="Container to stop \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:19.156493 containerd[1732]: time="2025-07-07T06:17:19.156484682Z" level=info msg="Container to stop \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:19.156542 containerd[1732]: time="2025-07-07T06:17:19.156497558Z" level=info msg="Container to stop \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Jul 7 06:17:19.156542 containerd[1732]: time="2025-07-07T06:17:19.156509583Z" level=info msg="Container to stop \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:19.156542 containerd[1732]: time="2025-07-07T06:17:19.156519779Z" level=info msg="Container to stop \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:19.161688 systemd[1]: cri-containerd-ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0.scope: Deactivated successfully. Jul 7 06:17:19.162648 containerd[1732]: time="2025-07-07T06:17:19.162388845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" id:\"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" pid:3286 exit_status:137 exited_at:{seconds:1751869039 nanos:162010022}" Jul 7 06:17:19.167168 containerd[1732]: time="2025-07-07T06:17:19.167117992Z" level=info msg="StopContainer for \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" returns successfully" Jul 7 06:17:19.167857 containerd[1732]: time="2025-07-07T06:17:19.167564895Z" level=info msg="StopPodSandbox for \"de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156\"" Jul 7 06:17:19.167857 containerd[1732]: time="2025-07-07T06:17:19.167625144Z" level=info msg="Container to stop \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 06:17:19.175183 systemd[1]: cri-containerd-de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156.scope: Deactivated successfully. 
Jul 7 06:17:19.190489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0-rootfs.mount: Deactivated successfully.
Jul 7 06:17:19.201734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156-rootfs.mount: Deactivated successfully.
Jul 7 06:17:19.211153 containerd[1732]: time="2025-07-07T06:17:19.211068285Z" level=info msg="received exit event sandbox_id:\"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" exit_status:137 exited_at:{seconds:1751869039 nanos:162010022}"
Jul 7 06:17:19.212404 containerd[1732]: time="2025-07-07T06:17:19.212171226Z" level=info msg="shim disconnected" id=de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156 namespace=k8s.io
Jul 7 06:17:19.212404 containerd[1732]: time="2025-07-07T06:17:19.212402017Z" level=warning msg="cleaning up after shim disconnected" id=de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156 namespace=k8s.io
Jul 7 06:17:19.214195 containerd[1732]: time="2025-07-07T06:17:19.212412279Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:17:19.215801 containerd[1732]: time="2025-07-07T06:17:19.214361562Z" level=info msg="shim disconnected" id=ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0 namespace=k8s.io
Jul 7 06:17:19.215801 containerd[1732]: time="2025-07-07T06:17:19.214388068Z" level=warning msg="cleaning up after shim disconnected" id=ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0 namespace=k8s.io
Jul 7 06:17:19.215801 containerd[1732]: time="2025-07-07T06:17:19.214396427Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:17:19.215556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0-shm.mount: Deactivated successfully.
Jul 7 06:17:19.218158 containerd[1732]: time="2025-07-07T06:17:19.217982995Z" level=info msg="TearDown network for sandbox \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" successfully"
Jul 7 06:17:19.218245 containerd[1732]: time="2025-07-07T06:17:19.218231975Z" level=info msg="StopPodSandbox for \"ab14eb18a91663f061ba77a0833996c5f859cedb6bd10aef674dd05903eef6f0\" returns successfully"
Jul 7 06:17:19.236282 containerd[1732]: time="2025-07-07T06:17:19.236259309Z" level=info msg="received exit event sandbox_id:\"de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156\" exit_status:137 exited_at:{seconds:1751869039 nanos:182938069}"
Jul 7 06:17:19.236995 containerd[1732]: time="2025-07-07T06:17:19.236974504Z" level=info msg="TearDown network for sandbox \"de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156\" successfully"
Jul 7 06:17:19.237087 containerd[1732]: time="2025-07-07T06:17:19.237067778Z" level=info msg="StopPodSandbox for \"de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156\" returns successfully"
Jul 7 06:17:19.239444 containerd[1732]: time="2025-07-07T06:17:19.239325399Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156\" id:\"de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156\" pid:3332 exit_status:137 exited_at:{seconds:1751869039 nanos:182938069}"
Jul 7 06:17:19.338072 kubelet[3132]: I0707 06:17:19.337164 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cni-path\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338072 kubelet[3132]: I0707 06:17:19.337206 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-run\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338072 kubelet[3132]: I0707 06:17:19.337222 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-bpf-maps\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338072 kubelet[3132]: I0707 06:17:19.337250 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a36b5f6-4940-4e9e-95a9-23f797afb918-cilium-config-path\") pod \"2a36b5f6-4940-4e9e-95a9-23f797afb918\" (UID: \"2a36b5f6-4940-4e9e-95a9-23f797afb918\") "
Jul 7 06:17:19.338072 kubelet[3132]: I0707 06:17:19.337269 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-etc-cni-netd\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338072 kubelet[3132]: I0707 06:17:19.337288 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-host-proc-sys-net\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338505 kubelet[3132]: I0707 06:17:19.337319 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhmxf\" (UniqueName: \"kubernetes.io/projected/e36f7f0a-096c-41c4-849d-fc3730f6dd90-kube-api-access-dhmxf\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338505 kubelet[3132]: I0707 06:17:19.337333 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-hostproc\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338505 kubelet[3132]: I0707 06:17:19.337349 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-xtables-lock\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338505 kubelet[3132]: I0707 06:17:19.337369 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc2hp\" (UniqueName: \"kubernetes.io/projected/2a36b5f6-4940-4e9e-95a9-23f797afb918-kube-api-access-tc2hp\") pod \"2a36b5f6-4940-4e9e-95a9-23f797afb918\" (UID: \"2a36b5f6-4940-4e9e-95a9-23f797afb918\") "
Jul 7 06:17:19.338505 kubelet[3132]: I0707 06:17:19.337390 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-lib-modules\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338505 kubelet[3132]: I0707 06:17:19.337409 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-config-path\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338648 kubelet[3132]: I0707 06:17:19.337429 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-cgroup\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338648 kubelet[3132]: I0707 06:17:19.337454 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e36f7f0a-096c-41c4-849d-fc3730f6dd90-hubble-tls\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338648 kubelet[3132]: I0707 06:17:19.337479 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e36f7f0a-096c-41c4-849d-fc3730f6dd90-clustermesh-secrets\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338648 kubelet[3132]: I0707 06:17:19.337501 3132 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-host-proc-sys-kernel\") pod \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\" (UID: \"e36f7f0a-096c-41c4-849d-fc3730f6dd90\") "
Jul 7 06:17:19.338648 kubelet[3132]: I0707 06:17:19.337563 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:17:19.338786 kubelet[3132]: I0707 06:17:19.337604 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cni-path" (OuterVolumeSpecName: "cni-path") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:17:19.338786 kubelet[3132]: I0707 06:17:19.337619 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:17:19.338786 kubelet[3132]: I0707 06:17:19.337632 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:17:19.339903 kubelet[3132]: I0707 06:17:19.339869 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a36b5f6-4940-4e9e-95a9-23f797afb918-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a36b5f6-4940-4e9e-95a9-23f797afb918" (UID: "2a36b5f6-4940-4e9e-95a9-23f797afb918"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 7 06:17:19.339978 kubelet[3132]: I0707 06:17:19.339934 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:17:19.340253 kubelet[3132]: I0707 06:17:19.340200 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:17:19.340253 kubelet[3132]: I0707 06:17:19.340229 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:17:19.341904 kubelet[3132]: I0707 06:17:19.341824 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-hostproc" (OuterVolumeSpecName: "hostproc") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:17:19.341904 kubelet[3132]: I0707 06:17:19.341824 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:17:19.341904 kubelet[3132]: I0707 06:17:19.341872 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 7 06:17:19.343420 kubelet[3132]: I0707 06:17:19.343391 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 7 06:17:19.343594 kubelet[3132]: I0707 06:17:19.343577 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a36b5f6-4940-4e9e-95a9-23f797afb918-kube-api-access-tc2hp" (OuterVolumeSpecName: "kube-api-access-tc2hp") pod "2a36b5f6-4940-4e9e-95a9-23f797afb918" (UID: "2a36b5f6-4940-4e9e-95a9-23f797afb918"). InnerVolumeSpecName "kube-api-access-tc2hp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 7 06:17:19.345632 kubelet[3132]: I0707 06:17:19.345608 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36f7f0a-096c-41c4-849d-fc3730f6dd90-kube-api-access-dhmxf" (OuterVolumeSpecName: "kube-api-access-dhmxf") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "kube-api-access-dhmxf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 7 06:17:19.345934 kubelet[3132]: I0707 06:17:19.345900 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e36f7f0a-096c-41c4-849d-fc3730f6dd90-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 7 06:17:19.346355 kubelet[3132]: I0707 06:17:19.346335 3132 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36f7f0a-096c-41c4-849d-fc3730f6dd90-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e36f7f0a-096c-41c4-849d-fc3730f6dd90" (UID: "e36f7f0a-096c-41c4-849d-fc3730f6dd90"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 7 06:17:19.438492 kubelet[3132]: I0707 06:17:19.438461 3132 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-run\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438492 kubelet[3132]: I0707 06:17:19.438486 3132 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-bpf-maps\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438492 kubelet[3132]: I0707 06:17:19.438496 3132 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a36b5f6-4940-4e9e-95a9-23f797afb918-cilium-config-path\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438492 kubelet[3132]: I0707 06:17:19.438506 3132 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-etc-cni-netd\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438760 kubelet[3132]: I0707 06:17:19.438518 3132 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-host-proc-sys-net\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438760 kubelet[3132]: I0707 06:17:19.438528 3132 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dhmxf\" (UniqueName: \"kubernetes.io/projected/e36f7f0a-096c-41c4-849d-fc3730f6dd90-kube-api-access-dhmxf\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438760 kubelet[3132]: I0707 06:17:19.438540 3132 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-hostproc\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438760 kubelet[3132]: I0707 06:17:19.438550 3132 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-xtables-lock\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438760 kubelet[3132]: I0707 06:17:19.438560 3132 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tc2hp\" (UniqueName: \"kubernetes.io/projected/2a36b5f6-4940-4e9e-95a9-23f797afb918-kube-api-access-tc2hp\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438760 kubelet[3132]: I0707 06:17:19.438570 3132 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-lib-modules\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438760 kubelet[3132]: I0707 06:17:19.438580 3132 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-config-path\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438760 kubelet[3132]: I0707 06:17:19.438589 3132 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cilium-cgroup\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438916 kubelet[3132]: I0707 06:17:19.438599 3132 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e36f7f0a-096c-41c4-849d-fc3730f6dd90-hubble-tls\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438916 kubelet[3132]: I0707 06:17:19.438607 3132 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e36f7f0a-096c-41c4-849d-fc3730f6dd90-clustermesh-secrets\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438916 kubelet[3132]: I0707 06:17:19.438617 3132 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-host-proc-sys-kernel\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:19.438916 kubelet[3132]: I0707 06:17:19.438626 3132 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e36f7f0a-096c-41c4-849d-fc3730f6dd90-cni-path\") on node \"ci-4372.0.1-a-6edf51656b\" DevicePath \"\""
Jul 7 06:17:20.039278 kubelet[3132]: I0707 06:17:20.039179 3132 scope.go:117] "RemoveContainer" containerID="b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833"
Jul 7 06:17:20.043256 containerd[1732]: time="2025-07-07T06:17:20.042451017Z" level=info msg="RemoveContainer for \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\""
Jul 7 06:17:20.048174 systemd[1]: Removed slice kubepods-besteffort-pod2a36b5f6_4940_4e9e_95a9_23f797afb918.slice - libcontainer container kubepods-besteffort-pod2a36b5f6_4940_4e9e_95a9_23f797afb918.slice.
Jul 7 06:17:20.061541 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de8e4b8a7116e84ed7f419dae8f76d3a4a7c82be11a1b00a8591ca12393b0156-shm.mount: Deactivated successfully.
Jul 7 06:17:20.061689 systemd[1]: var-lib-kubelet-pods-2a36b5f6\x2d4940\x2d4e9e\x2d95a9\x2d23f797afb918-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtc2hp.mount: Deactivated successfully.
Jul 7 06:17:20.061774 systemd[1]: var-lib-kubelet-pods-e36f7f0a\x2d096c\x2d41c4\x2d849d\x2dfc3730f6dd90-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddhmxf.mount: Deactivated successfully.
Jul 7 06:17:20.061967 systemd[1]: var-lib-kubelet-pods-e36f7f0a\x2d096c\x2d41c4\x2d849d\x2dfc3730f6dd90-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 7 06:17:20.062080 systemd[1]: var-lib-kubelet-pods-e36f7f0a\x2d096c\x2d41c4\x2d849d\x2dfc3730f6dd90-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 7 06:17:20.066896 systemd[1]: Removed slice kubepods-burstable-pode36f7f0a_096c_41c4_849d_fc3730f6dd90.slice - libcontainer container kubepods-burstable-pode36f7f0a_096c_41c4_849d_fc3730f6dd90.slice.
Jul 7 06:17:20.067080 systemd[1]: kubepods-burstable-pode36f7f0a_096c_41c4_849d_fc3730f6dd90.slice: Consumed 5.216s CPU time, 123.6M memory peak, 136K read from disk, 13.3M written to disk.
Jul 7 06:17:20.068586 containerd[1732]: time="2025-07-07T06:17:20.068546814Z" level=info msg="RemoveContainer for \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" returns successfully"
Jul 7 06:17:20.069141 kubelet[3132]: I0707 06:17:20.069124 3132 scope.go:117] "RemoveContainer" containerID="b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833"
Jul 7 06:17:20.069369 containerd[1732]: time="2025-07-07T06:17:20.069327411Z" level=error msg="ContainerStatus for \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\": not found"
Jul 7 06:17:20.069511 kubelet[3132]: E0707 06:17:20.069473 3132 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\": not found" containerID="b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833"
Jul 7 06:17:20.069617 kubelet[3132]: I0707 06:17:20.069517 3132 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833"} err="failed to get container status \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\": rpc error: code = NotFound desc = an error occurred when try to find container \"b521175a147dc793790f7154e97833f6765b6f508ec3cf9bbf9252b0dd5ce833\": not found"
Jul 7 06:17:20.069617 kubelet[3132]: I0707 06:17:20.069615 3132 scope.go:117] "RemoveContainer" containerID="6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708"
Jul 7 06:17:20.071169 containerd[1732]: time="2025-07-07T06:17:20.071138400Z" level=info msg="RemoveContainer for \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\""
Jul 7 06:17:20.085993 containerd[1732]: time="2025-07-07T06:17:20.085965791Z" level=info msg="RemoveContainer for \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" returns successfully"
Jul 7 06:17:20.086174 kubelet[3132]: I0707 06:17:20.086110 3132 scope.go:117] "RemoveContainer" containerID="e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6"
Jul 7 06:17:20.089146 containerd[1732]: time="2025-07-07T06:17:20.088938135Z" level=info msg="RemoveContainer for \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\""
Jul 7 06:17:20.099568 containerd[1732]: time="2025-07-07T06:17:20.099542109Z" level=info msg="RemoveContainer for \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\" returns successfully"
Jul 7 06:17:20.099738 kubelet[3132]: I0707 06:17:20.099700 3132 scope.go:117] "RemoveContainer" containerID="f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354"
Jul 7 06:17:20.101259 containerd[1732]: time="2025-07-07T06:17:20.101236773Z" level=info msg="RemoveContainer for \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\""
Jul 7 06:17:20.109086 containerd[1732]: time="2025-07-07T06:17:20.109062600Z" level=info msg="RemoveContainer for \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\" returns successfully"
Jul 7 06:17:20.109243 kubelet[3132]: I0707 06:17:20.109216 3132 scope.go:117] "RemoveContainer" containerID="36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b"
Jul 7 06:17:20.110318 containerd[1732]: time="2025-07-07T06:17:20.110300528Z" level=info msg="RemoveContainer for \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\""
Jul 7 06:17:20.125235 containerd[1732]: time="2025-07-07T06:17:20.125210323Z" level=info msg="RemoveContainer for \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\" returns successfully"
Jul 7 06:17:20.125387 kubelet[3132]: I0707 06:17:20.125371 3132 scope.go:117] "RemoveContainer" containerID="979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962"
Jul 7 06:17:20.126605 containerd[1732]: time="2025-07-07T06:17:20.126582888Z" level=info msg="RemoveContainer for \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\""
Jul 7 06:17:20.134421 containerd[1732]: time="2025-07-07T06:17:20.134397536Z" level=info msg="RemoveContainer for \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\" returns successfully"
Jul 7 06:17:20.134596 kubelet[3132]: I0707 06:17:20.134561 3132 scope.go:117] "RemoveContainer" containerID="6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708"
Jul 7 06:17:20.134850 containerd[1732]: time="2025-07-07T06:17:20.134817037Z" level=error msg="ContainerStatus for \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\": not found"
Jul 7 06:17:20.134963 kubelet[3132]: E0707 06:17:20.134925 3132 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\": not found" containerID="6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708"
Jul 7 06:17:20.135012 kubelet[3132]: I0707 06:17:20.134968 3132 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708"} err="failed to get container status \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\": rpc error: code = NotFound desc = an error occurred when try to find container \"6223dd7077736ac7c2df16e5e20effd317606395c6078545497c94a962a02708\": not found"
Jul 7 06:17:20.135012 kubelet[3132]: I0707 06:17:20.134989 3132 scope.go:117] "RemoveContainer" containerID="e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6"
Jul 7 06:17:20.135160 containerd[1732]: time="2025-07-07T06:17:20.135131785Z" level=error msg="ContainerStatus for \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\": not found"
Jul 7 06:17:20.135272 kubelet[3132]: E0707 06:17:20.135250 3132 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\": not found" containerID="e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6"
Jul 7 06:17:20.135317 kubelet[3132]: I0707 06:17:20.135274 3132 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6"} err="failed to get container status \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\": rpc error: code = NotFound desc = an error occurred when try to find container \"e82990396632351f41c1b885513a134dc0dcad8c0c3dc1f3bb32e23505f93ae6\": not found"
Jul 7 06:17:20.135317 kubelet[3132]: I0707 06:17:20.135291 3132 scope.go:117] "RemoveContainer" containerID="f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354"
Jul 7 06:17:20.135512 containerd[1732]: time="2025-07-07T06:17:20.135483310Z" level=error msg="ContainerStatus for \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\": not found"
Jul 7 06:17:20.135621 kubelet[3132]: E0707 06:17:20.135589 3132 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\": not found" containerID="f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354"
Jul 7 06:17:20.135661 kubelet[3132]: I0707 06:17:20.135622 3132 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354"} err="failed to get container status \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\": rpc error: code = NotFound desc = an error occurred when try to find container \"f481dcf20a28e2f261c2ed9883bcfbf6259742916557bba0baceadaf86e1b354\": not found"
Jul 7 06:17:20.135661 kubelet[3132]: I0707 06:17:20.135641 3132 scope.go:117] "RemoveContainer" containerID="36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b"
Jul 7 06:17:20.135881 containerd[1732]: time="2025-07-07T06:17:20.135829665Z" level=error msg="ContainerStatus for \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\": not found"
Jul 7 06:17:20.135965 kubelet[3132]: E0707 06:17:20.135949 3132 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\": not found" containerID="36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b"
Jul 7 06:17:20.135994 kubelet[3132]: I0707 06:17:20.135970 3132 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b"} err="failed to get container status \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\": rpc error: code = NotFound desc = an error occurred when try to find container \"36921879596d1884e5628e9461dd6e7fec8d6516880dfd2ea58c82ae613aa38b\": not found"
Jul 7 06:17:20.135994 kubelet[3132]: I0707 06:17:20.135987 3132 scope.go:117] "RemoveContainer" containerID="979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962"
Jul 7 06:17:20.136160 containerd[1732]: time="2025-07-07T06:17:20.136133806Z" level=error msg="ContainerStatus for \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\": not found"
Jul 7 06:17:20.136246 kubelet[3132]: E0707 06:17:20.136227 3132 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\": not found" containerID="979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962"
Jul 7 06:17:20.136284 kubelet[3132]: I0707 06:17:20.136249 3132 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962"} err="failed to get container status \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\": rpc error: code = NotFound desc = an error occurred when try to find container \"979e9cbf81c4a077c06031ec74851c0ef2249a3da2171d7973d37d2e38e08962\": not found"
Jul 7 06:17:20.695190 kubelet[3132]: I0707 06:17:20.695131 3132 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a36b5f6-4940-4e9e-95a9-23f797afb918" path="/var/lib/kubelet/pods/2a36b5f6-4940-4e9e-95a9-23f797afb918/volumes"
Jul 7 06:17:20.695728 kubelet[3132]: I0707 06:17:20.695538 3132 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e36f7f0a-096c-41c4-849d-fc3730f6dd90" path="/var/lib/kubelet/pods/e36f7f0a-096c-41c4-849d-fc3730f6dd90/volumes"
Jul 7 06:17:21.056387 sshd[4662]: Connection closed by 10.200.16.10 port 55054
Jul 7 06:17:21.058521 sshd-session[4660]: pam_unix(sshd:session): session closed for user core
Jul 7 06:17:21.062189 systemd[1]: sshd@21-10.200.4.32:22-10.200.16.10:55054.service: Deactivated successfully.
Jul 7 06:17:21.064578 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 06:17:21.066854 systemd-logind[1706]: Session 24 logged out. Waiting for processes to exit.
Jul 7 06:17:21.068090 systemd-logind[1706]: Removed session 24.
Jul 7 06:17:21.170105 systemd[1]: Started sshd@22-10.200.4.32:22-10.200.16.10:57756.service - OpenSSH per-connection server daemon (10.200.16.10:57756).
Jul 7 06:17:21.761492 kubelet[3132]: E0707 06:17:21.761382 3132 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 7 06:17:21.768296 sshd[4814]: Accepted publickey for core from 10.200.16.10 port 57756 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ
Jul 7 06:17:21.769603 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:17:21.774284 systemd-logind[1706]: New session 25 of user core.
Jul 7 06:17:21.778889 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 06:17:22.526730 kubelet[3132]: I0707 06:17:22.526675 3132 memory_manager.go:355] "RemoveStaleState removing state" podUID="e36f7f0a-096c-41c4-849d-fc3730f6dd90" containerName="cilium-agent"
Jul 7 06:17:22.528351 kubelet[3132]: I0707 06:17:22.528331 3132 memory_manager.go:355] "RemoveStaleState removing state" podUID="2a36b5f6-4940-4e9e-95a9-23f797afb918" containerName="cilium-operator"
Jul 7 06:17:22.549831 systemd[1]: Created slice kubepods-burstable-pod12067e34_e89c_428d_b29a_c5e808f26b40.slice - libcontainer container kubepods-burstable-pod12067e34_e89c_428d_b29a_c5e808f26b40.slice.
Jul 7 06:17:22.555503 kubelet[3132]: I0707 06:17:22.555376 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/12067e34-e89c-428d-b29a-c5e808f26b40-cilium-ipsec-secrets\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m"
Jul 7 06:17:22.556955 kubelet[3132]: I0707 06:17:22.555636 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/12067e34-e89c-428d-b29a-c5e808f26b40-cilium-run\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m"
Jul 7 06:17:22.556955 kubelet[3132]: I0707 06:17:22.555665 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/12067e34-e89c-428d-b29a-c5e808f26b40-hostproc\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m"
Jul 7 06:17:22.556955 kubelet[3132]: I0707 06:17:22.555693 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/12067e34-e89c-428d-b29a-c5e808f26b40-host-proc-sys-kernel\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m"
Jul 7 06:17:22.556955 kubelet[3132]: I0707 06:17:22.555736 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/12067e34-e89c-428d-b29a-c5e808f26b40-cni-path\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m"
Jul 7 06:17:22.556955 kubelet[3132]: I0707 06:17:22.555764 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/12067e34-e89c-428d-b29a-c5e808f26b40-hubble-tls\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m" Jul 7 06:17:22.556955 kubelet[3132]: I0707 06:17:22.555790 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/12067e34-e89c-428d-b29a-c5e808f26b40-cilium-config-path\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m" Jul 7 06:17:22.557184 kubelet[3132]: I0707 06:17:22.555810 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/12067e34-e89c-428d-b29a-c5e808f26b40-cilium-cgroup\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m" Jul 7 06:17:22.557184 kubelet[3132]: I0707 06:17:22.555830 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12067e34-e89c-428d-b29a-c5e808f26b40-lib-modules\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m" Jul 7 06:17:22.557184 kubelet[3132]: I0707 06:17:22.555854 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/12067e34-e89c-428d-b29a-c5e808f26b40-host-proc-sys-net\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m" Jul 7 06:17:22.557184 kubelet[3132]: I0707 06:17:22.555882 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/12067e34-e89c-428d-b29a-c5e808f26b40-bpf-maps\") pod 
\"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m" Jul 7 06:17:22.557184 kubelet[3132]: I0707 06:17:22.555901 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/12067e34-e89c-428d-b29a-c5e808f26b40-etc-cni-netd\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m" Jul 7 06:17:22.557184 kubelet[3132]: I0707 06:17:22.555923 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/12067e34-e89c-428d-b29a-c5e808f26b40-clustermesh-secrets\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m" Jul 7 06:17:22.557325 kubelet[3132]: I0707 06:17:22.555946 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chkvh\" (UniqueName: \"kubernetes.io/projected/12067e34-e89c-428d-b29a-c5e808f26b40-kube-api-access-chkvh\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m" Jul 7 06:17:22.557325 kubelet[3132]: I0707 06:17:22.555971 3132 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12067e34-e89c-428d-b29a-c5e808f26b40-xtables-lock\") pod \"cilium-jj72m\" (UID: \"12067e34-e89c-428d-b29a-c5e808f26b40\") " pod="kube-system/cilium-jj72m" Jul 7 06:17:22.621083 sshd[4816]: Connection closed by 10.200.16.10 port 57756 Jul 7 06:17:22.621939 sshd-session[4814]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:22.625927 systemd[1]: sshd@22-10.200.4.32:22-10.200.16.10:57756.service: Deactivated successfully. Jul 7 06:17:22.627798 systemd[1]: session-25.scope: Deactivated successfully. 
Jul 7 06:17:22.629087 systemd-logind[1706]: Session 25 logged out. Waiting for processes to exit. Jul 7 06:17:22.630391 systemd-logind[1706]: Removed session 25. Jul 7 06:17:22.732196 systemd[1]: Started sshd@23-10.200.4.32:22-10.200.16.10:57760.service - OpenSSH per-connection server daemon (10.200.16.10:57760). Jul 7 06:17:22.859784 containerd[1732]: time="2025-07-07T06:17:22.859678196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jj72m,Uid:12067e34-e89c-428d-b29a-c5e808f26b40,Namespace:kube-system,Attempt:0,}" Jul 7 06:17:22.902099 containerd[1732]: time="2025-07-07T06:17:22.902051181Z" level=info msg="connecting to shim c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764" address="unix:///run/containerd/s/ebae671c26ace2f0fc265c67ba82795526cf626a14568dabc7e05c5e8ec1bd2e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:17:22.923902 systemd[1]: Started cri-containerd-c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764.scope - libcontainer container c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764. 
Jul 7 06:17:22.949241 containerd[1732]: time="2025-07-07T06:17:22.949211426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jj72m,Uid:12067e34-e89c-428d-b29a-c5e808f26b40,Namespace:kube-system,Attempt:0,} returns sandbox id \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\"" Jul 7 06:17:22.951721 containerd[1732]: time="2025-07-07T06:17:22.951671272Z" level=info msg="CreateContainer within sandbox \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 06:17:22.966613 containerd[1732]: time="2025-07-07T06:17:22.966587576Z" level=info msg="Container 98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:22.987363 containerd[1732]: time="2025-07-07T06:17:22.987337277Z" level=info msg="CreateContainer within sandbox \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9\"" Jul 7 06:17:22.987803 containerd[1732]: time="2025-07-07T06:17:22.987773044Z" level=info msg="StartContainer for \"98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9\"" Jul 7 06:17:22.989035 containerd[1732]: time="2025-07-07T06:17:22.989005611Z" level=info msg="connecting to shim 98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9" address="unix:///run/containerd/s/ebae671c26ace2f0fc265c67ba82795526cf626a14568dabc7e05c5e8ec1bd2e" protocol=ttrpc version=3 Jul 7 06:17:23.003831 systemd[1]: Started cri-containerd-98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9.scope - libcontainer container 98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9. Jul 7 06:17:23.033583 systemd[1]: cri-containerd-98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9.scope: Deactivated successfully. 
Jul 7 06:17:23.035215 containerd[1732]: time="2025-07-07T06:17:23.035185847Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9\" id:\"98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9\" pid:4888 exited_at:{seconds:1751869043 nanos:34853416}" Jul 7 06:17:23.036112 containerd[1732]: time="2025-07-07T06:17:23.035760715Z" level=info msg="received exit event container_id:\"98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9\" id:\"98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9\" pid:4888 exited_at:{seconds:1751869043 nanos:34853416}" Jul 7 06:17:23.037003 containerd[1732]: time="2025-07-07T06:17:23.036985001Z" level=info msg="StartContainer for \"98f97a2345dbc6f04224095cf94a865f44544d5a99cc88ae0c5afd2e1e19eba9\" returns successfully" Jul 7 06:17:23.328081 sshd[4831]: Accepted publickey for core from 10.200.16.10 port 57760 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:23.329405 sshd-session[4831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:23.333803 systemd-logind[1706]: New session 26 of user core. Jul 7 06:17:23.336848 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 06:17:23.751591 sshd[4923]: Connection closed by 10.200.16.10 port 57760 Jul 7 06:17:23.752192 sshd-session[4831]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:23.755879 systemd[1]: sshd@23-10.200.4.32:22-10.200.16.10:57760.service: Deactivated successfully. Jul 7 06:17:23.757772 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 06:17:23.758499 systemd-logind[1706]: Session 26 logged out. Waiting for processes to exit. Jul 7 06:17:23.759983 systemd-logind[1706]: Removed session 26. Jul 7 06:17:23.861758 systemd[1]: Started sshd@24-10.200.4.32:22-10.200.16.10:57766.service - OpenSSH per-connection server daemon (10.200.16.10:57766). 
Jul 7 06:17:24.073776 containerd[1732]: time="2025-07-07T06:17:24.073545919Z" level=info msg="CreateContainer within sandbox \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 06:17:24.111818 containerd[1732]: time="2025-07-07T06:17:24.111783480Z" level=info msg="Container ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:24.115815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1302472940.mount: Deactivated successfully. Jul 7 06:17:24.131500 containerd[1732]: time="2025-07-07T06:17:24.131471313Z" level=info msg="CreateContainer within sandbox \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea\"" Jul 7 06:17:24.131989 containerd[1732]: time="2025-07-07T06:17:24.131966265Z" level=info msg="StartContainer for \"ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea\"" Jul 7 06:17:24.133088 containerd[1732]: time="2025-07-07T06:17:24.133042049Z" level=info msg="connecting to shim ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea" address="unix:///run/containerd/s/ebae671c26ace2f0fc265c67ba82795526cf626a14568dabc7e05c5e8ec1bd2e" protocol=ttrpc version=3 Jul 7 06:17:24.151911 systemd[1]: Started cri-containerd-ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea.scope - libcontainer container ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea. Jul 7 06:17:24.192349 systemd[1]: cri-containerd-ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea.scope: Deactivated successfully. 
Jul 7 06:17:24.193584 containerd[1732]: time="2025-07-07T06:17:24.193559623Z" level=info msg="received exit event container_id:\"ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea\" id:\"ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea\" pid:4945 exited_at:{seconds:1751869044 nanos:193323909}" Jul 7 06:17:24.193881 containerd[1732]: time="2025-07-07T06:17:24.193814844Z" level=info msg="StartContainer for \"ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea\" returns successfully" Jul 7 06:17:24.194274 containerd[1732]: time="2025-07-07T06:17:24.194239239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea\" id:\"ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea\" pid:4945 exited_at:{seconds:1751869044 nanos:193323909}" Jul 7 06:17:24.212166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebf3dc8ab7eca1e85176a25fe89c59881f686547067dcdb9338c4a66841ddfea-rootfs.mount: Deactivated successfully. Jul 7 06:17:24.463414 sshd[4930]: Accepted publickey for core from 10.200.16.10 port 57766 ssh2: RSA SHA256:TtYY2cCdjUVnQ2wrlCI6ybohLXcXMigw2WWdDIb49hQ Jul 7 06:17:24.464924 sshd-session[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:17:24.469128 systemd-logind[1706]: New session 27 of user core. Jul 7 06:17:24.476886 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 7 06:17:25.076309 containerd[1732]: time="2025-07-07T06:17:25.076253547Z" level=info msg="CreateContainer within sandbox \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 06:17:25.107097 containerd[1732]: time="2025-07-07T06:17:25.107062475Z" level=info msg="Container 8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:25.126456 containerd[1732]: time="2025-07-07T06:17:25.126426071Z" level=info msg="CreateContainer within sandbox \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111\"" Jul 7 06:17:25.126942 containerd[1732]: time="2025-07-07T06:17:25.126893556Z" level=info msg="StartContainer for \"8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111\"" Jul 7 06:17:25.128451 containerd[1732]: time="2025-07-07T06:17:25.128395014Z" level=info msg="connecting to shim 8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111" address="unix:///run/containerd/s/ebae671c26ace2f0fc265c67ba82795526cf626a14568dabc7e05c5e8ec1bd2e" protocol=ttrpc version=3 Jul 7 06:17:25.151898 systemd[1]: Started cri-containerd-8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111.scope - libcontainer container 8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111. Jul 7 06:17:25.183779 systemd[1]: cri-containerd-8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111.scope: Deactivated successfully. 
Jul 7 06:17:25.186691 containerd[1732]: time="2025-07-07T06:17:25.186627413Z" level=info msg="received exit event container_id:\"8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111\" id:\"8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111\" pid:5002 exited_at:{seconds:1751869045 nanos:186388175}" Jul 7 06:17:25.187716 containerd[1732]: time="2025-07-07T06:17:25.187639326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111\" id:\"8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111\" pid:5002 exited_at:{seconds:1751869045 nanos:186388175}" Jul 7 06:17:25.193612 containerd[1732]: time="2025-07-07T06:17:25.193589772Z" level=info msg="StartContainer for \"8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111\" returns successfully" Jul 7 06:17:25.203685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a1bf05ff1efcde7a9fa85c1fe8cdadaa4d6afa2775254dd32bee300cfc13111-rootfs.mount: Deactivated successfully. Jul 7 06:17:26.082600 containerd[1732]: time="2025-07-07T06:17:26.081799672Z" level=info msg="CreateContainer within sandbox \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 06:17:26.117734 containerd[1732]: time="2025-07-07T06:17:26.116224651Z" level=info msg="Container a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:26.120278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000341499.mount: Deactivated successfully. 
Jul 7 06:17:26.132718 containerd[1732]: time="2025-07-07T06:17:26.132681116Z" level=info msg="CreateContainer within sandbox \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5\"" Jul 7 06:17:26.133154 containerd[1732]: time="2025-07-07T06:17:26.133049972Z" level=info msg="StartContainer for \"a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5\"" Jul 7 06:17:26.134044 containerd[1732]: time="2025-07-07T06:17:26.134006117Z" level=info msg="connecting to shim a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5" address="unix:///run/containerd/s/ebae671c26ace2f0fc265c67ba82795526cf626a14568dabc7e05c5e8ec1bd2e" protocol=ttrpc version=3 Jul 7 06:17:26.157848 systemd[1]: Started cri-containerd-a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5.scope - libcontainer container a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5. Jul 7 06:17:26.181166 systemd[1]: cri-containerd-a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5.scope: Deactivated successfully. 
Jul 7 06:17:26.182009 containerd[1732]: time="2025-07-07T06:17:26.181980543Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5\" id:\"a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5\" pid:5041 exited_at:{seconds:1751869046 nanos:181528343}" Jul 7 06:17:26.185293 containerd[1732]: time="2025-07-07T06:17:26.185200682Z" level=info msg="received exit event container_id:\"a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5\" id:\"a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5\" pid:5041 exited_at:{seconds:1751869046 nanos:181528343}" Jul 7 06:17:26.190543 containerd[1732]: time="2025-07-07T06:17:26.190522719Z" level=info msg="StartContainer for \"a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5\" returns successfully" Jul 7 06:17:26.762169 kubelet[3132]: E0707 06:17:26.762124 3132 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 06:17:27.086759 containerd[1732]: time="2025-07-07T06:17:27.086631887Z" level=info msg="CreateContainer within sandbox \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 06:17:27.107277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a59c6736798bceabc3562c9f74589aaf65835a65df641b9cebb0d489ae51a1e5-rootfs.mount: Deactivated successfully. 
Jul 7 06:17:27.113738 containerd[1732]: time="2025-07-07T06:17:27.112246129Z" level=info msg="Container 7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:17:27.131556 containerd[1732]: time="2025-07-07T06:17:27.131530723Z" level=info msg="CreateContainer within sandbox \"c573f4a73769df0f8ff123c890ec233bdf995516e1287c79ce782bab365ba764\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7\"" Jul 7 06:17:27.132061 containerd[1732]: time="2025-07-07T06:17:27.131956109Z" level=info msg="StartContainer for \"7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7\"" Jul 7 06:17:27.133361 containerd[1732]: time="2025-07-07T06:17:27.133334745Z" level=info msg="connecting to shim 7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7" address="unix:///run/containerd/s/ebae671c26ace2f0fc265c67ba82795526cf626a14568dabc7e05c5e8ec1bd2e" protocol=ttrpc version=3 Jul 7 06:17:27.152948 systemd[1]: Started cri-containerd-7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7.scope - libcontainer container 7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7. 
Jul 7 06:17:27.185165 containerd[1732]: time="2025-07-07T06:17:27.185136652Z" level=info msg="StartContainer for \"7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7\" returns successfully" Jul 7 06:17:27.240545 containerd[1732]: time="2025-07-07T06:17:27.240521975Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7\" id:\"6384517d7e7a6e77b7320d5a8788e50587a2ae6b4790539377d96d95b15b6a7f\" pid:5110 exited_at:{seconds:1751869047 nanos:240313781}" Jul 7 06:17:27.498747 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-vaes-avx10_512)) Jul 7 06:17:28.108034 kubelet[3132]: I0707 06:17:28.107959 3132 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jj72m" podStartSLOduration=6.107938183 podStartE2EDuration="6.107938183s" podCreationTimestamp="2025-07-07 06:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:17:28.107462449 +0000 UTC m=+161.527663716" watchObservedRunningTime="2025-07-07 06:17:28.107938183 +0000 UTC m=+161.528139449" Jul 7 06:17:29.005196 containerd[1732]: time="2025-07-07T06:17:29.005142955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7\" id:\"3de637454dca38065801bfb779bd5c381ce85b8e944d73f4a881e6fee3f333c4\" pid:5250 exit_status:1 exited_at:{seconds:1751869049 nanos:4748366}" Jul 7 06:17:30.030694 systemd-networkd[1352]: lxc_health: Link UP Jul 7 06:17:30.033219 systemd-networkd[1352]: lxc_health: Gained carrier Jul 7 06:17:30.363579 kubelet[3132]: I0707 06:17:30.361831 3132 setters.go:602] "Node became not ready" node="ci-4372.0.1-a-6edf51656b" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T06:17:30Z","lastTransitionTime":"2025-07-07T06:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 7 06:17:31.141532 containerd[1732]: time="2025-07-07T06:17:31.141470782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7\" id:\"5b97781f7e499b05d54461c970ed6e29ce4198fbb3c50b2880cbf3563de08ad3\" pid:5630 exited_at:{seconds:1751869051 nanos:141233913}" Jul 7 06:17:31.144369 kubelet[3132]: E0707 06:17:31.144302 3132 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35968->127.0.0.1:34097: write tcp 127.0.0.1:35968->127.0.0.1:34097: write: broken pipe Jul 7 06:17:31.249054 systemd-networkd[1352]: lxc_health: Gained IPv6LL Jul 7 06:17:33.278303 containerd[1732]: time="2025-07-07T06:17:33.278244521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7\" id:\"23dec9b5a2e33887bc36e1d9154d20f2433937c8a59efa08359b0fc7fe5930d3\" pid:5671 exited_at:{seconds:1751869053 nanos:277797452}" Jul 7 06:17:35.367835 containerd[1732]: time="2025-07-07T06:17:35.367718102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e111eef1742afde4599260f472abb3bf0b292fee04ae91046d636f24cfc0af7\" id:\"5ba975c42b2a9379640e07a0b2c0ddc2eda80fadd2ed54c6e52ed6ab642e8906\" pid:5700 exited_at:{seconds:1751869055 nanos:367200714}" Jul 7 06:17:35.499201 sshd[4980]: Connection closed by 10.200.16.10 port 57766 Jul 7 06:17:35.499921 sshd-session[4930]: pam_unix(sshd:session): session closed for user core Jul 7 06:17:35.504517 systemd[1]: sshd@24-10.200.4.32:22-10.200.16.10:57766.service: Deactivated successfully. 
Jul 7 06:17:35.506546 systemd[1]: session-27.scope: Deactivated successfully. Jul 7 06:17:35.507676 systemd-logind[1706]: Session 27 logged out. Waiting for processes to exit. Jul 7 06:17:35.509289 systemd-logind[1706]: Removed session 27.