Jun 20 18:52:35.101231 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:12:40 -00 2025
Jun 20 18:52:35.101271 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 18:52:35.101286 kernel: BIOS-provided physical RAM map:
Jun 20 18:52:35.101297 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jun 20 18:52:35.101308 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jun 20 18:52:35.101319 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jun 20 18:52:35.101332 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jun 20 18:52:35.101344 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jun 20 18:52:35.101359 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jun 20 18:52:35.101370 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jun 20 18:52:35.101382 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jun 20 18:52:35.101393 kernel: printk: bootconsole [earlyser0] enabled
Jun 20 18:52:35.101404 kernel: NX (Execute Disable) protection: active
Jun 20 18:52:35.101416 kernel: APIC: Static calls initialized
Jun 20 18:52:35.101434 kernel: efi: EFI v2.7 by Microsoft
Jun 20 18:52:35.101447 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 RNG=0x3ffd1018
Jun 20 18:52:35.101460 kernel: random: crng init done
Jun 20 18:52:35.101473 kernel: secureboot: Secure boot disabled
Jun 20 18:52:35.101487 kernel: SMBIOS 3.1.0 present.
Jun 20 18:52:35.101501 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jun 20 18:52:35.101514 kernel: Hypervisor detected: Microsoft Hyper-V
Jun 20 18:52:35.101527 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jun 20 18:52:35.101540 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Jun 20 18:52:35.101552 kernel: Hyper-V: Nested features: 0x1e0101
Jun 20 18:52:35.101568 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jun 20 18:52:35.101581 kernel: Hyper-V: Using hypercall for remote TLB flush
Jun 20 18:52:35.101594 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 18:52:35.101607 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jun 20 18:52:35.101621 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jun 20 18:52:35.101634 kernel: tsc: Detected 2593.905 MHz processor
Jun 20 18:52:35.101647 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 18:52:35.101660 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 18:52:35.101673 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jun 20 18:52:35.101689 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 20 18:52:35.101702 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 18:52:35.101715 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jun 20 18:52:35.101727 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jun 20 18:52:35.101740 kernel: Using GB pages for direct mapping
Jun 20 18:52:35.101753 kernel: ACPI: Early table checksum verification disabled
Jun 20 18:52:35.101766 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jun 20 18:52:35.101784 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:52:35.101800 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:52:35.101814 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jun 20 18:52:35.101828 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jun 20 18:52:35.101841 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:52:35.101856 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:52:35.101870 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:52:35.101886 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:52:35.101900 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:52:35.101914 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:52:35.101928 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jun 20 18:52:35.101942 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jun 20 18:52:35.101956 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jun 20 18:52:35.101970 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jun 20 18:52:35.101984 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jun 20 18:52:35.101998 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jun 20 18:52:35.102014 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jun 20 18:52:35.102028 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jun 20 18:52:35.102042 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jun 20 18:52:35.102055 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jun 20 18:52:35.102069 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jun 20 18:52:35.102083 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jun 20 18:52:35.102097 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jun 20 18:52:35.102122 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jun 20 18:52:35.102136 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jun 20 18:52:35.102152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jun 20 18:52:35.102166 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jun 20 18:52:35.102180 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jun 20 18:52:35.102194 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jun 20 18:52:35.102208 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jun 20 18:52:35.102222 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jun 20 18:52:35.102236 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jun 20 18:52:35.102250 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jun 20 18:52:35.102267 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jun 20 18:52:35.102281 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jun 20 18:52:35.102295 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jun 20 18:52:35.102309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jun 20 18:52:35.102322 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jun 20 18:52:35.102336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jun 20 18:52:35.102350 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jun 20 18:52:35.102364 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jun 20 18:52:35.102378 kernel: Zone ranges:
Jun 20 18:52:35.102394 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 18:52:35.102408 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 20 18:52:35.102422 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 18:52:35.102436 kernel: Movable zone start for each node
Jun 20 18:52:35.102449 kernel: Early memory node ranges
Jun 20 18:52:35.102463 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jun 20 18:52:35.102477 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jun 20 18:52:35.102491 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jun 20 18:52:35.102504 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jun 20 18:52:35.102521 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jun 20 18:52:35.102534 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 18:52:35.102548 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jun 20 18:52:35.102562 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jun 20 18:52:35.102576 kernel: ACPI: PM-Timer IO Port: 0x408
Jun 20 18:52:35.102589 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jun 20 18:52:35.102603 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jun 20 18:52:35.102617 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 18:52:35.102631 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 18:52:35.102647 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jun 20 18:52:35.102661 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jun 20 18:52:35.102675 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jun 20 18:52:35.102689 kernel: Booting paravirtualized kernel on Hyper-V
Jun 20 18:52:35.102703 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 18:52:35.102717 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 18:52:35.102731 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jun 20 18:52:35.102745 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jun 20 18:52:35.102759 kernel: pcpu-alloc: [0] 0 1
Jun 20 18:52:35.102775 kernel: Hyper-V: PV spinlocks enabled
Jun 20 18:52:35.102789 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 18:52:35.102804 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 18:52:35.102819 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 18:52:35.102832 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jun 20 18:52:35.102846 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 18:52:35.102860 kernel: Fallback order for Node 0: 0
Jun 20 18:52:35.102873 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jun 20 18:52:35.102890 kernel: Policy zone: Normal
Jun 20 18:52:35.102916 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 18:52:35.102930 kernel: software IO TLB: area num 2.
Jun 20 18:52:35.102948 kernel: Memory: 8075040K/8387460K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 312164K reserved, 0K cma-reserved)
Jun 20 18:52:35.102963 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 18:52:35.102979 kernel: ftrace: allocating 37938 entries in 149 pages
Jun 20 18:52:35.102995 kernel: ftrace: allocated 149 pages with 4 groups
Jun 20 18:52:35.103010 kernel: Dynamic Preempt: voluntary
Jun 20 18:52:35.103025 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 18:52:35.103040 kernel: rcu: RCU event tracing is enabled.
Jun 20 18:52:35.103055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 18:52:35.103073 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 18:52:35.103088 kernel: Rude variant of Tasks RCU enabled.
Jun 20 18:52:35.105763 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 18:52:35.105781 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 18:52:35.105791 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 18:52:35.105802 kernel: Using NULL legacy PIC
Jun 20 18:52:35.105817 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jun 20 18:52:35.105828 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 18:52:35.105837 kernel: Console: colour dummy device 80x25
Jun 20 18:52:35.105845 kernel: printk: console [tty1] enabled
Jun 20 18:52:35.105853 kernel: printk: console [ttyS0] enabled
Jun 20 18:52:35.105862 kernel: printk: bootconsole [earlyser0] disabled
Jun 20 18:52:35.105872 kernel: ACPI: Core revision 20230628
Jun 20 18:52:35.105881 kernel: Failed to register legacy timer interrupt
Jun 20 18:52:35.105892 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 18:52:35.105903 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jun 20 18:52:35.105914 kernel: Hyper-V: Using IPI hypercalls
Jun 20 18:52:35.105924 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jun 20 18:52:35.105934 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jun 20 18:52:35.105944 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jun 20 18:52:35.105953 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jun 20 18:52:35.105963 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jun 20 18:52:35.105972 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jun 20 18:52:35.105981 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Jun 20 18:52:35.105991 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jun 20 18:52:35.106002 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jun 20 18:52:35.106011 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 18:52:35.106018 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 18:52:35.106026 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 18:52:35.106037 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jun 20 18:52:35.106046 kernel: RETBleed: Vulnerable
Jun 20 18:52:35.106054 kernel: Speculative Store Bypass: Vulnerable
Jun 20 18:52:35.106065 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 20 18:52:35.106073 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jun 20 18:52:35.106085 kernel: ITS: Mitigation: Aligned branch/return thunks
Jun 20 18:52:35.106094 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 18:52:35.106111 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 18:52:35.106122 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 18:52:35.106130 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jun 20 18:52:35.106138 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jun 20 18:52:35.106149 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jun 20 18:52:35.106157 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 18:52:35.106168 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jun 20 18:52:35.106176 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jun 20 18:52:35.106186 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jun 20 18:52:35.106198 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jun 20 18:52:35.106207 kernel: Freeing SMP alternatives memory: 32K
Jun 20 18:52:35.106217 kernel: pid_max: default: 32768 minimum: 301
Jun 20 18:52:35.106225 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jun 20 18:52:35.106236 kernel: landlock: Up and running.
Jun 20 18:52:35.106244 kernel: SELinux: Initializing.
Jun 20 18:52:35.106253 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 18:52:35.106263 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jun 20 18:52:35.106271 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jun 20 18:52:35.106282 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:52:35.106292 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:52:35.106304 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 18:52:35.106316 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jun 20 18:52:35.106325 kernel: signal: max sigframe size: 3632
Jun 20 18:52:35.106338 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 18:52:35.106350 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 18:52:35.106361 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jun 20 18:52:35.106370 kernel: smp: Bringing up secondary CPUs ...
Jun 20 18:52:35.106378 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 18:52:35.106386 kernel: .... node #0, CPUs: #1
Jun 20 18:52:35.106397 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jun 20 18:52:35.106412 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jun 20 18:52:35.106426 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 18:52:35.106435 kernel: smpboot: Max logical packages: 1
Jun 20 18:52:35.106443 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jun 20 18:52:35.106461 kernel: devtmpfs: initialized
Jun 20 18:52:35.106479 kernel: x86/mm: Memory block size: 128MB
Jun 20 18:52:35.106490 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jun 20 18:52:35.106501 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 18:52:35.106520 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 18:52:35.106536 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 18:52:35.106545 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 18:52:35.106554 kernel: audit: initializing netlink subsys (disabled)
Jun 20 18:52:35.106569 kernel: audit: type=2000 audit(1750445553.029:1): state=initialized audit_enabled=0 res=1
Jun 20 18:52:35.106587 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 18:52:35.106602 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 18:52:35.106615 kernel: cpuidle: using governor menu
Jun 20 18:52:35.106625 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 18:52:35.106638 kernel: dca service started, version 1.12.1
Jun 20 18:52:35.106653 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jun 20 18:52:35.106670 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 18:52:35.106681 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 18:52:35.106689 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 18:52:35.106703 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 18:52:35.106716 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 18:52:35.106724 kernel: ACPI: Added _OSI(Module Device)
Jun 20 18:52:35.106741 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 18:52:35.106758 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 18:52:35.106771 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 18:52:35.106779 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jun 20 18:52:35.106792 kernel: ACPI: Interpreter enabled
Jun 20 18:52:35.106811 kernel: ACPI: PM: (supports S0 S5)
Jun 20 18:52:35.106826 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 18:52:35.106835 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 18:52:35.106845 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jun 20 18:52:35.106870 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jun 20 18:52:35.106886 kernel: iommu: Default domain type: Translated
Jun 20 18:52:35.106896 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 18:52:35.106905 kernel: efivars: Registered efivars operations
Jun 20 18:52:35.106919 kernel: PCI: Using ACPI for IRQ routing
Jun 20 18:52:35.106935 kernel: PCI: System does not support PCI
Jun 20 18:52:35.106947 kernel: vgaarb: loaded
Jun 20 18:52:35.106955 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jun 20 18:52:35.106967 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 18:52:35.106990 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 18:52:35.106999 kernel: pnp: PnP ACPI init
Jun 20 18:52:35.107007 kernel: pnp: PnP ACPI: found 3 devices
Jun 20 18:52:35.107016 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 18:52:35.107031 kernel: NET: Registered PF_INET protocol family
Jun 20 18:52:35.107048 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jun 20 18:52:35.107062 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jun 20 18:52:35.107070 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 18:52:35.107080 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 18:52:35.107762 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jun 20 18:52:35.107781 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jun 20 18:52:35.107809 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 18:52:35.107834 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jun 20 18:52:35.107849 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 18:52:35.107864 kernel: NET: Registered PF_XDP protocol family
Jun 20 18:52:35.107878 kernel: PCI: CLS 0 bytes, default 64
Jun 20 18:52:35.107894 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 20 18:52:35.107908 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB)
Jun 20 18:52:35.107928 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jun 20 18:52:35.107943 kernel: Initialise system trusted keyrings
Jun 20 18:52:35.107957 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jun 20 18:52:35.107971 kernel: Key type asymmetric registered
Jun 20 18:52:35.107990 kernel: Asymmetric key parser 'x509' registered
Jun 20 18:52:35.108007 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jun 20 18:52:35.108026 kernel: io scheduler mq-deadline registered
Jun 20 18:52:35.108044 kernel: io scheduler kyber registered
Jun 20 18:52:35.108061 kernel: io scheduler bfq registered
Jun 20 18:52:35.108084 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 18:52:35.110079 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 18:52:35.110113 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 18:52:35.110132 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jun 20 18:52:35.110145 kernel: i8042: PNP: No PS/2 controller found.
Jun 20 18:52:35.110343 kernel: rtc_cmos 00:02: registered as rtc0
Jun 20 18:52:35.110469 kernel: rtc_cmos 00:02: setting system clock to 2025-06-20T18:52:34 UTC (1750445554)
Jun 20 18:52:35.110586 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jun 20 18:52:35.110608 kernel: intel_pstate: CPU model not supported
Jun 20 18:52:35.110616 kernel: efifb: probing for efifb
Jun 20 18:52:35.110628 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jun 20 18:52:35.110637 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jun 20 18:52:35.110647 kernel: efifb: scrolling: redraw
Jun 20 18:52:35.110656 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 20 18:52:35.110665 kernel: Console: switching to colour frame buffer device 128x48
Jun 20 18:52:35.110675 kernel: fb0: EFI VGA frame buffer device
Jun 20 18:52:35.110684 kernel: pstore: Using crash dump compression: deflate
Jun 20 18:52:35.110698 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 20 18:52:35.110706 kernel: NET: Registered PF_INET6 protocol family
Jun 20 18:52:35.110718 kernel: Segment Routing with IPv6
Jun 20 18:52:35.110726 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 18:52:35.110736 kernel: NET: Registered PF_PACKET protocol family
Jun 20 18:52:35.110745 kernel: Key type dns_resolver registered
Jun 20 18:52:35.110753 kernel: IPI shorthand broadcast: enabled
Jun 20 18:52:35.110764 kernel: sched_clock: Marking stable (845003100, 49525400)->(1127444100, -232915600)
Jun 20 18:52:35.110773 kernel: registered taskstats version 1
Jun 20 18:52:35.110786 kernel: Loading compiled-in X.509 certificates
Jun 20 18:52:35.110795 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 583832681762bbd3c2cbcca308896cbba88c4497'
Jun 20 18:52:35.110806 kernel: Key type .fscrypt registered
Jun 20 18:52:35.110814 kernel: Key type fscrypt-provisioning registered
Jun 20 18:52:35.110824 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 18:52:35.110833 kernel: ima: Allocated hash algorithm: sha1
Jun 20 18:52:35.110842 kernel: ima: No architecture policies found
Jun 20 18:52:35.110853 kernel: clk: Disabling unused clocks
Jun 20 18:52:35.110861 kernel: Freeing unused kernel image (initmem) memory: 43488K
Jun 20 18:52:35.110875 kernel: Write protecting the kernel read-only data: 38912k
Jun 20 18:52:35.110883 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jun 20 18:52:35.110894 kernel: Run /init as init process
Jun 20 18:52:35.110903 kernel: with arguments:
Jun 20 18:52:35.110912 kernel: /init
Jun 20 18:52:35.110922 kernel: with environment:
Jun 20 18:52:35.110930 kernel: HOME=/
Jun 20 18:52:35.110941 kernel: TERM=linux
Jun 20 18:52:35.110949 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 18:52:35.110963 systemd[1]: Successfully made /usr/ read-only.
Jun 20 18:52:35.110976 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 18:52:35.110988 systemd[1]: Detected virtualization microsoft.
Jun 20 18:52:35.111000 systemd[1]: Detected architecture x86-64.
Jun 20 18:52:35.111010 systemd[1]: Running in initrd.
Jun 20 18:52:35.111020 systemd[1]: No hostname configured, using default hostname.
Jun 20 18:52:35.111031 systemd[1]: Hostname set to .
Jun 20 18:52:35.111045 systemd[1]: Initializing machine ID from random generator.
Jun 20 18:52:35.111053 systemd[1]: Queued start job for default target initrd.target.
Jun 20 18:52:35.111065 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 18:52:35.111074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 18:52:35.111086 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 18:52:35.111095 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 18:52:35.111117 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 18:52:35.111132 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 18:52:35.111142 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 18:52:35.111153 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 18:52:35.111163 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 18:52:35.111173 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 18:52:35.111183 systemd[1]: Reached target paths.target - Path Units.
Jun 20 18:52:35.111193 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 18:52:35.111206 systemd[1]: Reached target swap.target - Swaps.
Jun 20 18:52:35.111217 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 18:52:35.111226 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 18:52:35.111234 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 18:52:35.111243 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 18:52:35.111252 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 18:52:35.111261 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 18:52:35.111269 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 18:52:35.111278 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 18:52:35.111289 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 18:52:35.111300 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 18:52:35.111311 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 18:52:35.111320 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 18:52:35.111328 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 18:52:35.111340 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 18:52:35.111349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 18:52:35.111361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:52:35.111391 systemd-journald[177]: Collecting audit messages is disabled.
Jun 20 18:52:35.111424 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 18:52:35.111436 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 18:52:35.111449 systemd-journald[177]: Journal started
Jun 20 18:52:35.111472 systemd-journald[177]: Runtime Journal (/run/log/journal/eadf4647281a4621b7fb819a8a6a1dd9) is 8M, max 158.8M, 150.8M free.
Jun 20 18:52:35.119175 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 18:52:35.119246 systemd-modules-load[178]: Inserted module 'overlay'
Jun 20 18:52:35.126241 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 18:52:35.130985 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:52:35.148271 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:52:35.156558 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 18:52:35.167264 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 18:52:35.173791 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 18:52:35.182488 kernel: Bridge firewalling registered
Jun 20 18:52:35.181203 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jun 20 18:52:35.188649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 18:52:35.193288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:52:35.202807 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 18:52:35.209922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 18:52:35.215188 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 18:52:35.225246 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 18:52:35.235773 dracut-cmdline[204]: dracut-dracut-053
Jun 20 18:52:35.245397 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17
Jun 20 18:52:35.240321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 18:52:35.245083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 18:52:35.276287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 18:52:35.286330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 18:52:35.338131 kernel: SCSI subsystem initialized
Jun 20 18:52:35.338668 systemd-resolved[262]: Positive Trust Anchors:
Jun 20 18:52:35.341045 systemd-resolved[262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 18:52:35.341119 systemd-resolved[262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 18:52:35.369457 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 18:52:35.368356 systemd-resolved[262]: Defaulting to hostname 'linux'.
Jun 20 18:52:35.372269 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 18:52:35.375494 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 18:52:35.387122 kernel: iscsi: registered transport (tcp)
Jun 20 18:52:35.409552 kernel: iscsi: registered transport (qla4xxx)
Jun 20 18:52:35.409645 kernel: QLogic iSCSI HBA Driver
Jun 20 18:52:35.446630 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 18:52:35.456270 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 18:52:35.484028 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 18:52:35.484147 kernel: device-mapper: uevent: version 1.0.3
Jun 20 18:52:35.487414 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jun 20 18:52:35.528126 kernel: raid6: avx512x4 gen() 18298 MB/s
Jun 20 18:52:35.547119 kernel: raid6: avx512x2 gen() 18159 MB/s
Jun 20 18:52:35.566114 kernel: raid6: avx512x1 gen() 18290 MB/s
Jun 20 18:52:35.585113 kernel: raid6: avx2x4 gen() 18101 MB/s
Jun 20 18:52:35.604117 kernel: raid6: avx2x2 gen() 18083 MB/s
Jun 20 18:52:35.624169 kernel: raid6: avx2x1 gen() 13652 MB/s
Jun 20 18:52:35.624213 kernel: raid6: using algorithm avx512x4 gen() 18298 MB/s
Jun 20 18:52:35.645227 kernel: raid6: .... xor() 7476 MB/s, rmw enabled
Jun 20 18:52:35.645280 kernel: raid6: using avx512x2 recovery algorithm
Jun 20 18:52:35.669135 kernel: xor: automatically using best checksumming function avx
Jun 20 18:52:35.811130 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 18:52:35.820528 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 18:52:35.828272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 18:52:35.846472 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jun 20 18:52:35.851591 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 18:52:35.864284 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 18:52:35.884813 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Jun 20 18:52:35.914812 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 18:52:35.926370 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 18:52:35.968990 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 18:52:35.981387 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 18:52:35.997296 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 18:52:36.014207 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 18:52:36.015735 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 18:52:36.026402 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 18:52:36.040278 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 18:52:36.061426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 18:52:36.081126 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 18:52:36.094131 kernel: hv_vmbus: Vmbus version:5.2
Jun 20 18:52:36.114403 kernel: pps_core: LinuxPPS API ver. 1 registered
Jun 20 18:52:36.114467 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jun 20 18:52:36.122200 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 18:52:36.137721 kernel: hv_vmbus: registering driver hv_netvsc
Jun 20 18:52:36.137752 kernel: AVX2 version of gcm_enc/dec engaged.
Jun 20 18:52:36.122448 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:52:36.144235 kernel: AES CTR mode by8 optimization enabled
Jun 20 18:52:36.127800 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:52:36.154644 kernel: hv_vmbus: registering driver hyperv_keyboard
Jun 20 18:52:36.154672 kernel: PTP clock support registered
Jun 20 18:52:36.140791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:52:36.170051 kernel: hv_vmbus: registering driver hv_storvsc
Jun 20 18:52:36.170123 kernel: scsi host0: storvsc_host_t
Jun 20 18:52:36.170386 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jun 20 18:52:36.141097 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:52:36.144548 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:52:36.179125 kernel: scsi host1: storvsc_host_t
Jun 20 18:52:36.187311 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jun 20 18:52:36.187358 kernel: hv_utils: Registering HyperV Utility Driver
Jun 20 18:52:36.187286 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:52:36.198747 kernel: hv_vmbus: registering driver hv_utils
Jun 20 18:52:36.198771 kernel: hv_utils: Heartbeat IC version 3.0
Jun 20 18:52:36.198787 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jun 20 18:52:36.204982 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:52:36.323537 kernel: hv_utils: Shutdown IC version 3.2
Jun 20 18:52:36.323683 kernel: hv_utils: TimeSync IC version 4.0
Jun 20 18:52:36.210176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 18:52:36.210316 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:52:36.323357 systemd-resolved[262]: Clock change detected. Flushing caches.
Jun 20 18:52:36.336272 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 18:52:36.345560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 18:52:36.360021 kernel: hid: raw HID events driver (C) Jiri Kosina
Jun 20 18:52:36.371960 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jun 20 18:52:36.372232 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jun 20 18:52:36.379614 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jun 20 18:52:36.385363 kernel: hv_vmbus: registering driver hid_hyperv
Jun 20 18:52:36.394079 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jun 20 18:52:36.404070 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jun 20 18:52:36.407334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 18:52:36.425776 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jun 20 18:52:36.425995 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jun 20 18:52:36.430083 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jun 20 18:52:36.430267 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jun 20 18:52:36.430394 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jun 20 18:52:36.433526 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 18:52:36.446094 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:52:36.450092 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jun 20 18:52:36.461783 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 18:52:36.488071 kernel: hv_netvsc 7ced8d46-a34d-7ced-8d46-a34d7ced8d46 eth0: VF slot 1 added
Jun 20 18:52:36.497073 kernel: hv_vmbus: registering driver hv_pci
Jun 20 18:52:36.497124 kernel: hv_pci cf97770a-1a1e-4c79-a4bd-361c6d143d87: PCI VMBus probing: Using version 0x10004
Jun 20 18:52:36.506367 kernel: hv_pci cf97770a-1a1e-4c79-a4bd-361c6d143d87: PCI host bridge to bus 1a1e:00
Jun 20 18:52:36.506708 kernel: pci_bus 1a1e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jun 20 18:52:36.509400 kernel: pci_bus 1a1e:00: No busn resource found for root bus, will use [bus 00-ff]
Jun 20 18:52:36.515527 kernel: pci 1a1e:00:02.0: [15b3:1016] type 00 class 0x020000
Jun 20 18:52:36.521752 kernel: pci 1a1e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jun 20 18:52:36.526251 kernel: pci 1a1e:00:02.0: enabling Extended Tags
Jun 20 18:52:36.539119 kernel: pci 1a1e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1a1e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jun 20 18:52:36.545103 kernel: pci_bus 1a1e:00: busn_res: [bus 00-ff] end is updated to 00
Jun 20 18:52:36.545462 kernel: pci 1a1e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jun 20 18:52:36.711300 kernel: mlx5_core 1a1e:00:02.0: enabling device (0000 -> 0002)
Jun 20 18:52:36.717092 kernel: mlx5_core 1a1e:00:02.0: firmware version: 14.30.5000
Jun 20 18:52:36.883677 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jun 20 18:52:36.928082 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (452)
Jun 20 18:52:36.946076 kernel: BTRFS: device fsid 5ff786f3-14e2-4689-ad32-ff903cf13f91 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (472)
Jun 20 18:52:36.946131 kernel: hv_netvsc 7ced8d46-a34d-7ced-8d46-a34d7ced8d46 eth0: VF registering: eth1
Jun 20 18:52:36.963076 kernel: mlx5_core 1a1e:00:02.0 eth1: joined to eth0
Jun 20 18:52:36.969585 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jun 20 18:52:36.981155 kernel: mlx5_core 1a1e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jun 20 18:52:36.988087 kernel: mlx5_core 1a1e:00:02.0 enP6686s1: renamed from eth1
Jun 20 18:52:36.992650 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jun 20 18:52:37.023841 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jun 20 18:52:37.027503 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jun 20 18:52:37.046185 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 18:52:37.063076 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:52:37.071082 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:52:38.078630 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jun 20 18:52:38.079993 disk-uuid[611]: The operation has completed successfully.
Jun 20 18:52:38.162245 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 18:52:38.162376 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 18:52:38.214248 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 18:52:38.222922 sh[697]: Success
Jun 20 18:52:38.257123 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jun 20 18:52:38.425739 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 18:52:38.439185 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 18:52:38.444134 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 18:52:38.464624 kernel: BTRFS info (device dm-0): first mount of filesystem 5ff786f3-14e2-4689-ad32-ff903cf13f91
Jun 20 18:52:38.464697 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:52:38.468167 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jun 20 18:52:38.470857 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jun 20 18:52:38.473263 kernel: BTRFS info (device dm-0): using free space tree
Jun 20 18:52:38.739551 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 18:52:38.743094 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 18:52:38.750296 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 18:52:38.756188 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 18:52:38.786830 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:52:38.786887 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:52:38.786900 kernel: BTRFS info (device sda6): using free space tree
Jun 20 18:52:38.805103 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 18:52:38.814112 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:52:38.817843 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 18:52:38.829966 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 18:52:38.849092 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 18:52:38.858276 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 18:52:38.883738 systemd-networkd[878]: lo: Link UP
Jun 20 18:52:38.883749 systemd-networkd[878]: lo: Gained carrier
Jun 20 18:52:38.886041 systemd-networkd[878]: Enumeration completed
Jun 20 18:52:38.886515 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 18:52:38.892248 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:52:38.892252 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 18:52:38.893307 systemd[1]: Reached target network.target - Network.
Jun 20 18:52:38.959085 kernel: mlx5_core 1a1e:00:02.0 enP6686s1: Link up
Jun 20 18:52:38.995790 kernel: hv_netvsc 7ced8d46-a34d-7ced-8d46-a34d7ced8d46 eth0: Data path switched to VF: enP6686s1
Jun 20 18:52:38.995359 systemd-networkd[878]: enP6686s1: Link UP
Jun 20 18:52:38.995486 systemd-networkd[878]: eth0: Link UP
Jun 20 18:52:38.995682 systemd-networkd[878]: eth0: Gained carrier
Jun 20 18:52:38.995696 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 18:52:39.008343 systemd-networkd[878]: enP6686s1: Gained carrier
Jun 20 18:52:39.038097 systemd-networkd[878]: eth0: DHCPv4 address 10.200.8.21/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jun 20 18:52:39.679197 ignition[851]: Ignition 2.20.0
Jun 20 18:52:39.679210 ignition[851]: Stage: fetch-offline
Jun 20 18:52:39.679256 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:52:39.682259 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 18:52:39.679267 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:52:39.679376 ignition[851]: parsed url from cmdline: ""
Jun 20 18:52:39.679381 ignition[851]: no config URL provided
Jun 20 18:52:39.679388 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 18:52:39.679398 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Jun 20 18:52:39.700180 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 18:52:39.679408 ignition[851]: failed to fetch config: resource requires networking
Jun 20 18:52:39.679658 ignition[851]: Ignition finished successfully
Jun 20 18:52:39.714235 ignition[887]: Ignition 2.20.0
Jun 20 18:52:39.714247 ignition[887]: Stage: fetch
Jun 20 18:52:39.714488 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:52:39.714501 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:52:39.714589 ignition[887]: parsed url from cmdline: ""
Jun 20 18:52:39.714592 ignition[887]: no config URL provided
Jun 20 18:52:39.714597 ignition[887]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 18:52:39.714604 ignition[887]: no config at "/usr/lib/ignition/user.ign"
Jun 20 18:52:39.714628 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jun 20 18:52:39.826179 ignition[887]: GET result: OK
Jun 20 18:52:39.826306 ignition[887]: config has been read from IMDS userdata
Jun 20 18:52:39.826347 ignition[887]: parsing config with SHA512: 06dd7781b33e3fcceddd852494b0b79a86d777f6c6c578e063f272541e9ed4b8e7d2b83ab28f8821343624549c6d6ad491069f398f73cbd0167c65b172d1cd17
Jun 20 18:52:39.834203 unknown[887]: fetched base config from "system"
Jun 20 18:52:39.834217 unknown[887]: fetched base config from "system"
Jun 20 18:52:39.834661 ignition[887]: fetch: fetch complete
Jun 20 18:52:39.834228 unknown[887]: fetched user config from "azure"
Jun 20 18:52:39.834667 ignition[887]: fetch: fetch passed
Jun 20 18:52:39.836284 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 18:52:39.834711 ignition[887]: Ignition finished successfully
Jun 20 18:52:39.851273 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 18:52:39.869584 ignition[894]: Ignition 2.20.0
Jun 20 18:52:39.869596 ignition[894]: Stage: kargs
Jun 20 18:52:39.871791 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 18:52:39.869807 ignition[894]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:52:39.869821 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:52:39.870698 ignition[894]: kargs: kargs passed
Jun 20 18:52:39.883336 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 18:52:39.870744 ignition[894]: Ignition finished successfully
Jun 20 18:52:39.896961 ignition[900]: Ignition 2.20.0
Jun 20 18:52:39.896972 ignition[900]: Stage: disks
Jun 20 18:52:39.898894 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 18:52:39.897201 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Jun 20 18:52:39.903636 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 18:52:39.897214 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jun 20 18:52:39.906739 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 18:52:39.898066 ignition[900]: disks: disks passed
Jun 20 18:52:39.909589 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 18:52:39.898108 ignition[900]: Ignition finished successfully
Jun 20 18:52:39.917110 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 18:52:39.922400 systemd[1]: Reached target basic.target - Basic System.
Jun 20 18:52:39.941134 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 18:52:39.987772 systemd-fsck[909]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jun 20 18:52:39.992534 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 18:52:40.005216 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 18:52:40.097080 kernel: EXT4-fs (sda9): mounted filesystem 943f8432-3dc9-4e22-b9bd-c29bf6a1f5e1 r/w with ordered data mode. Quota mode: none.
Jun 20 18:52:40.097317 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 18:52:40.101849 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 18:52:40.139186 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 18:52:40.145172 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 18:52:40.151565 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jun 20 18:52:40.157151 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (920)
Jun 20 18:52:40.162183 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 18:52:40.163560 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 18:52:40.179769 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f
Jun 20 18:52:40.179809 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 18:52:40.179823 kernel: BTRFS info (device sda6): using free space tree
Jun 20 18:52:40.185071 kernel: BTRFS info (device sda6): auto enabling async discard
Jun 20 18:52:40.189128 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 18:52:40.190973 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 18:52:40.200264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 18:52:40.611342 systemd-networkd[878]: eth0: Gained IPv6LL
Jun 20 18:52:40.760789 coreos-metadata[922]: Jun 20 18:52:40.760 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jun 20 18:52:40.764944 coreos-metadata[922]: Jun 20 18:52:40.763 INFO Fetch successful
Jun 20 18:52:40.764944 coreos-metadata[922]: Jun 20 18:52:40.763 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jun 20 18:52:40.773081 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 18:52:40.776178 coreos-metadata[922]: Jun 20 18:52:40.775 INFO Fetch successful
Jun 20 18:52:40.776178 coreos-metadata[922]: Jun 20 18:52:40.776 INFO wrote hostname ci-4230.2.0-a-bab85c4a2e to /sysroot/etc/hostname
Jun 20 18:52:40.780320 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jun 20 18:52:40.805511 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory
Jun 20 18:52:40.812903 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 18:52:40.834227 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 18:52:40.867211 systemd-networkd[878]: enP6686s1: Gained IPv6LL
Jun 20 18:52:41.544215 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 18:52:41.553227 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 18:52:41.562234 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 18:52:41.570088 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:52:41.572643 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 18:52:41.598701 ignition[1039]: INFO : Ignition 2.20.0 Jun 20 18:52:41.598701 ignition[1039]: INFO : Stage: mount Jun 20 18:52:41.606088 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:52:41.606088 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:52:41.606088 ignition[1039]: INFO : mount: mount passed Jun 20 18:52:41.606088 ignition[1039]: INFO : Ignition finished successfully Jun 20 18:52:41.603662 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 18:52:41.617990 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:52:41.632210 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:52:41.647270 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:52:41.661074 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1050) Jun 20 18:52:41.669077 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:52:41.669137 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:52:41.669153 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:52:41.675253 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:52:41.676834 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 18:52:41.699219 ignition[1067]: INFO : Ignition 2.20.0 Jun 20 18:52:41.699219 ignition[1067]: INFO : Stage: files Jun 20 18:52:41.703271 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:52:41.703271 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:52:41.703271 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:52:41.722458 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:52:41.722458 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:52:41.797261 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:52:41.801077 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:52:41.804703 unknown[1067]: wrote ssh authorized keys file for user: core Jun 20 18:52:41.807383 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:52:41.818301 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 20 18:52:41.823241 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jun 20 18:52:41.888769 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:52:42.032459 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 20 18:52:42.038211 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:52:42.038211 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 20 18:52:42.575852 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 18:52:42.713802 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 18:52:42.718821 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jun 20 18:52:43.377492 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 18:52:43.652342 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 18:52:43.652342 ignition[1067]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 18:52:43.676452 ignition[1067]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:52:43.684352 ignition[1067]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:52:43.684352 ignition[1067]: INFO : files: op(c): [finished] processing 
unit "prepare-helm.service" Jun 20 18:52:43.684352 ignition[1067]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 20 18:52:43.684352 ignition[1067]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:52:43.684352 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:52:43.684352 ignition[1067]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:52:43.684352 ignition[1067]: INFO : files: files passed Jun 20 18:52:43.684352 ignition[1067]: INFO : Ignition finished successfully Jun 20 18:52:43.678501 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 18:52:43.700292 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:52:43.722295 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 18:52:43.729231 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:52:43.729332 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 18:52:43.748749 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:52:43.748749 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:52:43.764625 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:52:43.751453 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:52:43.760613 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:52:43.777748 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 18:52:43.808562 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:52:43.808683 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:52:43.815022 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:52:43.820772 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 18:52:43.826165 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:52:43.833249 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:52:43.847585 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:52:43.859348 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:52:43.871726 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:52:43.872836 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:52:43.873250 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:52:43.873640 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 18:52:43.873762 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:52:43.874513 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:52:43.874928 systemd[1]: Stopped target basic.target - Basic System. Jun 20 18:52:43.875790 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
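Editor's note: the files stage above is driven by the config fetched earlier; the log shows the resulting operations (files, a link into /etc/extensions, an enabled prepare-helm.service unit) but not the config itself. Purely as a hypothetical reconstruction, an Ignition v3 config with this shape would carry storage.files, storage.links, and systemd.units entries like the sketch below — the spec version, and anything not literally present in the log (including the unit contents), is assumed, and only two representative entries are shown.

```python
import json

# Hypothetical sketch of the kind of Ignition config that would produce
# the logged operations; not the actual config used on this machine.
config = {
    "ignition": {"version": "3.4.0"},          # assumed spec version
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"
                },
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
            },
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True},
        ]
    },
}

print(json.dumps(config, indent=2))
```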
Jun 20 18:52:43.876220 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:52:43.876625 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:52:43.877069 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:52:43.877601 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:52:43.878043 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:52:43.878453 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 18:52:43.878850 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:52:43.879237 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:52:43.879389 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:52:43.880120 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:52:43.880594 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:52:43.880949 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 18:52:44.008311 ignition[1120]: INFO : Ignition 2.20.0 Jun 20 18:52:44.008311 ignition[1120]: INFO : Stage: umount Jun 20 18:52:44.008311 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:52:44.008311 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jun 20 18:52:44.008311 ignition[1120]: INFO : umount: umount passed Jun 20 18:52:44.008311 ignition[1120]: INFO : Ignition finished successfully Jun 20 18:52:43.920456 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:52:43.923808 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 18:52:43.923988 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:52:43.929828 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:52:43.929991 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:52:43.936807 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:52:43.939706 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 18:52:43.947440 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 18:52:43.947598 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:52:43.960313 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 18:52:43.967170 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:52:43.967385 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:52:43.982220 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:52:43.988617 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:52:43.989404 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:52:43.997640 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:52:43.997819 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:52:44.004042 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 18:52:44.004233 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 18:52:44.010738 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jun 20 18:52:44.011034 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 18:52:44.017551 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 18:52:44.017606 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 18:52:44.027110 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 18:52:44.027173 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 18:52:44.030160 systemd[1]: Stopped target network.target - Network. Jun 20 18:52:44.097351 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 18:52:44.097475 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:52:44.102961 systemd[1]: Stopped target paths.target - Path Units. Jun 20 18:52:44.107921 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 18:52:44.108010 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:52:44.115567 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 18:52:44.117931 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 18:52:44.120456 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 18:52:44.120507 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:52:44.126588 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 18:52:44.126641 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:52:44.141490 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 18:52:44.141578 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 18:52:44.147994 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 18:52:44.148067 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 18:52:44.156403 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 18:52:44.159287 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 18:52:44.161416 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 18:52:44.162176 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 18:52:44.162259 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 18:52:44.166594 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 18:52:44.166689 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 18:52:44.169714 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 18:52:44.169770 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 18:52:44.188184 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 18:52:44.188286 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 18:52:44.197130 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 18:52:44.197352 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 18:52:44.197454 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 18:52:44.201998 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 18:52:44.202995 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 18:52:44.203093 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:52:44.224534 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jun 20 18:52:44.229427 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 18:52:44.231973 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:52:44.238402 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:52:44.238482 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:52:44.243495 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 18:52:44.243549 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 18:52:44.248595 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 18:52:44.248654 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:52:44.257394 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:52:44.270571 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:52:44.273870 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:52:44.287474 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 18:52:44.287636 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:52:44.292885 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 18:52:44.292973 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 18:52:44.296852 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 18:52:44.296897 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:52:44.299521 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 18:52:44.299576 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:52:44.307629 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 18:52:44.307695 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 18:52:44.312980 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:52:44.339091 kernel: hv_netvsc 7ced8d46-a34d-7ced-8d46-a34d7ced8d46 eth0: Data path switched from VF: enP6686s1 Jun 20 18:52:44.313032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:52:44.338467 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 18:52:44.348739 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 18:52:44.348838 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:52:44.354990 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:52:44.358130 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:52:44.364743 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 18:52:44.364813 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:52:44.365405 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 18:52:44.365500 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 18:52:44.369872 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 18:52:44.369963 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jun 20 18:52:44.376273 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 18:52:44.400200 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 18:52:44.409463 systemd[1]: Switching root. Jun 20 18:52:44.472069 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Jun 20 18:52:44.472169 systemd-journald[177]: Journal stopped Jun 20 18:52:49.391317 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 18:52:49.391371 kernel: SELinux: policy capability open_perms=1 Jun 20 18:52:49.391390 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 18:52:49.391404 kernel: SELinux: policy capability always_check_network=0 Jun 20 18:52:49.391418 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 18:52:49.391432 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 18:52:49.391449 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 18:52:49.391464 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 18:52:49.391482 kernel: audit: type=1403 audit(1750445566.261:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 18:52:49.391500 systemd[1]: Successfully loaded SELinux policy in 164.681ms. Jun 20 18:52:49.391519 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.704ms. Jun 20 18:52:49.391537 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:52:49.391553 systemd[1]: Detected virtualization microsoft. Jun 20 18:52:49.391569 systemd[1]: Detected architecture x86-64. Jun 20 18:52:49.391590 systemd[1]: Detected first boot. Jun 20 18:52:49.391610 systemd[1]: Hostname set to . Jun 20 18:52:49.391627 systemd[1]: Initializing machine ID from random generator. Jun 20 18:52:49.391643 zram_generator::config[1166]: No configuration found. Jun 20 18:52:49.391661 kernel: Guest personality initialized and is inactive Jun 20 18:52:49.391679 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Jun 20 18:52:49.391695 kernel: Initialized host personality Jun 20 18:52:49.391711 kernel: NET: Registered PF_VSOCK protocol family Jun 20 18:52:49.391726 systemd[1]: Populated /etc with preset unit settings. Jun 20 18:52:49.391745 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 18:52:49.391762 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 18:52:49.391779 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 18:52:49.391795 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 18:52:49.391812 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 18:52:49.391834 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 18:52:49.391851 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 18:52:49.391867 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 18:52:49.391885 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 18:52:49.391902 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
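Editor's note: "Initializing machine ID from random generator" above refers to /etc/machine-id, a 128-bit identifier stored as 32 lowercase hex digits; the runtime journal directory named a little further down (/run/log/journal/8211114170a545ef989fd028380892d6) is exactly that ID. A toy sketch of producing a value of the same shape — whether systemd additionally stamps UUID-v4 bits onto a randomly initialized ID is an internal detail not visible in this log.

```python
import uuid

# /etc/machine-id holds the ID as 32 lowercase hex digits, no dashes.
machine_id = uuid.uuid4().hex       # random 128-bit value, hex-encoded
assert len(machine_id) == 32
print(machine_id)                   # same shape as 8211114170a545ef989fd028380892d6
```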
Jun 20 18:52:49.391919 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 18:52:49.391936 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 18:52:49.391956 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:52:49.391975 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:52:49.391992 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 18:52:49.392009 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 18:52:49.392027 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 18:52:49.392092 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:52:49.392112 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 18:52:49.392130 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:52:49.392152 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 18:52:49.392170 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 18:52:49.392185 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 18:52:49.392202 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 18:52:49.392220 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:52:49.392238 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:52:49.392255 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:52:49.392276 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:52:49.392294 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 18:52:49.392312 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 18:52:49.392330 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 18:52:49.392349 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:52:49.392370 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:52:49.392389 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:52:49.392409 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 18:52:49.392428 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 18:52:49.392446 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 18:52:49.392464 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 18:52:49.392483 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:52:49.392501 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 18:52:49.392522 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 18:52:49.392540 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 18:52:49.392559 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jun 20 18:52:49.392577 systemd[1]: Reached target machines.target - Containers. Jun 20 18:52:49.392595 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 18:52:49.392614 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:52:49.392632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:52:49.392650 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 18:52:49.392668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:52:49.392689 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:52:49.392706 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:52:49.392724 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 18:52:49.392743 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:52:49.392761 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 18:52:49.392779 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 18:52:49.392799 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 18:52:49.392817 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 18:52:49.392838 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 18:52:49.392857 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:52:49.392875 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:52:49.392893 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:52:49.392912 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:52:49.392930 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 18:52:49.392948 kernel: loop: module loaded Jun 20 18:52:49.392965 kernel: fuse: init (API version 7.39) Jun 20 18:52:49.393014 systemd-journald[1273]: Collecting audit messages is disabled. Jun 20 18:52:49.393062 systemd-journald[1273]: Journal started Jun 20 18:52:49.393103 systemd-journald[1273]: Runtime Journal (/run/log/journal/8211114170a545ef989fd028380892d6) is 8M, max 158.8M, 150.8M free. Jun 20 18:52:49.405260 kernel: ACPI: bus type drm_connector registered Jun 20 18:52:48.772263 systemd[1]: Queued start job for default target multi-user.target. Jun 20 18:52:48.780043 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 18:52:48.780469 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 18:52:49.412898 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 18:52:49.424674 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:52:49.430112 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 18:52:49.430201 systemd[1]: Stopped verity-setup.service. 
Jun 20 18:52:49.440091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:52:49.448825 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:52:49.451399 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 18:52:49.454279 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 18:52:49.457149 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 18:52:49.459626 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 18:52:49.462545 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 18:52:49.466698 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 18:52:49.472309 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 18:52:49.475851 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:52:49.479936 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 18:52:49.480202 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 18:52:49.483915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:52:49.484534 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:52:49.488214 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:52:49.488587 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:52:49.492365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:52:49.492678 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:52:49.496749 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 18:52:49.497195 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 18:52:49.500778 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:52:49.501239 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:52:49.504964 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:52:49.508725 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:52:49.512939 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 18:52:49.517007 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 18:52:49.537999 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:52:49.549970 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 18:52:49.559299 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 18:52:49.562186 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 18:52:49.562237 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:52:49.566307 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 18:52:49.570826 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 18:52:49.586268 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jun 20 18:52:49.589579 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:52:49.591099 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 18:52:49.602311 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 18:52:49.605421 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:52:49.606608 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 18:52:49.609344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:52:49.613271 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:52:49.619919 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 18:52:49.626281 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 18:52:49.633164 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:52:49.645806 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 18:52:49.656038 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 18:52:49.659801 systemd-journald[1273]: Time spent on flushing to /var/log/journal/8211114170a545ef989fd028380892d6 is 34.604ms for 976 entries. Jun 20 18:52:49.659801 systemd-journald[1273]: System Journal (/var/log/journal/8211114170a545ef989fd028380892d6) is 8M, max 2.6G, 2.6G free. Jun 20 18:52:49.711773 systemd-journald[1273]: Received client request to flush runtime journal. Jun 20 18:52:49.711822 kernel: loop0: detected capacity change from 0 to 138176 Jun 20 18:52:49.662454 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 18:52:49.669533 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 18:52:49.676899 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 18:52:49.693261 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 18:52:49.701265 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 18:52:49.716109 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 18:52:49.729505 udevadm[1319]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 20 18:52:49.737644 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:52:49.854368 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 18:52:49.855214 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 18:52:49.883734 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 18:52:49.892347 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:52:49.946144 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Jun 20 18:52:49.946173 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Jun 20 18:52:49.953491 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
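Editor's note: for scale, the journald flush statistics quoted above work out to roughly 35 µs per journal entry; a few lines of Python over the quoted numbers:

```python
import re

# One of the journald status lines quoted above.
line = ("Time spent on flushing to /var/log/journal/"
        "8211114170a545ef989fd028380892d6 is 34.604ms for 976 entries.")

m = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
ms, entries = float(m.group(1)), int(m.group(2))
print(f"{ms / entries * 1000:.1f} µs per entry")    # ≈ 35.5 µs
```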
Jun 20 18:52:50.090859 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 18:52:50.120026 kernel: loop1: detected capacity change from 0 to 229808 Jun 20 18:52:50.187149 kernel: loop2: detected capacity change from 0 to 147912 Jun 20 18:52:50.564097 kernel: loop3: detected capacity change from 0 to 28272 Jun 20 18:52:50.846083 kernel: loop4: detected capacity change from 0 to 138176 Jun 20 18:52:50.905194 kernel: loop5: detected capacity change from 0 to 229808 Jun 20 18:52:50.920119 kernel: loop6: detected capacity change from 0 to 147912 Jun 20 18:52:50.937078 kernel: loop7: detected capacity change from 0 to 28272 Jun 20 18:52:50.941331 (sd-merge)[1333]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jun 20 18:52:50.941991 (sd-merge)[1333]: Merged extensions into '/usr'. Jun 20 18:52:50.949021 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 18:52:50.952838 systemd[1]: Reload requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 18:52:50.952868 systemd[1]: Reloading... Jun 20 18:52:51.022099 zram_generator::config[1362]: No configuration found. Jun 20 18:52:51.178001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:52:51.269346 systemd[1]: Reloading finished in 315 ms. Jun 20 18:52:51.289777 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 18:52:51.298191 systemd[1]: Starting ensure-sysext.service... Jun 20 18:52:51.303245 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:52:51.309300 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:52:51.329965 systemd-tmpfiles[1420]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 18:52:51.330307 systemd-tmpfiles[1420]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 18:52:51.331041 systemd-tmpfiles[1420]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 18:52:51.331303 systemd-tmpfiles[1420]: ACLs are not supported, ignoring. Jun 20 18:52:51.331353 systemd-tmpfiles[1420]: ACLs are not supported, ignoring. Jun 20 18:52:51.342949 systemd[1]: Reload requested from client PID 1419 ('systemctl') (unit ensure-sysext.service)... Jun 20 18:52:51.342972 systemd[1]: Reloading... Jun 20 18:52:51.348188 systemd-tmpfiles[1420]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:52:51.348203 systemd-tmpfiles[1420]: Skipping /boot Jun 20 18:52:51.371776 systemd-udevd[1421]: Using default interface naming scheme 'v255'. Jun 20 18:52:51.376760 systemd-tmpfiles[1420]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:52:51.376779 systemd-tmpfiles[1420]: Skipping /boot Jun 20 18:52:51.449150 zram_generator::config[1448]: No configuration found. Jun 20 18:52:51.729613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
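Editor's note: the (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extensions onto /usr. A small sketch of listing what such a merge would pick up, assuming the standard sysext search directories (/etc/extensions, /run/extensions, /var/lib/extensions, among others):

```python
from pathlib import Path

# Directories systemd-sysext scans for extension images or trees; the
# "Merged extensions into '/usr'" step overlays whatever it finds here
# onto /usr with an overlayfs mount.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_DIRS:
    p = Path(d)
    if not p.is_dir():
        continue
    for entry in sorted(p.iterdir()):
        # e.g. the kubernetes.raw symlink written during the Ignition
        # files stage surfaces here as the 'kubernetes' extension
        print(f"{d}: {entry.name}")
```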
Jun 20 18:52:51.743075 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 18:52:51.758077 kernel: hv_vmbus: registering driver hv_balloon Jun 20 18:52:51.762086 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jun 20 18:52:51.782091 kernel: hv_vmbus: registering driver hyperv_fb Jun 20 18:52:51.784827 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jun 20 18:52:51.793073 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jun 20 18:52:51.801276 kernel: Console: switching to colour dummy device 80x25 Jun 20 18:52:51.807186 kernel: Console: switching to colour frame buffer device 128x48 Jun 20 18:52:52.033673 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 18:52:52.034142 systemd[1]: Reloading finished in 690 ms. Jun 20 18:52:52.045167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:52:52.050867 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:52:52.066213 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jun 20 18:52:52.158186 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1523) Jun 20 18:52:52.184491 systemd[1]: Finished ensure-sysext.service. Jun 20 18:52:52.202516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:52:52.215904 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:52:52.247432 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 18:52:52.252564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:52:52.259396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:52:52.265967 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:52:52.284392 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:52:52.296384 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:52:52.299294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:52:52.299482 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:52:52.301321 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:52:52.318370 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:52:52.329389 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:52:52.333256 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:52:52.342409 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:52:52.350384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:52:52.358430 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:52:52.364239 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jun 20 18:52:52.365165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:52:52.368921 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:52:52.370166 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:52:52.376622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:52:52.377323 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:52:52.382661 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:52:52.382892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:52:52.440690 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 18:52:52.453735 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 18:52:52.473361 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jun 20 18:52:52.475136 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 18:52:52.489308 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 18:52:52.494093 augenrules[1650]: No rules Jun 20 18:52:52.495795 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 18:52:52.496925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:52:52.496990 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:52:52.500229 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 18:52:52.502790 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:52:52.503115 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:52:52.549503 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 18:52:52.562139 lvm[1648]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:52:52.568102 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 18:52:52.591362 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 18:52:52.602247 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:52:52.613337 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 18:52:52.618730 lvm[1668]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:52:52.661011 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 18:52:52.724496 systemd-resolved[1623]: Positive Trust Anchors: Jun 20 18:52:52.724923 systemd-resolved[1623]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:52:52.724987 systemd-resolved[1623]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:52:52.731893 systemd-networkd[1619]: lo: Link UP Jun 20 18:52:52.731903 systemd-networkd[1619]: lo: Gained carrier Jun 20 18:52:52.734665 systemd-networkd[1619]: Enumeration completed Jun 20 18:52:52.734910 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:52:52.738225 systemd-networkd[1619]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:52:52.738236 systemd-networkd[1619]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:52:52.740314 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 18:52:52.746333 systemd-resolved[1623]: Using system hostname 'ci-4230.2.0-a-bab85c4a2e'. Jun 20 18:52:52.752317 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 18:52:52.766764 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:52:52.803079 kernel: mlx5_core 1a1e:00:02.0 enP6686s1: Link up Jun 20 18:52:52.824086 kernel: hv_netvsc 7ced8d46-a34d-7ced-8d46-a34d7ced8d46 eth0: Data path switched to VF: enP6686s1 Jun 20 18:52:52.825507 systemd-networkd[1619]: enP6686s1: Link UP Jun 20 18:52:52.825659 systemd-networkd[1619]: eth0: Link UP Jun 20 18:52:52.825664 systemd-networkd[1619]: eth0: Gained carrier Jun 20 18:52:52.825690 systemd-networkd[1619]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:52:52.828633 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 18:52:52.832678 systemd-networkd[1619]: enP6686s1: Gained carrier Jun 20 18:52:52.838660 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:52:52.841757 systemd[1]: Reached target network.target - Network. Jun 20 18:52:52.844206 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:52:52.871120 systemd-networkd[1619]: eth0: DHCPv4 address 10.200.8.21/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 20 18:52:52.992099 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 18:52:52.996354 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 18:52:54.179283 systemd-networkd[1619]: eth0: Gained IPv6LL Jun 20 18:52:54.182995 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:52:54.187147 systemd[1]: Reached target network-online.target - Network is Online. 
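Editor's note: the DHCPv4 lease logged above (10.200.8.21/24 via gateway 10.200.8.1, offered by 168.63.129.16) can be sanity-checked with the standard library; 168.63.129.16 is Azure's fixed platform address, which is why it sits outside the leased subnet.

```python
import ipaddress

# Values exactly as logged by systemd-networkd above.
lease = ipaddress.ip_interface("10.200.8.21/24")
gateway = ipaddress.ip_address("10.200.8.1")
dhcp_server = ipaddress.ip_address("168.63.129.16")   # Azure platform address

print(lease.network)                  # 10.200.8.0/24
print(gateway in lease.network)       # True: the gateway is on-link
print(dhcp_server in lease.network)   # False: the offer comes from outside the subnet
```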
Jun 20 18:52:54.563223 systemd-networkd[1619]: enP6686s1: Gained IPv6LL Jun 20 18:52:54.590187 ldconfig[1303]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 18:52:54.614788 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 18:52:54.623320 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 18:52:54.633203 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 18:52:54.636413 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:52:54.639349 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:52:54.642639 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:52:54.645998 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 18:52:54.648674 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:52:54.651764 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 18:52:54.654881 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:52:54.654925 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:52:54.657148 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:52:54.660381 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:52:54.669272 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:52:54.674635 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:52:54.678271 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 18:52:54.681583 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 18:52:54.695868 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:52:54.699075 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:52:54.702865 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:52:54.705700 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:52:54.708014 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:52:54.710280 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:52:54.710318 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:52:54.718182 systemd[1]: Starting chronyd.service - NTP client/server... Jun 20 18:52:54.723206 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:52:54.736920 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 18:52:54.743251 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:52:54.747565 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:52:54.759277 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:52:54.761986 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jun 20 18:52:54.762046 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jun 20 18:52:54.763582 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jun 20 18:52:54.766142 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jun 20 18:52:54.772405 jq[1691]: false Jun 20 18:52:54.777199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:52:54.782271 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:52:54.788268 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:52:54.804250 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:52:54.810849 KVP[1693]: KVP starting; pid is:1693 Jun 20 18:52:54.818476 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:52:54.828271 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 18:52:54.838620 kernel: hv_utils: KVP IC version 4.0 Jun 20 18:52:54.837493 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:52:54.840103 KVP[1693]: KVP LIC Version: 3.1 Jun 20 18:52:54.846091 (chronyd)[1684]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jun 20 18:52:54.847722 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 18:52:54.852915 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 18:52:54.854296 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:52:54.865217 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 18:52:54.870711 chronyd[1712]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jun 20 18:52:54.878844 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:52:54.879179 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 18:52:54.884475 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:52:54.884798 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 18:52:54.894481 chronyd[1712]: Timezone right/UTC failed leap second check, ignoring Jun 20 18:52:54.895066 chronyd[1712]: Loaded seccomp filter (level 2) Jun 20 18:52:54.896640 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:52:54.897496 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 18:52:54.905996 systemd[1]: Started chronyd.service - NTP client/server. 
Jun 20 18:52:54.907561 extend-filesystems[1692]: Found loop4 Jun 20 18:52:54.911564 extend-filesystems[1692]: Found loop5 Jun 20 18:52:54.911564 extend-filesystems[1692]: Found loop6 Jun 20 18:52:54.911564 extend-filesystems[1692]: Found loop7 Jun 20 18:52:54.911564 extend-filesystems[1692]: Found sda Jun 20 18:52:54.911564 extend-filesystems[1692]: Found sda1 Jun 20 18:52:54.911564 extend-filesystems[1692]: Found sda2 Jun 20 18:52:54.911564 extend-filesystems[1692]: Found sda3 Jun 20 18:52:54.911564 extend-filesystems[1692]: Found usr Jun 20 18:52:54.911564 extend-filesystems[1692]: Found sda4 Jun 20 18:52:54.911564 extend-filesystems[1692]: Found sda6 Jun 20 18:52:54.911564 extend-filesystems[1692]: Found sda7 Jun 20 18:52:54.911564 extend-filesystems[1692]: Found sda9 Jun 20 18:52:54.911564 extend-filesystems[1692]: Checking size of /dev/sda9 Jun 20 18:52:54.933232 jq[1710]: true Jun 20 18:52:54.951133 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:52:54.985394 dbus-daemon[1687]: [system] SELinux support is enabled Jun 20 18:52:54.985611 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 18:52:54.988425 tar[1716]: linux-amd64/LICENSE Jun 20 18:52:54.988707 tar[1716]: linux-amd64/helm Jun 20 18:52:54.998816 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 18:52:54.998863 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:52:55.002721 jq[1728]: true Jun 20 18:52:55.009912 extend-filesystems[1692]: Old size kept for /dev/sda9 Jun 20 18:52:55.009912 extend-filesystems[1692]: Found sr0 Jun 20 18:52:55.010074 (ntainerd)[1729]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:52:55.013671 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:52:55.013699 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:52:55.020850 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 18:52:55.026480 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 20 18:52:55.086697 coreos-metadata[1686]: Jun 20 18:52:55.086 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jun 20 18:52:55.088947 coreos-metadata[1686]: Jun 20 18:52:55.088 INFO Fetch successful Jun 20 18:52:55.088947 coreos-metadata[1686]: Jun 20 18:52:55.088 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jun 20 18:52:55.090465 update_engine[1709]: I20250620 18:52:55.089700 1709 main.cc:92] Flatcar Update Engine starting Jun 20 18:52:55.094111 coreos-metadata[1686]: Jun 20 18:52:55.093 INFO Fetch successful Jun 20 18:52:55.104005 coreos-metadata[1686]: Jun 20 18:52:55.098 INFO Fetching http://168.63.129.16/machine/f928c6a0-7ef3-4ca4-b4b7-6e57f4707d3e/a0d7d716%2D03fa%2D4937%2D8c9f%2D468aacbc8dba.%5Fci%2D4230.2.0%2Da%2Dbab85c4a2e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jun 20 18:52:55.104131 update_engine[1709]: I20250620 18:52:55.103797 1709 update_check_scheduler.cc:74] Next update check in 3m36s Jun 20 18:52:55.100888 systemd[1]: Started update-engine.service - Update Engine. 
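The coreos-metadata fetches above go to the Azure wireserver at 168.63.129.16 (?comp=versions, then the goal state), and the entries that follow query the instance metadata service (IMDS) at 169.254.169.254 for the VM size. Both are plain HTTP from inside the guest; IMDS additionally requires the "Metadata: true" header and refuses proxied requests. A minimal reproduction of that IMDS call, using the api-version visible in the log, looks like this (it only works from inside an Azure VM):

```python
# Minimal reproduction of the IMDS query coreos-metadata issues just below
# (vmSize via the instance metadata service). Only meaningful from inside an
# Azure VM; the endpoint rejects requests that arrive through a proxy.
import urllib.request

IMDS = ("http://169.254.169.254/metadata/instance/compute/vmSize"
        "?api-version=2017-08-01&format=text")

def vm_size(timeout=5):
    req = urllib.request.Request(IMDS, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print("vmSize:", vm_size())
```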
Jun 20 18:52:55.105861 coreos-metadata[1686]: Jun 20 18:52:55.105 INFO Fetch successful Jun 20 18:52:55.107288 coreos-metadata[1686]: Jun 20 18:52:55.107 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jun 20 18:52:55.113254 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 18:52:55.117635 systemd-logind[1704]: New seat seat0. Jun 20 18:52:55.135961 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1512) Jun 20 18:52:55.136113 coreos-metadata[1686]: Jun 20 18:52:55.134 INFO Fetch successful Jun 20 18:52:55.148973 systemd-logind[1704]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 18:52:55.155324 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 18:52:55.218002 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:52:55.230347 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 18:52:55.316181 bash[1792]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:52:55.316550 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:52:55.359617 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 18:52:55.518413 sshd_keygen[1727]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:52:55.526724 locksmithd[1760]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:52:55.582657 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:52:55.598481 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:52:55.607913 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jun 20 18:52:55.636362 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:52:55.636639 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:52:55.650439 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:52:55.681118 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:52:55.696598 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:52:55.709588 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 18:52:55.714778 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 18:52:55.731285 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jun 20 18:52:56.021887 tar[1716]: linux-amd64/README.md Jun 20 18:52:56.032788 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:52:56.245077 containerd[1729]: time="2025-06-20T18:52:56.243745200Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 18:52:56.276137 containerd[1729]: time="2025-06-20T18:52:56.276003500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:56.277632 containerd[1729]: time="2025-06-20T18:52:56.277589800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:52:56.277632 containerd[1729]: time="2025-06-20T18:52:56.277623000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 18:52:56.277784 containerd[1729]: time="2025-06-20T18:52:56.277644700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 18:52:56.277850 containerd[1729]: time="2025-06-20T18:52:56.277826100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 18:52:56.277898 containerd[1729]: time="2025-06-20T18:52:56.277854500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:56.277961 containerd[1729]: time="2025-06-20T18:52:56.277937000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:52:56.278002 containerd[1729]: time="2025-06-20T18:52:56.277959400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:56.278256 containerd[1729]: time="2025-06-20T18:52:56.278229600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:52:56.278256 containerd[1729]: time="2025-06-20T18:52:56.278251800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:56.278350 containerd[1729]: time="2025-06-20T18:52:56.278270800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:52:56.278350 containerd[1729]: time="2025-06-20T18:52:56.278283400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:56.278470 containerd[1729]: time="2025-06-20T18:52:56.278390000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:56.278638 containerd[1729]: time="2025-06-20T18:52:56.278611700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:56.278796 containerd[1729]: time="2025-06-20T18:52:56.278773000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:52:56.278796 containerd[1729]: time="2025-06-20T18:52:56.278791400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 20 18:52:56.278909 containerd[1729]: time="2025-06-20T18:52:56.278890600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jun 20 18:52:56.278969 containerd[1729]: time="2025-06-20T18:52:56.278951100Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:52:56.296757 containerd[1729]: time="2025-06-20T18:52:56.296095200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 18:52:56.296757 containerd[1729]: time="2025-06-20T18:52:56.296211800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 18:52:56.296757 containerd[1729]: time="2025-06-20T18:52:56.296254500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 18:52:56.296757 containerd[1729]: time="2025-06-20T18:52:56.296275400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 18:52:56.296757 containerd[1729]: time="2025-06-20T18:52:56.296296000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 18:52:56.296757 containerd[1729]: time="2025-06-20T18:52:56.296517700Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 18:52:56.297047 containerd[1729]: time="2025-06-20T18:52:56.296957400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297180100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297207200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297240900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297262900Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297282700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297299500Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297329000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297350000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297367300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297383900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297413700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297440700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297459000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299070 containerd[1729]: time="2025-06-20T18:52:56.297489700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297508700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297525500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297567000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297590000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297609200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297640600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297662500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297679300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297696300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297729200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297750300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297789200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297807500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.299577 containerd[1729]: time="2025-06-20T18:52:56.297822700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 18:52:56.300078 containerd[1729]: time="2025-06-20T18:52:56.297906100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 18:52:56.300078 containerd[1729]: time="2025-06-20T18:52:56.297998400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 18:52:56.300078 containerd[1729]: time="2025-06-20T18:52:56.298016800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 18:52:56.300078 containerd[1729]: time="2025-06-20T18:52:56.298033300Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 18:52:56.300078 containerd[1729]: time="2025-06-20T18:52:56.298047900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.300078 containerd[1729]: time="2025-06-20T18:52:56.298084500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 18:52:56.300078 containerd[1729]: time="2025-06-20T18:52:56.298100500Z" level=info msg="NRI interface is disabled by configuration." Jun 20 18:52:56.300078 containerd[1729]: time="2025-06-20T18:52:56.298114700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.298534200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.298598800Z" level=info msg="Connect containerd service" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.298649600Z" level=info msg="using legacy CRI server" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.298662200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.298837300Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.299804200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.299902000Z" level=info msg="Start subscribing containerd event" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.299956000Z" level=info msg="Start recovering state" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.300029800Z" level=info msg="Start event monitor" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.300076800Z" level=info msg="Start snapshots syncer" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.300090900Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:52:56.300350 containerd[1729]: time="2025-06-20T18:52:56.300101200Z" level=info msg="Start streaming server" Jun 20 18:52:56.300902 containerd[1729]: time="2025-06-20T18:52:56.300698800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:52:56.300902 containerd[1729]: time="2025-06-20T18:52:56.300819300Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:52:56.301067 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:52:56.310076 containerd[1729]: time="2025-06-20T18:52:56.308618600Z" level=info msg="containerd successfully booted in 0.066066s" Jun 20 18:52:56.600238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:52:56.604441 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:52:56.607899 systemd[1]: Startup finished in 792ms (firmware) + 22.381s (loader) + 986ms (kernel) + 11.277s (initrd) + 10.510s (userspace) = 45.948s. Jun 20 18:52:56.620454 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:52:56.887480 login[1842]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 18:52:56.890895 login[1844]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 20 18:52:56.897550 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:52:56.905387 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:52:56.915951 systemd-logind[1704]: New session 2 of user core. Jun 20 18:52:56.923258 systemd-logind[1704]: New session 1 of user core. 
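Among the containerd startup messages above is the warning "failed to load cni during init ... no network config found in /etc/cni/net.d". At this point in the boot that is expected: no CNI plugin has dropped a network config yet, and a cluster network addon normally populates that directory only after the kubelet joins a cluster. A small sketch of the same check, assuming the default NetworkPluginConfDir from the logged CRI config:

```python
# Mirror of the check behind containerd's "no network config found in
# /etc/cni/net.d" message above. The directory is the NetworkPluginConfDir
# from the logged CRI config; a network addon is expected to populate it later.
import glob
import os

CNI_CONF_DIR = "/etc/cni/net.d"

def cni_configs(conf_dir=CNI_CONF_DIR):
    if not os.path.isdir(conf_dir):
        return []
    patterns = ("*.conf", "*.conflist", "*.json")
    return sorted(p for pat in patterns
                  for p in glob.glob(os.path.join(conf_dir, pat)))

if __name__ == "__main__":
    confs = cni_configs()
    if confs:
        print("CNI configs:", confs)
    else:
        print(f"no network config found in {CNI_CONF_DIR} (CNI not initialized yet)")
```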
Jun 20 18:52:56.931200 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:52:56.939466 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:52:56.945242 (systemd)[1873]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:52:56.948378 systemd-logind[1704]: New session c1 of user core. Jun 20 18:52:57.235339 systemd[1873]: Queued start job for default target default.target. Jun 20 18:52:57.241398 systemd[1873]: Created slice app.slice - User Application Slice. Jun 20 18:52:57.241936 systemd[1873]: Reached target paths.target - Paths. Jun 20 18:52:57.242042 systemd[1873]: Reached target timers.target - Timers. Jun 20 18:52:57.246916 systemd[1873]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:52:57.268762 systemd[1873]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:52:57.269152 systemd[1873]: Reached target sockets.target - Sockets. Jun 20 18:52:57.269269 systemd[1873]: Reached target basic.target - Basic System. Jun 20 18:52:57.269324 systemd[1873]: Reached target default.target - Main User Target. Jun 20 18:52:57.269365 systemd[1873]: Startup finished in 309ms. Jun 20 18:52:57.269510 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:52:57.278234 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:52:57.279486 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 18:52:57.426879 kubelet[1862]: E0620 18:52:57.426791 1862 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:52:57.430561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:52:57.431744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:52:57.432464 systemd[1]: kubelet.service: Consumed 1.061s CPU time, 271M memory peak. 
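The kubelet exit above (and the identical exits on every scheduled restart later in this log) is caused by /var/lib/kubelet/config.yaml not existing yet; on a kubeadm-provisioned node that file is written by "kubeadm init"/"kubeadm join", so the failures before that step are expected churn rather than a broken install. Purely as an illustration of what eventually lands at that path, here is a sketch that writes a minimal KubeletConfiguration (emitted as JSON, which the kubelet's YAML parser accepts); the field values are assumptions, not taken from this machine:

```python
# Illustrative only: write a minimal KubeletConfiguration to the path the
# failing kubelet above is looking for. On a real kubeadm node this file is
# generated by "kubeadm init"/"kubeadm join"; the values here are assumptions.
import json
import pathlib

KUBELET_CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

minimal_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # systemd cgroup driver matches the SystemdCgroup:true runc option in the
    # containerd config logged earlier in this boot.
    "cgroupDriver": "systemd",
}

if __name__ == "__main__":
    KUBELET_CONFIG.parent.mkdir(parents=True, exist_ok=True)
    # JSON is a subset of YAML, so the kubelet can read this file as-is.
    KUBELET_CONFIG.write_text(json.dumps(minimal_config, indent=2) + "\n")
    print(f"wrote {KUBELET_CONFIG}")
```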
Jun 20 18:52:57.493680 waagent[1845]: 2025-06-20T18:52:57.493503Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.495149Z INFO Daemon Daemon OS: flatcar 4230.2.0 Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.496158Z INFO Daemon Daemon Python: 3.11.11 Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.496878Z INFO Daemon Daemon Run daemon Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.497793Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.0' Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.498695Z INFO Daemon Daemon Using waagent for provisioning Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.499817Z INFO Daemon Daemon Activate resource disk Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.500676Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.506569Z INFO Daemon Daemon Found device: None Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.507553Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.508034Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.508917Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:52:57.529491 waagent[1845]: 2025-06-20T18:52:57.509768Z INFO Daemon Daemon Running default provisioning handler Jun 20 18:52:57.533272 waagent[1845]: 2025-06-20T18:52:57.533164Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jun 20 18:52:57.545204 waagent[1845]: 2025-06-20T18:52:57.545113Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jun 20 18:52:57.550501 waagent[1845]: 2025-06-20T18:52:57.550408Z INFO Daemon Daemon cloud-init is enabled: False Jun 20 18:52:57.554818 waagent[1845]: 2025-06-20T18:52:57.551516Z INFO Daemon Daemon Copying ovf-env.xml Jun 20 18:52:57.633047 waagent[1845]: 2025-06-20T18:52:57.630481Z INFO Daemon Daemon Successfully mounted dvd Jun 20 18:52:57.658386 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jun 20 18:52:57.660891 waagent[1845]: 2025-06-20T18:52:57.660800Z INFO Daemon Daemon Detect protocol endpoint Jun 20 18:52:57.663756 waagent[1845]: 2025-06-20T18:52:57.663679Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jun 20 18:52:57.675974 waagent[1845]: 2025-06-20T18:52:57.664780Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jun 20 18:52:57.675974 waagent[1845]: 2025-06-20T18:52:57.665599Z INFO Daemon Daemon Test for route to 168.63.129.16 Jun 20 18:52:57.675974 waagent[1845]: 2025-06-20T18:52:57.666192Z INFO Daemon Daemon Route to 168.63.129.16 exists Jun 20 18:52:57.675974 waagent[1845]: 2025-06-20T18:52:57.666919Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jun 20 18:52:57.701031 waagent[1845]: 2025-06-20T18:52:57.700957Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jun 20 18:52:57.709090 waagent[1845]: 2025-06-20T18:52:57.702642Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jun 20 18:52:57.709090 waagent[1845]: 2025-06-20T18:52:57.703292Z INFO Daemon Daemon Server preferred version:2015-04-05 Jun 20 18:52:57.788077 waagent[1845]: 2025-06-20T18:52:57.787890Z INFO Daemon Daemon Initializing goal state during protocol detection Jun 20 18:52:57.791540 waagent[1845]: 2025-06-20T18:52:57.791462Z INFO Daemon Daemon Forcing an update of the goal state. Jun 20 18:52:57.796663 waagent[1845]: 2025-06-20T18:52:57.796607Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:52:57.811521 waagent[1845]: 2025-06-20T18:52:57.811459Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Jun 20 18:52:57.829368 waagent[1845]: 2025-06-20T18:52:57.813246Z INFO Daemon Jun 20 18:52:57.829368 waagent[1845]: 2025-06-20T18:52:57.814999Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 90769933-4284-453c-831c-2162495c6ef3 eTag: 1208122759739622825 source: Fabric] Jun 20 18:52:57.829368 waagent[1845]: 2025-06-20T18:52:57.816685Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jun 20 18:52:57.829368 waagent[1845]: 2025-06-20T18:52:57.817729Z INFO Daemon Jun 20 18:52:57.829368 waagent[1845]: 2025-06-20T18:52:57.818594Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:52:57.829368 waagent[1845]: 2025-06-20T18:52:57.823383Z INFO Daemon Daemon Downloading artifacts profile blob Jun 20 18:52:57.970643 waagent[1845]: 2025-06-20T18:52:57.970552Z INFO Daemon Downloaded certificate {'thumbprint': 'A6AC463979DC1D6112D3B1E53EF9172670017307', 'hasPrivateKey': True} Jun 20 18:52:57.977926 waagent[1845]: 2025-06-20T18:52:57.972668Z INFO Daemon Fetch goal state completed Jun 20 18:52:58.016746 waagent[1845]: 2025-06-20T18:52:58.016654Z INFO Daemon Daemon Starting provisioning Jun 20 18:52:58.023748 waagent[1845]: 2025-06-20T18:52:58.018021Z INFO Daemon Daemon Handle ovf-env.xml. Jun 20 18:52:58.023748 waagent[1845]: 2025-06-20T18:52:58.018900Z INFO Daemon Daemon Set hostname [ci-4230.2.0-a-bab85c4a2e] Jun 20 18:52:58.044642 waagent[1845]: 2025-06-20T18:52:58.044547Z INFO Daemon Daemon Publish hostname [ci-4230.2.0-a-bab85c4a2e] Jun 20 18:52:58.052554 waagent[1845]: 2025-06-20T18:52:58.046128Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jun 20 18:52:58.052554 waagent[1845]: 2025-06-20T18:52:58.046882Z INFO Daemon Daemon Primary interface is [eth0] Jun 20 18:52:58.056813 systemd-networkd[1619]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:52:58.056823 systemd-networkd[1619]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
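The protocol-detection entries above boil down to two probes: is there a route to the wireserver at 168.63.129.16, and does it answer an HTTP request for ?comp=versions (the same URL coreos-metadata fetched earlier in this log). A rough sketch of those two probes, using iproute2 and plain HTTP; it assumes it runs on the VM itself:

```python
# Rough reproduction of waagent's wireserver detection above: confirm a route
# to 168.63.129.16 exists and that the versions endpoint answers over HTTP.
import subprocess
import urllib.request

WIRESERVER = "168.63.129.16"

def route_exists(ip=WIRESERVER):
    out = subprocess.run(["ip", "route", "get", ip],
                         capture_output=True, text=True, check=False)
    return out.returncode == 0, out.stdout.strip()

def versions(ip=WIRESERVER, timeout=5):
    with urllib.request.urlopen(f"http://{ip}/?comp=versions", timeout=timeout) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    ok, route = route_exists()
    print("route:", route if ok else "none")
    if ok:
        print(versions()[:200])
```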
Jun 20 18:52:58.056875 systemd-networkd[1619]: eth0: DHCP lease lost Jun 20 18:52:58.058250 waagent[1845]: 2025-06-20T18:52:58.058167Z INFO Daemon Daemon Create user account if not exists Jun 20 18:52:58.067599 waagent[1845]: 2025-06-20T18:52:58.059785Z INFO Daemon Daemon User core already exists, skip useradd Jun 20 18:52:58.067599 waagent[1845]: 2025-06-20T18:52:58.060567Z INFO Daemon Daemon Configure sudoer Jun 20 18:52:58.067599 waagent[1845]: 2025-06-20T18:52:58.061691Z INFO Daemon Daemon Configure sshd Jun 20 18:52:58.067599 waagent[1845]: 2025-06-20T18:52:58.062866Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jun 20 18:52:58.067599 waagent[1845]: 2025-06-20T18:52:58.063107Z INFO Daemon Daemon Deploy ssh public key. Jun 20 18:52:58.120128 systemd-networkd[1619]: eth0: DHCPv4 address 10.200.8.21/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jun 20 18:53:07.624443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:53:07.630812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:07.748155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:53:07.759433 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:08.457378 kubelet[1934]: E0620 18:53:08.457322 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:08.461080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:08.461271 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:08.461701 systemd[1]: kubelet.service: Consumed 814ms CPU time, 108.5M memory peak. Jun 20 18:53:18.624850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 18:53:18.630377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:18.689964 chronyd[1712]: Selected source PHC0 Jun 20 18:53:18.738365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:53:18.743341 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:19.457891 kubelet[1949]: E0620 18:53:19.457821 1949 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:19.460456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:19.460662 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:19.461104 systemd[1]: kubelet.service: Consumed 168ms CPU time, 108.8M memory peak. 
Jun 20 18:53:28.158255 waagent[1845]: 2025-06-20T18:53:28.158185Z INFO Daemon Daemon Provisioning complete Jun 20 18:53:28.171959 waagent[1845]: 2025-06-20T18:53:28.171868Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jun 20 18:53:28.179124 waagent[1845]: 2025-06-20T18:53:28.173191Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jun 20 18:53:28.179124 waagent[1845]: 2025-06-20T18:53:28.173652Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jun 20 18:53:28.304738 waagent[1956]: 2025-06-20T18:53:28.304638Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jun 20 18:53:28.305190 waagent[1956]: 2025-06-20T18:53:28.304816Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.0 Jun 20 18:53:28.305190 waagent[1956]: 2025-06-20T18:53:28.304903Z INFO ExtHandler ExtHandler Python: 3.11.11 Jun 20 18:53:28.339865 waagent[1956]: 2025-06-20T18:53:28.339764Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jun 20 18:53:28.340130 waagent[1956]: 2025-06-20T18:53:28.340071Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:53:28.340238 waagent[1956]: 2025-06-20T18:53:28.340193Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:53:28.348069 waagent[1956]: 2025-06-20T18:53:28.347978Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jun 20 18:53:28.358603 waagent[1956]: 2025-06-20T18:53:28.358539Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Jun 20 18:53:28.359172 waagent[1956]: 2025-06-20T18:53:28.359117Z INFO ExtHandler Jun 20 18:53:28.359276 waagent[1956]: 2025-06-20T18:53:28.359223Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 005bb693-14e7-4563-a3af-a630e040bde2 eTag: 1208122759739622825 source: Fabric] Jun 20 18:53:28.359608 waagent[1956]: 2025-06-20T18:53:28.359555Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jun 20 18:53:28.360209 waagent[1956]: 2025-06-20T18:53:28.360152Z INFO ExtHandler Jun 20 18:53:28.360283 waagent[1956]: 2025-06-20T18:53:28.360240Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jun 20 18:53:28.364187 waagent[1956]: 2025-06-20T18:53:28.364139Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jun 20 18:53:28.426482 waagent[1956]: 2025-06-20T18:53:28.426323Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A6AC463979DC1D6112D3B1E53EF9172670017307', 'hasPrivateKey': True} Jun 20 18:53:28.427012 waagent[1956]: 2025-06-20T18:53:28.426951Z INFO ExtHandler Fetch goal state completed Jun 20 18:53:28.439974 waagent[1956]: 2025-06-20T18:53:28.439890Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1956 Jun 20 18:53:28.440194 waagent[1956]: 2025-06-20T18:53:28.440134Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jun 20 18:53:28.441901 waagent[1956]: 2025-06-20T18:53:28.441836Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.0', '', 'Flatcar Container Linux by Kinvolk'] Jun 20 18:53:28.442309 waagent[1956]: 2025-06-20T18:53:28.442257Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jun 20 18:53:28.475663 waagent[1956]: 2025-06-20T18:53:28.475609Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jun 20 18:53:28.475911 waagent[1956]: 2025-06-20T18:53:28.475859Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jun 20 18:53:28.482846 waagent[1956]: 2025-06-20T18:53:28.482790Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jun 20 18:53:28.490710 systemd[1]: Reload requested from client PID 1969 ('systemctl') (unit waagent.service)... Jun 20 18:53:28.490728 systemd[1]: Reloading... Jun 20 18:53:28.595089 zram_generator::config[2011]: No configuration found. Jun 20 18:53:28.721847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:53:28.839701 systemd[1]: Reloading finished in 348 ms. Jun 20 18:53:28.859798 waagent[1956]: 2025-06-20T18:53:28.857488Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jun 20 18:53:28.867250 systemd[1]: Reload requested from client PID 2065 ('systemctl') (unit waagent.service)... Jun 20 18:53:28.867268 systemd[1]: Reloading... Jun 20 18:53:28.952080 zram_generator::config[2100]: No configuration found. Jun 20 18:53:29.091156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:53:29.209926 systemd[1]: Reloading finished in 342 ms. Jun 20 18:53:29.228085 waagent[1956]: 2025-06-20T18:53:29.227119Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jun 20 18:53:29.228085 waagent[1956]: 2025-06-20T18:53:29.227337Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jun 20 18:53:29.516917 waagent[1956]: 2025-06-20T18:53:29.516818Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jun 20 18:53:29.517628 waagent[1956]: 2025-06-20T18:53:29.517556Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jun 20 18:53:29.518425 waagent[1956]: 2025-06-20T18:53:29.518372Z INFO ExtHandler ExtHandler Starting env monitor service. Jun 20 18:53:29.518833 waagent[1956]: 2025-06-20T18:53:29.518769Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jun 20 18:53:29.518962 waagent[1956]: 2025-06-20T18:53:29.518912Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:53:29.519115 waagent[1956]: 2025-06-20T18:53:29.519032Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jun 20 18:53:29.519457 waagent[1956]: 2025-06-20T18:53:29.519402Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jun 20 18:53:29.519714 waagent[1956]: 2025-06-20T18:53:29.519647Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jun 20 18:53:29.519792 waagent[1956]: 2025-06-20T18:53:29.519719Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:53:29.519968 waagent[1956]: 2025-06-20T18:53:29.519911Z INFO EnvHandler ExtHandler Configure routes Jun 20 18:53:29.520118 waagent[1956]: 2025-06-20T18:53:29.520024Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jun 20 18:53:29.520238 waagent[1956]: 2025-06-20T18:53:29.520189Z INFO EnvHandler ExtHandler Gateway:None Jun 20 18:53:29.520634 waagent[1956]: 2025-06-20T18:53:29.520560Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jun 20 18:53:29.520720 waagent[1956]: 2025-06-20T18:53:29.520651Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jun 20 18:53:29.520952 waagent[1956]: 2025-06-20T18:53:29.520902Z INFO EnvHandler ExtHandler Routes:None Jun 20 18:53:29.521047 waagent[1956]: 2025-06-20T18:53:29.521000Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jun 20 18:53:29.521421 waagent[1956]: 2025-06-20T18:53:29.521365Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jun 20 18:53:29.527103 waagent[1956]: 2025-06-20T18:53:29.525493Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jun 20 18:53:29.527103 waagent[1956]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jun 20 18:53:29.527103 waagent[1956]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jun 20 18:53:29.527103 waagent[1956]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jun 20 18:53:29.527103 waagent[1956]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:53:29.527103 waagent[1956]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:53:29.527103 waagent[1956]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jun 20 18:53:29.532076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
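The MonitorHandler routing table above is the raw /proc/net/route format, where destinations, gateways and masks are 8-digit hex in little-endian byte order; 0108C80A, for example, decodes to 10.200.8.1, the eth0 gateway reported earlier in this log. A small helper that decodes those entries:

```python
# Decode the little-endian hex addresses in the /proc/net/route dump above
# (e.g. 0108C80A -> 10.200.8.1, the eth0 gateway reported earlier in the log).
import socket
import struct

def hex_to_ip(hexaddr: str) -> str:
    # /proc/net/route stores IPv4 addresses as 8 hex digits in host
    # (little-endian) byte order.
    return socket.inet_ntoa(struct.pack("<L", int(hexaddr, 16)))

def routes(path="/proc/net/route"):
    with open(path) as f:
        next(f)  # skip the "Iface Destination Gateway ..." header
        for line in f:
            fields = line.split()
            iface, dest, gw, mask = fields[0], fields[1], fields[2], fields[7]
            yield iface, hex_to_ip(dest), hex_to_ip(gw), hex_to_ip(mask)

if __name__ == "__main__":
    assert hex_to_ip("0108C80A") == "10.200.8.1"
    for iface, dest, gw, mask in routes():
        print(f"{iface:8} {dest:15} via {gw:15} mask {mask}")
```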
Jun 20 18:53:29.533409 waagent[1956]: 2025-06-20T18:53:29.533344Z INFO ExtHandler ExtHandler Jun 20 18:53:29.533614 waagent[1956]: 2025-06-20T18:53:29.533571Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e6841237-524f-49fa-a9a7-32825db9340a correlation fa8e0e42-9abc-4c6f-887f-0a530a3288ac created: 2025-06-20T18:51:59.643005Z] Jun 20 18:53:29.534254 waagent[1956]: 2025-06-20T18:53:29.534192Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jun 20 18:53:29.536304 waagent[1956]: 2025-06-20T18:53:29.536255Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jun 20 18:53:29.544346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:29.581406 waagent[1956]: 2025-06-20T18:53:29.581306Z INFO MonitorHandler ExtHandler Network interfaces: Jun 20 18:53:29.581406 waagent[1956]: Executing ['ip', '-a', '-o', 'link']: Jun 20 18:53:29.581406 waagent[1956]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jun 20 18:53:29.581406 waagent[1956]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:46:a3:4d brd ff:ff:ff:ff:ff:ff Jun 20 18:53:29.581406 waagent[1956]: 3: enP6686s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:46:a3:4d brd ff:ff:ff:ff:ff:ff\ altname enP6686p0s2 Jun 20 18:53:29.581406 waagent[1956]: Executing ['ip', '-4', '-a', '-o', 'address']: Jun 20 18:53:29.581406 waagent[1956]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jun 20 18:53:29.581406 waagent[1956]: 2: eth0 inet 10.200.8.21/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jun 20 18:53:29.581406 waagent[1956]: Executing ['ip', '-6', '-a', '-o', 'address']: Jun 20 18:53:29.581406 waagent[1956]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jun 20 18:53:29.581406 waagent[1956]: 2: eth0 inet6 fe80::7eed:8dff:fe46:a34d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:53:29.581406 waagent[1956]: 3: enP6686s1 inet6 fe80::7eed:8dff:fe46:a34d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jun 20 18:53:29.589118 waagent[1956]: 2025-06-20T18:53:29.587713Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 63D07780-74EA-4C31-85DF-314D41D71A60;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jun 20 18:53:30.366879 waagent[1956]: 2025-06-20T18:53:30.366791Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jun 20 18:53:30.366879 waagent[1956]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:53:30.366879 waagent[1956]: pkts bytes target prot opt in out source destination Jun 20 18:53:30.366879 waagent[1956]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:53:30.366879 waagent[1956]: pkts bytes target prot opt in out source destination Jun 20 18:53:30.366879 waagent[1956]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:53:30.366879 waagent[1956]: pkts bytes target prot opt in out source destination Jun 20 18:53:30.366879 waagent[1956]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:53:30.366879 waagent[1956]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:53:30.366879 waagent[1956]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:53:30.372165 waagent[1956]: 2025-06-20T18:53:30.371833Z INFO EnvHandler ExtHandler Current Firewall rules: Jun 20 18:53:30.372165 waagent[1956]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:53:30.372165 waagent[1956]: pkts bytes target prot opt in out source destination Jun 20 18:53:30.372165 waagent[1956]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:53:30.372165 waagent[1956]: pkts bytes target prot opt in out source destination Jun 20 18:53:30.372165 waagent[1956]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jun 20 18:53:30.372165 waagent[1956]: pkts bytes target prot opt in out source destination Jun 20 18:53:30.372165 waagent[1956]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jun 20 18:53:30.372165 waagent[1956]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jun 20 18:53:30.372165 waagent[1956]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jun 20 18:53:30.372988 waagent[1956]: 2025-06-20T18:53:30.372774Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jun 20 18:53:30.402073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:53:30.406739 (kubelet)[2203]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:30.457241 kubelet[2203]: E0620 18:53:30.457151 2203 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:30.460019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:30.460266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:30.460716 systemd[1]: kubelet.service: Consumed 151ms CPU time, 110.4M memory peak. Jun 20 18:53:39.859675 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jun 20 18:53:40.624620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 18:53:40.630307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:40.742267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:53:40.756424 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:40.867215 update_engine[1709]: I20250620 18:53:40.867132 1709 update_attempter.cc:509] Updating boot flags... 
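The EnvHandler firewall dump above shows the three OUTPUT rules waagent installs to protect the wireserver: allow DNS over TCP (port 53) to 168.63.129.16, allow traffic to it from root (UID 0, which the agent runs as), and drop any other new connection attempts. For reference, a sketch of equivalent iptables invocations driven from Python; waagent maintains these rules itself, so this is illustrative rather than something to apply on a live node, and it requires root:

```python
# Illustrative equivalents of the three waagent firewall rules dumped above.
# waagent maintains these itself; this sketch only shows the shape of the rules.
# Requires root and the iptables CLI.
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # Allow DNS over TCP to the wireserver.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    # Allow traffic from root (UID 0), which is what the agent runs as.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # Drop any other new connection attempts to the wireserver.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

if __name__ == "__main__":
    for rule in RULES:
        subprocess.run(["iptables", "-w"] + rule, check=True)
    subprocess.run(["iptables", "-L", "OUTPUT", "-nv"], check=True)
```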
Jun 20 18:53:41.401189 kubelet[2218]: E0620 18:53:41.401130 2218 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:41.403799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:41.403998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:41.404428 systemd[1]: kubelet.service: Consumed 151ms CPU time, 108.1M memory peak. Jun 20 18:53:41.481116 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2240) Jun 20 18:53:41.651867 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2230) Jun 20 18:53:46.668529 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:53:46.674362 systemd[1]: Started sshd@0-10.200.8.21:22-10.200.16.10:53178.service - OpenSSH per-connection server daemon (10.200.16.10:53178). Jun 20 18:53:47.410980 sshd[2340]: Accepted publickey for core from 10.200.16.10 port 53178 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:47.412612 sshd-session[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:47.416921 systemd-logind[1704]: New session 3 of user core. Jun 20 18:53:47.426246 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:53:47.970429 systemd[1]: Started sshd@1-10.200.8.21:22-10.200.16.10:53190.service - OpenSSH per-connection server daemon (10.200.16.10:53190). Jun 20 18:53:48.607745 sshd[2345]: Accepted publickey for core from 10.200.16.10 port 53190 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:48.609465 sshd-session[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:48.613880 systemd-logind[1704]: New session 4 of user core. Jun 20 18:53:48.625311 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:53:49.089383 sshd[2347]: Connection closed by 10.200.16.10 port 53190 Jun 20 18:53:49.090257 sshd-session[2345]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:49.093306 systemd[1]: sshd@1-10.200.8.21:22-10.200.16.10:53190.service: Deactivated successfully. Jun 20 18:53:49.095496 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:53:49.097204 systemd-logind[1704]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:53:49.098121 systemd-logind[1704]: Removed session 4. Jun 20 18:53:49.209381 systemd[1]: Started sshd@2-10.200.8.21:22-10.200.16.10:36776.service - OpenSSH per-connection server daemon (10.200.16.10:36776). Jun 20 18:53:49.836740 sshd[2353]: Accepted publickey for core from 10.200.16.10 port 36776 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:49.838467 sshd-session[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:49.844630 systemd-logind[1704]: New session 5 of user core. Jun 20 18:53:49.853223 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jun 20 18:53:50.278785 sshd[2355]: Connection closed by 10.200.16.10 port 36776 Jun 20 18:53:50.281240 sshd-session[2353]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:50.284826 systemd[1]: sshd@2-10.200.8.21:22-10.200.16.10:36776.service: Deactivated successfully. Jun 20 18:53:50.286773 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:53:50.287544 systemd-logind[1704]: Session 5 logged out. Waiting for processes to exit. Jun 20 18:53:50.288476 systemd-logind[1704]: Removed session 5. Jun 20 18:53:50.394371 systemd[1]: Started sshd@3-10.200.8.21:22-10.200.16.10:36782.service - OpenSSH per-connection server daemon (10.200.16.10:36782). Jun 20 18:53:51.020265 sshd[2361]: Accepted publickey for core from 10.200.16.10 port 36782 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:51.021793 sshd-session[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:51.026412 systemd-logind[1704]: New session 6 of user core. Jun 20 18:53:51.033215 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 18:53:51.463675 sshd[2363]: Connection closed by 10.200.16.10 port 36782 Jun 20 18:53:51.464531 sshd-session[2361]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:51.467672 systemd[1]: sshd@3-10.200.8.21:22-10.200.16.10:36782.service: Deactivated successfully. Jun 20 18:53:51.469794 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:53:51.470989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 20 18:53:51.472729 systemd-logind[1704]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:53:51.478302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:51.479852 systemd-logind[1704]: Removed session 6. Jun 20 18:53:51.588619 systemd[1]: Started sshd@4-10.200.8.21:22-10.200.16.10:36794.service - OpenSSH per-connection server daemon (10.200.16.10:36794). Jun 20 18:53:51.604249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:53:51.613492 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:51.651101 kubelet[2378]: E0620 18:53:51.650444 2378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:51.652991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:51.653211 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:51.653652 systemd[1]: kubelet.service: Consumed 152ms CPU time, 112.3M memory peak. Jun 20 18:53:52.372929 sshd[2374]: Accepted publickey for core from 10.200.16.10 port 36794 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:52.374469 sshd-session[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:52.379037 systemd-logind[1704]: New session 7 of user core. Jun 20 18:53:52.390223 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jun 20 18:53:52.838480 sudo[2387]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 18:53:52.838855 sudo[2387]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:53:52.863633 sudo[2387]: pam_unix(sudo:session): session closed for user root Jun 20 18:53:52.965952 sshd[2386]: Connection closed by 10.200.16.10 port 36794 Jun 20 18:53:52.967204 sshd-session[2374]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:52.971691 systemd[1]: sshd@4-10.200.8.21:22-10.200.16.10:36794.service: Deactivated successfully. Jun 20 18:53:52.973581 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:53:52.974418 systemd-logind[1704]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:53:52.975352 systemd-logind[1704]: Removed session 7. Jun 20 18:53:53.081365 systemd[1]: Started sshd@5-10.200.8.21:22-10.200.16.10:36798.service - OpenSSH per-connection server daemon (10.200.16.10:36798). Jun 20 18:53:53.707483 sshd[2393]: Accepted publickey for core from 10.200.16.10 port 36798 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:53.710106 sshd-session[2393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:53.714387 systemd-logind[1704]: New session 8 of user core. Jun 20 18:53:53.724244 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 18:53:54.054168 sudo[2397]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 18:53:54.054633 sudo[2397]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:53:54.058474 sudo[2397]: pam_unix(sudo:session): session closed for user root Jun 20 18:53:54.063896 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 18:53:54.064319 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:53:54.085917 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:53:54.113883 augenrules[2419]: No rules Jun 20 18:53:54.115384 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:53:54.115665 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:53:54.116786 sudo[2396]: pam_unix(sudo:session): session closed for user root Jun 20 18:53:54.216837 sshd[2395]: Connection closed by 10.200.16.10 port 36798 Jun 20 18:53:54.217598 sshd-session[2393]: pam_unix(sshd:session): session closed for user core Jun 20 18:53:54.220935 systemd[1]: sshd@5-10.200.8.21:22-10.200.16.10:36798.service: Deactivated successfully. Jun 20 18:53:54.222978 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 18:53:54.224612 systemd-logind[1704]: Session 8 logged out. Waiting for processes to exit. Jun 20 18:53:54.225499 systemd-logind[1704]: Removed session 8. Jun 20 18:53:54.332624 systemd[1]: Started sshd@6-10.200.8.21:22-10.200.16.10:36814.service - OpenSSH per-connection server daemon (10.200.16.10:36814). Jun 20 18:53:54.956760 sshd[2428]: Accepted publickey for core from 10.200.16.10 port 36814 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:53:54.958209 sshd-session[2428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:53:54.962620 systemd-logind[1704]: New session 9 of user core. Jun 20 18:53:54.981244 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jun 20 18:53:55.301275 sudo[2431]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 18:53:55.301640 sudo[2431]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:53:57.052366 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 18:53:57.054670 (dockerd)[2448]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 18:53:59.178347 dockerd[2448]: time="2025-06-20T18:53:59.178276494Z" level=info msg="Starting up" Jun 20 18:53:59.605351 dockerd[2448]: time="2025-06-20T18:53:59.605223944Z" level=info msg="Loading containers: start." Jun 20 18:53:59.794253 kernel: Initializing XFRM netlink socket Jun 20 18:53:59.926210 systemd-networkd[1619]: docker0: Link UP Jun 20 18:53:59.970384 dockerd[2448]: time="2025-06-20T18:53:59.970336188Z" level=info msg="Loading containers: done." Jun 20 18:53:59.988995 dockerd[2448]: time="2025-06-20T18:53:59.988953261Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 18:53:59.989199 dockerd[2448]: time="2025-06-20T18:53:59.989073162Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 18:53:59.989252 dockerd[2448]: time="2025-06-20T18:53:59.989211064Z" level=info msg="Daemon has completed initialization" Jun 20 18:54:00.058129 dockerd[2448]: time="2025-06-20T18:54:00.058047672Z" level=info msg="API listen on /run/docker.sock" Jun 20 18:54:00.058555 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 18:54:00.814688 containerd[1729]: time="2025-06-20T18:54:00.814640647Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 20 18:54:01.545266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376114797.mount: Deactivated successfully. Jun 20 18:54:01.876105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 20 18:54:01.886013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:02.056198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:02.070865 (kubelet)[2658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:54:02.671634 kubelet[2658]: E0620 18:54:02.671586 2658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:54:02.674902 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:54:02.675119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:54:02.675556 systemd[1]: kubelet.service: Consumed 183ms CPU time, 110M memory peak. 
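The kubelet failure above keeps recurring in the same shape: systemd schedules a restart (the counter has reached 6 at this point), the new kubelet process exits with `open /var/lib/kubelet/config.yaml: no such file or directory`, and the unit fails again. As a minimal illustrative sketch (not part of the log, nor of any tool shown in it), the script below tallies those restart attempts and failure messages from a journal text dump piped on stdin; the two regexes mirror the entries above, everything else about the script is assumed.

```python
#!/usr/bin/env python3
"""Tally kubelet.service restart attempts and their failure reasons from a
journald text dump, e.g.:  journalctl -u kubelet.service --no-pager | python3 tally.py
Illustrative sketch only: the patterns mirror the log lines above."""
import re
import sys

restart_re = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
# err="..." values contain \" escapes, so allow backslash-escaped characters.
fail_re = re.compile(r'"command failed" err="((?:[^"\\]|\\.)*)"')

counters = []   # restart counter values seen, in order
failures = {}   # failure message -> number of occurrences

for line in sys.stdin:
    m = restart_re.search(line)
    if m:
        counters.append(int(m.group(1)))
    m = fail_re.search(line)
    if m:
        msg = m.group(1)
        failures[msg] = failures.get(msg, 0) + 1

print(f"restart counter values seen: {counters}")
for msg, n in failures.items():
    print(f"{n}x: {msg[:120]}")
```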
Jun 20 18:54:03.882678 containerd[1729]: time="2025-06-20T18:54:03.882623655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:03.886569 containerd[1729]: time="2025-06-20T18:54:03.886503212Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079107" Jun 20 18:54:03.890322 containerd[1729]: time="2025-06-20T18:54:03.890264867Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:03.894229 containerd[1729]: time="2025-06-20T18:54:03.894196025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:03.895921 containerd[1729]: time="2025-06-20T18:54:03.895254640Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 3.080575293s" Jun 20 18:54:03.895921 containerd[1729]: time="2025-06-20T18:54:03.895298141Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jun 20 18:54:03.896344 containerd[1729]: time="2025-06-20T18:54:03.896243555Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 20 18:54:05.495026 containerd[1729]: time="2025-06-20T18:54:05.494964356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:05.497944 containerd[1729]: time="2025-06-20T18:54:05.497887799Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018954" Jun 20 18:54:05.501631 containerd[1729]: time="2025-06-20T18:54:05.501563753Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:05.506791 containerd[1729]: time="2025-06-20T18:54:05.506728429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:05.507937 containerd[1729]: time="2025-06-20T18:54:05.507762944Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.611484589s" Jun 20 18:54:05.507937 containerd[1729]: time="2025-06-20T18:54:05.507803244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jun 20 18:54:05.508890 
containerd[1729]: time="2025-06-20T18:54:05.508629756Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jun 20 18:54:06.853807 containerd[1729]: time="2025-06-20T18:54:06.853743844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:06.858372 containerd[1729]: time="2025-06-20T18:54:06.858281415Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155063" Jun 20 18:54:06.861524 containerd[1729]: time="2025-06-20T18:54:06.861461665Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:06.867334 containerd[1729]: time="2025-06-20T18:54:06.867199355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:06.871216 containerd[1729]: time="2025-06-20T18:54:06.871174017Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.36251256s" Jun 20 18:54:06.871216 containerd[1729]: time="2025-06-20T18:54:06.871208718Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jun 20 18:54:06.872059 containerd[1729]: time="2025-06-20T18:54:06.872024631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jun 20 18:54:08.100638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2273606099.mount: Deactivated successfully. 
Jun 20 18:54:08.648290 containerd[1729]: time="2025-06-20T18:54:08.648233093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:08.650269 containerd[1729]: time="2025-06-20T18:54:08.650223225Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892754" Jun 20 18:54:08.653723 containerd[1729]: time="2025-06-20T18:54:08.653669279Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:08.658760 containerd[1729]: time="2025-06-20T18:54:08.658470554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:08.659475 containerd[1729]: time="2025-06-20T18:54:08.659443269Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.787370038s" Jun 20 18:54:08.659699 containerd[1729]: time="2025-06-20T18:54:08.659579371Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jun 20 18:54:08.660294 containerd[1729]: time="2025-06-20T18:54:08.660238482Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jun 20 18:54:09.172829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895039598.mount: Deactivated successfully. 
Jun 20 18:54:10.490830 containerd[1729]: time="2025-06-20T18:54:10.490718096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:10.495653 containerd[1729]: time="2025-06-20T18:54:10.495600172Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Jun 20 18:54:10.498049 containerd[1729]: time="2025-06-20T18:54:10.497982510Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:10.504245 containerd[1729]: time="2025-06-20T18:54:10.504155207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:10.505808 containerd[1729]: time="2025-06-20T18:54:10.505380026Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.845107444s" Jun 20 18:54:10.505808 containerd[1729]: time="2025-06-20T18:54:10.505421626Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jun 20 18:54:10.505998 containerd[1729]: time="2025-06-20T18:54:10.505937735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 18:54:10.981284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763200558.mount: Deactivated successfully. 
Jun 20 18:54:11.004420 containerd[1729]: time="2025-06-20T18:54:11.004357553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:11.007388 containerd[1729]: time="2025-06-20T18:54:11.007318900Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 20 18:54:11.011439 containerd[1729]: time="2025-06-20T18:54:11.011378563Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:11.015942 containerd[1729]: time="2025-06-20T18:54:11.015905334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:11.016787 containerd[1729]: time="2025-06-20T18:54:11.016624546Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 510.65321ms" Jun 20 18:54:11.016787 containerd[1729]: time="2025-06-20T18:54:11.016666346Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 18:54:11.017587 containerd[1729]: time="2025-06-20T18:54:11.017516259Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jun 20 18:54:11.600191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070941335.mount: Deactivated successfully. Jun 20 18:54:12.875170 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jun 20 18:54:12.884376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:13.051693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:13.054803 (kubelet)[2834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:54:13.649210 kubelet[2834]: E0620 18:54:13.649119 2834 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:54:13.652816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:54:13.653855 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:54:13.654481 systemd[1]: kubelet.service: Consumed 711ms CPU time, 110.3M memory peak. 
Jun 20 18:54:14.169379 containerd[1729]: time="2025-06-20T18:54:14.169320201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:14.171757 containerd[1729]: time="2025-06-20T18:54:14.171685838Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247183" Jun 20 18:54:14.177361 containerd[1729]: time="2025-06-20T18:54:14.177295726Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:14.194145 containerd[1729]: time="2025-06-20T18:54:14.194020888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:14.195808 containerd[1729]: time="2025-06-20T18:54:14.195609513Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.178032753s" Jun 20 18:54:14.195808 containerd[1729]: time="2025-06-20T18:54:14.195663614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jun 20 18:54:17.214728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:17.214981 systemd[1]: kubelet.service: Consumed 711ms CPU time, 110.3M memory peak. Jun 20 18:54:17.223379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:17.260442 systemd[1]: Reload requested from client PID 2873 ('systemctl') (unit session-9.scope)... Jun 20 18:54:17.260465 systemd[1]: Reloading... Jun 20 18:54:17.423087 zram_generator::config[2920]: No configuration found. Jun 20 18:54:17.552139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:54:17.671344 systemd[1]: Reloading finished in 410 ms. Jun 20 18:54:17.726178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:17.733149 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:17.734563 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:54:17.734811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:17.734869 systemd[1]: kubelet.service: Consumed 129ms CPU time, 98.3M memory peak. Jun 20 18:54:17.738421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:18.023623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:18.029354 (kubelet)[2992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:54:18.082437 kubelet[2992]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
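Each containerd "Pulled image" entry above reports both a byte count and the wall-clock duration of the pull, for example 58938593 bytes for etcd:3.5.21-0 in roughly 3.18 s, which works out to about 18.5 MB/s if the reported size is taken as the amount transferred. The sketch below is illustrative only: the regex mirrors the quoted log format, while the script itself and the assumption that size equals bytes transferred are mine.

```python
#!/usr/bin/env python3
"""Extract the size and duration fields from containerd 'Pulled image' log
lines and print the implied transfer rate. Illustrative sketch; the regex
mirrors the entries above, nothing here is part of containerd itself."""
import re
import sys

# e.g.: Pulled image \"registry.k8s.io/etcd:3.5.21-0\" ... size \"58938593\" in 3.178032753s
pull_re = re.compile(r'Pulled image \\"([^"\\]+)\\".*size \\"(\d+)\\" in ([\d.]+)(ms|s)\b')

for line in sys.stdin:
    m = pull_re.search(line)
    if not m:
        continue
    image = m.group(1)
    size = int(m.group(2))
    value, unit = float(m.group(3)), m.group(4)
    seconds = value / 1000.0 if unit == "ms" else value
    rate = size / seconds / 1_000_000  # decimal megabytes per second
    print(f"{image}: {size} bytes in {seconds:.3f}s ≈ {rate:.1f} MB/s")
```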
Jun 20 18:54:18.082437 kubelet[2992]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:54:18.082437 kubelet[2992]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:54:18.082437 kubelet[2992]: I0620 18:54:18.082194 2992 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:54:18.692105 kubelet[2992]: I0620 18:54:18.691415 2992 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 18:54:18.692105 kubelet[2992]: I0620 18:54:18.691450 2992 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:54:18.692105 kubelet[2992]: I0620 18:54:18.691756 2992 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 18:54:18.760488 kubelet[2992]: I0620 18:54:18.759887 2992 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:54:18.762078 kubelet[2992]: E0620 18:54:18.762028 2992 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 18:54:18.785430 kubelet[2992]: E0620 18:54:18.785382 2992 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:54:18.785591 kubelet[2992]: I0620 18:54:18.785582 2992 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:54:18.789857 kubelet[2992]: I0620 18:54:18.789833 2992 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:54:18.790227 kubelet[2992]: I0620 18:54:18.790195 2992 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:54:18.790419 kubelet[2992]: I0620 18:54:18.790223 2992 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.0-a-bab85c4a2e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:54:18.790576 kubelet[2992]: I0620 18:54:18.790429 2992 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:54:18.790576 kubelet[2992]: I0620 18:54:18.790443 2992 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 18:54:18.790654 kubelet[2992]: I0620 18:54:18.790600 2992 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:54:18.792728 kubelet[2992]: I0620 18:54:18.792705 2992 kubelet.go:480] "Attempting to sync node with API server" Jun 20 18:54:18.792728 kubelet[2992]: I0620 18:54:18.792730 2992 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:54:18.792870 kubelet[2992]: I0620 18:54:18.792757 2992 kubelet.go:386] "Adding apiserver pod source" Jun 20 18:54:18.794766 kubelet[2992]: I0620 18:54:18.794747 2992 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:54:18.805672 kubelet[2992]: E0620 18:54:18.805002 2992 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-bab85c4a2e&limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 18:54:18.805672 kubelet[2992]: E0620 18:54:18.805568 2992 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jun 20 18:54:18.806011 kubelet[2992]: I0620 18:54:18.805989 2992 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:54:18.806669 kubelet[2992]: I0620 18:54:18.806641 2992 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 18:54:18.807595 kubelet[2992]: W0620 18:54:18.807290 2992 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 18:54:18.810426 kubelet[2992]: I0620 18:54:18.810402 2992 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:54:18.810683 kubelet[2992]: I0620 18:54:18.810466 2992 server.go:1289] "Started kubelet" Jun 20 18:54:18.812903 kubelet[2992]: I0620 18:54:18.812877 2992 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:54:18.816435 kubelet[2992]: E0620 18:54:18.814453 2992 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.0-a-bab85c4a2e.184ad50d6c051d12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.0-a-bab85c4a2e,UID:ci-4230.2.0-a-bab85c4a2e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.0-a-bab85c4a2e,},FirstTimestamp:2025-06-20 18:54:18.810424594 +0000 UTC m=+0.776261730,LastTimestamp:2025-06-20 18:54:18.810424594 +0000 UTC m=+0.776261730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.0-a-bab85c4a2e,}" Jun 20 18:54:18.816579 kubelet[2992]: I0620 18:54:18.816455 2992 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:54:18.818419 kubelet[2992]: I0620 18:54:18.817646 2992 server.go:317] "Adding debug handlers to kubelet server" Jun 20 18:54:18.821749 kubelet[2992]: I0620 18:54:18.821684 2992 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:54:18.822125 kubelet[2992]: I0620 18:54:18.821972 2992 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:54:18.822821 kubelet[2992]: I0620 18:54:18.822360 2992 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:54:18.824478 kubelet[2992]: I0620 18:54:18.824461 2992 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:54:18.824855 kubelet[2992]: E0620 18:54:18.824834 2992 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.0-a-bab85c4a2e\" not found" Jun 20 18:54:18.826219 kubelet[2992]: E0620 18:54:18.826175 2992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-bab85c4a2e?timeout=10s\": dial tcp 10.200.8.21:6443: connect: connection refused" interval="200ms" Jun 20 18:54:18.826478 kubelet[2992]: I0620 18:54:18.826453 2992 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:54:18.826792 
kubelet[2992]: I0620 18:54:18.826541 2992 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:54:18.828294 kubelet[2992]: I0620 18:54:18.828280 2992 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:54:18.828493 kubelet[2992]: I0620 18:54:18.828481 2992 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:54:18.828843 kubelet[2992]: I0620 18:54:18.828821 2992 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:54:18.842019 kubelet[2992]: E0620 18:54:18.841974 2992 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:54:18.842784 kubelet[2992]: E0620 18:54:18.842754 2992 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:54:18.858121 kubelet[2992]: I0620 18:54:18.858046 2992 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 18:54:18.859523 kubelet[2992]: I0620 18:54:18.859491 2992 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 18:54:18.859523 kubelet[2992]: I0620 18:54:18.859522 2992 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:54:18.859698 kubelet[2992]: I0620 18:54:18.859547 2992 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 18:54:18.859698 kubelet[2992]: I0620 18:54:18.859555 2992 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:54:18.859698 kubelet[2992]: E0620 18:54:18.859601 2992 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:54:18.861969 kubelet[2992]: E0620 18:54:18.861798 2992 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:54:18.865312 kubelet[2992]: I0620 18:54:18.865214 2992 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:54:18.865312 kubelet[2992]: I0620 18:54:18.865233 2992 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:54:18.865312 kubelet[2992]: I0620 18:54:18.865255 2992 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:54:18.871484 kubelet[2992]: I0620 18:54:18.871456 2992 policy_none.go:49] "None policy: Start" Jun 20 18:54:18.871484 kubelet[2992]: I0620 18:54:18.871488 2992 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:54:18.871681 kubelet[2992]: I0620 18:54:18.871504 2992 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:54:18.879476 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 18:54:18.888736 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
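The container_manager_linux entry a few lines above ("Creating Container Manager object based on Node Config") embeds the kubelet's effective node configuration, including the hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs/imagefs inodesFree < 5%, imagefs.available < 15%), as a JSON object after `nodeConfig=`. Below is a minimal sketch of recovering that object from a journal dump for inspection; only the `nodeConfig=` marker and the field names come from the log line, the script itself is assumed.

```python
#!/usr/bin/env python3
"""Recover the JSON object kubelet logs after 'nodeConfig=' and print the
hard-eviction thresholds. Illustrative sketch; only the marker and field
names are taken from the log line above."""
import json
import sys

decoder = json.JSONDecoder()

for line in sys.stdin:
    idx = line.find("nodeConfig={")
    if idx == -1:
        continue
    # raw_decode parses one JSON value starting at the given offset and
    # ignores whatever follows it on the line.
    cfg, _ = decoder.raw_decode(line, idx + len("nodeConfig="))
    for t in cfg.get("HardEvictionThresholds", []):
        print(t["Signal"], t["Operator"], t["Value"])
    break
```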
Jun 20 18:54:18.899047 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 18:54:18.901093 kubelet[2992]: E0620 18:54:18.900872 2992 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:54:18.901190 kubelet[2992]: I0620 18:54:18.901125 2992 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:54:18.901190 kubelet[2992]: I0620 18:54:18.901141 2992 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:54:18.901734 kubelet[2992]: I0620 18:54:18.901526 2992 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:54:18.903262 kubelet[2992]: E0620 18:54:18.903243 2992 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 18:54:18.903363 kubelet[2992]: E0620 18:54:18.903337 2992 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.0-a-bab85c4a2e\" not found" Jun 20 18:54:18.973764 systemd[1]: Created slice kubepods-burstable-podfa8e5f3f586c435c965dd68b971710be.slice - libcontainer container kubepods-burstable-podfa8e5f3f586c435c965dd68b971710be.slice. Jun 20 18:54:18.986099 kubelet[2992]: E0620 18:54:18.985673 2992 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-bab85c4a2e\" not found" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:18.991539 systemd[1]: Created slice kubepods-burstable-poda6c2a6c593c831ecf6fe4d4dadc5321a.slice - libcontainer container kubepods-burstable-poda6c2a6c593c831ecf6fe4d4dadc5321a.slice. Jun 20 18:54:18.993904 kubelet[2992]: E0620 18:54:18.993702 2992 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-bab85c4a2e\" not found" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:18.995617 systemd[1]: Created slice kubepods-burstable-pod0884253978f31bc05bc29be4b6f48392.slice - libcontainer container kubepods-burstable-pod0884253978f31bc05bc29be4b6f48392.slice. 
Jun 20 18:54:18.997592 kubelet[2992]: E0620 18:54:18.997558 2992 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-bab85c4a2e\" not found" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.003112 kubelet[2992]: I0620 18:54:19.003087 2992 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.003459 kubelet[2992]: E0620 18:54:19.003428 2992 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.21:6443/api/v1/nodes\": dial tcp 10.200.8.21:6443: connect: connection refused" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.027200 kubelet[2992]: E0620 18:54:19.027144 2992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-bab85c4a2e?timeout=10s\": dial tcp 10.200.8.21:6443: connect: connection refused" interval="400ms" Jun 20 18:54:19.130028 kubelet[2992]: I0620 18:54:19.129986 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6c2a6c593c831ecf6fe4d4dadc5321a-ca-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" (UID: \"a6c2a6c593c831ecf6fe4d4dadc5321a\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.130028 kubelet[2992]: I0620 18:54:19.130033 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a6c2a6c593c831ecf6fe4d4dadc5321a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" (UID: \"a6c2a6c593c831ecf6fe4d4dadc5321a\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.130550 kubelet[2992]: I0620 18:54:19.130084 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa8e5f3f586c435c965dd68b971710be-ca-certs\") pod \"kube-apiserver-ci-4230.2.0-a-bab85c4a2e\" (UID: \"fa8e5f3f586c435c965dd68b971710be\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.130550 kubelet[2992]: I0620 18:54:19.130106 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa8e5f3f586c435c965dd68b971710be-k8s-certs\") pod \"kube-apiserver-ci-4230.2.0-a-bab85c4a2e\" (UID: \"fa8e5f3f586c435c965dd68b971710be\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.130550 kubelet[2992]: I0620 18:54:19.130130 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa8e5f3f586c435c965dd68b971710be-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.0-a-bab85c4a2e\" (UID: \"fa8e5f3f586c435c965dd68b971710be\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.130550 kubelet[2992]: I0620 18:54:19.130151 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6c2a6c593c831ecf6fe4d4dadc5321a-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" (UID: \"a6c2a6c593c831ecf6fe4d4dadc5321a\") " 
pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.130550 kubelet[2992]: I0620 18:54:19.130176 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6c2a6c593c831ecf6fe4d4dadc5321a-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" (UID: \"a6c2a6c593c831ecf6fe4d4dadc5321a\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.130672 kubelet[2992]: I0620 18:54:19.130199 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6c2a6c593c831ecf6fe4d4dadc5321a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" (UID: \"a6c2a6c593c831ecf6fe4d4dadc5321a\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.130672 kubelet[2992]: I0620 18:54:19.130226 2992 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0884253978f31bc05bc29be4b6f48392-kubeconfig\") pod \"kube-scheduler-ci-4230.2.0-a-bab85c4a2e\" (UID: \"0884253978f31bc05bc29be4b6f48392\") " pod="kube-system/kube-scheduler-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.206105 kubelet[2992]: I0620 18:54:19.206070 2992 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.206551 kubelet[2992]: E0620 18:54:19.206504 2992 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.21:6443/api/v1/nodes\": dial tcp 10.200.8.21:6443: connect: connection refused" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.287507 containerd[1729]: time="2025-06-20T18:54:19.287041171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.0-a-bab85c4a2e,Uid:fa8e5f3f586c435c965dd68b971710be,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:19.295338 containerd[1729]: time="2025-06-20T18:54:19.295296807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.0-a-bab85c4a2e,Uid:a6c2a6c593c831ecf6fe4d4dadc5321a,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:19.299353 containerd[1729]: time="2025-06-20T18:54:19.299022269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.0-a-bab85c4a2e,Uid:0884253978f31bc05bc29be4b6f48392,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:19.430015 kubelet[2992]: E0620 18:54:19.429537 2992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-bab85c4a2e?timeout=10s\": dial tcp 10.200.8.21:6443: connect: connection refused" interval="800ms" Jun 20 18:54:19.608677 kubelet[2992]: I0620 18:54:19.608564 2992 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.609013 kubelet[2992]: E0620 18:54:19.608955 2992 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.21:6443/api/v1/nodes\": dial tcp 10.200.8.21:6443: connect: connection refused" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:19.708558 kubelet[2992]: E0620 18:54:19.708512 2992 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.8.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 18:54:19.740782 kubelet[2992]: E0620 18:54:19.740741 2992 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:54:19.914419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632140888.mount: Deactivated successfully. Jun 20 18:54:19.942146 containerd[1729]: time="2025-06-20T18:54:19.942089196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:54:19.954491 containerd[1729]: time="2025-06-20T18:54:19.954424700Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jun 20 18:54:19.957705 containerd[1729]: time="2025-06-20T18:54:19.957657253Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:54:19.963531 containerd[1729]: time="2025-06-20T18:54:19.963479949Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:54:19.970527 containerd[1729]: time="2025-06-20T18:54:19.970188260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:54:19.975518 containerd[1729]: time="2025-06-20T18:54:19.975237044Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:54:19.978556 containerd[1729]: time="2025-06-20T18:54:19.978395496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:54:19.979406 containerd[1729]: time="2025-06-20T18:54:19.979370912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 683.970803ms" Jun 20 18:54:19.980539 containerd[1729]: time="2025-06-20T18:54:19.980412029Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:54:19.985806 containerd[1729]: time="2025-06-20T18:54:19.985762318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 698.579345ms" Jun 20 18:54:20.004383 containerd[1729]: time="2025-06-20T18:54:20.004314824Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 705.142553ms" Jun 20 18:54:20.128598 kubelet[2992]: E0620 18:54:20.128550 2992 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.0-a-bab85c4a2e&limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 18:54:20.231104 kubelet[2992]: E0620 18:54:20.230907 2992 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.0-a-bab85c4a2e?timeout=10s\": dial tcp 10.200.8.21:6443: connect: connection refused" interval="1.6s" Jun 20 18:54:20.381176 kubelet[2992]: E0620 18:54:20.381115 2992 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:54:20.411384 kubelet[2992]: I0620 18:54:20.411339 2992 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:20.411788 kubelet[2992]: E0620 18:54:20.411751 2992 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.21:6443/api/v1/nodes\": dial tcp 10.200.8.21:6443: connect: connection refused" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:20.507528 kubelet[2992]: E0620 18:54:20.507351 2992 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.0-a-bab85c4a2e.184ad50d6c051d12 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.0-a-bab85c4a2e,UID:ci-4230.2.0-a-bab85c4a2e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.0-a-bab85c4a2e,},FirstTimestamp:2025-06-20 18:54:18.810424594 +0000 UTC m=+0.776261730,LastTimestamp:2025-06-20 18:54:18.810424594 +0000 UTC m=+0.776261730,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.0-a-bab85c4a2e,}" Jun 20 18:54:20.638598 containerd[1729]: time="2025-06-20T18:54:20.635460455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:20.639129 containerd[1729]: time="2025-06-20T18:54:20.638580006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:20.639455 containerd[1729]: time="2025-06-20T18:54:20.639211217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:20.639455 containerd[1729]: time="2025-06-20T18:54:20.639397420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:20.640082 containerd[1729]: time="2025-06-20T18:54:20.639889228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:20.640082 containerd[1729]: time="2025-06-20T18:54:20.639967129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:20.640082 containerd[1729]: time="2025-06-20T18:54:20.640009430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:20.641505 containerd[1729]: time="2025-06-20T18:54:20.640622240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:20.641505 containerd[1729]: time="2025-06-20T18:54:20.640679141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:20.641505 containerd[1729]: time="2025-06-20T18:54:20.640700141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:20.641505 containerd[1729]: time="2025-06-20T18:54:20.640780543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:20.642476 containerd[1729]: time="2025-06-20T18:54:20.642278667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:20.683266 systemd[1]: Started cri-containerd-2598035f9b4202184e5b13a96c84609c90ef89b5b0499cda79baf369bae0680d.scope - libcontainer container 2598035f9b4202184e5b13a96c84609c90ef89b5b0499cda79baf369bae0680d. Jun 20 18:54:20.689477 systemd[1]: Started cri-containerd-a12199475517c0318fd92a5c50b809ec1fcafcaa0a94294bbbe7e2f5b612808f.scope - libcontainer container a12199475517c0318fd92a5c50b809ec1fcafcaa0a94294bbbe7e2f5b612808f. Jun 20 18:54:20.692143 systemd[1]: Started cri-containerd-b644237b225d1fb1a6e99d0907c7cdb642db652390ac29ee9e4f55052878d34a.scope - libcontainer container b644237b225d1fb1a6e99d0907c7cdb642db652390ac29ee9e4f55052878d34a. 
Jun 20 18:54:20.755317 containerd[1729]: time="2025-06-20T18:54:20.755247134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.0-a-bab85c4a2e,Uid:fa8e5f3f586c435c965dd68b971710be,Namespace:kube-system,Attempt:0,} returns sandbox id \"2598035f9b4202184e5b13a96c84609c90ef89b5b0499cda79baf369bae0680d\"" Jun 20 18:54:20.767216 containerd[1729]: time="2025-06-20T18:54:20.766941728Z" level=info msg="CreateContainer within sandbox \"2598035f9b4202184e5b13a96c84609c90ef89b5b0499cda79baf369bae0680d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:54:20.779702 containerd[1729]: time="2025-06-20T18:54:20.779656038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.0-a-bab85c4a2e,Uid:a6c2a6c593c831ecf6fe4d4dadc5321a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b644237b225d1fb1a6e99d0907c7cdb642db652390ac29ee9e4f55052878d34a\"" Jun 20 18:54:20.787591 containerd[1729]: time="2025-06-20T18:54:20.787538868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.0-a-bab85c4a2e,Uid:0884253978f31bc05bc29be4b6f48392,Namespace:kube-system,Attempt:0,} returns sandbox id \"a12199475517c0318fd92a5c50b809ec1fcafcaa0a94294bbbe7e2f5b612808f\"" Jun 20 18:54:20.791339 containerd[1729]: time="2025-06-20T18:54:20.791291030Z" level=info msg="CreateContainer within sandbox \"b644237b225d1fb1a6e99d0907c7cdb642db652390ac29ee9e4f55052878d34a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:54:20.795957 containerd[1729]: time="2025-06-20T18:54:20.795925307Z" level=info msg="CreateContainer within sandbox \"a12199475517c0318fd92a5c50b809ec1fcafcaa0a94294bbbe7e2f5b612808f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:54:20.832134 containerd[1729]: time="2025-06-20T18:54:20.832074304Z" level=info msg="CreateContainer within sandbox \"2598035f9b4202184e5b13a96c84609c90ef89b5b0499cda79baf369bae0680d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c4e43f258eb48d142e0606cfc725d9ab25fb80e242ba4ffc86e67acc2fe4d40b\"" Jun 20 18:54:20.833827 containerd[1729]: time="2025-06-20T18:54:20.833790232Z" level=info msg="StartContainer for \"c4e43f258eb48d142e0606cfc725d9ab25fb80e242ba4ffc86e67acc2fe4d40b\"" Jun 20 18:54:20.844982 containerd[1729]: time="2025-06-20T18:54:20.844932316Z" level=info msg="CreateContainer within sandbox \"b644237b225d1fb1a6e99d0907c7cdb642db652390ac29ee9e4f55052878d34a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ba664cb399ba4e457f9b14ef89d0c74683259a8a453a58071c4309777738ff35\"" Jun 20 18:54:20.846158 containerd[1729]: time="2025-06-20T18:54:20.845548027Z" level=info msg="StartContainer for \"ba664cb399ba4e457f9b14ef89d0c74683259a8a453a58071c4309777738ff35\"" Jun 20 18:54:20.860451 containerd[1729]: time="2025-06-20T18:54:20.860410372Z" level=info msg="CreateContainer within sandbox \"a12199475517c0318fd92a5c50b809ec1fcafcaa0a94294bbbe7e2f5b612808f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd8c1172cdd0aaed7b4065c5fb5fe49e2b2635a8d1c25210cb54e4539fc705fd\"" Jun 20 18:54:20.861750 containerd[1729]: time="2025-06-20T18:54:20.861716194Z" level=info msg="StartContainer for \"dd8c1172cdd0aaed7b4065c5fb5fe49e2b2635a8d1c25210cb54e4539fc705fd\"" Jun 20 18:54:20.865033 systemd[1]: Started cri-containerd-c4e43f258eb48d142e0606cfc725d9ab25fb80e242ba4ffc86e67acc2fe4d40b.scope - libcontainer container 
c4e43f258eb48d142e0606cfc725d9ab25fb80e242ba4ffc86e67acc2fe4d40b. Jun 20 18:54:20.896492 systemd[1]: Started cri-containerd-ba664cb399ba4e457f9b14ef89d0c74683259a8a453a58071c4309777738ff35.scope - libcontainer container ba664cb399ba4e457f9b14ef89d0c74683259a8a453a58071c4309777738ff35. Jun 20 18:54:20.935248 systemd[1]: Started cri-containerd-dd8c1172cdd0aaed7b4065c5fb5fe49e2b2635a8d1c25210cb54e4539fc705fd.scope - libcontainer container dd8c1172cdd0aaed7b4065c5fb5fe49e2b2635a8d1c25210cb54e4539fc705fd. Jun 20 18:54:20.960913 kubelet[2992]: E0620 18:54:20.960872 2992 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.21:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 18:54:20.996127 containerd[1729]: time="2025-06-20T18:54:20.995957712Z" level=info msg="StartContainer for \"c4e43f258eb48d142e0606cfc725d9ab25fb80e242ba4ffc86e67acc2fe4d40b\" returns successfully" Jun 20 18:54:21.027548 containerd[1729]: time="2025-06-20T18:54:21.027325531Z" level=info msg="StartContainer for \"ba664cb399ba4e457f9b14ef89d0c74683259a8a453a58071c4309777738ff35\" returns successfully" Jun 20 18:54:21.057861 containerd[1729]: time="2025-06-20T18:54:21.057809234Z" level=info msg="StartContainer for \"dd8c1172cdd0aaed7b4065c5fb5fe49e2b2635a8d1c25210cb54e4539fc705fd\" returns successfully" Jun 20 18:54:21.896100 kubelet[2992]: E0620 18:54:21.895813 2992 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-bab85c4a2e\" not found" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:21.899967 kubelet[2992]: E0620 18:54:21.899582 2992 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-bab85c4a2e\" not found" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:21.900999 kubelet[2992]: E0620 18:54:21.900978 2992 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-bab85c4a2e\" not found" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:22.015477 kubelet[2992]: I0620 18:54:22.014867 2992 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:22.901386 kubelet[2992]: E0620 18:54:22.901190 2992 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-bab85c4a2e\" not found" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:22.904146 kubelet[2992]: E0620 18:54:22.902511 2992 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-bab85c4a2e\" not found" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:22.904666 kubelet[2992]: E0620 18:54:22.904525 2992 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.0-a-bab85c4a2e\" not found" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:23.622662 kubelet[2992]: E0620 18:54:23.622603 2992 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.0-a-bab85c4a2e\" not found" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:24.642477 kubelet[2992]: I0620 18:54:24.642441 2992 kubelet_node_status.go:78] "Successfully registered node" 
node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:24.643350 kubelet[2992]: I0620 18:54:24.643189 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:24.644003 kubelet[2992]: I0620 18:54:24.643973 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:24.647083 kubelet[2992]: I0620 18:54:24.645234 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:24.658784 kubelet[2992]: I0620 18:54:24.658749 2992 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:24.661945 kubelet[2992]: I0620 18:54:24.661732 2992 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:24.665845 kubelet[2992]: I0620 18:54:24.665651 2992 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:24.725933 kubelet[2992]: I0620 18:54:24.725877 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:24.736909 kubelet[2992]: I0620 18:54:24.736573 2992 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:24.736909 kubelet[2992]: E0620 18:54:24.736650 2992 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.0-a-bab85c4a2e\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:24.736909 kubelet[2992]: I0620 18:54:24.736682 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:24.747217 kubelet[2992]: I0620 18:54:24.746911 2992 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:24.747217 kubelet[2992]: E0620 18:54:24.746985 2992 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:24.747217 kubelet[2992]: I0620 18:54:24.747003 2992 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:24.756348 kubelet[2992]: I0620 18:54:24.756251 2992 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:24.756348 kubelet[2992]: E0620 18:54:24.756322 2992 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.0-a-bab85c4a2e\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:25.639926 kubelet[2992]: I0620 18:54:25.639873 2992 apiserver.go:52] "Watching apiserver" Jun 20 18:54:25.729480 kubelet[2992]: I0620 18:54:25.729418 2992 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 
20 18:54:26.032170 systemd[1]: Reload requested from client PID 3273 ('systemctl') (unit session-9.scope)... Jun 20 18:54:26.032187 systemd[1]: Reloading... Jun 20 18:54:26.134081 zram_generator::config[3316]: No configuration found. Jun 20 18:54:26.282065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:54:26.416682 systemd[1]: Reloading finished in 383 ms. Jun 20 18:54:26.450974 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:26.470743 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:54:26.471069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:26.471144 systemd[1]: kubelet.service: Consumed 1.109s CPU time, 133M memory peak. Jun 20 18:54:26.481462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:26.916334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:26.922233 (kubelet)[3387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:54:26.970085 kubelet[3387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:54:26.970085 kubelet[3387]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:54:26.970085 kubelet[3387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:54:26.970562 kubelet[3387]: I0620 18:54:26.970137 3387 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:54:26.975513 kubelet[3387]: I0620 18:54:26.975479 3387 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 18:54:26.975513 kubelet[3387]: I0620 18:54:26.975504 3387 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:54:26.975744 kubelet[3387]: I0620 18:54:26.975726 3387 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 18:54:26.976809 kubelet[3387]: I0620 18:54:26.976785 3387 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 18:54:26.979115 kubelet[3387]: I0620 18:54:26.978763 3387 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:54:26.986115 kubelet[3387]: E0620 18:54:26.986069 3387 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:54:26.986115 kubelet[3387]: I0620 18:54:26.986117 3387 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
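The three "Flag ... has been deprecated" warnings at kubelet start refer to --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir; the first and last map to KubeletConfiguration fields (containerRuntimeEndpoint, volumePluginDir), while the pod-infra image setting is simply being removed in 1.35 per the message. A hedged sketch of the equivalent config stanza; the socket path and plugin directory are placeholders, not values read from this host:

import json

# Field names per KubeletConfiguration (kubelet.config.k8s.io/v1beta1); the values are
# illustrative placeholders, not taken from this node's unit files.
kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",  # replaces --container-runtime-endpoint
    "volumePluginDir": "/var/lib/kubelet/volumeplugins",                   # replaces --volume-plugin-dir
    "cgroupDriver": "systemd",  # mirrors "CgroupDriver":"systemd" in the NodeConfig dump that follows
}

# The kubelet accepts this file as JSON (or the YAML equivalent) via --config.
print(json.dumps(kubelet_config, indent=2))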
Jun 20 18:54:26.990795 kubelet[3387]: I0620 18:54:26.990266 3387 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 18:54:26.990795 kubelet[3387]: I0620 18:54:26.990485 3387 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:54:26.990795 kubelet[3387]: I0620 18:54:26.990503 3387 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.0-a-bab85c4a2e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:54:26.990795 kubelet[3387]: I0620 18:54:26.990673 3387 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:54:26.991599 kubelet[3387]: I0620 18:54:26.990682 3387 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 18:54:26.991599 kubelet[3387]: I0620 18:54:26.990726 3387 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:54:26.991599 kubelet[3387]: I0620 18:54:26.990882 3387 kubelet.go:480] "Attempting to sync node with API server" Jun 20 18:54:26.991599 kubelet[3387]: I0620 18:54:26.990897 3387 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:54:26.991599 kubelet[3387]: I0620 18:54:26.990923 3387 kubelet.go:386] "Adding apiserver pod source" Jun 20 18:54:26.991599 kubelet[3387]: I0620 18:54:26.990937 3387 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:54:26.994257 kubelet[3387]: I0620 18:54:26.994163 3387 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:54:26.996243 kubelet[3387]: I0620 18:54:26.996127 3387 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 18:54:26.999421 kubelet[3387]: I0620 18:54:26.999404 3387 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:54:27.000126 kubelet[3387]: I0620 18:54:26.999541 3387 server.go:1289] "Started kubelet" Jun 20 18:54:27.001941 
kubelet[3387]: I0620 18:54:27.001927 3387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:54:27.013015 kubelet[3387]: I0620 18:54:27.012981 3387 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:54:27.014796 kubelet[3387]: I0620 18:54:27.014771 3387 server.go:317] "Adding debug handlers to kubelet server" Jun 20 18:54:27.020995 kubelet[3387]: I0620 18:54:27.019179 3387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:54:27.020995 kubelet[3387]: I0620 18:54:27.019410 3387 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:54:27.020995 kubelet[3387]: I0620 18:54:27.019655 3387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:54:27.025166 kubelet[3387]: I0620 18:54:27.025143 3387 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:54:27.027624 kubelet[3387]: E0620 18:54:27.027600 3387 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:54:27.027934 kubelet[3387]: I0620 18:54:27.027916 3387 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:54:27.028133 kubelet[3387]: I0620 18:54:27.028112 3387 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:54:27.028735 kubelet[3387]: I0620 18:54:27.028713 3387 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:54:27.028858 kubelet[3387]: I0620 18:54:27.028845 3387 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:54:27.031693 kubelet[3387]: I0620 18:54:27.031631 3387 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 18:54:27.033650 kubelet[3387]: I0620 18:54:27.033198 3387 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 18:54:27.033650 kubelet[3387]: I0620 18:54:27.033227 3387 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:54:27.033650 kubelet[3387]: I0620 18:54:27.033247 3387 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
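The "Registration of the crio container factory failed" entry is expected on this containerd host: cAdvisor inside the kubelet probes /var/run/crio/crio.sock, finds nothing, and the containerd factory registers successfully a few entries later. A quick way to see which CRI sockets exist on a node, as a sketch (the containerd socket path is the conventional one, not printed in this excerpt):

import os

candidates = [
    "/var/run/crio/crio.sock",           # probed and missing, per the log above
    "/run/containerd/containerd.sock",   # assumed containerd location
]

for path in candidates:
    print(f"{path}: {'present' if os.path.exists(path) else 'missing'}")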
Jun 20 18:54:27.033650 kubelet[3387]: I0620 18:54:27.033256 3387 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:54:27.033650 kubelet[3387]: E0620 18:54:27.033301 3387 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:54:27.040145 kubelet[3387]: I0620 18:54:27.040120 3387 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:54:27.078135 kubelet[3387]: I0620 18:54:27.078111 3387 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:54:27.078289 kubelet[3387]: I0620 18:54:27.078280 3387 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:54:27.078349 kubelet[3387]: I0620 18:54:27.078340 3387 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:54:27.078555 kubelet[3387]: I0620 18:54:27.078539 3387 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:54:27.078644 kubelet[3387]: I0620 18:54:27.078624 3387 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:54:27.078684 kubelet[3387]: I0620 18:54:27.078679 3387 policy_none.go:49] "None policy: Start" Jun 20 18:54:27.078733 kubelet[3387]: I0620 18:54:27.078727 3387 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:54:27.078776 kubelet[3387]: I0620 18:54:27.078770 3387 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:54:27.078896 kubelet[3387]: I0620 18:54:27.078888 3387 state_mem.go:75] "Updated machine memory state" Jun 20 18:54:27.082782 kubelet[3387]: E0620 18:54:27.082749 3387 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:54:27.082951 kubelet[3387]: I0620 18:54:27.082931 3387 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:54:27.083012 kubelet[3387]: I0620 18:54:27.082947 3387 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:54:27.083511 kubelet[3387]: I0620 18:54:27.083485 3387 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:54:27.088632 kubelet[3387]: E0620 18:54:27.086202 3387 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 18:54:27.135698 kubelet[3387]: I0620 18:54:27.134763 3387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.135698 kubelet[3387]: I0620 18:54:27.134870 3387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.135698 kubelet[3387]: I0620 18:54:27.135179 3387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.146081 kubelet[3387]: I0620 18:54:27.146025 3387 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:27.146251 kubelet[3387]: E0620 18:54:27.146133 3387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.0-a-bab85c4a2e\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.146785 kubelet[3387]: I0620 18:54:27.146750 3387 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:27.146946 kubelet[3387]: E0620 18:54:27.146809 3387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.0-a-bab85c4a2e\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.146946 kubelet[3387]: I0620 18:54:27.146750 3387 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:27.146946 kubelet[3387]: E0620 18:54:27.146879 3387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.191437 kubelet[3387]: I0620 18:54:27.190897 3387 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.202561 kubelet[3387]: I0620 18:54:27.202518 3387 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.202717 kubelet[3387]: I0620 18:54:27.202611 3387 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.320892 sudo[3421]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 18:54:27.321335 sudo[3421]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 18:54:27.330018 kubelet[3387]: I0620 18:54:27.329708 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6c2a6c593c831ecf6fe4d4dadc5321a-ca-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" (UID: \"a6c2a6c593c831ecf6fe4d4dadc5321a\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.330018 kubelet[3387]: I0620 18:54:27.329752 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a6c2a6c593c831ecf6fe4d4dadc5321a-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" (UID: \"a6c2a6c593c831ecf6fe4d4dadc5321a\") " 
pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.330018 kubelet[3387]: I0620 18:54:27.329776 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6c2a6c593c831ecf6fe4d4dadc5321a-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" (UID: \"a6c2a6c593c831ecf6fe4d4dadc5321a\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.330018 kubelet[3387]: I0620 18:54:27.329799 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6c2a6c593c831ecf6fe4d4dadc5321a-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" (UID: \"a6c2a6c593c831ecf6fe4d4dadc5321a\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.330018 kubelet[3387]: I0620 18:54:27.329827 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0884253978f31bc05bc29be4b6f48392-kubeconfig\") pod \"kube-scheduler-ci-4230.2.0-a-bab85c4a2e\" (UID: \"0884253978f31bc05bc29be4b6f48392\") " pod="kube-system/kube-scheduler-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.330330 kubelet[3387]: I0620 18:54:27.329849 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa8e5f3f586c435c965dd68b971710be-k8s-certs\") pod \"kube-apiserver-ci-4230.2.0-a-bab85c4a2e\" (UID: \"fa8e5f3f586c435c965dd68b971710be\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.330330 kubelet[3387]: I0620 18:54:27.329872 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6c2a6c593c831ecf6fe4d4dadc5321a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" (UID: \"a6c2a6c593c831ecf6fe4d4dadc5321a\") " pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.330330 kubelet[3387]: I0620 18:54:27.329895 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa8e5f3f586c435c965dd68b971710be-ca-certs\") pod \"kube-apiserver-ci-4230.2.0-a-bab85c4a2e\" (UID: \"fa8e5f3f586c435c965dd68b971710be\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.330330 kubelet[3387]: I0620 18:54:27.329929 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa8e5f3f586c435c965dd68b971710be-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.0-a-bab85c4a2e\" (UID: \"fa8e5f3f586c435c965dd68b971710be\") " pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:27.873615 sudo[3421]: pam_unix(sudo:session): session closed for user root Jun 20 18:54:27.992007 kubelet[3387]: I0620 18:54:27.991948 3387 apiserver.go:52] "Watching apiserver" Jun 20 18:54:28.029915 kubelet[3387]: I0620 18:54:28.029838 3387 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:54:28.067593 kubelet[3387]: I0620 18:54:28.067509 3387 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:28.071775 kubelet[3387]: I0620 18:54:28.071753 3387 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:28.088776 kubelet[3387]: I0620 18:54:28.088645 3387 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:28.088776 kubelet[3387]: E0620 18:54:28.088726 3387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.0-a-bab85c4a2e\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:28.092532 kubelet[3387]: I0620 18:54:28.092237 3387 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jun 20 18:54:28.092532 kubelet[3387]: E0620 18:54:28.092300 3387 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.0-a-bab85c4a2e\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" Jun 20 18:54:28.119703 kubelet[3387]: I0620 18:54:28.119016 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.0-a-bab85c4a2e" podStartSLOduration=4.118996604 podStartE2EDuration="4.118996604s" podCreationTimestamp="2025-06-20 18:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:28.10707872 +0000 UTC m=+1.178142340" watchObservedRunningTime="2025-06-20 18:54:28.118996604 +0000 UTC m=+1.190060124" Jun 20 18:54:28.135022 kubelet[3387]: I0620 18:54:28.134346 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.0-a-bab85c4a2e" podStartSLOduration=4.13432754 podStartE2EDuration="4.13432754s" podCreationTimestamp="2025-06-20 18:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:28.120376425 +0000 UTC m=+1.191440045" watchObservedRunningTime="2025-06-20 18:54:28.13432754 +0000 UTC m=+1.205391060" Jun 20 18:54:28.135022 kubelet[3387]: I0620 18:54:28.134506 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.0-a-bab85c4a2e" podStartSLOduration=4.134501643 podStartE2EDuration="4.134501643s" podCreationTimestamp="2025-06-20 18:54:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:28.13306472 +0000 UTC m=+1.204128340" watchObservedRunningTime="2025-06-20 18:54:28.134501643 +0000 UTC m=+1.205565163" Jun 20 18:54:29.242696 sudo[2431]: pam_unix(sudo:session): session closed for user root Jun 20 18:54:29.344594 sshd[2430]: Connection closed by 10.200.16.10 port 36814 Jun 20 18:54:29.345348 sshd-session[2428]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:29.349596 systemd[1]: sshd@6-10.200.8.21:22-10.200.16.10:36814.service: Deactivated successfully. Jun 20 18:54:29.351991 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 18:54:29.352456 systemd[1]: session-9.scope: Consumed 5.526s CPU time, 267.6M memory peak. 
Jun 20 18:54:29.354122 systemd-logind[1704]: Session 9 logged out. Waiting for processes to exit. Jun 20 18:54:29.355137 systemd-logind[1704]: Removed session 9. Jun 20 18:54:32.512957 kubelet[3387]: I0620 18:54:32.512896 3387 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:54:32.513646 kubelet[3387]: I0620 18:54:32.513540 3387 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:54:32.513716 containerd[1729]: time="2025-06-20T18:54:32.513296195Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:54:33.677105 systemd[1]: Created slice kubepods-besteffort-pod74697e97_22df_42cb_aa58_2d29d1e7dd25.slice - libcontainer container kubepods-besteffort-pod74697e97_22df_42cb_aa58_2d29d1e7dd25.slice. Jun 20 18:54:33.719837 systemd[1]: Created slice kubepods-burstable-pod9a61079c_ebd9_4295_a838_7e074e1746d5.slice - libcontainer container kubepods-burstable-pod9a61079c_ebd9_4295_a838_7e074e1746d5.slice. Jun 20 18:54:33.776450 systemd[1]: Created slice kubepods-besteffort-pod2dd0d07c_9703_4ea0_b254_a96be938bb12.slice - libcontainer container kubepods-besteffort-pod2dd0d07c_9703_4ea0_b254_a96be938bb12.slice. Jun 20 18:54:33.786729 kubelet[3387]: I0620 18:54:33.786224 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-config-path\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.786729 kubelet[3387]: I0620 18:54:33.786256 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-host-proc-sys-net\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.786729 kubelet[3387]: I0620 18:54:33.786272 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-lib-modules\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.786729 kubelet[3387]: I0620 18:54:33.786289 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2dd0d07c-9703-4ea0-b254-a96be938bb12-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fzfmc\" (UID: \"2dd0d07c-9703-4ea0-b254-a96be938bb12\") " pod="kube-system/cilium-operator-6c4d7847fc-fzfmc" Jun 20 18:54:33.786729 kubelet[3387]: I0620 18:54:33.786306 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/74697e97-22df-42cb-aa58-2d29d1e7dd25-kube-proxy\") pod \"kube-proxy-mbkbj\" (UID: \"74697e97-22df-42cb-aa58-2d29d1e7dd25\") " pod="kube-system/kube-proxy-mbkbj" Jun 20 18:54:33.787287 kubelet[3387]: I0620 18:54:33.786318 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-xtables-lock\") pod \"cilium-vkqss\" (UID: 
\"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787287 kubelet[3387]: I0620 18:54:33.786332 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a61079c-ebd9-4295-a838-7e074e1746d5-hubble-tls\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787287 kubelet[3387]: I0620 18:54:33.786346 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cni-path\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787287 kubelet[3387]: I0620 18:54:33.786361 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-host-proc-sys-kernel\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787287 kubelet[3387]: I0620 18:54:33.786376 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74697e97-22df-42cb-aa58-2d29d1e7dd25-xtables-lock\") pod \"kube-proxy-mbkbj\" (UID: \"74697e97-22df-42cb-aa58-2d29d1e7dd25\") " pod="kube-system/kube-proxy-mbkbj" Jun 20 18:54:33.787287 kubelet[3387]: I0620 18:54:33.786391 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-bpf-maps\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787430 kubelet[3387]: I0620 18:54:33.786404 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz9bb\" (UniqueName: \"kubernetes.io/projected/2dd0d07c-9703-4ea0-b254-a96be938bb12-kube-api-access-vz9bb\") pod \"cilium-operator-6c4d7847fc-fzfmc\" (UID: \"2dd0d07c-9703-4ea0-b254-a96be938bb12\") " pod="kube-system/cilium-operator-6c4d7847fc-fzfmc" Jun 20 18:54:33.787430 kubelet[3387]: I0620 18:54:33.786417 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a61079c-ebd9-4295-a838-7e074e1746d5-clustermesh-secrets\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787430 kubelet[3387]: I0620 18:54:33.786431 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xtm\" (UniqueName: \"kubernetes.io/projected/9a61079c-ebd9-4295-a838-7e074e1746d5-kube-api-access-j8xtm\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787430 kubelet[3387]: I0620 18:54:33.786444 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-run\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787430 kubelet[3387]: I0620 
18:54:33.786465 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-hostproc\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787545 kubelet[3387]: I0620 18:54:33.786486 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-cgroup\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787545 kubelet[3387]: I0620 18:54:33.786560 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-etc-cni-netd\") pod \"cilium-vkqss\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " pod="kube-system/cilium-vkqss" Jun 20 18:54:33.787545 kubelet[3387]: I0620 18:54:33.786592 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74697e97-22df-42cb-aa58-2d29d1e7dd25-lib-modules\") pod \"kube-proxy-mbkbj\" (UID: \"74697e97-22df-42cb-aa58-2d29d1e7dd25\") " pod="kube-system/kube-proxy-mbkbj" Jun 20 18:54:33.787545 kubelet[3387]: I0620 18:54:33.786635 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j8jv\" (UniqueName: \"kubernetes.io/projected/74697e97-22df-42cb-aa58-2d29d1e7dd25-kube-api-access-8j8jv\") pod \"kube-proxy-mbkbj\" (UID: \"74697e97-22df-42cb-aa58-2d29d1e7dd25\") " pod="kube-system/kube-proxy-mbkbj" Jun 20 18:54:33.986770 containerd[1729]: time="2025-06-20T18:54:33.986610552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbkbj,Uid:74697e97-22df-42cb-aa58-2d29d1e7dd25,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:34.025830 containerd[1729]: time="2025-06-20T18:54:34.025423260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vkqss,Uid:9a61079c-ebd9-4295-a838-7e074e1746d5,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:34.032587 containerd[1729]: time="2025-06-20T18:54:34.032415869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:34.033524 containerd[1729]: time="2025-06-20T18:54:34.033363084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:34.033524 containerd[1729]: time="2025-06-20T18:54:34.033413385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:34.033866 containerd[1729]: time="2025-06-20T18:54:34.033694189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:34.059652 systemd[1]: Started cri-containerd-2171ed2c268681ea23531293ec6312829e6d2cbee9a892e6f73fb804b8a291e7.scope - libcontainer container 2171ed2c268681ea23531293ec6312829e6d2cbee9a892e6f73fb804b8a291e7. 
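The wall of reconciler_common.go entries above enumerates every volume the new kube-proxy-mbkbj, cilium-vkqss and cilium-operator pods need (hostPath, configmap, secret and projected service-account token volumes) before their sandboxes start. A sketch that condenses such entries into one volume list per pod, assuming only the volume \"...\" / pod \"...\" fields shown:

import re
from collections import defaultdict

VOL = re.compile(r'volume \\"([^"\\]+)\\".*? pod \\"([^"\\]+)\\"')

def volumes_by_pod(lines):
    """Map pod name -> volume names, from VerifyControllerAttachedVolume log lines."""
    out = defaultdict(list)
    for line in lines:
        m = VOL.search(line)
        if m:
            volume, pod = m.groups()
            out[pod].append(volume)
    return dict(out)

sample = r'... started for volume \"bpf-maps\" (UniqueName: \"...\") pod \"cilium-vkqss\" (UID: \"...\") '
print(volumes_by_pod([sample]))   # {'cilium-vkqss': ['bpf-maps']}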
Jun 20 18:54:34.084738 containerd[1729]: time="2025-06-20T18:54:34.084175079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fzfmc,Uid:2dd0d07c-9703-4ea0-b254-a96be938bb12,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:34.100156 containerd[1729]: time="2025-06-20T18:54:34.098917010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:34.100156 containerd[1729]: time="2025-06-20T18:54:34.098973111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:34.100156 containerd[1729]: time="2025-06-20T18:54:34.098986911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:34.100156 containerd[1729]: time="2025-06-20T18:54:34.099081012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:34.113538 containerd[1729]: time="2025-06-20T18:54:34.113394336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbkbj,Uid:74697e97-22df-42cb-aa58-2d29d1e7dd25,Namespace:kube-system,Attempt:0,} returns sandbox id \"2171ed2c268681ea23531293ec6312829e6d2cbee9a892e6f73fb804b8a291e7\"" Jun 20 18:54:34.123678 systemd[1]: Started cri-containerd-760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90.scope - libcontainer container 760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90. Jun 20 18:54:34.129649 containerd[1729]: time="2025-06-20T18:54:34.129611290Z" level=info msg="CreateContainer within sandbox \"2171ed2c268681ea23531293ec6312829e6d2cbee9a892e6f73fb804b8a291e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:54:34.147841 containerd[1729]: time="2025-06-20T18:54:34.147758874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:34.148300 containerd[1729]: time="2025-06-20T18:54:34.148035878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:34.148490 containerd[1729]: time="2025-06-20T18:54:34.148437985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:34.149656 containerd[1729]: time="2025-06-20T18:54:34.149601403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:34.176277 systemd[1]: Started cri-containerd-a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29.scope - libcontainer container a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29. 
Jun 20 18:54:34.176777 containerd[1729]: time="2025-06-20T18:54:34.176277420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vkqss,Uid:9a61079c-ebd9-4295-a838-7e074e1746d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\"" Jun 20 18:54:34.180348 containerd[1729]: time="2025-06-20T18:54:34.180234982Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 18:54:34.189885 containerd[1729]: time="2025-06-20T18:54:34.189824632Z" level=info msg="CreateContainer within sandbox \"2171ed2c268681ea23531293ec6312829e6d2cbee9a892e6f73fb804b8a291e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc80b5775b35e7505ecc18caeafd8ccd1fe765ef58f09eccaeb6f5243ed1b15f\"" Jun 20 18:54:34.192527 containerd[1729]: time="2025-06-20T18:54:34.192457174Z" level=info msg="StartContainer for \"bc80b5775b35e7505ecc18caeafd8ccd1fe765ef58f09eccaeb6f5243ed1b15f\"" Jun 20 18:54:34.243285 systemd[1]: Started cri-containerd-bc80b5775b35e7505ecc18caeafd8ccd1fe765ef58f09eccaeb6f5243ed1b15f.scope - libcontainer container bc80b5775b35e7505ecc18caeafd8ccd1fe765ef58f09eccaeb6f5243ed1b15f. Jun 20 18:54:34.244922 containerd[1729]: time="2025-06-20T18:54:34.244795393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fzfmc,Uid:2dd0d07c-9703-4ea0-b254-a96be938bb12,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29\"" Jun 20 18:54:34.283291 containerd[1729]: time="2025-06-20T18:54:34.283248195Z" level=info msg="StartContainer for \"bc80b5775b35e7505ecc18caeafd8ccd1fe765ef58f09eccaeb6f5243ed1b15f\" returns successfully" Jun 20 18:54:37.523925 kubelet[3387]: I0620 18:54:37.523364 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mbkbj" podStartSLOduration=4.5233429019999996 podStartE2EDuration="4.523342902s" podCreationTimestamp="2025-06-20 18:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:35.098734557 +0000 UTC m=+8.169798177" watchObservedRunningTime="2025-06-20 18:54:37.523342902 +0000 UTC m=+10.594406522" Jun 20 18:54:39.573630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515192099.mount: Deactivated successfully. 
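The cilium image is requested by tag and digest at once ("quay.io/cilium/cilium:v1.12.5@sha256:..."), which is why the "Pulled image" entry that follows reports an empty repo tag and only the repo digest. Splitting such a reference into its parts, as a simplified sketch that ignores registries with ports:

def split_image_ref(ref: str):
    """Split repo[:tag][@digest]; simplified, does not handle host:port registries."""
    repo, _, digest = ref.partition("@")
    name, _, tag = repo.partition(":")
    return {"name": name, "tag": tag or None, "digest": digest or None}

ref = ("quay.io/cilium/cilium:v1.12.5@sha256:"
       "06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
print(split_image_ref(ref))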
Jun 20 18:54:41.817431 containerd[1729]: time="2025-06-20T18:54:41.817367994Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:41.819456 containerd[1729]: time="2025-06-20T18:54:41.819391426Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 18:54:41.822758 containerd[1729]: time="2025-06-20T18:54:41.822706978Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:41.824276 containerd[1729]: time="2025-06-20T18:54:41.824133101Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.643852117s" Jun 20 18:54:41.824276 containerd[1729]: time="2025-06-20T18:54:41.824172901Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 18:54:41.826333 containerd[1729]: time="2025-06-20T18:54:41.825717326Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 18:54:41.830951 containerd[1729]: time="2025-06-20T18:54:41.830700504Z" level=info msg="CreateContainer within sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:54:41.869899 containerd[1729]: time="2025-06-20T18:54:41.869850924Z" level=info msg="CreateContainer within sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\"" Jun 20 18:54:41.870604 containerd[1729]: time="2025-06-20T18:54:41.870550235Z" level=info msg="StartContainer for \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\"" Jun 20 18:54:41.909216 systemd[1]: Started cri-containerd-78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2.scope - libcontainer container 78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2. Jun 20 18:54:41.940382 containerd[1729]: time="2025-06-20T18:54:41.940241537Z" level=info msg="StartContainer for \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\" returns successfully" Jun 20 18:54:41.948588 systemd[1]: cri-containerd-78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2.scope: Deactivated successfully. Jun 20 18:54:42.855347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2-rootfs.mount: Deactivated successfully. 
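The pull above reports 166730503 bytes read over 7.643852117 s, roughly 21.8 MB/s from quay.io; "bytes read" is what containerd fetched over the network (compressed layers), so treat this only as a rough transfer rate:

bytes_read = 166_730_503          # "bytes read" from the log above
pull_seconds = 7.643852117        # "in 7.643852117s"

rate_mb_s = bytes_read / pull_seconds / 1e6
print(f"~{rate_mb_s:.1f} MB/s")   # ~21.8 MB/s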
Jun 20 18:54:45.603808 containerd[1729]: time="2025-06-20T18:54:45.603739277Z" level=info msg="shim disconnected" id=78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2 namespace=k8s.io Jun 20 18:54:45.604313 containerd[1729]: time="2025-06-20T18:54:45.603870079Z" level=warning msg="cleaning up after shim disconnected" id=78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2 namespace=k8s.io Jun 20 18:54:45.604313 containerd[1729]: time="2025-06-20T18:54:45.603887079Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:54:46.116364 containerd[1729]: time="2025-06-20T18:54:46.116128981Z" level=info msg="CreateContainer within sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:54:46.153546 containerd[1729]: time="2025-06-20T18:54:46.153505572Z" level=info msg="CreateContainer within sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\"" Jun 20 18:54:46.154314 containerd[1729]: time="2025-06-20T18:54:46.154278484Z" level=info msg="StartContainer for \"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\"" Jun 20 18:54:46.194500 systemd[1]: Started cri-containerd-efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc.scope - libcontainer container efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc. Jun 20 18:54:46.231172 containerd[1729]: time="2025-06-20T18:54:46.230682492Z" level=info msg="StartContainer for \"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\" returns successfully" Jun 20 18:54:46.249969 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:54:46.250501 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:54:46.251971 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:54:46.261397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:54:46.264802 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:54:46.265599 systemd[1]: cri-containerd-efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc.scope: Deactivated successfully. Jun 20 18:54:46.298152 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jun 20 18:54:46.354582 containerd[1729]: time="2025-06-20T18:54:46.354497350Z" level=info msg="shim disconnected" id=efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc namespace=k8s.io Jun 20 18:54:46.354582 containerd[1729]: time="2025-06-20T18:54:46.354574052Z" level=warning msg="cleaning up after shim disconnected" id=efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc namespace=k8s.io Jun 20 18:54:46.354582 containerd[1729]: time="2025-06-20T18:54:46.354584352Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:54:47.062332 containerd[1729]: time="2025-06-20T18:54:47.062269928Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:47.064700 containerd[1729]: time="2025-06-20T18:54:47.064626565Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 18:54:47.069391 containerd[1729]: time="2025-06-20T18:54:47.069336439Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:47.072167 containerd[1729]: time="2025-06-20T18:54:47.072114883Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.246357256s" Jun 20 18:54:47.072277 containerd[1729]: time="2025-06-20T18:54:47.072172784Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 18:54:47.087200 containerd[1729]: time="2025-06-20T18:54:47.086662512Z" level=info msg="CreateContainer within sandbox \"a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 18:54:47.118710 containerd[1729]: time="2025-06-20T18:54:47.118636715Z" level=info msg="CreateContainer within sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:54:47.127547 containerd[1729]: time="2025-06-20T18:54:47.127501755Z" level=info msg="CreateContainer within sandbox \"a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\"" Jun 20 18:54:47.128834 containerd[1729]: time="2025-06-20T18:54:47.128345768Z" level=info msg="StartContainer for \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\"" Jun 20 18:54:47.148766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc-rootfs.mount: Deactivated successfully. 
Jun 20 18:54:47.195195 containerd[1729]: time="2025-06-20T18:54:47.195148120Z" level=info msg="CreateContainer within sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\"" Jun 20 18:54:47.195681 systemd[1]: Started cri-containerd-873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d.scope - libcontainer container 873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d. Jun 20 18:54:47.197587 containerd[1729]: time="2025-06-20T18:54:47.197213352Z" level=info msg="StartContainer for \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\"" Jun 20 18:54:47.260279 systemd[1]: Started cri-containerd-6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5.scope - libcontainer container 6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5. Jun 20 18:54:47.310289 containerd[1729]: time="2025-06-20T18:54:47.310134530Z" level=info msg="StartContainer for \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\" returns successfully" Jun 20 18:54:47.315275 systemd[1]: cri-containerd-6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5.scope: Deactivated successfully. Jun 20 18:54:47.341392 containerd[1729]: time="2025-06-20T18:54:47.341336822Z" level=info msg="StartContainer for \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\" returns successfully" Jun 20 18:54:47.812578 containerd[1729]: time="2025-06-20T18:54:47.812459140Z" level=info msg="shim disconnected" id=6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5 namespace=k8s.io Jun 20 18:54:47.813209 containerd[1729]: time="2025-06-20T18:54:47.812551341Z" level=warning msg="cleaning up after shim disconnected" id=6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5 namespace=k8s.io Jun 20 18:54:47.813209 containerd[1729]: time="2025-06-20T18:54:47.812907447Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:54:48.129097 containerd[1729]: time="2025-06-20T18:54:48.128821521Z" level=info msg="CreateContainer within sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:54:48.145665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5-rootfs.mount: Deactivated successfully. 
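The CreateContainer messages above trace what looks like cilium's init-container chain inside the 760d77e2... sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and now clean-cilium-state, each one started and torn down ("shim disconnected", scope deactivated) before the next is created. Pulling that order out of the log, as a sketch:

import re

CREATE = re.compile(r'CreateContainer within sandbox .* for container &ContainerMetadata\{Name:([^,]+),')

def container_order(lines):
    """Container names in the order their CreateContainer requests appear."""
    return [m.group(1) for line in lines if (m := CREATE.search(line))]

sample = 'msg="CreateContainer within sandbox \\"760d...\\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"'
print(container_order([sample]))   # ['mount-bpf-fs']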
Jun 20 18:54:48.173533 containerd[1729]: time="2025-06-20T18:54:48.173478624Z" level=info msg="CreateContainer within sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\"" Jun 20 18:54:48.174210 containerd[1729]: time="2025-06-20T18:54:48.174181535Z" level=info msg="StartContainer for \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\"" Jun 20 18:54:48.221076 kubelet[3387]: I0620 18:54:48.220089 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fzfmc" podStartSLOduration=2.392868869 podStartE2EDuration="15.220041957s" podCreationTimestamp="2025-06-20 18:54:33 +0000 UTC" firstStartedPulling="2025-06-20 18:54:34.246987727 +0000 UTC m=+7.318051347" lastFinishedPulling="2025-06-20 18:54:47.074160915 +0000 UTC m=+20.145224435" observedRunningTime="2025-06-20 18:54:48.153763114 +0000 UTC m=+21.224826734" watchObservedRunningTime="2025-06-20 18:54:48.220041957 +0000 UTC m=+21.291105477" Jun 20 18:54:48.243284 systemd[1]: Started cri-containerd-749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3.scope - libcontainer container 749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3. Jun 20 18:54:48.359154 containerd[1729]: time="2025-06-20T18:54:48.359102647Z" level=info msg="StartContainer for \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\" returns successfully" Jun 20 18:54:48.360323 systemd[1]: cri-containerd-749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3.scope: Deactivated successfully. Jun 20 18:54:48.410763 containerd[1729]: time="2025-06-20T18:54:48.410440055Z" level=info msg="shim disconnected" id=749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3 namespace=k8s.io Jun 20 18:54:48.410763 containerd[1729]: time="2025-06-20T18:54:48.410499056Z" level=warning msg="cleaning up after shim disconnected" id=749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3 namespace=k8s.io Jun 20 18:54:48.410763 containerd[1729]: time="2025-06-20T18:54:48.410510356Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:54:49.133276 containerd[1729]: time="2025-06-20T18:54:49.133216436Z" level=info msg="CreateContainer within sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:54:49.144763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3-rootfs.mount: Deactivated successfully. Jun 20 18:54:49.170962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271131357.mount: Deactivated successfully. 
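The kubelet pod_startup_latency_tracker entry above can be reproduced, approximately, from its own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). The Go sketch below is a simplified reconstruction using the timestamps copied from the log; kubelet tracks these with monotonic clocks internally, so the last few decimals differ slightly:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05 -0700 MST" // fractional seconds are accepted on parse

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Fields copied from the kubelet entry for cilium-operator-6c4d7847fc-fzfmc above.
        created := mustParse("2025-06-20 18:54:33 +0000 UTC")
        firstPull := mustParse("2025-06-20 18:54:34.246987727 +0000 UTC")
        lastPull := mustParse("2025-06-20 18:54:47.074160915 +0000 UTC")
        running := mustParse("2025-06-20 18:54:48.220041957 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus image-pull time
        fmt.Printf("E2E=%s SLO=%s\n", e2e, slo)
    }

This prints E2E=15.220041957s and SLO=2.392868769s against the logged 15.220041957s and 2.392868869s.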
Jun 20 18:54:49.182241 containerd[1729]: time="2025-06-20T18:54:49.182133106Z" level=info msg="CreateContainer within sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\"" Jun 20 18:54:49.182893 containerd[1729]: time="2025-06-20T18:54:49.182862317Z" level=info msg="StartContainer for \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\"" Jun 20 18:54:49.216224 systemd[1]: Started cri-containerd-e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25.scope - libcontainer container e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25. Jun 20 18:54:49.250366 containerd[1729]: time="2025-06-20T18:54:49.250318780Z" level=info msg="StartContainer for \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\" returns successfully" Jun 20 18:54:49.443937 kubelet[3387]: I0620 18:54:49.442607 3387 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 18:54:49.524877 systemd[1]: Created slice kubepods-burstable-pod3c8e83ef_69ae_4084_ae53_341567bef6a2.slice - libcontainer container kubepods-burstable-pod3c8e83ef_69ae_4084_ae53_341567bef6a2.slice. Jun 20 18:54:49.540940 systemd[1]: Created slice kubepods-burstable-pod937c7ea2_9d10_46bf_9a74_67cfca843627.slice - libcontainer container kubepods-burstable-pod937c7ea2_9d10_46bf_9a74_67cfca843627.slice. Jun 20 18:54:49.599586 kubelet[3387]: I0620 18:54:49.599529 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxkx\" (UniqueName: \"kubernetes.io/projected/937c7ea2-9d10-46bf-9a74-67cfca843627-kube-api-access-5lxkx\") pod \"coredns-674b8bbfcf-2f699\" (UID: \"937c7ea2-9d10-46bf-9a74-67cfca843627\") " pod="kube-system/coredns-674b8bbfcf-2f699" Jun 20 18:54:49.599586 kubelet[3387]: I0620 18:54:49.599588 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c8e83ef-69ae-4084-ae53-341567bef6a2-config-volume\") pod \"coredns-674b8bbfcf-5tffm\" (UID: \"3c8e83ef-69ae-4084-ae53-341567bef6a2\") " pod="kube-system/coredns-674b8bbfcf-5tffm" Jun 20 18:54:49.599833 kubelet[3387]: I0620 18:54:49.599624 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vczx9\" (UniqueName: \"kubernetes.io/projected/3c8e83ef-69ae-4084-ae53-341567bef6a2-kube-api-access-vczx9\") pod \"coredns-674b8bbfcf-5tffm\" (UID: \"3c8e83ef-69ae-4084-ae53-341567bef6a2\") " pod="kube-system/coredns-674b8bbfcf-5tffm" Jun 20 18:54:49.599833 kubelet[3387]: I0620 18:54:49.599653 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/937c7ea2-9d10-46bf-9a74-67cfca843627-config-volume\") pod \"coredns-674b8bbfcf-2f699\" (UID: \"937c7ea2-9d10-46bf-9a74-67cfca843627\") " pod="kube-system/coredns-674b8bbfcf-2f699" Jun 20 18:54:49.832816 containerd[1729]: time="2025-06-20T18:54:49.831807435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5tffm,Uid:3c8e83ef-69ae-4084-ae53-341567bef6a2,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:49.852691 containerd[1729]: time="2025-06-20T18:54:49.852624163Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-2f699,Uid:937c7ea2-9d10-46bf-9a74-67cfca843627,Namespace:kube-system,Attempt:0,}" Jun 20 18:54:51.435094 systemd-networkd[1619]: cilium_host: Link UP Jun 20 18:54:51.435281 systemd-networkd[1619]: cilium_net: Link UP Jun 20 18:54:51.435478 systemd-networkd[1619]: cilium_net: Gained carrier Jun 20 18:54:51.435662 systemd-networkd[1619]: cilium_host: Gained carrier Jun 20 18:54:51.555214 systemd-networkd[1619]: cilium_host: Gained IPv6LL Jun 20 18:54:51.600755 systemd-networkd[1619]: cilium_vxlan: Link UP Jun 20 18:54:51.600765 systemd-networkd[1619]: cilium_vxlan: Gained carrier Jun 20 18:54:51.894172 kernel: NET: Registered PF_ALG protocol family Jun 20 18:54:51.915175 systemd-networkd[1619]: cilium_net: Gained IPv6LL Jun 20 18:54:52.644554 systemd-networkd[1619]: lxc_health: Link UP Jun 20 18:54:52.657227 systemd-networkd[1619]: lxc_health: Gained carrier Jun 20 18:54:52.946797 kernel: eth0: renamed from tmp2d3a6 Jun 20 18:54:52.951423 systemd-networkd[1619]: lxca10d0724a663: Link UP Jun 20 18:54:52.954882 systemd-networkd[1619]: lxca10d0724a663: Gained carrier Jun 20 18:54:52.958211 systemd-networkd[1619]: lxc501058b7de44: Link UP Jun 20 18:54:52.967247 kernel: eth0: renamed from tmp11301 Jun 20 18:54:52.980947 systemd-networkd[1619]: lxc501058b7de44: Gained carrier Jun 20 18:54:53.347269 systemd-networkd[1619]: cilium_vxlan: Gained IPv6LL Jun 20 18:54:54.054863 kubelet[3387]: I0620 18:54:54.054784 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vkqss" podStartSLOduration=13.408823378 podStartE2EDuration="21.054764528s" podCreationTimestamp="2025-06-20 18:54:33 +0000 UTC" firstStartedPulling="2025-06-20 18:54:34.179251567 +0000 UTC m=+7.250315087" lastFinishedPulling="2025-06-20 18:54:41.825192617 +0000 UTC m=+14.896256237" observedRunningTime="2025-06-20 18:54:50.156318745 +0000 UTC m=+23.227382365" watchObservedRunningTime="2025-06-20 18:54:54.054764528 +0000 UTC m=+27.125828048" Jun 20 18:54:54.435330 systemd-networkd[1619]: lxc_health: Gained IPv6LL Jun 20 18:54:55.016282 systemd-networkd[1619]: lxca10d0724a663: Gained IPv6LL Jun 20 18:54:55.016683 systemd-networkd[1619]: lxc501058b7de44: Gained IPv6LL Jun 20 18:54:56.893735 containerd[1729]: time="2025-06-20T18:54:56.893315236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:56.893735 containerd[1729]: time="2025-06-20T18:54:56.893388937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:56.893735 containerd[1729]: time="2025-06-20T18:54:56.893411737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:56.893735 containerd[1729]: time="2025-06-20T18:54:56.893520139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:56.928288 containerd[1729]: time="2025-06-20T18:54:56.927810651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:54:56.928288 containerd[1729]: time="2025-06-20T18:54:56.927889152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:54:56.928288 containerd[1729]: time="2025-06-20T18:54:56.927911352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:56.928288 containerd[1729]: time="2025-06-20T18:54:56.928011154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:54:56.941746 systemd[1]: Started cri-containerd-2d3a611df55fc00656570ecfd8a990a6cceba0b9b17607406a1db854dd692f02.scope - libcontainer container 2d3a611df55fc00656570ecfd8a990a6cceba0b9b17607406a1db854dd692f02. Jun 20 18:54:56.987254 systemd[1]: Started cri-containerd-11301ba211af372f1f96ac96109c6ddd219b219da74dcf88be108486bebca953.scope - libcontainer container 11301ba211af372f1f96ac96109c6ddd219b219da74dcf88be108486bebca953. Jun 20 18:54:57.034984 containerd[1729]: time="2025-06-20T18:54:57.034938950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5tffm,Uid:3c8e83ef-69ae-4084-ae53-341567bef6a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d3a611df55fc00656570ecfd8a990a6cceba0b9b17607406a1db854dd692f02\"" Jun 20 18:54:57.051518 containerd[1729]: time="2025-06-20T18:54:57.051467996Z" level=info msg="CreateContainer within sandbox \"2d3a611df55fc00656570ecfd8a990a6cceba0b9b17607406a1db854dd692f02\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:54:57.093995 containerd[1729]: time="2025-06-20T18:54:57.093761428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2f699,Uid:937c7ea2-9d10-46bf-9a74-67cfca843627,Namespace:kube-system,Attempt:0,} returns sandbox id \"11301ba211af372f1f96ac96109c6ddd219b219da74dcf88be108486bebca953\"" Jun 20 18:54:57.108350 containerd[1729]: time="2025-06-20T18:54:57.108244944Z" level=info msg="CreateContainer within sandbox \"11301ba211af372f1f96ac96109c6ddd219b219da74dcf88be108486bebca953\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:54:57.114598 containerd[1729]: time="2025-06-20T18:54:57.114551738Z" level=info msg="CreateContainer within sandbox \"2d3a611df55fc00656570ecfd8a990a6cceba0b9b17607406a1db854dd692f02\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"093f47a78a9d905c66b001827492ab2fcf9546452d328732089ee7ee41097636\"" Jun 20 18:54:57.115733 containerd[1729]: time="2025-06-20T18:54:57.115699855Z" level=info msg="StartContainer for \"093f47a78a9d905c66b001827492ab2fcf9546452d328732089ee7ee41097636\"" Jun 20 18:54:57.164226 systemd[1]: Started cri-containerd-093f47a78a9d905c66b001827492ab2fcf9546452d328732089ee7ee41097636.scope - libcontainer container 093f47a78a9d905c66b001827492ab2fcf9546452d328732089ee7ee41097636. Jun 20 18:54:57.170182 containerd[1729]: time="2025-06-20T18:54:57.170138268Z" level=info msg="CreateContainer within sandbox \"11301ba211af372f1f96ac96109c6ddd219b219da74dcf88be108486bebca953\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90fdb3b13dd7c6605c5cbb032de59301bc3e4f403c134d7568f331d8a2e038fb\"" Jun 20 18:54:57.170759 containerd[1729]: time="2025-06-20T18:54:57.170659475Z" level=info msg="StartContainer for \"90fdb3b13dd7c6605c5cbb032de59301bc3e4f403c134d7568f331d8a2e038fb\"" Jun 20 18:54:57.215269 systemd[1]: Started cri-containerd-90fdb3b13dd7c6605c5cbb032de59301bc3e4f403c134d7568f331d8a2e038fb.scope - libcontainer container 90fdb3b13dd7c6605c5cbb032de59301bc3e4f403c134d7568f331d8a2e038fb. 
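Earlier in this sequence systemd-networkd reported cilium_host, cilium_net, cilium_vxlan, lxc_health and the per-pod lxca10d0724a663 / lxc501058b7de44 links gaining carrier and IPv6LL addresses; the coredns sandboxes started above sit behind those lxc* veths. To cross-check link state from userspace without a netlink library, a minimal sketch using only Go's net package would be:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, ifc := range ifaces {
            if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
                fmt.Printf("%-16s up=%v\n", ifc.Name, ifc.Flags&net.FlagUp != 0)
            }
        }
    }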
Jun 20 18:54:57.222477 containerd[1729]: time="2025-06-20T18:54:57.222425948Z" level=info msg="StartContainer for \"093f47a78a9d905c66b001827492ab2fcf9546452d328732089ee7ee41097636\" returns successfully" Jun 20 18:54:57.268765 containerd[1729]: time="2025-06-20T18:54:57.268369934Z" level=info msg="StartContainer for \"90fdb3b13dd7c6605c5cbb032de59301bc3e4f403c134d7568f331d8a2e038fb\" returns successfully" Jun 20 18:54:57.905997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308139826.mount: Deactivated successfully. Jun 20 18:54:58.169986 kubelet[3387]: I0620 18:54:58.169811 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2f699" podStartSLOduration=25.169787988 podStartE2EDuration="25.169787988s" podCreationTimestamp="2025-06-20 18:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:58.169411082 +0000 UTC m=+31.240474702" watchObservedRunningTime="2025-06-20 18:54:58.169787988 +0000 UTC m=+31.240851508" Jun 20 18:54:58.186325 kubelet[3387]: I0620 18:54:58.185712 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5tffm" podStartSLOduration=25.185691325 podStartE2EDuration="25.185691325s" podCreationTimestamp="2025-06-20 18:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:54:58.183410691 +0000 UTC m=+31.254474211" watchObservedRunningTime="2025-06-20 18:54:58.185691325 +0000 UTC m=+31.256754945" Jun 20 18:55:03.879095 kubelet[3387]: I0620 18:55:03.878851 3387 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 18:56:05.594362 systemd[1]: Started sshd@7-10.200.8.21:22-10.200.16.10:51222.service - OpenSSH per-connection server daemon (10.200.16.10:51222). Jun 20 18:56:06.224697 sshd[4774]: Accepted publickey for core from 10.200.16.10 port 51222 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:06.226619 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:06.232111 systemd-logind[1704]: New session 10 of user core. Jun 20 18:56:06.237264 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 18:56:06.743040 sshd[4776]: Connection closed by 10.200.16.10 port 51222 Jun 20 18:56:06.745696 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:06.750496 systemd[1]: sshd@7-10.200.8.21:22-10.200.16.10:51222.service: Deactivated successfully. Jun 20 18:56:06.754652 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 18:56:06.757020 systemd-logind[1704]: Session 10 logged out. Waiting for processes to exit. Jun 20 18:56:06.758582 systemd-logind[1704]: Removed session 10. Jun 20 18:56:11.860432 systemd[1]: Started sshd@8-10.200.8.21:22-10.200.16.10:48756.service - OpenSSH per-connection server daemon (10.200.16.10:48756). Jun 20 18:56:12.533004 sshd[4790]: Accepted publickey for core from 10.200.16.10 port 48756 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:12.534552 sshd-session[4790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:12.540117 systemd-logind[1704]: New session 11 of user core. Jun 20 18:56:12.548277 systemd[1]: Started session-11.scope - Session 11 of User core. 
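Note that both coredns pod_startup_latency_tracker entries above carry firstStartedPulling and lastFinishedPulling of "0001-01-01 00:00:00 +0000 UTC". That is Go's zero time.Time, which is left in place when no image pull was observed (the coredns image was already on disk), so the SLO and E2E durations come out identical. A tiny illustration of the sentinel, not kubelet code:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        var firstStartedPulling time.Time // zero value prints as 0001-01-01 00:00:00 +0000 UTC
        fmt.Println(firstStartedPulling)
        if firstStartedPulling.IsZero() {
            fmt.Println("no image pull observed for this pod; pull window excluded from the SLO math")
        }
    }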
Jun 20 18:56:13.065099 sshd[4792]: Connection closed by 10.200.16.10 port 48756 Jun 20 18:56:13.065975 sshd-session[4790]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:13.070179 systemd[1]: sshd@8-10.200.8.21:22-10.200.16.10:48756.service: Deactivated successfully. Jun 20 18:56:13.072232 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 18:56:13.073040 systemd-logind[1704]: Session 11 logged out. Waiting for processes to exit. Jun 20 18:56:13.074527 systemd-logind[1704]: Removed session 11. Jun 20 18:56:18.183420 systemd[1]: Started sshd@9-10.200.8.21:22-10.200.16.10:48768.service - OpenSSH per-connection server daemon (10.200.16.10:48768). Jun 20 18:56:18.810241 sshd[4805]: Accepted publickey for core from 10.200.16.10 port 48768 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:18.811755 sshd-session[4805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:18.816117 systemd-logind[1704]: New session 12 of user core. Jun 20 18:56:18.824298 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 18:56:19.313509 sshd[4807]: Connection closed by 10.200.16.10 port 48768 Jun 20 18:56:19.314332 sshd-session[4805]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:19.318611 systemd[1]: sshd@9-10.200.8.21:22-10.200.16.10:48768.service: Deactivated successfully. Jun 20 18:56:19.322619 systemd[1]: session-12.scope: Deactivated successfully. Jun 20 18:56:19.323820 systemd-logind[1704]: Session 12 logged out. Waiting for processes to exit. Jun 20 18:56:19.325183 systemd-logind[1704]: Removed session 12. Jun 20 18:56:24.435394 systemd[1]: Started sshd@10-10.200.8.21:22-10.200.16.10:38114.service - OpenSSH per-connection server daemon (10.200.16.10:38114). Jun 20 18:56:25.062360 sshd[4819]: Accepted publickey for core from 10.200.16.10 port 38114 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:25.064092 sshd-session[4819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:25.069423 systemd-logind[1704]: New session 13 of user core. Jun 20 18:56:25.074239 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 18:56:25.571947 sshd[4821]: Connection closed by 10.200.16.10 port 38114 Jun 20 18:56:25.572814 sshd-session[4819]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:25.577473 systemd[1]: sshd@10-10.200.8.21:22-10.200.16.10:38114.service: Deactivated successfully. Jun 20 18:56:25.579910 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 18:56:25.580998 systemd-logind[1704]: Session 13 logged out. Waiting for processes to exit. Jun 20 18:56:25.582439 systemd-logind[1704]: Removed session 13. Jun 20 18:56:25.691389 systemd[1]: Started sshd@11-10.200.8.21:22-10.200.16.10:38122.service - OpenSSH per-connection server daemon (10.200.16.10:38122). Jun 20 18:56:26.317968 sshd[4834]: Accepted publickey for core from 10.200.16.10 port 38122 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:26.319625 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:26.323927 systemd-logind[1704]: New session 14 of user core. Jun 20 18:56:26.331279 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 20 18:56:26.851867 sshd[4836]: Connection closed by 10.200.16.10 port 38122 Jun 20 18:56:26.852614 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:26.856444 systemd[1]: sshd@11-10.200.8.21:22-10.200.16.10:38122.service: Deactivated successfully. Jun 20 18:56:26.858753 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 18:56:26.859794 systemd-logind[1704]: Session 14 logged out. Waiting for processes to exit. Jun 20 18:56:26.861236 systemd-logind[1704]: Removed session 14. Jun 20 18:56:26.967383 systemd[1]: Started sshd@12-10.200.8.21:22-10.200.16.10:38124.service - OpenSSH per-connection server daemon (10.200.16.10:38124). Jun 20 18:56:27.593093 sshd[4846]: Accepted publickey for core from 10.200.16.10 port 38124 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:27.594614 sshd-session[4846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:27.599281 systemd-logind[1704]: New session 15 of user core. Jun 20 18:56:27.611242 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 18:56:28.096195 sshd[4850]: Connection closed by 10.200.16.10 port 38124 Jun 20 18:56:28.096953 sshd-session[4846]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:28.099924 systemd[1]: sshd@12-10.200.8.21:22-10.200.16.10:38124.service: Deactivated successfully. Jun 20 18:56:28.102314 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 18:56:28.103915 systemd-logind[1704]: Session 15 logged out. Waiting for processes to exit. Jun 20 18:56:28.105399 systemd-logind[1704]: Removed session 15. Jun 20 18:56:31.837279 update_engine[1709]: I20250620 18:56:31.837213 1709 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 20 18:56:31.837279 update_engine[1709]: I20250620 18:56:31.837270 1709 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 20 18:56:31.837839 update_engine[1709]: I20250620 18:56:31.837519 1709 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 20 18:56:31.838763 update_engine[1709]: I20250620 18:56:31.838421 1709 omaha_request_params.cc:62] Current group set to stable Jun 20 18:56:31.838763 update_engine[1709]: I20250620 18:56:31.838563 1709 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 20 18:56:31.838763 update_engine[1709]: I20250620 18:56:31.838579 1709 update_attempter.cc:643] Scheduling an action processor start. 
Jun 20 18:56:31.838763 update_engine[1709]: I20250620 18:56:31.838603 1709 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 18:56:31.838763 update_engine[1709]: I20250620 18:56:31.838646 1709 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 20 18:56:31.839027 locksmithd[1760]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 20 18:56:31.839355 update_engine[1709]: I20250620 18:56:31.838733 1709 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 18:56:31.839355 update_engine[1709]: I20250620 18:56:31.839047 1709 omaha_request_action.cc:272] Request: Jun 20 18:56:31.839355 update_engine[1709]: Jun 20 18:56:31.839355 update_engine[1709]: Jun 20 18:56:31.839355 update_engine[1709]: Jun 20 18:56:31.839355 update_engine[1709]: Jun 20 18:56:31.839355 update_engine[1709]: Jun 20 18:56:31.839355 update_engine[1709]: Jun 20 18:56:31.839355 update_engine[1709]: Jun 20 18:56:31.839355 update_engine[1709]: Jun 20 18:56:31.839355 update_engine[1709]: I20250620 18:56:31.839092 1709 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:56:31.840770 update_engine[1709]: I20250620 18:56:31.840727 1709 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:56:31.841252 update_engine[1709]: I20250620 18:56:31.841212 1709 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:56:31.863989 update_engine[1709]: E20250620 18:56:31.863915 1709 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:56:31.864145 update_engine[1709]: I20250620 18:56:31.864048 1709 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 20 18:56:33.224392 systemd[1]: Started sshd@13-10.200.8.21:22-10.200.16.10:42370.service - OpenSSH per-connection server daemon (10.200.16.10:42370). Jun 20 18:56:33.851843 sshd[4862]: Accepted publickey for core from 10.200.16.10 port 42370 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:33.853444 sshd-session[4862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:33.857783 systemd-logind[1704]: New session 16 of user core. Jun 20 18:56:33.867234 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 18:56:34.355430 sshd[4864]: Connection closed by 10.200.16.10 port 42370 Jun 20 18:56:34.356311 sshd-session[4862]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:34.360544 systemd[1]: sshd@13-10.200.8.21:22-10.200.16.10:42370.service: Deactivated successfully. Jun 20 18:56:34.362898 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 18:56:34.364479 systemd-logind[1704]: Session 16 logged out. Waiting for processes to exit. Jun 20 18:56:34.365788 systemd-logind[1704]: Removed session 16. Jun 20 18:56:39.473370 systemd[1]: Started sshd@14-10.200.8.21:22-10.200.16.10:40190.service - OpenSSH per-connection server daemon (10.200.16.10:40190). Jun 20 18:56:40.102640 sshd[4878]: Accepted publickey for core from 10.200.16.10 port 40190 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:40.104113 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:40.109037 systemd-logind[1704]: New session 17 of user core. Jun 20 18:56:40.123242 systemd[1]: Started session-17.scope - Session 17 of User core. 
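The update_engine activity above is an Omaha update check posted to the literal URL "disabled" (presumably because automatic updates are switched off in this image's update configuration), so libcurl fails with "Could not resolve host: disabled" and the fetcher retries on a 1-second timeout source. A rough Go sketch of that retry shape, using a hypothetical retry cap and the same unresolvable endpoint rather than update_engine's real C++ logic:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 1 * time.Second} // mirrors the 1-second timeout source in the log
        const maxRetries = 3
        for attempt := 1; attempt <= maxRetries; attempt++ {
            resp, err := client.Post("http://disabled/", "text/xml", nil)
            if err != nil {
                fmt.Printf("attempt %d: %v\n", attempt, err) // e.g. "no such host", like the log's resolve failure
                time.Sleep(time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Println("unexpected success:", resp.Status)
            return
        }
        fmt.Println("giving up; an error event would be reported at this point")
    }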
Jun 20 18:56:40.608430 sshd[4880]: Connection closed by 10.200.16.10 port 40190 Jun 20 18:56:40.609436 sshd-session[4878]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:40.614122 systemd[1]: sshd@14-10.200.8.21:22-10.200.16.10:40190.service: Deactivated successfully. Jun 20 18:56:40.616328 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 18:56:40.617302 systemd-logind[1704]: Session 17 logged out. Waiting for processes to exit. Jun 20 18:56:40.618403 systemd-logind[1704]: Removed session 17. Jun 20 18:56:40.723379 systemd[1]: Started sshd@15-10.200.8.21:22-10.200.16.10:40198.service - OpenSSH per-connection server daemon (10.200.16.10:40198). Jun 20 18:56:41.349668 sshd[4892]: Accepted publickey for core from 10.200.16.10 port 40198 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:41.351163 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:41.355483 systemd-logind[1704]: New session 18 of user core. Jun 20 18:56:41.360228 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 18:56:41.833780 update_engine[1709]: I20250620 18:56:41.833698 1709 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:56:41.834309 update_engine[1709]: I20250620 18:56:41.834038 1709 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:56:41.834468 update_engine[1709]: I20250620 18:56:41.834416 1709 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:56:41.873842 update_engine[1709]: E20250620 18:56:41.873759 1709 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:56:41.874008 update_engine[1709]: I20250620 18:56:41.873878 1709 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 20 18:56:42.001574 sshd[4894]: Connection closed by 10.200.16.10 port 40198 Jun 20 18:56:42.002340 sshd-session[4892]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:42.006357 systemd[1]: sshd@15-10.200.8.21:22-10.200.16.10:40198.service: Deactivated successfully. Jun 20 18:56:42.008534 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 18:56:42.009448 systemd-logind[1704]: Session 18 logged out. Waiting for processes to exit. Jun 20 18:56:42.010571 systemd-logind[1704]: Removed session 18. Jun 20 18:56:42.118472 systemd[1]: Started sshd@16-10.200.8.21:22-10.200.16.10:40200.service - OpenSSH per-connection server daemon (10.200.16.10:40200). Jun 20 18:56:42.746969 sshd[4904]: Accepted publickey for core from 10.200.16.10 port 40200 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:42.748525 sshd-session[4904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:42.752872 systemd-logind[1704]: New session 19 of user core. Jun 20 18:56:42.761221 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 18:56:44.079127 sshd[4906]: Connection closed by 10.200.16.10 port 40200 Jun 20 18:56:44.079855 sshd-session[4904]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:44.086676 systemd[1]: sshd@16-10.200.8.21:22-10.200.16.10:40200.service: Deactivated successfully. Jun 20 18:56:44.088811 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 18:56:44.089617 systemd-logind[1704]: Session 19 logged out. Waiting for processes to exit. Jun 20 18:56:44.090655 systemd-logind[1704]: Removed session 19. 
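The sshd/systemd-logind pattern above repeats for every connection: Accepted publickey, pam_unix session opened, "New session N of user core", then connection closed, session closed, scope deactivated, "Removed session N". To measure how long each session lasted from an exported journal, a small stdlib-only parser (a hypothetical helper, assuming one journal entry per line) can pair the New/Removed timestamps:

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
        "time"
    )

    var re = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*?(New|Removed) session (\d+)`)

    func main() {
        // Two entries copied from the log above, one per line.
        journal := "Jun 20 18:56:06.232111 systemd-logind[1704]: New session 10 of user core.\n" +
            "Jun 20 18:56:06.758582 systemd-logind[1704]: Removed session 10.\n"

        started := map[string]time.Time{}
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            m := re.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            // The journal timestamp omits the year; good enough for durations within one boot.
            ts, err := time.Parse("Jan 2 15:04:05", m[1])
            if err != nil {
                continue
            }
            switch m[2] {
            case "New":
                started[m[3]] = ts
            case "Removed":
                if t0, ok := started[m[3]]; ok {
                    fmt.Printf("session %s lasted %s\n", m[3], ts.Sub(t0))
                }
            }
        }
    }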
Jun 20 18:56:44.205462 systemd[1]: Started sshd@17-10.200.8.21:22-10.200.16.10:40204.service - OpenSSH per-connection server daemon (10.200.16.10:40204). Jun 20 18:56:44.861245 sshd[4923]: Accepted publickey for core from 10.200.16.10 port 40204 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:44.862692 sshd-session[4923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:44.867011 systemd-logind[1704]: New session 20 of user core. Jun 20 18:56:44.879271 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 18:56:45.486127 sshd[4925]: Connection closed by 10.200.16.10 port 40204 Jun 20 18:56:45.486652 sshd-session[4923]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:45.491775 systemd[1]: sshd@17-10.200.8.21:22-10.200.16.10:40204.service: Deactivated successfully. Jun 20 18:56:45.494561 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 18:56:45.495531 systemd-logind[1704]: Session 20 logged out. Waiting for processes to exit. Jun 20 18:56:45.496801 systemd-logind[1704]: Removed session 20. Jun 20 18:56:45.602376 systemd[1]: Started sshd@18-10.200.8.21:22-10.200.16.10:40210.service - OpenSSH per-connection server daemon (10.200.16.10:40210). Jun 20 18:56:46.230041 sshd[4935]: Accepted publickey for core from 10.200.16.10 port 40210 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:46.231991 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:46.236528 systemd-logind[1704]: New session 21 of user core. Jun 20 18:56:46.242203 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 20 18:56:46.730324 sshd[4937]: Connection closed by 10.200.16.10 port 40210 Jun 20 18:56:46.731203 sshd-session[4935]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:46.736163 systemd[1]: sshd@18-10.200.8.21:22-10.200.16.10:40210.service: Deactivated successfully. Jun 20 18:56:46.738576 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 18:56:46.739506 systemd-logind[1704]: Session 21 logged out. Waiting for processes to exit. Jun 20 18:56:46.740534 systemd-logind[1704]: Removed session 21. Jun 20 18:56:51.836934 update_engine[1709]: I20250620 18:56:51.836841 1709 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:56:51.837533 update_engine[1709]: I20250620 18:56:51.837251 1709 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:56:51.837679 update_engine[1709]: I20250620 18:56:51.837622 1709 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:56:51.851416 systemd[1]: Started sshd@19-10.200.8.21:22-10.200.16.10:56232.service - OpenSSH per-connection server daemon (10.200.16.10:56232). Jun 20 18:56:51.872421 update_engine[1709]: E20250620 18:56:51.872368 1709 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:56:51.872554 update_engine[1709]: I20250620 18:56:51.872458 1709 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 20 18:56:52.479423 sshd[4952]: Accepted publickey for core from 10.200.16.10 port 56232 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:52.481000 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:52.486659 systemd-logind[1704]: New session 22 of user core. Jun 20 18:56:52.494238 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 20 18:56:52.984303 sshd[4954]: Connection closed by 10.200.16.10 port 56232 Jun 20 18:56:52.985187 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:52.989742 systemd[1]: sshd@19-10.200.8.21:22-10.200.16.10:56232.service: Deactivated successfully. Jun 20 18:56:52.991866 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 18:56:52.992816 systemd-logind[1704]: Session 22 logged out. Waiting for processes to exit. Jun 20 18:56:52.994022 systemd-logind[1704]: Removed session 22. Jun 20 18:56:58.108440 systemd[1]: Started sshd@20-10.200.8.21:22-10.200.16.10:56240.service - OpenSSH per-connection server daemon (10.200.16.10:56240). Jun 20 18:56:58.735581 sshd[4966]: Accepted publickey for core from 10.200.16.10 port 56240 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:58.737445 sshd-session[4966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:56:58.742416 systemd-logind[1704]: New session 23 of user core. Jun 20 18:56:58.747260 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 18:56:59.253075 sshd[4968]: Connection closed by 10.200.16.10 port 56240 Jun 20 18:56:59.253961 sshd-session[4966]: pam_unix(sshd:session): session closed for user core Jun 20 18:56:59.258626 systemd[1]: sshd@20-10.200.8.21:22-10.200.16.10:56240.service: Deactivated successfully. Jun 20 18:56:59.261329 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 18:56:59.262411 systemd-logind[1704]: Session 23 logged out. Waiting for processes to exit. Jun 20 18:56:59.263512 systemd-logind[1704]: Removed session 23. Jun 20 18:56:59.368383 systemd[1]: Started sshd@21-10.200.8.21:22-10.200.16.10:59300.service - OpenSSH per-connection server daemon (10.200.16.10:59300). Jun 20 18:56:59.995703 sshd[4980]: Accepted publickey for core from 10.200.16.10 port 59300 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:56:59.997190 sshd-session[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:00.002117 systemd-logind[1704]: New session 24 of user core. Jun 20 18:57:00.005212 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 18:57:01.649939 containerd[1729]: time="2025-06-20T18:57:01.649892382Z" level=info msg="StopContainer for \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\" with timeout 30 (s)" Jun 20 18:57:01.653307 containerd[1729]: time="2025-06-20T18:57:01.653270637Z" level=info msg="Stop container \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\" with signal terminated" Jun 20 18:57:01.678306 systemd[1]: cri-containerd-873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d.scope: Deactivated successfully. 
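The StopContainer request above carries a 30-second timeout: the runtime first delivers SIGTERM ("Stop container ... with signal terminated") and only escalates to SIGKILL if the process is still running when the grace period expires. A minimal illustration of that pattern against an ordinary child process (not the CRI implementation itself):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    func main() {
        cmd := exec.Command("sleep", "300")
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        // Ask politely first, as StopContainer does within its grace period.
        _ = cmd.Process.Signal(syscall.SIGTERM)

        select {
        case err := <-done:
            fmt.Println("exited after SIGTERM:", err)
        case <-time.After(30 * time.Second):
            fmt.Println("grace period expired; sending SIGKILL")
            _ = cmd.Process.Kill()
            <-done
        }
    }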
Jun 20 18:57:01.680756 containerd[1729]: time="2025-06-20T18:57:01.680365275Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:57:01.690189 containerd[1729]: time="2025-06-20T18:57:01.690137833Z" level=info msg="StopContainer for \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\" with timeout 2 (s)" Jun 20 18:57:01.692559 containerd[1729]: time="2025-06-20T18:57:01.692354169Z" level=info msg="Stop container \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\" with signal terminated" Jun 20 18:57:01.704168 systemd-networkd[1619]: lxc_health: Link DOWN Jun 20 18:57:01.704181 systemd-networkd[1619]: lxc_health: Lost carrier Jun 20 18:57:01.721533 systemd[1]: cri-containerd-e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25.scope: Deactivated successfully. Jun 20 18:57:01.722568 systemd[1]: cri-containerd-e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25.scope: Consumed 7.551s CPU time, 125.1M memory peak, 136K read from disk, 13.3M written to disk. Jun 20 18:57:01.728013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d-rootfs.mount: Deactivated successfully. Jun 20 18:57:01.751553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25-rootfs.mount: Deactivated successfully. Jun 20 18:57:01.797395 containerd[1729]: time="2025-06-20T18:57:01.797310967Z" level=info msg="shim disconnected" id=873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d namespace=k8s.io Jun 20 18:57:01.797395 containerd[1729]: time="2025-06-20T18:57:01.797389369Z" level=warning msg="cleaning up after shim disconnected" id=873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d namespace=k8s.io Jun 20 18:57:01.797395 containerd[1729]: time="2025-06-20T18:57:01.797403269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:01.799822 containerd[1729]: time="2025-06-20T18:57:01.797312367Z" level=info msg="shim disconnected" id=e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25 namespace=k8s.io Jun 20 18:57:01.799822 containerd[1729]: time="2025-06-20T18:57:01.797697374Z" level=warning msg="cleaning up after shim disconnected" id=e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25 namespace=k8s.io Jun 20 18:57:01.799822 containerd[1729]: time="2025-06-20T18:57:01.797708774Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:01.817136 containerd[1729]: time="2025-06-20T18:57:01.817035286Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:57:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 18:57:01.823423 containerd[1729]: time="2025-06-20T18:57:01.823376889Z" level=info msg="StopContainer for \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\" returns successfully" Jun 20 18:57:01.824088 containerd[1729]: time="2025-06-20T18:57:01.824037900Z" level=info msg="StopPodSandbox for \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\"" Jun 20 18:57:01.824216 containerd[1729]: time="2025-06-20T18:57:01.824104501Z" level=info msg="Container to stop 
\"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:01.824216 containerd[1729]: time="2025-06-20T18:57:01.824149902Z" level=info msg="Container to stop \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:01.824216 containerd[1729]: time="2025-06-20T18:57:01.824164302Z" level=info msg="Container to stop \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:01.824216 containerd[1729]: time="2025-06-20T18:57:01.824177002Z" level=info msg="Container to stop \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:01.824216 containerd[1729]: time="2025-06-20T18:57:01.824189102Z" level=info msg="Container to stop \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:01.827820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90-shm.mount: Deactivated successfully. Jun 20 18:57:01.830211 containerd[1729]: time="2025-06-20T18:57:01.829751692Z" level=info msg="StopContainer for \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\" returns successfully" Jun 20 18:57:01.831648 containerd[1729]: time="2025-06-20T18:57:01.831621122Z" level=info msg="StopPodSandbox for \"a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29\"" Jun 20 18:57:01.832023 containerd[1729]: time="2025-06-20T18:57:01.831935228Z" level=info msg="Container to stop \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 18:57:01.834183 update_engine[1709]: I20250620 18:57:01.833348 1709 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:57:01.834738 update_engine[1709]: I20250620 18:57:01.834405 1709 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:57:01.834738 update_engine[1709]: I20250620 18:57:01.834726 1709 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:57:01.836668 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29-shm.mount: Deactivated successfully. Jun 20 18:57:01.841285 systemd[1]: cri-containerd-760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90.scope: Deactivated successfully. Jun 20 18:57:01.856267 systemd[1]: cri-containerd-a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29.scope: Deactivated successfully. Jun 20 18:57:01.858014 update_engine[1709]: E20250620 18:57:01.856339 1709 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.856431 1709 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.856442 1709 omaha_request_action.cc:617] Omaha request response: Jun 20 18:57:01.858014 update_engine[1709]: E20250620 18:57:01.856529 1709 omaha_request_action.cc:636] Omaha request network transfer failed. 
Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.856825 1709 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.856848 1709 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.856856 1709 update_attempter.cc:306] Processing Done. Jun 20 18:57:01.858014 update_engine[1709]: E20250620 18:57:01.856873 1709 update_attempter.cc:619] Update failed. Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.856881 1709 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.856889 1709 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.856896 1709 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.856976 1709 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.857009 1709 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 18:57:01.858014 update_engine[1709]: I20250620 18:57:01.857018 1709 omaha_request_action.cc:272] Request: Jun 20 18:57:01.858014 update_engine[1709]: Jun 20 18:57:01.858014 update_engine[1709]: Jun 20 18:57:01.858676 update_engine[1709]: Jun 20 18:57:01.858676 update_engine[1709]: Jun 20 18:57:01.858676 update_engine[1709]: Jun 20 18:57:01.858676 update_engine[1709]: Jun 20 18:57:01.858676 update_engine[1709]: I20250620 18:57:01.857026 1709 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:57:01.858676 update_engine[1709]: I20250620 18:57:01.857249 1709 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:57:01.858676 update_engine[1709]: I20250620 18:57:01.857538 1709 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:57:01.861693 locksmithd[1760]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 20 18:57:01.878933 update_engine[1709]: E20250620 18:57:01.878786 1709 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:57:01.879445 update_engine[1709]: I20250620 18:57:01.878891 1709 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 18:57:01.879445 update_engine[1709]: I20250620 18:57:01.879184 1709 omaha_request_action.cc:617] Omaha request response: Jun 20 18:57:01.879445 update_engine[1709]: I20250620 18:57:01.879204 1709 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:57:01.879445 update_engine[1709]: I20250620 18:57:01.879212 1709 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:57:01.879445 update_engine[1709]: I20250620 18:57:01.879223 1709 update_attempter.cc:306] Processing Done. Jun 20 18:57:01.879445 update_engine[1709]: I20250620 18:57:01.879339 1709 update_attempter.cc:310] Error event sent. 
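Alongside the failed Omaha exchange, locksmithd logs its status as a flat key=value line (LastCheckedTime, Progress, CurrentOperation, NewVersion, NewSize). A small stdlib-only parser for that line format, shown here against the UPDATE_STATUS_REPORTING_ERROR_EVENT entry copied from above (illustrative, not locksmithd's own code):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parseStatus splits a locksmithd status line into its key=value fields,
    // stripping quotes from quoted values such as CurrentOperation.
    func parseStatus(line string) map[string]string {
        out := map[string]string{}
        for _, field := range strings.Fields(line) {
            k, v, ok := strings.Cut(field, "=")
            if !ok {
                continue
            }
            if unq, err := strconv.Unquote(v); err == nil {
                v = unq
            }
            out[k] = v
        }
        return out
    }

    func main() {
        line := `LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0`
        st := parseStatus(line)
        fmt.Println(st["CurrentOperation"], "progress:", st["Progress"])
    }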
Jun 20 18:57:01.879445 update_engine[1709]: I20250620 18:57:01.879379 1709 update_check_scheduler.cc:74] Next update check in 49m28s Jun 20 18:57:01.880346 locksmithd[1760]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 20 18:57:01.891163 containerd[1729]: time="2025-06-20T18:57:01.890976483Z" level=info msg="shim disconnected" id=760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90 namespace=k8s.io Jun 20 18:57:01.891163 containerd[1729]: time="2025-06-20T18:57:01.891044184Z" level=warning msg="cleaning up after shim disconnected" id=760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90 namespace=k8s.io Jun 20 18:57:01.891163 containerd[1729]: time="2025-06-20T18:57:01.891080885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:01.891951 containerd[1729]: time="2025-06-20T18:57:01.891571492Z" level=info msg="shim disconnected" id=a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29 namespace=k8s.io Jun 20 18:57:01.891951 containerd[1729]: time="2025-06-20T18:57:01.891926098Z" level=warning msg="cleaning up after shim disconnected" id=a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29 namespace=k8s.io Jun 20 18:57:01.892246 containerd[1729]: time="2025-06-20T18:57:01.892167902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:01.912370 containerd[1729]: time="2025-06-20T18:57:01.912112925Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:57:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 18:57:01.912851 containerd[1729]: time="2025-06-20T18:57:01.912594333Z" level=info msg="TearDown network for sandbox \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" successfully" Jun 20 18:57:01.912851 containerd[1729]: time="2025-06-20T18:57:01.912623833Z" level=info msg="StopPodSandbox for \"760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90\" returns successfully" Jun 20 18:57:01.915242 containerd[1729]: time="2025-06-20T18:57:01.915213675Z" level=info msg="TearDown network for sandbox \"a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29\" successfully" Jun 20 18:57:01.915357 containerd[1729]: time="2025-06-20T18:57:01.915341977Z" level=info msg="StopPodSandbox for \"a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29\" returns successfully" Jun 20 18:57:02.022746 kubelet[3387]: I0620 18:57:02.022684 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz9bb\" (UniqueName: \"kubernetes.io/projected/2dd0d07c-9703-4ea0-b254-a96be938bb12-kube-api-access-vz9bb\") pod \"2dd0d07c-9703-4ea0-b254-a96be938bb12\" (UID: \"2dd0d07c-9703-4ea0-b254-a96be938bb12\") " Jun 20 18:57:02.023666 kubelet[3387]: I0620 18:57:02.023190 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-hostproc\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.023666 kubelet[3387]: I0620 18:57:02.023225 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-run\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: 
\"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.023666 kubelet[3387]: I0620 18:57:02.023270 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cni-path\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.023666 kubelet[3387]: I0620 18:57:02.023296 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-bpf-maps\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.023666 kubelet[3387]: I0620 18:57:02.023327 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a61079c-ebd9-4295-a838-7e074e1746d5-clustermesh-secrets\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.024775 kubelet[3387]: I0620 18:57:02.024013 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a61079c-ebd9-4295-a838-7e074e1746d5-hubble-tls\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.024775 kubelet[3387]: I0620 18:57:02.024087 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-etc-cni-netd\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.024775 kubelet[3387]: I0620 18:57:02.024115 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-host-proc-sys-kernel\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.024775 kubelet[3387]: I0620 18:57:02.024146 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-cgroup\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.024775 kubelet[3387]: I0620 18:57:02.024172 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-xtables-lock\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.024775 kubelet[3387]: I0620 18:57:02.024253 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-config-path\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.025208 kubelet[3387]: I0620 18:57:02.024285 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-lib-modules\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 
18:57:02.025208 kubelet[3387]: I0620 18:57:02.024313 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-host-proc-sys-net\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.025208 kubelet[3387]: I0620 18:57:02.024351 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2dd0d07c-9703-4ea0-b254-a96be938bb12-cilium-config-path\") pod \"2dd0d07c-9703-4ea0-b254-a96be938bb12\" (UID: \"2dd0d07c-9703-4ea0-b254-a96be938bb12\") " Jun 20 18:57:02.025208 kubelet[3387]: I0620 18:57:02.024378 3387 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8xtm\" (UniqueName: \"kubernetes.io/projected/9a61079c-ebd9-4295-a838-7e074e1746d5-kube-api-access-j8xtm\") pod \"9a61079c-ebd9-4295-a838-7e074e1746d5\" (UID: \"9a61079c-ebd9-4295-a838-7e074e1746d5\") " Jun 20 18:57:02.028077 kubelet[3387]: I0620 18:57:02.026238 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:57:02.028077 kubelet[3387]: I0620 18:57:02.026300 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:57:02.028077 kubelet[3387]: I0620 18:57:02.026307 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-hostproc" (OuterVolumeSpecName: "hostproc") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:57:02.028077 kubelet[3387]: I0620 18:57:02.026326 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:57:02.028077 kubelet[3387]: I0620 18:57:02.026352 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cni-path" (OuterVolumeSpecName: "cni-path") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:57:02.028435 kubelet[3387]: I0620 18:57:02.026353 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:57:02.028435 kubelet[3387]: I0620 18:57:02.026374 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:57:02.028435 kubelet[3387]: I0620 18:57:02.026386 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:57:02.029665 kubelet[3387]: I0620 18:57:02.029591 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:57:02.030011 kubelet[3387]: I0620 18:57:02.029965 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 18:57:02.031681 kubelet[3387]: I0620 18:57:02.031645 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd0d07c-9703-4ea0-b254-a96be938bb12-kube-api-access-vz9bb" (OuterVolumeSpecName: "kube-api-access-vz9bb") pod "2dd0d07c-9703-4ea0-b254-a96be938bb12" (UID: "2dd0d07c-9703-4ea0-b254-a96be938bb12"). InnerVolumeSpecName "kube-api-access-vz9bb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:57:02.033293 kubelet[3387]: I0620 18:57:02.033258 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a61079c-ebd9-4295-a838-7e074e1746d5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 18:57:02.034772 kubelet[3387]: I0620 18:57:02.034743 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a61079c-ebd9-4295-a838-7e074e1746d5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:57:02.036150 kubelet[3387]: I0620 18:57:02.036122 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a61079c-ebd9-4295-a838-7e074e1746d5-kube-api-access-j8xtm" (OuterVolumeSpecName: "kube-api-access-j8xtm") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "kube-api-access-j8xtm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 18:57:02.036620 kubelet[3387]: I0620 18:57:02.036596 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dd0d07c-9703-4ea0-b254-a96be938bb12-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2dd0d07c-9703-4ea0-b254-a96be938bb12" (UID: "2dd0d07c-9703-4ea0-b254-a96be938bb12"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:57:02.037211 kubelet[3387]: I0620 18:57:02.037185 3387 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a61079c-ebd9-4295-a838-7e074e1746d5" (UID: "9a61079c-ebd9-4295-a838-7e074e1746d5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 18:57:02.125463 kubelet[3387]: I0620 18:57:02.125406 3387 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2dd0d07c-9703-4ea0-b254-a96be938bb12-cilium-config-path\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.125463 kubelet[3387]: I0620 18:57:02.125452 3387 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j8xtm\" (UniqueName: \"kubernetes.io/projected/9a61079c-ebd9-4295-a838-7e074e1746d5-kube-api-access-j8xtm\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.125463 kubelet[3387]: I0620 18:57:02.125473 3387 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vz9bb\" (UniqueName: \"kubernetes.io/projected/2dd0d07c-9703-4ea0-b254-a96be938bb12-kube-api-access-vz9bb\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.125774 kubelet[3387]: I0620 18:57:02.125488 3387 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-hostproc\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.125774 kubelet[3387]: I0620 18:57:02.125503 3387 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-run\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.125774 kubelet[3387]: I0620 18:57:02.125524 3387 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cni-path\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.125774 kubelet[3387]: I0620 18:57:02.125536 3387 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-bpf-maps\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.125774 kubelet[3387]: I0620 18:57:02.125549 3387 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/9a61079c-ebd9-4295-a838-7e074e1746d5-clustermesh-secrets\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.125774 kubelet[3387]: I0620 18:57:02.125561 3387 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a61079c-ebd9-4295-a838-7e074e1746d5-hubble-tls\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.125774 kubelet[3387]: I0620 18:57:02.125578 3387 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-etc-cni-netd\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.125774 kubelet[3387]: I0620 18:57:02.125591 3387 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-host-proc-sys-kernel\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.126019 kubelet[3387]: I0620 18:57:02.125609 3387 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-cgroup\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.126019 kubelet[3387]: I0620 18:57:02.125623 3387 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-xtables-lock\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.126019 kubelet[3387]: I0620 18:57:02.125637 3387 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a61079c-ebd9-4295-a838-7e074e1746d5-cilium-config-path\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.126019 kubelet[3387]: I0620 18:57:02.125650 3387 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-lib-modules\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.126019 kubelet[3387]: I0620 18:57:02.125666 3387 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a61079c-ebd9-4295-a838-7e074e1746d5-host-proc-sys-net\") on node \"ci-4230.2.0-a-bab85c4a2e\" DevicePath \"\"" Jun 20 18:57:02.139313 kubelet[3387]: E0620 18:57:02.139245 3387 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:57:02.416721 kubelet[3387]: I0620 18:57:02.416684 3387 scope.go:117] "RemoveContainer" containerID="e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25" Jun 20 18:57:02.419907 containerd[1729]: time="2025-06-20T18:57:02.419394233Z" level=info msg="RemoveContainer for \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\"" Jun 20 18:57:02.424919 systemd[1]: Removed slice kubepods-burstable-pod9a61079c_ebd9_4295_a838_7e074e1746d5.slice - libcontainer container kubepods-burstable-pod9a61079c_ebd9_4295_a838_7e074e1746d5.slice. Jun 20 18:57:02.425079 systemd[1]: kubepods-burstable-pod9a61079c_ebd9_4295_a838_7e074e1746d5.slice: Consumed 7.650s CPU time, 125.6M memory peak, 136K read from disk, 13.3M written to disk. 
Jun 20 18:57:02.432358 systemd[1]: Removed slice kubepods-besteffort-pod2dd0d07c_9703_4ea0_b254_a96be938bb12.slice - libcontainer container kubepods-besteffort-pod2dd0d07c_9703_4ea0_b254_a96be938bb12.slice. Jun 20 18:57:02.435480 containerd[1729]: time="2025-06-20T18:57:02.435440292Z" level=info msg="RemoveContainer for \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\" returns successfully" Jun 20 18:57:02.436012 kubelet[3387]: I0620 18:57:02.435982 3387 scope.go:117] "RemoveContainer" containerID="749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3" Jun 20 18:57:02.443289 containerd[1729]: time="2025-06-20T18:57:02.442936214Z" level=info msg="RemoveContainer for \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\"" Jun 20 18:57:02.454329 containerd[1729]: time="2025-06-20T18:57:02.454155495Z" level=info msg="RemoveContainer for \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\" returns successfully" Jun 20 18:57:02.454537 kubelet[3387]: I0620 18:57:02.454506 3387 scope.go:117] "RemoveContainer" containerID="6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5" Jun 20 18:57:02.455803 containerd[1729]: time="2025-06-20T18:57:02.455772621Z" level=info msg="RemoveContainer for \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\"" Jun 20 18:57:02.468855 containerd[1729]: time="2025-06-20T18:57:02.468791932Z" level=info msg="RemoveContainer for \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\" returns successfully" Jun 20 18:57:02.469173 kubelet[3387]: I0620 18:57:02.469139 3387 scope.go:117] "RemoveContainer" containerID="efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc" Jun 20 18:57:02.470411 containerd[1729]: time="2025-06-20T18:57:02.470384858Z" level=info msg="RemoveContainer for \"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\"" Jun 20 18:57:02.482379 containerd[1729]: time="2025-06-20T18:57:02.482331451Z" level=info msg="RemoveContainer for \"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\" returns successfully" Jun 20 18:57:02.482627 kubelet[3387]: I0620 18:57:02.482597 3387 scope.go:117] "RemoveContainer" containerID="78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2" Jun 20 18:57:02.483788 containerd[1729]: time="2025-06-20T18:57:02.483748974Z" level=info msg="RemoveContainer for \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\"" Jun 20 18:57:02.492815 containerd[1729]: time="2025-06-20T18:57:02.492772220Z" level=info msg="RemoveContainer for \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\" returns successfully" Jun 20 18:57:02.493076 kubelet[3387]: I0620 18:57:02.493020 3387 scope.go:117] "RemoveContainer" containerID="e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25" Jun 20 18:57:02.493383 containerd[1729]: time="2025-06-20T18:57:02.493344829Z" level=error msg="ContainerStatus for \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\": not found" Jun 20 18:57:02.493552 kubelet[3387]: E0620 18:57:02.493509 3387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\": not found" 
containerID="e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25" Jun 20 18:57:02.493640 kubelet[3387]: I0620 18:57:02.493553 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25"} err="failed to get container status \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4fda692d58ce4fb6da1f9045c3ac147f509595cc29278644121403c92785c25\": not found" Jun 20 18:57:02.493640 kubelet[3387]: I0620 18:57:02.493614 3387 scope.go:117] "RemoveContainer" containerID="749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3" Jun 20 18:57:02.493870 containerd[1729]: time="2025-06-20T18:57:02.493837137Z" level=error msg="ContainerStatus for \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\": not found" Jun 20 18:57:02.494029 kubelet[3387]: E0620 18:57:02.493975 3387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\": not found" containerID="749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3" Jun 20 18:57:02.494029 kubelet[3387]: I0620 18:57:02.494005 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3"} err="failed to get container status \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"749be05d929685bde38d4befa1664d81f458873d3b5dd0ead79baac38f4027c3\": not found" Jun 20 18:57:02.494029 kubelet[3387]: I0620 18:57:02.494028 3387 scope.go:117] "RemoveContainer" containerID="6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5" Jun 20 18:57:02.494344 containerd[1729]: time="2025-06-20T18:57:02.494239044Z" level=error msg="ContainerStatus for \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\": not found" Jun 20 18:57:02.494416 kubelet[3387]: E0620 18:57:02.494371 3387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\": not found" containerID="6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5" Jun 20 18:57:02.494416 kubelet[3387]: I0620 18:57:02.494394 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5"} err="failed to get container status \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\": rpc error: code = NotFound desc = an error occurred when try to find container \"6c33949c37ebb001569ef3736c69315294d161f440a5d82b2d6aed36c8693bc5\": not found" Jun 20 18:57:02.494416 kubelet[3387]: I0620 18:57:02.494412 3387 scope.go:117] "RemoveContainer" 
containerID="efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc" Jun 20 18:57:02.494658 containerd[1729]: time="2025-06-20T18:57:02.494599050Z" level=error msg="ContainerStatus for \"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\": not found" Jun 20 18:57:02.494789 kubelet[3387]: E0620 18:57:02.494762 3387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\": not found" containerID="efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc" Jun 20 18:57:02.494857 kubelet[3387]: I0620 18:57:02.494791 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc"} err="failed to get container status \"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"efe1b6ad70f3caab1196f203c51a01974191a9c9520dfe95f0a569ef4178f8cc\": not found" Jun 20 18:57:02.494857 kubelet[3387]: I0620 18:57:02.494810 3387 scope.go:117] "RemoveContainer" containerID="78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2" Jun 20 18:57:02.495036 containerd[1729]: time="2025-06-20T18:57:02.494980256Z" level=error msg="ContainerStatus for \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\": not found" Jun 20 18:57:02.495292 kubelet[3387]: E0620 18:57:02.495186 3387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\": not found" containerID="78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2" Jun 20 18:57:02.495292 kubelet[3387]: I0620 18:57:02.495214 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2"} err="failed to get container status \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\": rpc error: code = NotFound desc = an error occurred when try to find container \"78974461900f61ee786ffdecb951c85b9fd671b2dc2eeeccb677fbc3518bfdb2\": not found" Jun 20 18:57:02.495292 kubelet[3387]: I0620 18:57:02.495234 3387 scope.go:117] "RemoveContainer" containerID="873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d" Jun 20 18:57:02.496532 containerd[1729]: time="2025-06-20T18:57:02.496252076Z" level=info msg="RemoveContainer for \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\"" Jun 20 18:57:02.505713 containerd[1729]: time="2025-06-20T18:57:02.505677429Z" level=info msg="RemoveContainer for \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\" returns successfully" Jun 20 18:57:02.505887 kubelet[3387]: I0620 18:57:02.505863 3387 scope.go:117] "RemoveContainer" containerID="873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d" Jun 20 18:57:02.506150 containerd[1729]: time="2025-06-20T18:57:02.506079835Z" level=error 
msg="ContainerStatus for \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\": not found" Jun 20 18:57:02.506250 kubelet[3387]: E0620 18:57:02.506224 3387 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\": not found" containerID="873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d" Jun 20 18:57:02.506310 kubelet[3387]: I0620 18:57:02.506257 3387 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d"} err="failed to get container status \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\": rpc error: code = NotFound desc = an error occurred when try to find container \"873c5ad24c3911f35b97b1285344df3fc1b944aaffceace8c14ad4c83364547d\": not found" Jun 20 18:57:02.657789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7802efa9735c2e9b41ee04c0550cfbf99acf100f972bbcc9e612b5b7f61fe29-rootfs.mount: Deactivated successfully. Jun 20 18:57:02.658192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-760d77e296747729e5b46fcfa07354fb9c0ff5b6165ad2da453f1beb78a2fe90-rootfs.mount: Deactivated successfully. Jun 20 18:57:02.658442 systemd[1]: var-lib-kubelet-pods-9a61079c\x2debd9\x2d4295\x2da838\x2d7e074e1746d5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj8xtm.mount: Deactivated successfully. Jun 20 18:57:02.658566 systemd[1]: var-lib-kubelet-pods-2dd0d07c\x2d9703\x2d4ea0\x2db254\x2da96be938bb12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvz9bb.mount: Deactivated successfully. Jun 20 18:57:02.658680 systemd[1]: var-lib-kubelet-pods-9a61079c\x2debd9\x2d4295\x2da838\x2d7e074e1746d5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 18:57:02.658810 systemd[1]: var-lib-kubelet-pods-9a61079c\x2debd9\x2d4295\x2da838\x2d7e074e1746d5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 18:57:03.036414 kubelet[3387]: I0620 18:57:03.036363 3387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd0d07c-9703-4ea0-b254-a96be938bb12" path="/var/lib/kubelet/pods/2dd0d07c-9703-4ea0-b254-a96be938bb12/volumes" Jun 20 18:57:03.037120 kubelet[3387]: I0620 18:57:03.037040 3387 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a61079c-ebd9-4295-a838-7e074e1746d5" path="/var/lib/kubelet/pods/9a61079c-ebd9-4295-a838-7e074e1746d5/volumes" Jun 20 18:57:03.688945 sshd[4982]: Connection closed by 10.200.16.10 port 59300 Jun 20 18:57:03.689740 sshd-session[4980]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:03.694024 systemd[1]: sshd@21-10.200.8.21:22-10.200.16.10:59300.service: Deactivated successfully. Jun 20 18:57:03.696414 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 18:57:03.697312 systemd-logind[1704]: Session 24 logged out. Waiting for processes to exit. Jun 20 18:57:03.698532 systemd-logind[1704]: Removed session 24. Jun 20 18:57:03.813446 systemd[1]: Started sshd@22-10.200.8.21:22-10.200.16.10:59314.service - OpenSSH per-connection server daemon (10.200.16.10:59314). 
Jun 20 18:57:04.441919 sshd[5148]: Accepted publickey for core from 10.200.16.10 port 59314 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:57:04.443411 sshd-session[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:04.448602 systemd-logind[1704]: New session 25 of user core. Jun 20 18:57:04.451241 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 20 18:57:05.357035 systemd[1]: Created slice kubepods-burstable-podd98dae4b_7497_466f_acaa_1c8475cd2e59.slice - libcontainer container kubepods-burstable-podd98dae4b_7497_466f_acaa_1c8475cd2e59.slice. Jun 20 18:57:05.412263 sshd[5152]: Connection closed by 10.200.16.10 port 59314 Jun 20 18:57:05.413026 sshd-session[5148]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:05.418305 systemd-logind[1704]: Session 25 logged out. Waiting for processes to exit. Jun 20 18:57:05.418941 systemd[1]: sshd@22-10.200.8.21:22-10.200.16.10:59314.service: Deactivated successfully. Jun 20 18:57:05.422186 systemd[1]: session-25.scope: Deactivated successfully. Jun 20 18:57:05.423738 systemd-logind[1704]: Removed session 25. Jun 20 18:57:05.445213 kubelet[3387]: I0620 18:57:05.445155 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d98dae4b-7497-466f-acaa-1c8475cd2e59-cilium-ipsec-secrets\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.445213 kubelet[3387]: I0620 18:57:05.445210 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d98dae4b-7497-466f-acaa-1c8475cd2e59-cilium-run\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.445791 kubelet[3387]: I0620 18:57:05.445239 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d98dae4b-7497-466f-acaa-1c8475cd2e59-cni-path\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.445791 kubelet[3387]: I0620 18:57:05.445257 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d98dae4b-7497-466f-acaa-1c8475cd2e59-etc-cni-netd\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.445791 kubelet[3387]: I0620 18:57:05.445274 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d98dae4b-7497-466f-acaa-1c8475cd2e59-lib-modules\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.445791 kubelet[3387]: I0620 18:57:05.445293 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d98dae4b-7497-466f-acaa-1c8475cd2e59-xtables-lock\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.445791 kubelet[3387]: I0620 18:57:05.445318 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d98dae4b-7497-466f-acaa-1c8475cd2e59-clustermesh-secrets\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.445791 kubelet[3387]: I0620 18:57:05.445337 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d98dae4b-7497-466f-acaa-1c8475cd2e59-hubble-tls\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.446107 kubelet[3387]: I0620 18:57:05.445364 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d98dae4b-7497-466f-acaa-1c8475cd2e59-hostproc\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.446107 kubelet[3387]: I0620 18:57:05.445386 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d98dae4b-7497-466f-acaa-1c8475cd2e59-cilium-cgroup\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.446107 kubelet[3387]: I0620 18:57:05.445410 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l472v\" (UniqueName: \"kubernetes.io/projected/d98dae4b-7497-466f-acaa-1c8475cd2e59-kube-api-access-l472v\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.446107 kubelet[3387]: I0620 18:57:05.445434 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d98dae4b-7497-466f-acaa-1c8475cd2e59-bpf-maps\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.446107 kubelet[3387]: I0620 18:57:05.445464 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d98dae4b-7497-466f-acaa-1c8475cd2e59-cilium-config-path\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.446107 kubelet[3387]: I0620 18:57:05.445488 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d98dae4b-7497-466f-acaa-1c8475cd2e59-host-proc-sys-net\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.446359 kubelet[3387]: I0620 18:57:05.445512 3387 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d98dae4b-7497-466f-acaa-1c8475cd2e59-host-proc-sys-kernel\") pod \"cilium-zk65q\" (UID: \"d98dae4b-7497-466f-acaa-1c8475cd2e59\") " pod="kube-system/cilium-zk65q" Jun 20 18:57:05.532640 systemd[1]: Started sshd@23-10.200.8.21:22-10.200.16.10:59318.service - OpenSSH per-connection server daemon (10.200.16.10:59318). 
Jun 20 18:57:05.661917 containerd[1729]: time="2025-06-20T18:57:05.661866880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zk65q,Uid:d98dae4b-7497-466f-acaa-1c8475cd2e59,Namespace:kube-system,Attempt:0,}" Jun 20 18:57:05.707839 containerd[1729]: time="2025-06-20T18:57:05.707743741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:57:05.707839 containerd[1729]: time="2025-06-20T18:57:05.707790842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:57:05.707839 containerd[1729]: time="2025-06-20T18:57:05.707804942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:57:05.708208 containerd[1729]: time="2025-06-20T18:57:05.707888742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:57:05.727240 systemd[1]: Started cri-containerd-0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7.scope - libcontainer container 0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7. Jun 20 18:57:05.750965 containerd[1729]: time="2025-06-20T18:57:05.750834681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zk65q,Uid:d98dae4b-7497-466f-acaa-1c8475cd2e59,Namespace:kube-system,Attempt:0,} returns sandbox id \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\"" Jun 20 18:57:05.763771 containerd[1729]: time="2025-06-20T18:57:05.763728982Z" level=info msg="CreateContainer within sandbox \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:57:05.801735 containerd[1729]: time="2025-06-20T18:57:05.801684581Z" level=info msg="CreateContainer within sandbox \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7e93efb69799f3e01685fd3b67a0e01cf4eedafbc4cd7f2a19a9bb3384e07545\"" Jun 20 18:57:05.802501 containerd[1729]: time="2025-06-20T18:57:05.802470587Z" level=info msg="StartContainer for \"7e93efb69799f3e01685fd3b67a0e01cf4eedafbc4cd7f2a19a9bb3384e07545\"" Jun 20 18:57:05.832271 systemd[1]: Started cri-containerd-7e93efb69799f3e01685fd3b67a0e01cf4eedafbc4cd7f2a19a9bb3384e07545.scope - libcontainer container 7e93efb69799f3e01685fd3b67a0e01cf4eedafbc4cd7f2a19a9bb3384e07545. Jun 20 18:57:05.861406 containerd[1729]: time="2025-06-20T18:57:05.861351551Z" level=info msg="StartContainer for \"7e93efb69799f3e01685fd3b67a0e01cf4eedafbc4cd7f2a19a9bb3384e07545\" returns successfully" Jun 20 18:57:05.867730 systemd[1]: cri-containerd-7e93efb69799f3e01685fd3b67a0e01cf4eedafbc4cd7f2a19a9bb3384e07545.scope: Deactivated successfully. 
Jun 20 18:57:05.967655 containerd[1729]: time="2025-06-20T18:57:05.967490786Z" level=info msg="shim disconnected" id=7e93efb69799f3e01685fd3b67a0e01cf4eedafbc4cd7f2a19a9bb3384e07545 namespace=k8s.io Jun 20 18:57:05.969071 containerd[1729]: time="2025-06-20T18:57:05.968779497Z" level=warning msg="cleaning up after shim disconnected" id=7e93efb69799f3e01685fd3b67a0e01cf4eedafbc4cd7f2a19a9bb3384e07545 namespace=k8s.io Jun 20 18:57:05.969071 containerd[1729]: time="2025-06-20T18:57:05.968810397Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:06.168160 sshd[5163]: Accepted publickey for core from 10.200.16.10 port 59318 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:57:06.170045 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:06.176333 systemd-logind[1704]: New session 26 of user core. Jun 20 18:57:06.182233 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 20 18:57:06.442251 containerd[1729]: time="2025-06-20T18:57:06.442136524Z" level=info msg="CreateContainer within sandbox \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:57:06.476169 containerd[1729]: time="2025-06-20T18:57:06.476115691Z" level=info msg="CreateContainer within sandbox \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e1d55dfe5ee629cf28581a3024ff017243dc6e3892afbdbe8097b668285e120d\"" Jun 20 18:57:06.476943 containerd[1729]: time="2025-06-20T18:57:06.476826797Z" level=info msg="StartContainer for \"e1d55dfe5ee629cf28581a3024ff017243dc6e3892afbdbe8097b668285e120d\"" Jun 20 18:57:06.507261 systemd[1]: Started cri-containerd-e1d55dfe5ee629cf28581a3024ff017243dc6e3892afbdbe8097b668285e120d.scope - libcontainer container e1d55dfe5ee629cf28581a3024ff017243dc6e3892afbdbe8097b668285e120d. Jun 20 18:57:06.537476 containerd[1729]: time="2025-06-20T18:57:06.537322473Z" level=info msg="StartContainer for \"e1d55dfe5ee629cf28581a3024ff017243dc6e3892afbdbe8097b668285e120d\" returns successfully" Jun 20 18:57:06.542339 systemd[1]: cri-containerd-e1d55dfe5ee629cf28581a3024ff017243dc6e3892afbdbe8097b668285e120d.scope: Deactivated successfully. Jun 20 18:57:06.572403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1d55dfe5ee629cf28581a3024ff017243dc6e3892afbdbe8097b668285e120d-rootfs.mount: Deactivated successfully. Jun 20 18:57:06.589418 containerd[1729]: time="2025-06-20T18:57:06.589344383Z" level=info msg="shim disconnected" id=e1d55dfe5ee629cf28581a3024ff017243dc6e3892afbdbe8097b668285e120d namespace=k8s.io Jun 20 18:57:06.589620 containerd[1729]: time="2025-06-20T18:57:06.589449184Z" level=warning msg="cleaning up after shim disconnected" id=e1d55dfe5ee629cf28581a3024ff017243dc6e3892afbdbe8097b668285e120d namespace=k8s.io Jun 20 18:57:06.589620 containerd[1729]: time="2025-06-20T18:57:06.589465784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:06.611184 sshd[5271]: Connection closed by 10.200.16.10 port 59318 Jun 20 18:57:06.611961 sshd-session[5163]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:06.616114 systemd[1]: sshd@23-10.200.8.21:22-10.200.16.10:59318.service: Deactivated successfully. Jun 20 18:57:06.618406 systemd[1]: session-26.scope: Deactivated successfully. Jun 20 18:57:06.619295 systemd-logind[1704]: Session 26 logged out. 
Waiting for processes to exit. Jun 20 18:57:06.620283 systemd-logind[1704]: Removed session 26. Jun 20 18:57:06.732378 systemd[1]: Started sshd@24-10.200.8.21:22-10.200.16.10:59326.service - OpenSSH per-connection server daemon (10.200.16.10:59326). Jun 20 18:57:07.140541 kubelet[3387]: E0620 18:57:07.140491 3387 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 18:57:07.359185 sshd[5340]: Accepted publickey for core from 10.200.16.10 port 59326 ssh2: RSA SHA256:f2nnG+MkggVlEspzlkcUBZlnT5JphdiP61MyRrRbeVs Jun 20 18:57:07.361184 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:57:07.366466 systemd-logind[1704]: New session 27 of user core. Jun 20 18:57:07.376341 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 20 18:57:07.453166 containerd[1729]: time="2025-06-20T18:57:07.449290093Z" level=info msg="CreateContainer within sandbox \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:57:07.501116 containerd[1729]: time="2025-06-20T18:57:07.500017171Z" level=info msg="CreateContainer within sandbox \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"711b3c353703ed7cc945e3918c2f15b7817ac5e6fd53b012cba22b434d0fff09\"" Jun 20 18:57:07.501554 containerd[1729]: time="2025-06-20T18:57:07.501514794Z" level=info msg="StartContainer for \"711b3c353703ed7cc945e3918c2f15b7817ac5e6fd53b012cba22b434d0fff09\"" Jun 20 18:57:07.536251 systemd[1]: Started cri-containerd-711b3c353703ed7cc945e3918c2f15b7817ac5e6fd53b012cba22b434d0fff09.scope - libcontainer container 711b3c353703ed7cc945e3918c2f15b7817ac5e6fd53b012cba22b434d0fff09. Jun 20 18:57:07.575300 systemd[1]: cri-containerd-711b3c353703ed7cc945e3918c2f15b7817ac5e6fd53b012cba22b434d0fff09.scope: Deactivated successfully. Jun 20 18:57:07.576835 containerd[1729]: time="2025-06-20T18:57:07.576795949Z" level=info msg="StartContainer for \"711b3c353703ed7cc945e3918c2f15b7817ac5e6fd53b012cba22b434d0fff09\" returns successfully" Jun 20 18:57:07.602301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-711b3c353703ed7cc945e3918c2f15b7817ac5e6fd53b012cba22b434d0fff09-rootfs.mount: Deactivated successfully. 
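The kubelet entries throughout this journal carry a klog-style header after the syslog prefix, e.g. E0620 18:57:07.140491 3387 kubelet.go:3117] "Container runtime network not ready". A regexp sketch for pulling the severity letter, date, time, thread id, and source location out of that header (field names here are illustrative, not kubelet's own):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogHeader matches headers like
    // `E0620 18:57:07.140491 3387 kubelet.go:3117] "..."`:
    // severity letter, MMDD date, time, thread id, source file:line.
    var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
    	line := `E0620 18:57:07.140491 3387 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false"`
    	m := klogHeader.FindStringSubmatch(line)
    	if m == nil {
    		fmt.Println("not a klog header")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s\n", m[1], m[2], m[3], m[4], m[5])
    	fmt.Println("message:", m[6])
    }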
Jun 20 18:57:07.615353 containerd[1729]: time="2025-06-20T18:57:07.615283140Z" level=info msg="shim disconnected" id=711b3c353703ed7cc945e3918c2f15b7817ac5e6fd53b012cba22b434d0fff09 namespace=k8s.io Jun 20 18:57:07.615353 containerd[1729]: time="2025-06-20T18:57:07.615348141Z" level=warning msg="cleaning up after shim disconnected" id=711b3c353703ed7cc945e3918c2f15b7817ac5e6fd53b012cba22b434d0fff09 namespace=k8s.io Jun 20 18:57:07.615353 containerd[1729]: time="2025-06-20T18:57:07.615358841Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:08.452615 containerd[1729]: time="2025-06-20T18:57:08.452570188Z" level=info msg="CreateContainer within sandbox \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:57:08.487194 containerd[1729]: time="2025-06-20T18:57:08.487145118Z" level=info msg="CreateContainer within sandbox \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"414928a5580e15c7bd124a0d8937ce8eb533eac69eeba6f1227b393e780804ba\"" Jun 20 18:57:08.488390 containerd[1729]: time="2025-06-20T18:57:08.488138234Z" level=info msg="StartContainer for \"414928a5580e15c7bd124a0d8937ce8eb533eac69eeba6f1227b393e780804ba\"" Jun 20 18:57:08.525274 systemd[1]: Started cri-containerd-414928a5580e15c7bd124a0d8937ce8eb533eac69eeba6f1227b393e780804ba.scope - libcontainer container 414928a5580e15c7bd124a0d8937ce8eb533eac69eeba6f1227b393e780804ba. Jun 20 18:57:08.553675 systemd[1]: cri-containerd-414928a5580e15c7bd124a0d8937ce8eb533eac69eeba6f1227b393e780804ba.scope: Deactivated successfully. Jun 20 18:57:08.561116 containerd[1729]: time="2025-06-20T18:57:08.560386142Z" level=info msg="StartContainer for \"414928a5580e15c7bd124a0d8937ce8eb533eac69eeba6f1227b393e780804ba\" returns successfully" Jun 20 18:57:08.584020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-414928a5580e15c7bd124a0d8937ce8eb533eac69eeba6f1227b393e780804ba-rootfs.mount: Deactivated successfully. 
Jun 20 18:57:08.595125 containerd[1729]: time="2025-06-20T18:57:08.595025174Z" level=info msg="shim disconnected" id=414928a5580e15c7bd124a0d8937ce8eb533eac69eeba6f1227b393e780804ba namespace=k8s.io Jun 20 18:57:08.595125 containerd[1729]: time="2025-06-20T18:57:08.595123275Z" level=warning msg="cleaning up after shim disconnected" id=414928a5580e15c7bd124a0d8937ce8eb533eac69eeba6f1227b393e780804ba namespace=k8s.io Jun 20 18:57:08.595399 containerd[1729]: time="2025-06-20T18:57:08.595139276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:57:09.465808 containerd[1729]: time="2025-06-20T18:57:09.465530832Z" level=info msg="CreateContainer within sandbox \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:57:09.506351 containerd[1729]: time="2025-06-20T18:57:09.506301057Z" level=info msg="CreateContainer within sandbox \"0697a2108013eb24986d50bc28b09b729ddfb1191417d0cdf172f9bbb25f7bd7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7d1aa7aefaf896e1277a7d230eeddef3bc64e0745c40d5ae11bdc1e0674e4cb5\"" Jun 20 18:57:09.508290 containerd[1729]: time="2025-06-20T18:57:09.507045469Z" level=info msg="StartContainer for \"7d1aa7aefaf896e1277a7d230eeddef3bc64e0745c40d5ae11bdc1e0674e4cb5\"" Jun 20 18:57:09.539239 systemd[1]: Started cri-containerd-7d1aa7aefaf896e1277a7d230eeddef3bc64e0745c40d5ae11bdc1e0674e4cb5.scope - libcontainer container 7d1aa7aefaf896e1277a7d230eeddef3bc64e0745c40d5ae11bdc1e0674e4cb5. Jun 20 18:57:09.579625 containerd[1729]: time="2025-06-20T18:57:09.579578982Z" level=info msg="StartContainer for \"7d1aa7aefaf896e1277a7d230eeddef3bc64e0745c40d5ae11bdc1e0674e4cb5\" returns successfully" Jun 20 18:57:09.612427 systemd[1]: run-containerd-runc-k8s.io-7d1aa7aefaf896e1277a7d230eeddef3bc64e0745c40d5ae11bdc1e0674e4cb5-runc.4Exusk.mount: Deactivated successfully. Jun 20 18:57:10.059107 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jun 20 18:57:10.649074 kubelet[3387]: I0620 18:57:10.649008 3387 setters.go:618] "Node became not ready" node="ci-4230.2.0-a-bab85c4a2e" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T18:57:10Z","lastTransitionTime":"2025-06-20T18:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 18:57:11.932809 systemd[1]: run-containerd-runc-k8s.io-7d1aa7aefaf896e1277a7d230eeddef3bc64e0745c40d5ae11bdc1e0674e4cb5-runc.ktKpWH.mount: Deactivated successfully. 
Jun 20 18:57:12.986887 systemd-networkd[1619]: lxc_health: Link UP Jun 20 18:57:12.996274 systemd-networkd[1619]: lxc_health: Gained carrier Jun 20 18:57:13.689104 kubelet[3387]: I0620 18:57:13.689012 3387 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zk65q" podStartSLOduration=8.688989541 podStartE2EDuration="8.688989541s" podCreationTimestamp="2025-06-20 18:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:57:10.47485762 +0000 UTC m=+163.545921240" watchObservedRunningTime="2025-06-20 18:57:13.688989541 +0000 UTC m=+166.760053061" Jun 20 18:57:14.467272 systemd-networkd[1619]: lxc_health: Gained IPv6LL Jun 20 18:57:16.308272 systemd[1]: run-containerd-runc-k8s.io-7d1aa7aefaf896e1277a7d230eeddef3bc64e0745c40d5ae11bdc1e0674e4cb5-runc.ZukHR5.mount: Deactivated successfully. Jun 20 18:57:18.593726 sshd[5342]: Connection closed by 10.200.16.10 port 59326 Jun 20 18:57:18.594627 sshd-session[5340]: pam_unix(sshd:session): session closed for user core Jun 20 18:57:18.598021 systemd[1]: sshd@24-10.200.8.21:22-10.200.16.10:59326.service: Deactivated successfully. Jun 20 18:57:18.600498 systemd[1]: session-27.scope: Deactivated successfully. Jun 20 18:57:18.602463 systemd-logind[1704]: Session 27 logged out. Waiting for processes to exit. Jun 20 18:57:18.603701 systemd-logind[1704]: Removed session 27.
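The pod_startup_latency_tracker entry above reports podStartSLOduration=8.688989541 for cilium-zk65q; that figure is simply observedRunningTime minus podCreationTimestamp. A quick check of the arithmetic, with the two timestamps copied from the entry and error handling elided:

    package main

    import (
    	"fmt"
    	"time"
    )

    // observedRunningTime minus podCreationTimestamp, as logged for
    // kube-system/cilium-zk65q.
    func main() {
    	const layout = "2006-01-02 15:04:05 -0700 MST"
    	created, _ := time.Parse(layout, "2025-06-20 18:57:05 +0000 UTC")
    	running, _ := time.Parse(layout, "2025-06-20 18:57:13.688989541 +0000 UTC")
    	fmt.Println(running.Sub(created)) // 8.688989541s
    }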