Aug 13 00:00:25.140812 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:47:31 -00 2025
Aug 13 00:00:25.140849 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:00:25.140865 kernel: BIOS-provided physical RAM map:
Aug 13 00:00:25.140876 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Aug 13 00:00:25.140887 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Aug 13 00:00:25.140898 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Aug 13 00:00:25.140912 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc4fff] reserved
Aug 13 00:00:25.140923 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd0fff] usable
Aug 13 00:00:25.140937 kernel: BIOS-e820: [mem 0x000000003ffd1000-0x000000003fffafff] ACPI data
Aug 13 00:00:25.140949 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Aug 13 00:00:25.140960 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Aug 13 00:00:25.140971 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Aug 13 00:00:25.140982 kernel: printk: bootconsole [earlyser0] enabled
Aug 13 00:00:25.140994 kernel: NX (Execute Disable) protection: active
Aug 13 00:00:25.141024 kernel: APIC: Static calls initialized
Aug 13 00:00:25.141036 kernel: efi: EFI v2.7 by Microsoft
Aug 13 00:00:25.141048 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ebf5a98 RNG=0x3ffd2018
Aug 13 00:00:25.141059 kernel: random: crng init done
Aug 13 00:00:25.141070 kernel: secureboot: Secure boot disabled
Aug 13 00:00:25.141082 kernel: SMBIOS 3.1.0 present.
Aug 13 00:00:25.141094 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Aug 13 00:00:25.141106 kernel: Hypervisor detected: Microsoft Hyper-V
Aug 13 00:00:25.141118 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Aug 13 00:00:25.141130 kernel: Hyper-V: Host Build 10.0.26100.1293-1-0
Aug 13 00:00:25.141145 kernel: Hyper-V: Nested features: 0x1e0101
Aug 13 00:00:25.141157 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Aug 13 00:00:25.141168 kernel: Hyper-V: Using hypercall for remote TLB flush
Aug 13 00:00:25.141181 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 13 00:00:25.141193 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Aug 13 00:00:25.141206 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Aug 13 00:00:25.141218 kernel: tsc: Detected 2593.904 MHz processor
Aug 13 00:00:25.141248 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 00:00:25.141261 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 00:00:25.141274 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Aug 13 00:00:25.141291 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Aug 13 00:00:25.141304 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 00:00:25.141318 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Aug 13 00:00:25.141331 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Aug 13 00:00:25.141345 kernel: Using GB pages for direct mapping
Aug 13 00:00:25.141358 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:00:25.141378 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Aug 13 00:00:25.141395 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:25.141409 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:25.141423 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Aug 13 00:00:25.141437 kernel: ACPI: FACS 0x000000003FFFE000 000040
Aug 13 00:00:25.141452 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:25.141466 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:25.141483 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:25.141519 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:25.141534 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:25.141548 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Aug 13 00:00:25.141563 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Aug 13 00:00:25.141576 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
Aug 13 00:00:25.141590 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Aug 13 00:00:25.141604 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Aug 13 00:00:25.141619 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Aug 13 00:00:25.141638 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Aug 13 00:00:25.141652 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Aug 13 00:00:25.141666 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Aug 13 00:00:25.141680 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Aug 13 00:00:25.141695 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 00:00:25.141708 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 00:00:25.141723 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Aug 13 00:00:25.141737 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Aug 13 00:00:25.141751 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Aug 13 00:00:25.141769 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Aug 13 00:00:25.141783 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Aug 13 00:00:25.141797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Aug 13 00:00:25.141811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Aug 13 00:00:25.141825 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Aug 13 00:00:25.141839 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Aug 13 00:00:25.141854 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Aug 13 00:00:25.141868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Aug 13 00:00:25.141882 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Aug 13 00:00:25.141900 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Aug 13 00:00:25.141914 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Aug 13 00:00:25.141929 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Aug 13 00:00:25.141943 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Aug 13 00:00:25.141957 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Aug 13 00:00:25.141972 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Aug 13 00:00:25.141987 kernel: Zone ranges:
Aug 13 00:00:25.142001 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 00:00:25.142018 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 00:00:25.142032 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 00:00:25.142047 kernel: Movable zone start for each node
Aug 13 00:00:25.142061 kernel: Early memory node ranges
Aug 13 00:00:25.142074 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Aug 13 00:00:25.142087 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Aug 13 00:00:25.142101 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd0fff]
Aug 13 00:00:25.142115 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Aug 13 00:00:25.142129 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Aug 13 00:00:25.142142 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Aug 13 00:00:25.142159 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 00:00:25.142173 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Aug 13 00:00:25.142185 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
Aug 13 00:00:25.142199 kernel: On node 0, zone DMA32: 46 pages in unavailable ranges
Aug 13 00:00:25.142212 kernel: ACPI: PM-Timer IO Port: 0x408
Aug 13 00:00:25.142226 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Aug 13 00:00:25.142239 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Aug 13 00:00:25.142253 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 00:00:25.142266 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 00:00:25.142283 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Aug 13 00:00:25.142296 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 13 00:00:25.142309 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Aug 13 00:00:25.142322 kernel: Booting paravirtualized kernel on Hyper-V
Aug 13 00:00:25.142337 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 00:00:25.142351 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 00:00:25.142365 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Aug 13 00:00:25.142379 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Aug 13 00:00:25.142396 kernel: pcpu-alloc: [0] 0 1
Aug 13 00:00:25.142410 kernel: Hyper-V: PV spinlocks enabled
Aug 13 00:00:25.142425 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 00:00:25.142442 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:00:25.142456 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:00:25.142470 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Aug 13 00:00:25.142485 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:00:25.142517 kernel: Fallback order for Node 0: 0
Aug 13 00:00:25.142535 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062374
Aug 13 00:00:25.142559 kernel: Policy zone: Normal
Aug 13 00:00:25.142574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:00:25.142591 kernel: software IO TLB: area num 2.
Aug 13 00:00:25.142606 kernel: Memory: 8072560K/8387508K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43504K init, 1572K bss, 314692K reserved, 0K cma-reserved)
Aug 13 00:00:25.142620 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:00:25.142635 kernel: ftrace: allocating 37942 entries in 149 pages
Aug 13 00:00:25.142649 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 00:00:25.142663 kernel: Dynamic Preempt: voluntary
Aug 13 00:00:25.142677 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:00:25.142692 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:00:25.142709 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:00:25.142725 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:00:25.142740 kernel: Rude variant of Tasks RCU enabled.
Aug 13 00:00:25.142755 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:00:25.142769 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:00:25.142782 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:00:25.142799 kernel: Using NULL legacy PIC
Aug 13 00:00:25.142814 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Aug 13 00:00:25.142828 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:00:25.142842 kernel: Console: colour dummy device 80x25
Aug 13 00:00:25.142857 kernel: printk: console [tty1] enabled
Aug 13 00:00:25.142871 kernel: printk: console [ttyS0] enabled
Aug 13 00:00:25.142886 kernel: printk: bootconsole [earlyser0] disabled
Aug 13 00:00:25.142900 kernel: ACPI: Core revision 20230628
Aug 13 00:00:25.142914 kernel: Failed to register legacy timer interrupt
Aug 13 00:00:25.142932 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 00:00:25.142947 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Aug 13 00:00:25.142961 kernel: Hyper-V: Using IPI hypercalls
Aug 13 00:00:25.142976 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Aug 13 00:00:25.142990 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Aug 13 00:00:25.143005 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Aug 13 00:00:25.143020 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Aug 13 00:00:25.143035 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Aug 13 00:00:25.143049 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Aug 13 00:00:25.143067 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Aug 13 00:00:25.143082 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 13 00:00:25.143096 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Aug 13 00:00:25.143111 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 00:00:25.143125 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 00:00:25.143139 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 00:00:25.143154 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 13 00:00:25.143168 kernel: RETBleed: Vulnerable
Aug 13 00:00:25.143182 kernel: Speculative Store Bypass: Vulnerable
Aug 13 00:00:25.143197 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:00:25.143214 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 00:00:25.143228 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 00:00:25.143242 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 00:00:25.143257 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 00:00:25.143271 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 00:00:25.143286 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 13 00:00:25.143300 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 13 00:00:25.143315 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 13 00:00:25.143329 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 00:00:25.143344 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Aug 13 00:00:25.143357 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Aug 13 00:00:25.143375 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Aug 13 00:00:25.143390 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Aug 13 00:00:25.143404 kernel: Freeing SMP alternatives memory: 32K
Aug 13 00:00:25.143419 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:00:25.143434 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:00:25.143448 kernel: landlock: Up and running.
Aug 13 00:00:25.143462 kernel: SELinux: Initializing.
Aug 13 00:00:25.143477 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:00:25.143491 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 00:00:25.147085 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 13 00:00:25.147102 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:00:25.147122 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:00:25.147136 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:00:25.147151 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 13 00:00:25.147165 kernel: signal: max sigframe size: 3632
Aug 13 00:00:25.147179 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:00:25.147193 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:00:25.147207 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 00:00:25.147221 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:00:25.147235 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 00:00:25.147253 kernel: .... node #0, CPUs: #1
Aug 13 00:00:25.147268 kernel: Transient Scheduler Attacks: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Aug 13 00:00:25.147283 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 13 00:00:25.147297 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:00:25.147311 kernel: smpboot: Max logical packages: 1
Aug 13 00:00:25.147325 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Aug 13 00:00:25.147339 kernel: devtmpfs: initialized
Aug 13 00:00:25.147353 kernel: x86/mm: Memory block size: 128MB
Aug 13 00:00:25.147367 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Aug 13 00:00:25.147385 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:00:25.147399 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:00:25.147413 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:00:25.147427 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:00:25.147442 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:00:25.147456 kernel: audit: type=2000 audit(1755043223.029:1): state=initialized audit_enabled=0 res=1
Aug 13 00:00:25.147469 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:00:25.147483 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 00:00:25.147509 kernel: cpuidle: using governor menu
Aug 13 00:00:25.147524 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:00:25.147538 kernel: dca service started, version 1.12.1
Aug 13 00:00:25.147552 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Aug 13 00:00:25.147566 kernel: e820: reserve RAM buffer [mem 0x3ffd1000-0x3fffffff]
Aug 13 00:00:25.147579 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 00:00:25.147593 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:00:25.147607 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:00:25.147621 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:00:25.147639 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:00:25.147652 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:00:25.147666 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:00:25.147680 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:00:25.147695 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:00:25.147709 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 00:00:25.147723 kernel: ACPI: Interpreter enabled
Aug 13 00:00:25.147737 kernel: ACPI: PM: (supports S0 S5)
Aug 13 00:00:25.147751 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 00:00:25.147767 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 00:00:25.147782 kernel: PCI: Ignoring E820 reservations for host bridge windows
Aug 13 00:00:25.147796 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Aug 13 00:00:25.147810 kernel: iommu: Default domain type: Translated
Aug 13 00:00:25.147823 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 00:00:25.147838 kernel: efivars: Registered efivars operations
Aug 13 00:00:25.147851 kernel: PCI: Using ACPI for IRQ routing
Aug 13 00:00:25.147865 kernel: PCI: System does not support PCI
Aug 13 00:00:25.147879 kernel: vgaarb: loaded
Aug 13 00:00:25.147893 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Aug 13 00:00:25.147910 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:00:25.147924 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:00:25.147938 kernel: pnp: PnP ACPI init
Aug 13 00:00:25.147952 kernel: pnp: PnP ACPI: found 3 devices
Aug 13 00:00:25.147966 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 00:00:25.147980 kernel: NET: Registered PF_INET protocol family
Aug 13 00:00:25.147994 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 00:00:25.148009 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Aug 13 00:00:25.148025 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:00:25.148040 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:00:25.148054 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Aug 13 00:00:25.148068 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Aug 13 00:00:25.148082 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 00:00:25.148096 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Aug 13 00:00:25.148110 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:00:25.148124 kernel: NET: Registered PF_XDP protocol family
Aug 13 00:00:25.148138 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:00:25.148155 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 00:00:25.148169 kernel: software IO TLB: mapped [mem 0x000000003abf5000-0x000000003ebf5000] (64MB)
Aug 13 00:00:25.148183 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 00:00:25.148197 kernel: Initialise system trusted keyrings
Aug 13 00:00:25.148211 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Aug 13 00:00:25.148224 kernel: Key type asymmetric registered
Aug 13 00:00:25.148238 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:00:25.148252 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 00:00:25.148266 kernel: io scheduler mq-deadline registered
Aug 13 00:00:25.148282 kernel: io scheduler kyber registered
Aug 13 00:00:25.148296 kernel: io scheduler bfq registered
Aug 13 00:00:25.148310 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 00:00:25.148324 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:00:25.148338 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 00:00:25.148352 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Aug 13 00:00:25.148366 kernel: i8042: PNP: No PS/2 controller found.
Aug 13 00:00:25.154484 kernel: rtc_cmos 00:02: registered as rtc0
Aug 13 00:00:25.154662 kernel: rtc_cmos 00:02: setting system clock to 2025-08-13T00:00:24 UTC (1755043224)
Aug 13 00:00:25.154794 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Aug 13 00:00:25.154814 kernel: intel_pstate: CPU model not supported
Aug 13 00:00:25.154829 kernel: efifb: probing for efifb
Aug 13 00:00:25.154844 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Aug 13 00:00:25.154858 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Aug 13 00:00:25.154873 kernel: efifb: scrolling: redraw
Aug 13 00:00:25.154888 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Aug 13 00:00:25.154903 kernel: Console: switching to colour frame buffer device 128x48
Aug 13 00:00:25.154922 kernel: fb0: EFI VGA frame buffer device
Aug 13 00:00:25.154935 kernel: pstore: Using crash dump compression: deflate
Aug 13 00:00:25.154952 kernel: pstore: Registered efi_pstore as persistent store backend
Aug 13 00:00:25.154977 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:00:25.154992 kernel: Segment Routing with IPv6
Aug 13 00:00:25.155006 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:00:25.155020 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:00:25.155034 kernel: Key type dns_resolver registered
Aug 13 00:00:25.155048 kernel: IPI shorthand broadcast: enabled
Aug 13 00:00:25.155067 kernel: sched_clock: Marking stable (947002800, 58247700)->(1247792900, -242542400)
Aug 13 00:00:25.155082 kernel: registered taskstats version 1
Aug 13 00:00:25.155097 kernel: Loading compiled-in X.509 certificates
Aug 13 00:00:25.155111 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: dfd2b306eb54324ea79eea0261f8d493924aeeeb'
Aug 13 00:00:25.155125 kernel: Key type .fscrypt registered
Aug 13 00:00:25.155140 kernel: Key type fscrypt-provisioning registered
Aug 13 00:00:25.155155 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:00:25.155168 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:00:25.155182 kernel: ima: No architecture policies found
Aug 13 00:00:25.155200 kernel: clk: Disabling unused clocks
Aug 13 00:00:25.155215 kernel: Freeing unused kernel image (initmem) memory: 43504K
Aug 13 00:00:25.155230 kernel: Write protecting the kernel read-only data: 38912k
Aug 13 00:00:25.155244 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Aug 13 00:00:25.155258 kernel: Run /init as init process
Aug 13 00:00:25.155272 kernel: with arguments:
Aug 13 00:00:25.155287 kernel: /init
Aug 13 00:00:25.155301 kernel: with environment:
Aug 13 00:00:25.155314 kernel: HOME=/
Aug 13 00:00:25.155332 kernel: TERM=linux
Aug 13 00:00:25.155348 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:00:25.155364 systemd[1]: Successfully made /usr/ read-only.
Aug 13 00:00:25.155384 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 00:00:25.155401 systemd[1]: Detected virtualization microsoft.
Aug 13 00:00:25.155416 systemd[1]: Detected architecture x86-64.
Aug 13 00:00:25.155432 systemd[1]: Running in initrd.
Aug 13 00:00:25.155450 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:00:25.155466 systemd[1]: Hostname set to <localhost>.
Aug 13 00:00:25.155481 systemd[1]: Initializing machine ID from random generator.
Aug 13 00:00:25.155508 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:00:25.155524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:00:25.155540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:00:25.155557 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:00:25.155572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:00:25.155592 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:00:25.155608 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:00:25.155625 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:00:25.155641 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:00:25.155657 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:00:25.155673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:00:25.155690 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:00:25.155710 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:00:25.155727 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:00:25.155742 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:00:25.155757 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:00:25.155772 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:00:25.155788 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:00:25.155805 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 00:00:25.155822 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:00:25.155837 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:00:25.155856 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:00:25.155872 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:00:25.155888 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:00:25.155904 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:00:25.155920 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:00:25.155936 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:00:25.155953 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:00:25.155970 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:00:25.156016 systemd-journald[177]: Collecting audit messages is disabled.
Aug 13 00:00:25.156056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:00:25.156073 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:00:25.156090 systemd-journald[177]: Journal started
Aug 13 00:00:25.156129 systemd-journald[177]: Runtime Journal (/run/log/journal/40e0377a49cb46c88613b31ff3ffd3f5) is 8M, max 158.8M, 150.8M free.
Aug 13 00:00:25.160371 systemd-modules-load[179]: Inserted module 'overlay'
Aug 13 00:00:25.169457 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:00:25.170178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:00:25.178257 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:00:25.183918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:00:25.207733 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:00:25.213638 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:00:25.218473 kernel: Bridge firewalling registered
Aug 13 00:00:25.218343 systemd-modules-load[179]: Inserted module 'br_netfilter'
Aug 13 00:00:25.221669 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:00:25.224637 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:00:25.225641 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:00:25.232704 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:00:25.260491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:00:25.273790 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:00:25.282249 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:00:25.290666 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:00:25.300717 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:00:25.312640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:00:25.319512 dracut-cmdline[210]: dracut-dracut-053
Aug 13 00:00:25.321264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:00:25.330090 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ca71ea747c3f0d1de8a5ffcd0cfb9d0a1a4c4755719a09093b0248fa3902b433
Aug 13 00:00:25.368335 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:00:25.409524 systemd-resolved[214]: Positive Trust Anchors:
Aug 13 00:00:25.409539 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:00:25.409595 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:00:25.440754 systemd-resolved[214]: Defaulting to hostname 'linux'.
Aug 13 00:00:25.449133 kernel: SCSI subsystem initialized
Aug 13 00:00:25.441810 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:00:25.452591 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:00:25.461378 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:00:25.473518 kernel: iscsi: registered transport (tcp)
Aug 13 00:00:25.494905 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:00:25.494974 kernel: QLogic iSCSI HBA Driver
Aug 13 00:00:25.531072 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:00:25.539705 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:00:25.574125 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:00:25.574224 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:00:25.578097 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:00:25.619529 kernel: raid6: avx512x4 gen() 18442 MB/s
Aug 13 00:00:25.638517 kernel: raid6: avx512x2 gen() 18432 MB/s
Aug 13 00:00:25.657513 kernel: raid6: avx512x1 gen() 18369 MB/s
Aug 13 00:00:25.677516 kernel: raid6: avx2x4 gen() 18361 MB/s
Aug 13 00:00:25.696512 kernel: raid6: avx2x2 gen() 18429 MB/s
Aug 13 00:00:25.716926 kernel: raid6: avx2x1 gen() 14148 MB/s
Aug 13 00:00:25.716959 kernel: raid6: using algorithm avx512x4 gen() 18442 MB/s
Aug 13 00:00:25.740219 kernel: raid6: .... xor() 7883 MB/s, rmw enabled
Aug 13 00:00:25.740249 kernel: raid6: using avx512x2 recovery algorithm
Aug 13 00:00:25.763524 kernel: xor: automatically using best checksumming function avx
Aug 13 00:00:25.906525 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:00:25.915741 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:00:25.925644 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:00:25.945662 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Aug 13 00:00:25.950960 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:00:25.968670 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:00:25.981774 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Aug 13 00:00:26.008871 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:00:26.018672 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:00:26.061671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:00:26.075655 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:00:26.109643 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:00:26.117924 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:00:26.126191 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:00:26.133747 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:00:26.147731 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:00:26.168514 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 00:00:26.176718 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:00:26.191080 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:00:26.193706 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:00:26.201539 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:00:26.208779 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:00:26.220104 kernel: hv_vmbus: Vmbus version:5.2
Aug 13 00:00:26.208983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:00:26.218193 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:00:26.235947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:00:26.245916 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 13 00:00:26.245945 kernel: AES CTR mode by8 optimization enabled
Aug 13 00:00:26.247849 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 00:00:26.273817 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:00:26.278663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:00:26.290670 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:00:26.304384 kernel: hv_vmbus: registering driver hyperv_keyboard
Aug 13 00:00:26.311354 kernel: pps_core: LinuxPPS API ver. 1 registered
Aug 13 00:00:26.311446 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Aug 13 00:00:26.316384 kernel: hv_vmbus: registering driver hv_netvsc
Aug 13 00:00:26.316423 kernel: hv_vmbus: registering driver hid_hyperv
Aug 13 00:00:26.337081 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Aug 13 00:00:26.337132 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Aug 13 00:00:26.337148 kernel: PTP clock support registered
Aug 13 00:00:26.352089 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Aug 13 00:00:26.362942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:00:26.379180 kernel: hv_utils: Registering HyperV Utility Driver
Aug 13 00:00:26.379244 kernel: hv_vmbus: registering driver hv_utils
Aug 13 00:00:26.386099 kernel: hv_utils: Heartbeat IC version 3.0
Aug 13 00:00:26.386138 kernel: hv_vmbus: registering driver hv_storvsc
Aug 13 00:00:26.386151 kernel: hv_utils: Shutdown IC version 3.2
Aug 13 00:00:26.391678 kernel: hv_utils: TimeSync IC version 4.0
Aug 13 00:00:26.817805 systemd-resolved[214]: Clock change detected. Flushing caches.
Aug 13 00:00:26.827391 kernel: scsi host1: storvsc_host_t
Aug 13 00:00:26.827670 kernel: scsi host0: storvsc_host_t
Aug 13 00:00:26.827876 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Aug 13 00:00:26.832115 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Aug 13 00:00:26.850483 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Aug 13 00:00:26.850742 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Aug 13 00:00:26.852111 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Aug 13 00:00:26.863977 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Aug 13 00:00:26.864345 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Aug 13 00:00:26.866573 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 00:00:26.868122 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Aug 13 00:00:26.868287 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Aug 13 00:00:26.878770 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:00:26.878807 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#100 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Aug 13 00:00:26.878960 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 00:00:26.905136 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#37 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Aug 13 00:00:26.976113 kernel: hv_netvsc 7ced8d77-53c3-7ced-8d77-53c37ced8d77 eth0: VF slot 1 added
Aug 13 00:00:26.986111 kernel: hv_vmbus: registering driver hv_pci
Aug 13 00:00:26.992109 kernel: hv_pci a20f91a3-346f-4188-8277-7b09a864a2dc: PCI VMBus probing: Using version 0x10004
Aug 13 00:00:26.999384 kernel: hv_pci a20f91a3-346f-4188-8277-7b09a864a2dc: PCI host bridge to bus 346f:00
Aug 13 00:00:26.999675 kernel: pci_bus 346f:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Aug 13 00:00:27.002944 kernel: pci_bus 346f:00: No busn resource found for root bus, will use [bus 00-ff]
Aug 13 00:00:27.009208 kernel: pci 346f:00:02.0: [15b3:1016] type 00 class 0x020000
Aug 13 00:00:27.016192 kernel: pci 346f:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Aug 13 00:00:27.021221 kernel: pci 346f:00:02.0: enabling Extended Tags
Aug 13 00:00:27.036215 kernel: pci 346f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 346f:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Aug 13 00:00:27.045261 kernel: pci_bus 346f:00: busn_res: [bus 00-ff] end is updated to 00
Aug 13 00:00:27.045596 kernel: pci 346f:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Aug 13 00:00:27.211471 kernel: mlx5_core 346f:00:02.0: enabling device (0000 -> 0002)
Aug 13 00:00:27.217116 kernel: mlx5_core 346f:00:02.0: firmware version: 14.30.5000
Aug 13 00:00:27.370554 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Aug 13 00:00:27.440612 kernel: hv_netvsc 7ced8d77-53c3-7ced-8d77-53c37ced8d77 eth0: VF registering: eth1
Aug 13 00:00:27.440879 kernel: mlx5_core 346f:00:02.0 eth1: joined to eth0
Aug 13 00:00:27.449743 kernel: mlx5_core 346f:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Aug 13 00:00:27.449964 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (453)
Aug 13 00:00:27.475110 kernel: mlx5_core 346f:00:02.0 enP13423s1: renamed from eth1
Aug 13 00:00:27.485109 kernel: BTRFS: device fsid 88a9bed3-d26b-40c9-82ba-dbb7d44acae7 devid 1 transid 45 /dev/sda3 scanned by (udev-worker) (466)
Aug 13 00:00:27.502822 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Aug 13 00:00:27.520981 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Aug 13 00:00:27.533102 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Aug 13 00:00:27.541811 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Aug 13 00:00:27.558218 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:00:27.574956 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:00:27.581115 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:00:28.586663 disk-uuid[607]: The operation has completed successfully.
Aug 13 00:00:28.592436 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 00:00:28.663920 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:00:28.664035 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:00:28.710246 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:00:28.719482 sh[693]: Success
Aug 13 00:00:28.752198 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 13 00:00:28.944865 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:00:28.964321 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:00:28.971615 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:00:28.994104 kernel: BTRFS info (device dm-0): first mount of filesystem 88a9bed3-d26b-40c9-82ba-dbb7d44acae7
Aug 13 00:00:28.994145 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:00:29.000878 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:00:29.004110 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:00:29.007240 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:00:29.237693 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:00:29.244342 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:00:29.254245 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:00:29.264300 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:00:29.287472 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:00:29.287551 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:00:29.290081 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:00:29.309143 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:00:29.317117 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:00:29.320768 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:00:29.333318 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:00:29.370732 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:00:29.382262 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:00:29.409013 systemd-networkd[874]: lo: Link UP
Aug 13 00:00:29.409026 systemd-networkd[874]: lo: Gained carrier
Aug 13 00:00:29.411291 systemd-networkd[874]: Enumeration completed
Aug 13 00:00:29.411551 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:00:29.413405 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:00:29.413410 systemd-networkd[874]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:00:29.417537 systemd[1]: Reached target network.target - Network.
Aug 13 00:00:29.484794 kernel: mlx5_core 346f:00:02.0 enP13423s1: Link up
Aug 13 00:00:29.523507 kernel: hv_netvsc 7ced8d77-53c3-7ced-8d77-53c37ced8d77 eth0: Data path switched to VF: enP13423s1
Aug 13 00:00:29.523063 systemd-networkd[874]: enP13423s1: Link UP
Aug 13 00:00:29.523212 systemd-networkd[874]: eth0: Link UP
Aug 13 00:00:29.523411 systemd-networkd[874]: eth0: Gained carrier
Aug 13 00:00:29.523424 systemd-networkd[874]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:00:29.536803 systemd-networkd[874]: enP13423s1: Gained carrier
Aug 13 00:00:29.565146 systemd-networkd[874]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Aug 13 00:00:30.169017 ignition[823]: Ignition 2.20.0
Aug 13 00:00:30.169029 ignition[823]: Stage: fetch-offline
Aug 13 00:00:30.170465 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:00:30.169073 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:00:30.169083 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:00:30.169212 ignition[823]: parsed url from cmdline: ""
Aug 13 00:00:30.169217 ignition[823]: no config URL provided
Aug 13 00:00:30.169224 ignition[823]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:00:30.169236 ignition[823]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:00:30.169243 ignition[823]: failed to fetch config: resource requires networking
Aug 13 00:00:30.192285 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:00:30.169502 ignition[823]: Ignition finished successfully
Aug 13 00:00:30.210948 ignition[884]: Ignition 2.20.0
Aug 13 00:00:30.210960 ignition[884]: Stage: fetch
Aug 13 00:00:30.211188 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:00:30.211201 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:00:30.211306 ignition[884]: parsed url from cmdline: ""
Aug 13 00:00:30.211310 ignition[884]: no config URL provided
Aug 13 00:00:30.211314 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:00:30.211321 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:00:30.211346 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Aug 13 00:00:30.306429 ignition[884]: GET result: OK
Aug 13 00:00:30.306563 ignition[884]: config has been read from IMDS userdata
Aug 13 00:00:30.306594 ignition[884]: parsing config with SHA512: 18850998ecd4b560c1eeedb06a861208299f3edc83a0555c628e618ba2ac5fa60a431406f6b29f42d578189269cf4e727b7564a1e8042b74819a653c5d1b51f3
Aug 13 00:00:30.311547 unknown[884]: fetched base config from "system"
Aug 13 00:00:30.311554 unknown[884]: fetched base config from "system"
Aug 13 00:00:30.311965 ignition[884]: fetch: fetch complete
Aug 13 00:00:30.311560 unknown[884]: fetched user config from "azure"
Aug 13 00:00:30.311970 ignition[884]: fetch: fetch passed
Aug 13 00:00:30.313684 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:00:30.312013 ignition[884]: Ignition finished successfully
Aug 13 00:00:30.330630 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:00:30.349541 ignition[891]: Ignition 2.20.0
Aug 13 00:00:30.349553 ignition[891]: Stage: kargs
Aug 13 00:00:30.349768 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:00:30.349781 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:00:30.350672 ignition[891]: kargs: kargs passed
Aug 13 00:00:30.350715 ignition[891]: Ignition finished successfully
Aug 13 00:00:30.363587 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:00:30.372379 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:00:30.384681 ignition[897]: Ignition 2.20.0
Aug 13 00:00:30.384694 ignition[897]: Stage: disks
Aug 13 00:00:30.386685 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:00:30.384925 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:00:30.384940 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Aug 13 00:00:30.385831 ignition[897]: disks: disks passed
Aug 13 00:00:30.385877 ignition[897]: Ignition finished successfully
Aug 13 00:00:30.403887 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:00:30.407244 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:00:30.414315 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:00:30.423894 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:00:30.426950 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:00:30.443327 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:00:30.496500 systemd-fsck[905]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Aug 13 00:00:30.502781 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:00:30.518238 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:00:30.613116 kernel: EXT4-fs (sda9): mounted filesystem 27db109b-2440-48a3-909e-fd8973275523 r/w with ordered data mode. Quota mode: none.
Aug 13 00:00:30.614317 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:00:30.617878 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:00:30.657204 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:00:30.665082 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:00:30.672313 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Aug 13 00:00:30.685773 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (916)
Aug 13 00:00:30.685556 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:00:30.704556 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:00:30.704583 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 00:00:30.704606 kernel: BTRFS info (device sda6): using free space tree
Aug 13 00:00:30.704618 kernel: BTRFS info (device sda6): auto enabling async discard
Aug 13 00:00:30.685610 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:00:30.713721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:00:30.715100 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:00:30.725395 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:00:31.090339 systemd-networkd[874]: eth0: Gained IPv6LL
Aug 13 00:00:31.255042 coreos-metadata[918]: Aug 13 00:00:31.254 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Aug 13 00:00:31.262135 coreos-metadata[918]: Aug 13 00:00:31.262 INFO Fetch successful
Aug 13 00:00:31.265079 coreos-metadata[918]: Aug 13 00:00:31.265 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Aug 13 00:00:31.280405 coreos-metadata[918]: Aug 13 00:00:31.280 INFO Fetch successful
Aug 13 00:00:31.297018 coreos-metadata[918]: Aug 13 00:00:31.296 INFO wrote hostname ci-4230.2.2-a-03132a7374 to /sysroot/etc/hostname
Aug 13 00:00:31.298803 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Aug 13 00:00:31.337668 initrd-setup-root[946]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:00:31.358638 initrd-setup-root[953]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:00:31.363795 initrd-setup-root[960]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:00:31.372016 initrd-setup-root[967]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:00:32.084158 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:00:32.093273 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:00:32.109942 kernel: BTRFS info (device sda6): last unmount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517
Aug 13 00:00:32.101335 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:00:32.109126 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 00:00:32.139534 ignition[1035]: INFO : Ignition 2.20.0 Aug 13 00:00:32.139534 ignition[1035]: INFO : Stage: mount Aug 13 00:00:32.143438 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:00:32.143438 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:00:32.143438 ignition[1035]: INFO : mount: mount passed Aug 13 00:00:32.143438 ignition[1035]: INFO : Ignition finished successfully Aug 13 00:00:32.142000 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:00:32.159232 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:00:32.168373 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:00:32.180273 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:00:32.195141 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (1046) Aug 13 00:00:32.195184 kernel: BTRFS info (device sda6): first mount of filesystem fdf7217d-4a76-4a93-98b1-684d9c141517 Aug 13 00:00:32.200106 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 00:00:32.205506 kernel: BTRFS info (device sda6): using free space tree Aug 13 00:00:32.211111 kernel: BTRFS info (device sda6): auto enabling async discard Aug 13 00:00:32.212790 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:00:32.236377 ignition[1063]: INFO : Ignition 2.20.0 Aug 13 00:00:32.236377 ignition[1063]: INFO : Stage: files Aug 13 00:00:32.241249 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:00:32.241249 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:00:32.241249 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:00:32.254665 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:00:32.254665 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:00:32.327352 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:00:32.332188 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:00:32.332188 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:00:32.327807 unknown[1063]: wrote ssh authorized keys file for user: core Aug 13 00:00:32.345596 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 00:00:32.352005 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 13 00:00:32.415266 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 00:00:32.475678 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 00:00:32.481967 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:00:32.481967 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Aug 13 00:00:32.774830 ignition[1063]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 00:00:33.036297 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 13 00:00:33.036297 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:00:33.048654 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 00:00:33.496256 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:00:34.307936 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 00:00:34.307936 ignition[1063]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 13 00:00:34.323076 ignition[1063]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:00:34.332748 ignition[1063]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:00:34.332748 ignition[1063]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 13 00:00:34.332748 ignition[1063]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Aug 13 
00:00:34.332748 ignition[1063]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:00:34.332748 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:00:34.332748 ignition[1063]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:00:34.332748 ignition[1063]: INFO : files: files passed Aug 13 00:00:34.332748 ignition[1063]: INFO : Ignition finished successfully Aug 13 00:00:34.324804 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:00:34.361672 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:00:34.368490 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:00:34.379706 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:00:34.379795 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:00:34.401187 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:00:34.401187 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:00:34.414958 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:00:34.405904 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:00:34.410350 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:00:34.431262 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:00:34.457226 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:00:34.457339 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:00:34.465058 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:00:34.471806 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:00:34.481238 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:00:34.493298 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:00:34.508263 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:00:34.517356 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:00:34.529345 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:00:34.537186 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:00:34.538558 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:00:34.539127 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:00:34.539272 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:00:34.540185 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:00:34.540745 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:00:34.541304 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:00:34.542003 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
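Every op(n) in the files stage above corresponds to one entry in the declarative Ignition config delivered via userdata. The following is a hypothetical fragment, expressed as a Python dict, that would produce two of the logged operations: the helm download in op(3) and the prepare-helm.service preset in op(e). The spec version and unit contents are illustrative assumptions, not recovered from the log, and config paths are relative to the new root, so the /sysroot prefix seen in the log does not appear.

    import json

    # Hypothetical Ignition config fragment (spec 3.x style) covering two
    # of the operations visible in the files stage above.
    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [{
                # files: createFiles: op(3)
                "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"
                },
            }],
        },
        "systemd": {
            "units": [{
                # files: op(c)/op(d)/op(e): write unit, set preset to enabled
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",
            }],
        },
    }

    print(json.dumps(config, indent=2))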
Aug 13 00:00:34.542571 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:00:34.543175 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:00:34.543734 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:00:34.544327 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:00:34.544905 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:00:34.545476 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:00:34.546070 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:00:34.546224 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:00:34.547241 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:00:34.547832 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:00:34.548338 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:00:34.564674 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:00:34.599784 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:00:34.606042 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:00:34.614839 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:00:34.615022 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:00:34.624883 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:00:34.634286 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:00:34.682488 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 13 00:00:34.682658 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 13 00:00:34.700397 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:00:34.706308 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:00:34.708826 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:00:34.709043 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:00:34.711382 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:00:34.711514 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:00:34.717173 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:00:34.717261 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:00:34.744085 ignition[1115]: INFO : Ignition 2.20.0 Aug 13 00:00:34.744085 ignition[1115]: INFO : Stage: umount Aug 13 00:00:34.744085 ignition[1115]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:00:34.744085 ignition[1115]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 13 00:00:34.744085 ignition[1115]: INFO : umount: umount passed Aug 13 00:00:34.744085 ignition[1115]: INFO : Ignition finished successfully Aug 13 00:00:34.744231 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:00:34.744363 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:00:34.749604 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:00:34.749657 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:00:34.755925 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Aug 13 00:00:34.755973 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:00:34.760366 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:00:34.760412 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:00:34.765177 systemd[1]: Stopped target network.target - Network. Aug 13 00:00:34.765618 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:00:34.765658 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:00:34.768970 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:00:34.770020 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:00:34.785709 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:00:34.786829 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:00:34.787955 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:00:34.789002 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:00:34.789043 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:00:34.789559 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:00:34.789590 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:00:34.790066 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:00:34.790124 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:00:34.790617 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:00:34.790651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:00:34.791249 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:00:34.791694 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:00:34.793234 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:00:34.819438 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:00:34.819541 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:00:34.828938 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 00:00:34.829225 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:00:34.829317 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:00:34.834166 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 00:00:34.834392 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:00:34.834482 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:00:34.840119 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:00:34.840194 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:00:34.843211 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:00:34.843262 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:00:34.850241 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:00:34.854310 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:00:34.857169 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:00:34.864273 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:00:34.864331 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Aug 13 00:00:34.961288 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:00:34.961373 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:00:34.967892 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:00:34.967943 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:00:34.981892 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:00:34.986755 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 00:00:34.986818 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:00:35.003725 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:00:35.003897 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:00:35.010414 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:00:35.010458 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:00:35.022912 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:00:35.022962 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:00:35.028874 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:00:35.028934 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:00:35.040882 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:00:35.051964 kernel: hv_netvsc 7ced8d77-53c3-7ced-8d77-53c37ced8d77 eth0: Data path switched from VF: enP13423s1 Aug 13 00:00:35.040932 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:00:35.052190 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:00:35.052249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:00:35.068282 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:00:35.079010 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:00:35.079118 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:00:35.086708 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:00:35.086778 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:00:35.102043 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:00:35.102131 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:00:35.108744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:00:35.108802 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:00:35.123731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 00:00:35.123813 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:00:35.124170 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:00:35.124258 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:00:35.131513 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:00:35.131601 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Aug 13 00:00:35.151894 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:00:35.164283 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:00:35.173809 systemd[1]: Switching root. Aug 13 00:00:35.220478 systemd-journald[177]: Journal stopped Aug 13 00:00:39.763444 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Aug 13 00:00:39.763484 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:00:39.763496 kernel: SELinux: policy capability open_perms=1 Aug 13 00:00:39.763507 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:00:39.763519 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:00:39.763528 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:00:39.763542 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:00:39.763556 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:00:39.763566 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:00:39.763576 kernel: audit: type=1403 audit(1755043236.824:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:00:39.763588 systemd[1]: Successfully loaded SELinux policy in 109.614ms. Aug 13 00:00:39.763598 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.793ms. Aug 13 00:00:39.763609 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 00:00:39.763621 systemd[1]: Detected virtualization microsoft. Aug 13 00:00:39.763635 systemd[1]: Detected architecture x86-64. Aug 13 00:00:39.763648 systemd[1]: Detected first boot. Aug 13 00:00:39.763658 systemd[1]: Hostname set to <ci-4230.2.2-a-03132a7374>. Aug 13 00:00:39.763669 systemd[1]: Initializing machine ID from random generator. Aug 13 00:00:39.763681 zram_generator::config[1160]: No configuration found. Aug 13 00:00:39.763694 kernel: Guest personality initialized and is inactive Aug 13 00:00:39.763706 kernel: VMCI host device registered (name=vmci, major=10, minor=124) Aug 13 00:00:39.763715 kernel: Initialized host personality Aug 13 00:00:39.763725 kernel: NET: Registered PF_VSOCK protocol family Aug 13 00:00:39.763737 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:00:39.763747 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 00:00:39.763759 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:00:39.763770 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:00:39.763785 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:00:39.763796 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:00:39.763807 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:00:39.763820 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:00:39.763830 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:00:39.763842 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:00:39.763853 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 00:00:39.763866 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:00:39.763879 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:00:39.763889 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:00:39.763902 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:00:39.763912 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:00:39.763922 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:00:39.763939 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:00:39.763950 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:00:39.763963 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:00:39.763976 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:00:39.763989 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:00:39.763999 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:00:39.764010 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:00:39.764022 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:00:39.764034 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:00:39.764047 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:00:39.764062 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:00:39.764073 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:00:39.764094 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:00:39.764109 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:00:39.764119 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 00:00:39.764130 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:00:39.764144 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:00:39.764157 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:00:39.764167 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:00:39.764180 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:00:39.764191 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:00:39.764203 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:00:39.764215 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:00:39.764229 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:00:39.764241 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:00:39.764251 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:00:39.764265 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Aug 13 00:00:39.764276 systemd[1]: Reached target machines.target - Containers. Aug 13 00:00:39.764289 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:00:39.764301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:00:39.764313 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:00:39.764327 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:00:39.764339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:00:39.764351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:00:39.764361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:00:39.764374 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:00:39.764385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:00:39.764397 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:00:39.764409 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:00:39.764422 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:00:39.764435 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:00:39.764445 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:00:39.764459 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:00:39.764470 kernel: fuse: init (API version 7.39) Aug 13 00:00:39.764481 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:00:39.764493 kernel: loop: module loaded Aug 13 00:00:39.764503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:00:39.764518 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:00:39.764529 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:00:39.764541 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 00:00:39.764575 systemd-journald[1267]: Collecting audit messages is disabled. Aug 13 00:00:39.764603 systemd-journald[1267]: Journal started Aug 13 00:00:39.764628 systemd-journald[1267]: Runtime Journal (/run/log/journal/40335f925e344ce2b0a2bc4d53cdb919) is 8M, max 158.8M, 150.8M free. Aug 13 00:00:39.790549 kernel: ACPI: bus type drm_connector registered Aug 13 00:00:39.790608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:00:39.079394 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:00:39.087107 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 00:00:39.087499 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:00:39.804103 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:00:39.809940 systemd[1]: Stopped verity-setup.service. 
Aug 13 00:00:39.809983 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:00:39.825865 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:00:39.826476 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:00:39.829859 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:00:39.833358 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:00:39.836471 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:00:39.839884 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:00:39.843297 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:00:39.846698 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:00:39.850781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:00:39.854940 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:00:39.855259 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:00:39.859433 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:00:39.859624 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:00:39.863774 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:00:39.863962 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:00:39.867491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:00:39.867679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:00:39.871636 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:00:39.871821 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:00:39.875349 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:00:39.875534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:00:39.879449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:00:39.884678 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:00:39.889722 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:00:39.897622 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 00:00:39.913558 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:00:39.925929 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:00:39.936636 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:00:39.941576 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:00:39.941754 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:00:39.946972 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 00:00:39.968289 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:00:39.973385 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Aug 13 00:00:39.976677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:00:39.982904 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:00:39.987198 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:00:39.990903 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:00:39.992060 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:00:39.995676 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:00:39.997027 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:00:40.002292 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:00:40.016306 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:00:40.021150 systemd-journald[1267]: Time spent on flushing to /var/log/journal/40335f925e344ce2b0a2bc4d53cdb919 is 25.007ms for 974 entries. Aug 13 00:00:40.021150 systemd-journald[1267]: System Journal (/var/log/journal/40335f925e344ce2b0a2bc4d53cdb919) is 8M, max 2.6G, 2.6G free. Aug 13 00:00:40.068407 systemd-journald[1267]: Received client request to flush runtime journal. Aug 13 00:00:40.031531 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:00:40.035868 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:00:40.039865 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:00:40.043951 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:00:40.048459 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:00:40.060538 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:00:40.071528 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 00:00:40.076473 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:00:40.082071 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:00:40.095136 kernel: loop0: detected capacity change from 0 to 138176 Aug 13 00:00:40.114867 udevadm[1313]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:00:40.165521 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Aug 13 00:00:40.165547 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Aug 13 00:00:40.169162 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:00:40.174346 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:00:40.183255 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:00:40.188016 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:00:40.192409 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 00:00:40.579444 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
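The systemd-journald line above reports 25.007 ms spent flushing 974 entries to the persistent journal, i.e. roughly 26 µs per entry:

    flush_ms, entries = 25.007, 974
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~25.7 us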
Aug 13 00:00:40.593254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:00:40.608462 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Aug 13 00:00:40.608488 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Aug 13 00:00:40.613129 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:00:40.786212 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:00:40.834118 kernel: loop1: detected capacity change from 0 to 28272 Aug 13 00:00:41.357111 kernel: loop2: detected capacity change from 0 to 147912 Aug 13 00:00:42.586271 kernel: loop3: detected capacity change from 0 to 229808 Aug 13 00:00:42.606487 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:00:42.613173 kernel: loop4: detected capacity change from 0 to 138176 Aug 13 00:00:42.617289 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:00:42.638180 kernel: loop5: detected capacity change from 0 to 28272 Aug 13 00:00:42.656112 kernel: loop6: detected capacity change from 0 to 147912 Aug 13 00:00:42.657007 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Aug 13 00:00:42.670109 kernel: loop7: detected capacity change from 0 to 229808 Aug 13 00:00:42.674519 (sd-merge)[1332]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Aug 13 00:00:42.675108 (sd-merge)[1332]: Merged extensions into '/usr'. Aug 13 00:00:42.678731 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:00:42.678747 systemd[1]: Reloading... Aug 13 00:00:42.745182 zram_generator::config[1360]: No configuration found. Aug 13 00:00:42.982677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:00:43.086113 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 00:00:43.102113 kernel: hv_vmbus: registering driver hyperv_fb Aug 13 00:00:43.106472 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 13 00:00:43.118521 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Aug 13 00:00:43.118602 kernel: hv_vmbus: registering driver hv_balloon Aug 13 00:00:43.128483 kernel: Console: switching to colour dummy device 80x25 Aug 13 00:00:43.128558 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 13 00:00:43.149116 kernel: Console: switching to colour frame buffer device 128x48 Aug 13 00:00:43.279559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#120 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Aug 13 00:00:43.257282 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:00:43.259949 systemd[1]: Reloading finished in 580 ms. Aug 13 00:00:43.291760 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:00:43.297953 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:00:43.362283 systemd[1]: Starting ensure-sysext.service... Aug 13 00:00:43.380329 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:00:43.467821 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
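The (sd-merge) lines show systemd-sysext activating the extension images staged earlier, including the kubernetes.raw symlink Ignition wrote in op(a). A minimal sketch that lists /etc/extensions the same way, assuming the paths from the files stage; names and targets will differ per machine.

    import os

    EXT_DIR = "/etc/extensions"

    # Each entry is an extension image or, as with kubernetes.raw in the
    # files stage above, a symlink into /opt/extensions.
    for name in sorted(os.listdir(EXT_DIR)):
        path = os.path.join(EXT_DIR, name)
        if os.path.islink(path):
            print(f"{name} -> {os.readlink(path)}")
        else:
            print(name)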
Aug 13 00:00:43.479300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:00:43.503103 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1434) Aug 13 00:00:43.540890 systemd-tmpfiles[1469]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:00:43.541759 systemd-tmpfiles[1469]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:00:43.542755 systemd[1]: Reload requested from client PID 1463 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:00:43.542768 systemd[1]: Reloading... Aug 13 00:00:43.543064 systemd-tmpfiles[1469]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:00:43.543862 systemd-tmpfiles[1469]: ACLs are not supported, ignoring. Aug 13 00:00:43.544039 systemd-tmpfiles[1469]: ACLs are not supported, ignoring. Aug 13 00:00:43.646114 zram_generator::config[1547]: No configuration found. Aug 13 00:00:43.698060 systemd-tmpfiles[1469]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:00:43.698079 systemd-tmpfiles[1469]: Skipping /boot Aug 13 00:00:43.729767 systemd-tmpfiles[1469]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:00:43.729789 systemd-tmpfiles[1469]: Skipping /boot Aug 13 00:00:43.784173 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Aug 13 00:00:43.886488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:00:44.000277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 13 00:00:44.001888 systemd[1]: Reloading finished in 458 ms. Aug 13 00:00:44.031112 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:00:44.048920 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:00:44.063969 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:00:44.071552 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 00:00:44.074819 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:00:44.078855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:00:44.082229 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:00:44.088196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:00:44.093441 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:00:44.099270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:00:44.100741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:00:44.109646 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Aug 13 00:00:44.110969 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:00:44.114416 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:00:44.118378 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:00:44.125495 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:00:44.131414 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:00:44.134437 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:00:44.138894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:00:44.141152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:00:44.145709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:00:44.146781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:00:44.152471 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:00:44.152614 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:00:44.163118 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:00:44.163432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:00:44.170474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:00:44.178386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:00:44.191782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:00:44.195029 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:00:44.195214 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:00:44.195360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:00:44.196857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:00:44.197665 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:00:44.201996 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:00:44.202251 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:00:44.206648 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:00:44.207045 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:00:44.213529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:00:44.213845 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Aug 13 00:00:44.218864 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:00:44.219244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:00:44.225546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:00:44.233390 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:00:44.240324 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:00:44.248379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:00:44.252426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:00:44.252587 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 00:00:44.252827 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:00:44.256154 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 00:00:44.260733 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:00:44.263351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:00:44.263535 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:00:44.264279 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:00:44.264427 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:00:44.264767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:00:44.264961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:00:44.281938 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 00:00:44.294455 systemd[1]: Finished ensure-sysext.service. Aug 13 00:00:44.299276 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:00:44.306657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:00:44.307200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:00:44.321723 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:00:44.322138 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:00:44.336235 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:00:44.336582 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:00:44.345398 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:00:44.353772 lvm[1617]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:00:44.360370 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:00:44.393558 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Aug 13 00:00:44.398157 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:00:44.413996 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:00:44.421962 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:00:44.440264 lvm[1672]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:00:44.479934 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:00:44.490273 systemd-networkd[1466]: lo: Link UP Aug 13 00:00:44.490282 systemd-networkd[1466]: lo: Gained carrier Aug 13 00:00:44.493536 systemd-networkd[1466]: Enumeration completed Aug 13 00:00:44.493738 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:00:44.493953 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:00:44.493956 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:00:44.503689 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 00:00:44.518240 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:00:44.521651 systemd-resolved[1623]: Positive Trust Anchors: Aug 13 00:00:44.521917 systemd-resolved[1623]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:00:44.522032 systemd-resolved[1623]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:00:44.523551 augenrules[1684]: No rules Aug 13 00:00:44.526902 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:00:44.527195 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 00:00:44.533396 systemd-resolved[1623]: Using system hostname 'ci-4230.2.2-a-03132a7374'. Aug 13 00:00:44.573123 kernel: mlx5_core 346f:00:02.0 enP13423s1: Link up Aug 13 00:00:44.593712 kernel: hv_netvsc 7ced8d77-53c3-7ced-8d77-53c37ced8d77 eth0: Data path switched to VF: enP13423s1 Aug 13 00:00:44.593279 systemd-networkd[1466]: enP13423s1: Link UP Aug 13 00:00:44.593414 systemd-networkd[1466]: eth0: Link UP Aug 13 00:00:44.593419 systemd-networkd[1466]: eth0: Gained carrier Aug 13 00:00:44.593443 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:00:44.597236 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:00:44.599631 systemd-networkd[1466]: enP13423s1: Gained carrier Aug 13 00:00:44.600407 systemd[1]: Reached target network.target - Network. Aug 13 00:00:44.603858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:00:44.607702 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
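The "Positive Trust Anchors" entry is the DNSSEC root trust anchor in DS-record form: owner ".", key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256), followed by the digest of the root KSK. A small sketch decoding those fields; the algorithm and digest names are the registered IANA meanings of those code points.

    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, _cls, _rtype, key_tag, alg, digest_type, digest = record.split()

    # Registered IANA code points for the two numeric fields (subset).
    ALGORITHMS = {"8": "RSASHA256"}
    DIGEST_TYPES = {"2": "SHA-256"}

    print(f"key tag {key_tag}: {ALGORITHMS[alg]}, "
          f"{DIGEST_TYPES[digest_type]} digest of the root KSK")
    print(f"digest length: {len(digest) // 2} bytes")  # 32 bytes for SHA-256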
Aug 13 00:00:44.632157 systemd-networkd[1466]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 13 00:00:44.872900 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:00:45.549171 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:00:45.553486 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:00:46.450253 systemd-networkd[1466]: eth0: Gained IPv6LL Aug 13 00:00:46.453125 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:00:46.457488 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:00:47.838070 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:00:47.854657 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:00:47.864330 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:00:47.874748 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:00:47.878723 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:00:47.882205 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:00:47.885790 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:00:47.889996 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:00:47.893354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:00:47.897139 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:00:47.900848 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:00:47.900914 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:00:47.903754 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:00:47.908001 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:00:47.912872 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:00:47.918687 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 00:00:47.922952 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 00:00:47.926878 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 00:00:47.932356 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:00:47.936471 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 00:00:47.940746 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:00:47.944223 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:00:47.947029 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:00:47.949897 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:00:47.949928 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
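The DHCPv4 line above records the lease 10.200.8.39/24 with gateway 10.200.8.1, handed out by the Azure wireserver at 168.63.129.16. What the /24 prefix implies can be checked with the standard-library ipaddress module:

    import ipaddress

    iface = ipaddress.ip_interface("10.200.8.39/24")
    print(iface.network)                                         # 10.200.8.0/24
    print(iface.network.netmask)                                 # 255.255.255.0
    print(ipaddress.ip_address("10.200.8.1") in iface.network)   # True: gateway is on-link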
Aug 13 00:00:47.955181 systemd[1]: Starting chronyd.service - NTP client/server... Aug 13 00:00:47.961202 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:00:47.970271 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:00:47.976270 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:00:47.985252 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:00:47.990565 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:00:47.995541 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:00:47.995603 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Aug 13 00:00:48.003245 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Aug 13 00:00:48.007129 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Aug 13 00:00:48.014114 jq[1708]: false Aug 13 00:00:48.016264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:00:48.022286 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:00:48.029289 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:00:48.035687 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:00:48.040852 KVP[1710]: KVP starting; pid is:1710 Aug 13 00:00:48.043960 (chronyd)[1701]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Aug 13 00:00:48.052654 chronyd[1718]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Aug 13 00:00:48.064444 kernel: hv_utils: KVP IC version 4.0 Aug 13 00:00:48.054291 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:00:48.063180 KVP[1710]: KVP LIC Version: 3.1 Aug 13 00:00:48.059276 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:00:48.063447 chronyd[1718]: Timezone right/UTC failed leap second check, ignoring Aug 13 00:00:48.063647 chronyd[1718]: Loaded seccomp filter (level 2) Aug 13 00:00:48.076332 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:00:48.084686 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:00:48.086496 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
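chronyd, started above, later reports "Selected source PHC0": the Hyper-V host clock exposed to the guest as a PTP device. A read-only sketch listing the PTP clocks a guest sees; the sysfs path is standard, and the "hyperv" clock name is an assumption based on what the hv_utils driver typically registers:

```python
# List PTP hardware clocks the way one would check which device
# chrony's PHC refclock (PHC0 -> /dev/ptp0) is tracking.
from pathlib import Path

ptp_dir = Path("/sys/class/ptp")
devices = sorted(ptp_dir.glob("ptp*")) if ptp_dir.is_dir() else []
for dev in devices:
    name = (dev / "clock_name").read_text().strip()
    print(dev.name, "->", name)  # on Hyper-V guests this is typically "hyperv"
```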
Aug 13 00:00:48.094364 extend-filesystems[1709]: Found loop4 Aug 13 00:00:48.098111 extend-filesystems[1709]: Found loop5 Aug 13 00:00:48.098111 extend-filesystems[1709]: Found loop6 Aug 13 00:00:48.098111 extend-filesystems[1709]: Found loop7 Aug 13 00:00:48.098111 extend-filesystems[1709]: Found sda Aug 13 00:00:48.098111 extend-filesystems[1709]: Found sda1 Aug 13 00:00:48.098111 extend-filesystems[1709]: Found sda2 Aug 13 00:00:48.098111 extend-filesystems[1709]: Found sda3 Aug 13 00:00:48.098111 extend-filesystems[1709]: Found usr Aug 13 00:00:48.098111 extend-filesystems[1709]: Found sda4 Aug 13 00:00:48.098111 extend-filesystems[1709]: Found sda6 Aug 13 00:00:48.098111 extend-filesystems[1709]: Found sda7 Aug 13 00:00:48.098111 extend-filesystems[1709]: Found sda9 Aug 13 00:00:48.098111 extend-filesystems[1709]: Checking size of /dev/sda9 Aug 13 00:00:48.095428 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:00:48.126916 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:00:48.134294 systemd[1]: Started chronyd.service - NTP client/server. Aug 13 00:00:48.153462 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:00:48.157967 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:00:48.166433 jq[1725]: true Aug 13 00:00:48.171241 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:00:48.171514 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:00:48.195974 jq[1736]: true Aug 13 00:00:48.254675 extend-filesystems[1709]: Old size kept for /dev/sda9 Aug 13 00:00:48.261568 extend-filesystems[1709]: Found sr0 Aug 13 00:00:48.258614 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:00:48.261588 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:00:48.271599 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:00:48.272147 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:00:48.277607 update_engine[1722]: I20250813 00:00:48.277516 1722 main.cc:92] Flatcar Update Engine starting Aug 13 00:00:48.281067 systemd-logind[1721]: New seat seat0. Aug 13 00:00:48.284701 dbus-daemon[1704]: [system] SELinux support is enabled Aug 13 00:00:48.289881 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:00:48.293614 (ntainerd)[1757]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:00:48.295176 systemd-logind[1721]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Aug 13 00:00:48.301369 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:00:48.305524 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:00:48.305568 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:00:48.312031 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:00:48.312066 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
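"Old size kept for /dev/sda9" above means extend-filesystems found nothing to grow: the filesystem already spans its partition. A sketch of the same comparison, assuming root privileges; the device and mountpoint are illustrative values taken from this log:

```python
# Compare a partition's byte size against the filesystem mounted on it.
import fcntl, os, struct

BLKGETSIZE64 = 0x80081272  # Linux ioctl: size of a block device in bytes

def device_bytes(dev: str) -> int:
    with open(dev, "rb") as f:
        buf = fcntl.ioctl(f.fileno(), BLKGETSIZE64, b"\0" * 8)
    return struct.unpack("Q", buf)[0]

def filesystem_bytes(mountpoint: str) -> int:
    st = os.statvfs(mountpoint)
    return st.f_frsize * st.f_blocks

dev, mnt = "/dev/sda9", "/"  # illustrative values from this log
print("partition:", device_bytes(dev), "filesystem:", filesystem_bytes(mnt))
```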
Aug 13 00:00:48.325830 dbus-daemon[1704]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:00:48.332813 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:00:48.342552 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:00:48.348203 update_engine[1722]: I20250813 00:00:48.345961 1722 update_check_scheduler.cc:74] Next update check in 3m25s Aug 13 00:00:48.358456 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:00:48.369117 tar[1733]: linux-amd64/LICENSE Aug 13 00:00:48.369117 tar[1733]: linux-amd64/helm Aug 13 00:00:48.377497 bash[1772]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:00:48.377630 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:00:48.386251 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 13 00:00:48.456114 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (1776) Aug 13 00:00:48.499995 coreos-metadata[1703]: Aug 13 00:00:48.498 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 13 00:00:48.504143 coreos-metadata[1703]: Aug 13 00:00:48.504 INFO Fetch successful Aug 13 00:00:48.509222 coreos-metadata[1703]: Aug 13 00:00:48.506 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Aug 13 00:00:48.517146 coreos-metadata[1703]: Aug 13 00:00:48.515 INFO Fetch successful Aug 13 00:00:48.517146 coreos-metadata[1703]: Aug 13 00:00:48.516 INFO Fetching http://168.63.129.16/machine/5b2faadc-7581-4a4d-bce3-e5bfc4912d66/d1849d69%2Df8bb%2D46a0%2D8230%2D496fe6b83744.%5Fci%2D4230.2.2%2Da%2D03132a7374?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Aug 13 00:00:48.519946 coreos-metadata[1703]: Aug 13 00:00:48.519 INFO Fetch successful Aug 13 00:00:48.521906 coreos-metadata[1703]: Aug 13 00:00:48.521 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Aug 13 00:00:48.544866 coreos-metadata[1703]: Aug 13 00:00:48.543 INFO Fetch successful Aug 13 00:00:48.620250 sshd_keygen[1731]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:00:48.630320 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:00:48.644231 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:00:48.739017 locksmithd[1783]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:00:48.752011 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:00:48.764427 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:00:48.770922 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Aug 13 00:00:48.791830 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:00:48.792167 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:00:48.805402 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:00:48.841381 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Aug 13 00:00:48.932106 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:00:48.943508 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:00:48.951235 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:00:48.954939 systemd[1]: Reached target getty.target - Login Prompts. 
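Three of the four metadata fetches above go to the wireserver; the last one, for vmSize, goes to the Azure Instance Metadata Service at 169.254.169.254, which only answers when the request carries a `Metadata: true` header. A re-creation of that exact request (Azure-only; standard library):

```python
# Re-create the vmSize fetch coreos-metadata logs above. IMDS rejects
# requests that lack the "Metadata: true" header.
import urllib.request

url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")
req = urllib.request.Request(url, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # the VM size string, e.g. a Standard_* SKU
```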
Aug 13 00:00:49.185805 tar[1733]: linux-amd64/README.md Aug 13 00:00:49.198192 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:00:49.669329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:00:49.682549 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:00:50.178284 kubelet[1884]: E0813 00:00:50.178235 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:00:50.180702 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:00:50.180896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:00:50.181463 systemd[1]: kubelet.service: Consumed 984ms CPU time, 266.8M memory peak. Aug 13 00:00:50.416556 containerd[1757]: time="2025-08-13T00:00:50.416461800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Aug 13 00:00:50.439177 containerd[1757]: time="2025-08-13T00:00:50.438148100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:00:50.440048 containerd[1757]: time="2025-08-13T00:00:50.440011800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:00:50.440048 containerd[1757]: time="2025-08-13T00:00:50.440046400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:00:50.440227 containerd[1757]: time="2025-08-13T00:00:50.440066200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:00:50.440266 containerd[1757]: time="2025-08-13T00:00:50.440246900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:00:50.440308 containerd[1757]: time="2025-08-13T00:00:50.440272500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:00:50.441341 containerd[1757]: time="2025-08-13T00:00:50.440350200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:00:50.441341 containerd[1757]: time="2025-08-13T00:00:50.440371600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:00:50.441341 containerd[1757]: time="2025-08-13T00:00:50.440633300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:00:50.441341 containerd[1757]: time="2025-08-13T00:00:50.440655700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:00:50.441341 containerd[1757]: time="2025-08-13T00:00:50.440673400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:00:50.441341 containerd[1757]: time="2025-08-13T00:00:50.440687000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:00:50.441341 containerd[1757]: time="2025-08-13T00:00:50.440788600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:00:50.441341 containerd[1757]: time="2025-08-13T00:00:50.441023300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:00:50.441341 containerd[1757]: time="2025-08-13T00:00:50.441243400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:00:50.441341 containerd[1757]: time="2025-08-13T00:00:50.441266600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:00:50.441701 containerd[1757]: time="2025-08-13T00:00:50.441394300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:00:50.441701 containerd[1757]: time="2025-08-13T00:00:50.441458000Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:00:50.460308 containerd[1757]: time="2025-08-13T00:00:50.460270500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:00:50.460402 containerd[1757]: time="2025-08-13T00:00:50.460325000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:00:50.460402 containerd[1757]: time="2025-08-13T00:00:50.460345400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:00:50.460402 containerd[1757]: time="2025-08-13T00:00:50.460365800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:00:50.460402 containerd[1757]: time="2025-08-13T00:00:50.460383800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:00:50.460566 containerd[1757]: time="2025-08-13T00:00:50.460535900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:00:50.461339 containerd[1757]: time="2025-08-13T00:00:50.461123300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:00:50.462942 containerd[1757]: time="2025-08-13T00:00:50.462911500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:00:50.463000 containerd[1757]: time="2025-08-13T00:00:50.462952100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:00:50.463045 containerd[1757]: time="2025-08-13T00:00:50.462999700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Aug 13 00:00:50.463045 containerd[1757]: time="2025-08-13T00:00:50.463027000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:00:50.463132 containerd[1757]: time="2025-08-13T00:00:50.463047500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:00:50.463132 containerd[1757]: time="2025-08-13T00:00:50.463071800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:00:50.463132 containerd[1757]: time="2025-08-13T00:00:50.463120600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:00:50.463259 containerd[1757]: time="2025-08-13T00:00:50.463148900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:00:50.463259 containerd[1757]: time="2025-08-13T00:00:50.463191600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:00:50.463259 containerd[1757]: time="2025-08-13T00:00:50.463214600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:00:50.463259 containerd[1757]: time="2025-08-13T00:00:50.463237300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:00:50.463392 containerd[1757]: time="2025-08-13T00:00:50.463270600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463392 containerd[1757]: time="2025-08-13T00:00:50.463296800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463392 containerd[1757]: time="2025-08-13T00:00:50.463319400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463392 containerd[1757]: time="2025-08-13T00:00:50.463344100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463392 containerd[1757]: time="2025-08-13T00:00:50.463367200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463560 containerd[1757]: time="2025-08-13T00:00:50.463390600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463560 containerd[1757]: time="2025-08-13T00:00:50.463410900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463560 containerd[1757]: time="2025-08-13T00:00:50.463434100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463560 containerd[1757]: time="2025-08-13T00:00:50.463510500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463560 containerd[1757]: time="2025-08-13T00:00:50.463538900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463722 containerd[1757]: time="2025-08-13T00:00:50.463561800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Aug 13 00:00:50.463722 containerd[1757]: time="2025-08-13T00:00:50.463586100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463722 containerd[1757]: time="2025-08-13T00:00:50.463609100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463722 containerd[1757]: time="2025-08-13T00:00:50.463633900Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:00:50.463722 containerd[1757]: time="2025-08-13T00:00:50.463668200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.463722 containerd[1757]: time="2025-08-13T00:00:50.463703700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.464976 containerd[1757]: time="2025-08-13T00:00:50.463727600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:00:50.464976 containerd[1757]: time="2025-08-13T00:00:50.463797400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:00:50.464976 containerd[1757]: time="2025-08-13T00:00:50.463825000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:00:50.464976 containerd[1757]: time="2025-08-13T00:00:50.463841100Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:00:50.464976 containerd[1757]: time="2025-08-13T00:00:50.463862800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:00:50.464976 containerd[1757]: time="2025-08-13T00:00:50.463880500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:00:50.464976 containerd[1757]: time="2025-08-13T00:00:50.463901700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:00:50.464976 containerd[1757]: time="2025-08-13T00:00:50.463920100Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:00:50.464976 containerd[1757]: time="2025-08-13T00:00:50.463934000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:00:50.465320 containerd[1757]: time="2025-08-13T00:00:50.464345500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:00:50.465320 containerd[1757]: time="2025-08-13T00:00:50.464415100Z" level=info msg="Connect containerd service" Aug 13 00:00:50.465320 containerd[1757]: time="2025-08-13T00:00:50.464466800Z" level=info msg="using legacy CRI server" Aug 13 00:00:50.465320 containerd[1757]: time="2025-08-13T00:00:50.464476400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:00:50.465320 containerd[1757]: time="2025-08-13T00:00:50.464634500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:00:50.467161 containerd[1757]: time="2025-08-13T00:00:50.466434700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:00:50.467161 
containerd[1757]: time="2025-08-13T00:00:50.466814500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:00:50.467161 containerd[1757]: time="2025-08-13T00:00:50.466877100Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:00:50.467161 containerd[1757]: time="2025-08-13T00:00:50.466943200Z" level=info msg="Start subscribing containerd event" Aug 13 00:00:50.467161 containerd[1757]: time="2025-08-13T00:00:50.467000900Z" level=info msg="Start recovering state" Aug 13 00:00:50.468027 containerd[1757]: time="2025-08-13T00:00:50.467512200Z" level=info msg="Start event monitor" Aug 13 00:00:50.468027 containerd[1757]: time="2025-08-13T00:00:50.467547200Z" level=info msg="Start snapshots syncer" Aug 13 00:00:50.468027 containerd[1757]: time="2025-08-13T00:00:50.467559800Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:00:50.468027 containerd[1757]: time="2025-08-13T00:00:50.467569700Z" level=info msg="Start streaming server" Aug 13 00:00:50.468027 containerd[1757]: time="2025-08-13T00:00:50.467643100Z" level=info msg="containerd successfully booted in 0.052175s" Aug 13 00:00:50.468225 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:00:50.472209 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:00:50.475578 systemd[1]: Startup finished in 1.098s (kernel) + 11.568s (initrd) + 13.758s (userspace) = 26.426s. Aug 13 00:00:50.973756 login[1870]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Aug 13 00:00:50.975457 login[1871]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:00:50.985766 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:00:50.994368 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:00:50.998182 systemd-logind[1721]: New session 1 of user core. Aug 13 00:00:51.007294 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:00:51.016397 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:00:51.034358 (systemd)[1900]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:00:51.037197 systemd-logind[1721]: New session c1 of user core. Aug 13 00:00:51.228097 systemd[1900]: Queued start job for default target default.target. Aug 13 00:00:51.236220 systemd[1900]: Created slice app.slice - User Application Slice. Aug 13 00:00:51.236257 systemd[1900]: Reached target paths.target - Paths. Aug 13 00:00:51.236310 systemd[1900]: Reached target timers.target - Timers. Aug 13 00:00:51.237584 systemd[1900]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:00:51.248166 systemd[1900]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:00:51.248432 systemd[1900]: Reached target sockets.target - Sockets. Aug 13 00:00:51.248495 systemd[1900]: Reached target basic.target - Basic System. Aug 13 00:00:51.248544 systemd[1900]: Reached target default.target - Main User Target. Aug 13 00:00:51.248587 systemd[1900]: Startup finished in 205ms. Aug 13 00:00:51.249005 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:00:51.259421 systemd[1]: Started session-1.scope - Session 1 of User core. 
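The CRI plugin's "no network config found in /etc/cni/net.d" error earlier in the containerd startup is the expected state before any CNI plugin is installed; it clears once something drops a network config into that directory. A read-only sketch of the same directory scan, with the path taken from the config dump above:

```python
# Check /etc/cni/net.d the way the containerd CRI plugin does at startup.
from pathlib import Path

conf_dir = Path("/etc/cni/net.d")
if not conf_dir.is_dir():
    print(conf_dir, "does not exist yet")
else:
    configs = sorted(p.name for p in conf_dir.iterdir()
                     if p.suffix in {".conf", ".conflist", ".json"})
    print(f"{len(configs)} CNI config(s) in {conf_dir}: {configs or 'none'}")
```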
Aug 13 00:00:51.538019 waagent[1868]: 2025-08-13T00:00:51.537847Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.539364Z INFO Daemon Daemon OS: flatcar 4230.2.2 Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.540447Z INFO Daemon Daemon Python: 3.11.11 Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.541594Z INFO Daemon Daemon Run daemon Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.542021Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4230.2.2' Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.542428Z INFO Daemon Daemon Using waagent for provisioning Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.543748Z INFO Daemon Daemon Activate resource disk Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.544642Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.550437Z INFO Daemon Daemon Found device: None Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.550756Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.551744Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.552647Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:00:51.582410 waagent[1868]: 2025-08-13T00:00:51.552839Z INFO Daemon Daemon Running default provisioning handler Aug 13 00:00:51.585886 waagent[1868]: 2025-08-13T00:00:51.585705Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Aug 13 00:00:51.601986 waagent[1868]: 2025-08-13T00:00:51.587490Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 13 00:00:51.601986 waagent[1868]: 2025-08-13T00:00:51.588352Z INFO Daemon Daemon cloud-init is enabled: False Aug 13 00:00:51.601986 waagent[1868]: 2025-08-13T00:00:51.588798Z INFO Daemon Daemon Copying ovf-env.xml Aug 13 00:00:51.642261 waagent[1868]: 2025-08-13T00:00:51.639337Z INFO Daemon Daemon Successfully mounted dvd Aug 13 00:00:51.667731 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 13 00:00:51.670275 waagent[1868]: 2025-08-13T00:00:51.670189Z INFO Daemon Daemon Detect protocol endpoint Aug 13 00:00:51.687601 waagent[1868]: 2025-08-13T00:00:51.671988Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 13 00:00:51.687601 waagent[1868]: 2025-08-13T00:00:51.673793Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Aug 13 00:00:51.687601 waagent[1868]: 2025-08-13T00:00:51.675027Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 13 00:00:51.687601 waagent[1868]: 2025-08-13T00:00:51.675667Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 13 00:00:51.687601 waagent[1868]: 2025-08-13T00:00:51.676510Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 13 00:00:51.737356 waagent[1868]: 2025-08-13T00:00:51.737293Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 13 00:00:51.745704 waagent[1868]: 2025-08-13T00:00:51.738772Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 13 00:00:51.745704 waagent[1868]: 2025-08-13T00:00:51.739555Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 13 00:00:51.817248 waagent[1868]: 2025-08-13T00:00:51.817050Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 13 00:00:51.827871 waagent[1868]: 2025-08-13T00:00:51.818513Z INFO Daemon Daemon Forcing an update of the goal state. Aug 13 00:00:51.827871 waagent[1868]: 2025-08-13T00:00:51.823108Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 00:00:51.841785 waagent[1868]: 2025-08-13T00:00:51.841726Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.175 Aug 13 00:00:51.845684 waagent[1868]: 2025-08-13T00:00:51.843442Z INFO Daemon Aug 13 00:00:51.845684 waagent[1868]: 2025-08-13T00:00:51.845639Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 5b6bf662-5772-43a4-836b-c026c2de3ba7 eTag: 11797628957602476083 source: Fabric] Aug 13 00:00:51.863913 waagent[1868]: 2025-08-13T00:00:51.846019Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Aug 13 00:00:51.863913 waagent[1868]: 2025-08-13T00:00:51.847341Z INFO Daemon Aug 13 00:00:51.863913 waagent[1868]: 2025-08-13T00:00:51.847937Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Aug 13 00:00:51.863913 waagent[1868]: 2025-08-13T00:00:51.852872Z INFO Daemon Daemon Downloading artifacts profile blob Aug 13 00:00:51.974182 login[1870]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 00:00:51.980497 systemd-logind[1721]: New session 2 of user core. Aug 13 00:00:51.985336 waagent[1868]: 2025-08-13T00:00:51.985252Z INFO Daemon Downloaded certificate {'thumbprint': '3B2F17DE6A6815CA7499E72072025EB8EC1DA27B', 'hasPrivateKey': True} Aug 13 00:00:51.992235 waagent[1868]: 2025-08-13T00:00:51.987143Z INFO Daemon Fetch goal state completed Aug 13 00:00:51.992311 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:00:51.998159 waagent[1868]: 2025-08-13T00:00:51.997471Z INFO Daemon Daemon Starting provisioning Aug 13 00:00:52.002100 waagent[1868]: 2025-08-13T00:00:52.000273Z INFO Daemon Daemon Handle ovf-env.xml. Aug 13 00:00:52.002100 waagent[1868]: 2025-08-13T00:00:52.001300Z INFO Daemon Daemon Set hostname [ci-4230.2.2-a-03132a7374] Aug 13 00:00:52.031809 waagent[1868]: 2025-08-13T00:00:52.031707Z INFO Daemon Daemon Publish hostname [ci-4230.2.2-a-03132a7374] Aug 13 00:00:52.033379 waagent[1868]: 2025-08-13T00:00:52.033314Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 13 00:00:52.033909 waagent[1868]: 2025-08-13T00:00:52.033870Z INFO Daemon Daemon Primary interface is [eth0] Aug 13 00:00:52.042738 systemd-networkd[1466]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
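Protocol detection above reduces to reaching the wireserver and pulling the goal state with an `x-ms-version` header carrying the negotiated protocol version (2012-11-30, per the version exchange logged just before). A minimal re-creation of that fetch, hedged the same way as the earlier probe (Azure-only; standard library):

```python
# Fetch the WireServer goal state the way waagent's protocol detection does.
# The x-ms-version value is the wire protocol version negotiated in this log.
import urllib.request

req = urllib.request.Request(
    "http://168.63.129.16/machine/?comp=goalstate",
    headers={"x-ms-version": "2012-11-30"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # XML goal state; incarnation 1 in this log
```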
Aug 13 00:00:52.042749 systemd-networkd[1466]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:00:52.042797 systemd-networkd[1466]: eth0: DHCP lease lost Aug 13 00:00:52.043888 waagent[1868]: 2025-08-13T00:00:52.043811Z INFO Daemon Daemon Create user account if not exists Aug 13 00:00:52.046468 waagent[1868]: 2025-08-13T00:00:52.045143Z INFO Daemon Daemon User core already exists, skip useradd Aug 13 00:00:52.046468 waagent[1868]: 2025-08-13T00:00:52.045975Z INFO Daemon Daemon Configure sudoer Aug 13 00:00:52.047210 waagent[1868]: 2025-08-13T00:00:52.047164Z INFO Daemon Daemon Configure sshd Aug 13 00:00:52.047982 waagent[1868]: 2025-08-13T00:00:52.047938Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Aug 13 00:00:52.048636 waagent[1868]: 2025-08-13T00:00:52.048599Z INFO Daemon Daemon Deploy ssh public key. Aug 13 00:00:52.105147 systemd-networkd[1466]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 13 00:01:00.431479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:01:00.436674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:01:00.540289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:00.544496 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:01:01.222024 kubelet[1957]: E0813 00:01:01.221965 1957 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:01:01.225931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:01:01.226169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:01:01.226607 systemd[1]: kubelet.service: Consumed 207ms CPU time, 108.7M memory peak. Aug 13 00:01:11.368607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:01:11.376656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:01:11.476013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:11.480234 (kubelet)[1972]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:01:11.863209 chronyd[1718]: Selected source PHC0 Aug 13 00:01:12.173156 kubelet[1972]: E0813 00:01:12.173026 1972 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:01:12.175802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:01:12.176012 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:01:12.176434 systemd[1]: kubelet.service: Consumed 144ms CPU time, 110.6M memory peak. 
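The kubelet exits with status 1 and is rescheduled every ten seconds above because /var/lib/kubelet/config.yaml does not exist yet; that file is written by `kubeadm init` or `kubeadm join`, so the crash loop is the normal state of a provisioned but not-yet-joined node. A sketch reproducing the kubelet's startup check (read-only; path taken from the error message):

```python
# Reproduce the file check behind the kubelet's "failed to load kubelet
# config file" error seen repeatedly in this log.
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
if cfg.is_file():
    print("kubelet config present,", cfg.stat().st_size, "bytes")
else:
    print("missing", cfg, "- kubelet will exit 1 until kubeadm writes it")
```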
Aug 13 00:01:22.132880 waagent[1868]: 2025-08-13T00:01:22.132808Z INFO Daemon Daemon Provisioning complete Aug 13 00:01:22.145871 waagent[1868]: 2025-08-13T00:01:22.145813Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 13 00:01:22.154520 waagent[1868]: 2025-08-13T00:01:22.147423Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Aug 13 00:01:22.154520 waagent[1868]: 2025-08-13T00:01:22.148489Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Aug 13 00:01:22.272073 waagent[1979]: 2025-08-13T00:01:22.271975Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Aug 13 00:01:22.272516 waagent[1979]: 2025-08-13T00:01:22.272156Z INFO ExtHandler ExtHandler OS: flatcar 4230.2.2 Aug 13 00:01:22.272516 waagent[1979]: 2025-08-13T00:01:22.272246Z INFO ExtHandler ExtHandler Python: 3.11.11 Aug 13 00:01:22.294913 waagent[1979]: 2025-08-13T00:01:22.294834Z INFO ExtHandler ExtHandler Distro: flatcar-4230.2.2; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 13 00:01:22.295141 waagent[1979]: 2025-08-13T00:01:22.295074Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:01:22.295250 waagent[1979]: 2025-08-13T00:01:22.295206Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:01:22.302836 waagent[1979]: 2025-08-13T00:01:22.302772Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 13 00:01:22.320044 waagent[1979]: 2025-08-13T00:01:22.319989Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.175 Aug 13 00:01:22.320532 waagent[1979]: 2025-08-13T00:01:22.320475Z INFO ExtHandler Aug 13 00:01:22.320614 waagent[1979]: 2025-08-13T00:01:22.320571Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 10dd59a1-97ce-4eda-a2a5-23dfd054acc9 eTag: 11797628957602476083 source: Fabric] Aug 13 00:01:22.320934 waagent[1979]: 2025-08-13T00:01:22.320881Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Aug 13 00:01:22.321530 waagent[1979]: 2025-08-13T00:01:22.321472Z INFO ExtHandler Aug 13 00:01:22.321603 waagent[1979]: 2025-08-13T00:01:22.321566Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 13 00:01:22.325613 waagent[1979]: 2025-08-13T00:01:22.325571Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 13 00:01:22.368400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:01:22.375531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 00:01:22.408492 waagent[1979]: 2025-08-13T00:01:22.408384Z INFO ExtHandler Downloaded certificate {'thumbprint': '3B2F17DE6A6815CA7499E72072025EB8EC1DA27B', 'hasPrivateKey': True} Aug 13 00:01:22.409327 waagent[1979]: 2025-08-13T00:01:22.409248Z INFO ExtHandler Fetch goal state completed Aug 13 00:01:22.427336 waagent[1979]: 2025-08-13T00:01:22.424935Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1979 Aug 13 00:01:22.427336 waagent[1979]: 2025-08-13T00:01:22.425403Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Aug 13 00:01:22.427948 waagent[1979]: 2025-08-13T00:01:22.427887Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4230.2.2', '', 'Flatcar Container Linux by Kinvolk'] Aug 13 00:01:22.428470 waagent[1979]: 2025-08-13T00:01:22.428414Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 13 00:01:22.586720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:01:22.590921 (kubelet)[1998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:01:23.156765 kubelet[1998]: E0813 00:01:23.156707 1998 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:01:23.159074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:01:23.159309 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:01:23.159730 systemd[1]: kubelet.service: Consumed 146ms CPU time, 108.7M memory peak. Aug 13 00:01:23.182489 waagent[1979]: 2025-08-13T00:01:23.182442Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 13 00:01:23.182726 waagent[1979]: 2025-08-13T00:01:23.182671Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 13 00:01:23.189390 waagent[1979]: 2025-08-13T00:01:23.189137Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 13 00:01:23.196029 systemd[1]: Reload requested from client PID 2007 ('systemctl') (unit waagent.service)... Aug 13 00:01:23.196045 systemd[1]: Reloading... Aug 13 00:01:23.278105 zram_generator::config[2042]: No configuration found. Aug 13 00:01:23.421404 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:01:23.534933 systemd[1]: Reloading finished in 338 ms. Aug 13 00:01:23.551122 waagent[1979]: 2025-08-13T00:01:23.549657Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Aug 13 00:01:23.559126 systemd[1]: Reload requested from client PID 2106 ('systemctl') (unit waagent.service)... Aug 13 00:01:23.559141 systemd[1]: Reloading... Aug 13 00:01:23.652187 zram_generator::config[2141]: No configuration found. 
Aug 13 00:01:23.775064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:01:23.886018 systemd[1]: Reloading finished in 326 ms. Aug 13 00:01:23.903732 waagent[1979]: 2025-08-13T00:01:23.902669Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Aug 13 00:01:23.903732 waagent[1979]: 2025-08-13T00:01:23.902856Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Aug 13 00:01:24.191709 waagent[1979]: 2025-08-13T00:01:24.191623Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Aug 13 00:01:24.192300 waagent[1979]: 2025-08-13T00:01:24.192233Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Aug 13 00:01:24.193051 waagent[1979]: 2025-08-13T00:01:24.192987Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 13 00:01:24.193588 waagent[1979]: 2025-08-13T00:01:24.193539Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:01:24.193674 waagent[1979]: 2025-08-13T00:01:24.193591Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 13 00:01:24.193907 waagent[1979]: 2025-08-13T00:01:24.193857Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 13 00:01:24.194012 waagent[1979]: 2025-08-13T00:01:24.193960Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 13 00:01:24.194437 waagent[1979]: 2025-08-13T00:01:24.194386Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 13 00:01:24.194671 waagent[1979]: 2025-08-13T00:01:24.194627Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 13 00:01:24.194735 waagent[1979]: 2025-08-13T00:01:24.194680Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:01:24.194811 waagent[1979]: 2025-08-13T00:01:24.194766Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 13 00:01:24.195237 waagent[1979]: 2025-08-13T00:01:24.195185Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 13 00:01:24.195657 waagent[1979]: 2025-08-13T00:01:24.195615Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 13 00:01:24.196022 waagent[1979]: 2025-08-13T00:01:24.195974Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Aug 13 00:01:24.196623 waagent[1979]: 2025-08-13T00:01:24.196567Z INFO EnvHandler ExtHandler Configure routes
Aug 13 00:01:24.197565 waagent[1979]: 2025-08-13T00:01:24.197523Z INFO EnvHandler ExtHandler Gateway:None
Aug 13 00:01:24.198484 waagent[1979]: 2025-08-13T00:01:24.198429Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Aug 13 00:01:24.198484 waagent[1979]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Aug 13 00:01:24.198484 waagent[1979]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Aug 13 00:01:24.198484 waagent[1979]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Aug 13 00:01:24.198484 waagent[1979]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:01:24.198484 waagent[1979]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:01:24.198484 waagent[1979]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Aug 13 00:01:24.198767 waagent[1979]: 2025-08-13T00:01:24.198484Z INFO EnvHandler ExtHandler Routes:None
Aug 13 00:01:24.203219 waagent[1979]: 2025-08-13T00:01:24.203164Z INFO ExtHandler ExtHandler
Aug 13 00:01:24.203313 waagent[1979]: 2025-08-13T00:01:24.203270Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 8ecf9448-e4b1-4313-b895-d72b69e092b4 correlation 4747ce4c-6d98-4fea-a65a-d2203ae094e9 created: 2025-08-12T23:59:45.872681Z]
Aug 13 00:01:24.204175 waagent[1979]: 2025-08-13T00:01:24.204127Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Aug 13 00:01:24.206283 waagent[1979]: 2025-08-13T00:01:24.206237Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms]
Aug 13 00:01:24.243451 waagent[1979]: 2025-08-13T00:01:24.243285Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A065222C-22EF-4A18-8556-FC0EA88A7B11;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Aug 13 00:01:24.272629 waagent[1979]: 2025-08-13T00:01:24.272543Z INFO MonitorHandler ExtHandler Network interfaces:
Aug 13 00:01:24.272629 waagent[1979]: Executing ['ip', '-a', '-o', 'link']:
Aug 13 00:01:24.272629 waagent[1979]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Aug 13 00:01:24.272629 waagent[1979]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:77:53:c3 brd ff:ff:ff:ff:ff:ff
Aug 13 00:01:24.272629 waagent[1979]: 3: enP13423s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:77:53:c3 brd ff:ff:ff:ff:ff:ff\ altname enP13423p0s2
Aug 13 00:01:24.272629 waagent[1979]: Executing ['ip', '-4', '-a', '-o', 'address']:
Aug 13 00:01:24.272629 waagent[1979]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Aug 13 00:01:24.272629 waagent[1979]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Aug 13 00:01:24.272629 waagent[1979]: Executing ['ip', '-6', '-a', '-o', 'address']:
Aug 13 00:01:24.272629 waagent[1979]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Aug 13 00:01:24.272629 waagent[1979]: 2: eth0 inet6 fe80::7eed:8dff:fe77:53c3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Aug 13 00:01:24.294409 waagent[1979]: 2025-08-13T00:01:24.294338Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Aug 13 00:01:24.294409 waagent[1979]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:01:24.294409 waagent[1979]: pkts bytes target prot opt in out source destination
Aug 13 00:01:24.294409 waagent[1979]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:01:24.294409 waagent[1979]: pkts bytes target prot opt in out source destination
Aug 13 00:01:24.294409 waagent[1979]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:01:24.294409 waagent[1979]: pkts bytes target prot opt in out source destination
Aug 13 00:01:24.294409 waagent[1979]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Aug 13 00:01:24.294409 waagent[1979]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Aug 13 00:01:24.294409 waagent[1979]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Aug 13 00:01:24.298934 waagent[1979]: 2025-08-13T00:01:24.298876Z INFO EnvHandler ExtHandler Current Firewall rules:
Aug 13 00:01:24.298934 waagent[1979]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:01:24.298934 waagent[1979]: pkts bytes target prot opt in out source destination
Aug 13 00:01:24.298934 waagent[1979]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:01:24.298934 waagent[1979]: pkts bytes target prot opt in out source destination
Aug 13 00:01:24.298934 waagent[1979]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Aug 13 00:01:24.298934 waagent[1979]: pkts bytes target prot opt in out source destination
Aug 13 00:01:24.298934 waagent[1979]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Aug 13 00:01:24.298934 waagent[1979]: 5 467 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Aug 13 00:01:24.298934 waagent[1979]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Aug 13 00:01:24.299434 waagent[1979]: 2025-08-13T00:01:24.299257Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Aug 13 00:01:31.234126 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Aug 13 00:01:32.789382 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 00:01:32.794378 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:49790.service - OpenSSH per-connection server daemon (10.200.16.10:49790).
Aug 13 00:01:33.368484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Aug 13 00:01:33.374327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:01:33.479069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:01:33.486406 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:01:33.526370 sshd[2234]: Accepted publickey for core from 10.200.16.10 port 49790 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:01:33.528111 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:33.532855 systemd-logind[1721]: New session 3 of user core.
Aug 13 00:01:33.538245 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 00:01:34.049336 update_engine[1722]: I20250813 00:01:34.049258 1722 update_attempter.cc:509] Updating boot flags...
Aug 13 00:01:34.076342 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:49794.service - OpenSSH per-connection server daemon (10.200.16.10:49794).
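The /proc/net/route dump above encodes IPv4 addresses as little-endian hex words. Decoding them recovers the addresses seen throughout this log: the default gateway 10.200.8.1 plus host routes to the wireserver (168.63.129.16) and IMDS (169.254.169.254). A small decoder over rows copied from the dump:

```python
# Decode the hex address columns of a /proc/net/route dump.
import socket, struct

def ipv4(hexword: str) -> str:
    # /proc/net/route stores addresses as little-endian 32-bit hex
    return socket.inet_ntoa(struct.pack("<I", int(hexword, 16)))

rows = [  # (Destination, Gateway, Mask) columns from the dump above
    ("00000000", "0108C80A", "00000000"),
    ("0008C80A", "00000000", "00FFFFFF"),
    ("10813FA8", "0108C80A", "FFFFFFFF"),
    ("FEA9FEA9", "0108C80A", "FFFFFFFF"),
]
for dest, gw, mask in rows:
    prefix = bin(int(mask, 16)).count("1")
    print(f"{ipv4(dest):>15}/{prefix} via {ipv4(gw)}")
```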
Aug 13 00:01:34.130294 kubelet[2244]: E0813 00:01:34.130198 2244 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:01:34.132530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:01:34.132749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:01:34.133243 systemd[1]: kubelet.service: Consumed 162ms CPU time, 110.3M memory peak. Aug 13 00:01:34.208120 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (2271) Aug 13 00:01:34.369106 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 45 scanned by (udev-worker) (2275) Aug 13 00:01:34.718341 sshd[2254]: Accepted publickey for core from 10.200.16.10 port 49794 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:01:34.719764 sshd-session[2254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:01:34.724150 systemd-logind[1721]: New session 4 of user core. Aug 13 00:01:34.734251 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:01:35.169618 sshd[2370]: Connection closed by 10.200.16.10 port 49794 Aug 13 00:01:35.170403 sshd-session[2254]: pam_unix(sshd:session): session closed for user core Aug 13 00:01:35.173362 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:49794.service: Deactivated successfully. Aug 13 00:01:35.175403 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:01:35.176893 systemd-logind[1721]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:01:35.177851 systemd-logind[1721]: Removed session 4. Aug 13 00:01:35.284654 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:49806.service - OpenSSH per-connection server daemon (10.200.16.10:49806). Aug 13 00:01:35.909717 sshd[2376]: Accepted publickey for core from 10.200.16.10 port 49806 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:01:35.911129 sshd-session[2376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:01:35.915377 systemd-logind[1721]: New session 5 of user core. Aug 13 00:01:35.921252 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:01:36.349152 sshd[2378]: Connection closed by 10.200.16.10 port 49806 Aug 13 00:01:36.350184 sshd-session[2376]: pam_unix(sshd:session): session closed for user core Aug 13 00:01:36.353048 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:49806.service: Deactivated successfully. Aug 13 00:01:36.355114 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:01:36.356712 systemd-logind[1721]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:01:36.357648 systemd-logind[1721]: Removed session 5. Aug 13 00:01:36.467416 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:49812.service - OpenSSH per-connection server daemon (10.200.16.10:49812). Aug 13 00:01:37.090925 sshd[2384]: Accepted publickey for core from 10.200.16.10 port 49812 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:01:37.092357 sshd-session[2384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:01:37.096610 systemd-logind[1721]: New session 6 of user core. 
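The firewall rules waagent installed a little earlier implement its wireserver lockdown: DNS to 168.63.129.16 stays open, root keeps full access via the owner match, and every other new connection is dropped. A reconstruction of equivalent iptables invocations, offered as an illustration only (waagent programs these rules itself, and the exact table and flags it uses may differ):

```python
# Equivalent iptables commands for the three OUTPUT rules logged above
# (reconstructed from the counters dump; illustrative, not waagent's code).
WIRESERVER = "168.63.129.16"
rules = [
    f"iptables -A OUTPUT -d {WIRESERVER} -p tcp --dport 53 -j ACCEPT",
    f"iptables -A OUTPUT -d {WIRESERVER} -p tcp -m owner --uid-owner 0 -j ACCEPT",
    f"iptables -A OUTPUT -d {WIRESERVER} -p tcp -m conntrack --ctstate INVALID,NEW -j DROP",
]
print("\n".join(rules))
```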
Aug 13 00:01:37.106257 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 00:01:37.533413 sshd[2386]: Connection closed by 10.200.16.10 port 49812
Aug 13 00:01:37.534195 sshd-session[2384]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:37.537085 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:49812.service: Deactivated successfully.
Aug 13 00:01:37.539171 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:01:37.540581 systemd-logind[1721]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:01:37.541665 systemd-logind[1721]: Removed session 6.
Aug 13 00:01:37.650595 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:49814.service - OpenSSH per-connection server daemon (10.200.16.10:49814).
Aug 13 00:01:38.276357 sshd[2392]: Accepted publickey for core from 10.200.16.10 port 49814 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:01:38.277770 sshd-session[2392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:38.282759 systemd-logind[1721]: New session 7 of user core.
Aug 13 00:01:38.285235 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 00:01:38.754111 sudo[2395]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 00:01:38.754482 sudo[2395]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:01:38.769544 sudo[2395]: pam_unix(sudo:session): session closed for user root
Aug 13 00:01:38.878696 sshd[2394]: Connection closed by 10.200.16.10 port 49814
Aug 13 00:01:38.879688 sshd-session[2392]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:38.882878 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:49814.service: Deactivated successfully.
Aug 13 00:01:38.884900 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 00:01:38.886412 systemd-logind[1721]: Session 7 logged out. Waiting for processes to exit.
Aug 13 00:01:38.887557 systemd-logind[1721]: Removed session 7.
Aug 13 00:01:38.993385 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:49818.service - OpenSSH per-connection server daemon (10.200.16.10:49818).
Aug 13 00:01:39.617904 sshd[2401]: Accepted publickey for core from 10.200.16.10 port 49818 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:01:39.619380 sshd-session[2401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:39.624307 systemd-logind[1721]: New session 8 of user core.
Aug 13 00:01:39.630251 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 00:01:39.962451 sudo[2405]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 00:01:39.962795 sudo[2405]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:01:39.966132 sudo[2405]: pam_unix(sudo:session): session closed for user root
Aug 13 00:01:39.971290 sudo[2404]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Aug 13 00:01:39.971625 sudo[2404]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:01:39.985490 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 00:01:40.011150 augenrules[2427]: No rules
Aug 13 00:01:40.012535 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:01:40.012785 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 00:01:40.014320 sudo[2404]: pam_unix(sudo:session): session closed for user root
Aug 13 00:01:40.114523 sshd[2403]: Connection closed by 10.200.16.10 port 49818
Aug 13 00:01:40.115234 sshd-session[2401]: pam_unix(sshd:session): session closed for user core
Aug 13 00:01:40.118187 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:49818.service: Deactivated successfully.
Aug 13 00:01:40.120199 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 00:01:40.121667 systemd-logind[1721]: Session 8 logged out. Waiting for processes to exit.
Aug 13 00:01:40.122664 systemd-logind[1721]: Removed session 8.
Aug 13 00:01:40.242392 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:39466.service - OpenSSH per-connection server daemon (10.200.16.10:39466).
Aug 13 00:01:40.866538 sshd[2436]: Accepted publickey for core from 10.200.16.10 port 39466 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:01:40.867932 sshd-session[2436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:01:40.872187 systemd-logind[1721]: New session 9 of user core.
Aug 13 00:01:40.876233 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 00:01:41.210166 sudo[2439]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:01:41.210521 sudo[2439]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:01:43.302441 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 00:01:43.303501 (dockerd)[2456]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 00:01:44.368481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Aug 13 00:01:44.373606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:01:44.564308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:01:44.568724 (kubelet)[2469]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:01:44.605000 kubelet[2469]: E0813 00:01:44.604901 2469 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:01:44.607179 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:01:44.607404 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:01:44.607825 systemd[1]: kubelet.service: Consumed 144ms CPU time, 109.8M memory peak.
Aug 13 00:01:46.138550 dockerd[2456]: time="2025-08-13T00:01:46.138480753Z" level=info msg="Starting up"
Aug 13 00:01:46.817319 systemd[1]: var-lib-docker-metacopy\x2dcheck1481626276-merged.mount: Deactivated successfully.
Aug 13 00:01:46.860853 dockerd[2456]: time="2025-08-13T00:01:46.860806313Z" level=info msg="Loading containers: start."
Aug 13 00:01:47.046430 kernel: Initializing XFRM netlink socket
Aug 13 00:01:47.155397 systemd-networkd[1466]: docker0: Link UP
Aug 13 00:01:47.226304 dockerd[2456]: time="2025-08-13T00:01:47.226256228Z" level=info msg="Loading containers: done."
Aug 13 00:01:47.253134 dockerd[2456]: time="2025-08-13T00:01:47.253020749Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:01:47.253330 dockerd[2456]: time="2025-08-13T00:01:47.253177751Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Aug 13 00:01:47.253330 dockerd[2456]: time="2025-08-13T00:01:47.253309852Z" level=info msg="Daemon has completed initialization"
Aug 13 00:01:47.357874 dockerd[2456]: time="2025-08-13T00:01:47.357717113Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:01:47.358404 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 00:01:48.251786 containerd[1757]: time="2025-08-13T00:01:48.251735090Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\""
Aug 13 00:01:49.065837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3848258890.mount: Deactivated successfully.
Aug 13 00:01:50.967638 containerd[1757]: time="2025-08-13T00:01:50.967572299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:50.973364 containerd[1757]: time="2025-08-13T00:01:50.973299346Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078245"
Aug 13 00:01:50.977848 containerd[1757]: time="2025-08-13T00:01:50.977812483Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:50.985343 containerd[1757]: time="2025-08-13T00:01:50.985284145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:50.986557 containerd[1757]: time="2025-08-13T00:01:50.986366454Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 2.734580264s"
Aug 13 00:01:50.986557 containerd[1757]: time="2025-08-13T00:01:50.986408154Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\""
Aug 13 00:01:50.987351 containerd[1757]: time="2025-08-13T00:01:50.987325762Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\""
Aug 13 00:01:52.787754 containerd[1757]: time="2025-08-13T00:01:52.787693282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:52.791625 containerd[1757]: time="2025-08-13T00:01:52.791564740Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019369"
Aug 13 00:01:52.796211 containerd[1757]: time="2025-08-13T00:01:52.796159209Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:52.803552 containerd[1757]: time="2025-08-13T00:01:52.803500620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:52.804809 containerd[1757]: time="2025-08-13T00:01:52.804628637Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 1.817213575s"
Aug 13 00:01:52.804809 containerd[1757]: time="2025-08-13T00:01:52.804664637Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\""
Aug 13 00:01:52.805562 containerd[1757]: time="2025-08-13T00:01:52.805296347Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\""
Aug 13 00:01:54.283494 containerd[1757]: time="2025-08-13T00:01:54.283430377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:54.288723 containerd[1757]: time="2025-08-13T00:01:54.288660955Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155021"
Aug 13 00:01:54.296157 containerd[1757]: time="2025-08-13T00:01:54.296078567Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:54.308110 containerd[1757]: time="2025-08-13T00:01:54.307992146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:54.309307 containerd[1757]: time="2025-08-13T00:01:54.309072362Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 1.503741515s"
Aug 13 00:01:54.309608 containerd[1757]: time="2025-08-13T00:01:54.309469768Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\""
Aug 13 00:01:54.310277 containerd[1757]: time="2025-08-13T00:01:54.310236680Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\""
Aug 13 00:01:54.618607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Aug 13 00:01:54.625323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:01:54.733896 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
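For each pull above, containerd logs both the bytes fetched ("stop pulling ... bytes read=N") and the wall time ("Pulled image ... in T"), so effective registry throughput can be read straight off the log. A quick check with the figures quoted above; the byte counts and durations are copied from the entries, while the MiB/s values are derived, not logged:

```python
# (bytes read, pull duration in seconds) as logged by containerd above.
pulls = {
    "kube-apiserver:v1.33.3": (30078245, 2.734580264),
    "kube-controller-manager:v1.33.3": (26019369, 1.817213575),
    "kube-scheduler:v1.33.3": (20155021, 1.503741515),
}

for image, (nbytes, seconds) in pulls.items():
    mib_s = nbytes / seconds / (1024 * 1024)
    print(f"{image}: {mib_s:.1f} MiB/s")  # roughly 10-14 MiB/s per image
```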
Aug 13 00:01:54.738007 (kubelet)[2726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:01:55.411061 kubelet[2726]: E0813 00:01:55.410959 2726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:01:55.413481 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:01:55.413705 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:01:55.414294 systemd[1]: kubelet.service: Consumed 143ms CPU time, 110.6M memory peak.
Aug 13 00:01:56.339439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66438651.mount: Deactivated successfully.
Aug 13 00:01:56.914910 containerd[1757]: time="2025-08-13T00:01:56.914842051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:56.917586 containerd[1757]: time="2025-08-13T00:01:56.917524691Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892674"
Aug 13 00:01:56.922295 containerd[1757]: time="2025-08-13T00:01:56.922238662Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:56.928892 containerd[1757]: time="2025-08-13T00:01:56.928838362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:01:56.929891 containerd[1757]: time="2025-08-13T00:01:56.929427570Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 2.61914479s"
Aug 13 00:01:56.929891 containerd[1757]: time="2025-08-13T00:01:56.929465471Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\""
Aug 13 00:01:56.930154 containerd[1757]: time="2025-08-13T00:01:56.930128781Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Aug 13 00:01:57.570075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1127424580.mount: Deactivated successfully.
Aug 13 00:02:05.618557 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Aug 13 00:02:05.629306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:02:09.812030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:02:09.816887 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:02:09.852772 kubelet[2781]: E0813 00:02:09.852668 2781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:02:09.855016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:02:09.855250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:02:09.855704 systemd[1]: kubelet.service: Consumed 146ms CPU time, 108.2M memory peak.
Aug 13 00:02:19.868555 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Aug 13 00:02:19.874325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:02:20.430783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:02:20.435185 (kubelet)[2804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:02:20.635748 kubelet[2804]: E0813 00:02:20.635637 2804 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:02:20.638168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:02:20.638390 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:02:20.638786 systemd[1]: kubelet.service: Consumed 141ms CPU time, 108.9M memory peak.
Aug 13 00:02:26.167159 containerd[1757]: time="2025-08-13T00:02:26.167085727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:26.172151 containerd[1757]: time="2025-08-13T00:02:26.172114789Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Aug 13 00:02:26.214584 containerd[1757]: time="2025-08-13T00:02:26.214516510Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:26.261255 containerd[1757]: time="2025-08-13T00:02:26.261155083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:26.262798 containerd[1757]: time="2025-08-13T00:02:26.262477399Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 29.332312818s"
Aug 13 00:02:26.262798 containerd[1757]: time="2025-08-13T00:02:26.262521600Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Aug 13 00:02:26.263532 containerd[1757]: time="2025-08-13T00:02:26.263495712Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 00:02:28.084770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003294605.mount: Deactivated successfully.
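The coredns pull is an outlier: roughly 21 MB took 29.3 s, versus 2-3 s for the larger control-plane images earlier. The logged duration can be cross-checked against containerd's own timestamps, since the PullImage request was logged at 00:01:56.930 and the "Pulled image" event at 00:02:26.262 (timestamps truncated to microseconds for strptime):

```python
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S.%f"
start = datetime.strptime("2025-08-13T00:01:56.930128", fmt)  # PullImage logged
end = datetime.strptime("2025-08-13T00:02:26.262477", fmt)    # Pulled image logged
print(f"{(end - start).total_seconds():.3f}s")  # ~29.332s, matching "in 29.332312818s"
```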
Aug 13 00:02:28.271404 containerd[1757]: time="2025-08-13T00:02:28.271342384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:28.551446 containerd[1757]: time="2025-08-13T00:02:28.551346324Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Aug 13 00:02:28.615985 containerd[1757]: time="2025-08-13T00:02:28.615908218Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:28.660712 containerd[1757]: time="2025-08-13T00:02:28.659784957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:28.661077 containerd[1757]: time="2025-08-13T00:02:28.661035672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.39750286s"
Aug 13 00:02:28.661235 containerd[1757]: time="2025-08-13T00:02:28.661214674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 00:02:28.661942 containerd[1757]: time="2025-08-13T00:02:28.661910583Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Aug 13 00:02:30.868539 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Aug 13 00:02:30.875313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:02:30.977132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:02:30.981292 (kubelet)[2835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:02:31.613939 kubelet[2835]: E0813 00:02:31.613853 2835 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:02:31.616395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:02:31.616617 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:02:31.617129 systemd[1]: kubelet.service: Consumed 148ms CPU time, 110.1M memory peak.
Aug 13 00:02:35.525160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122255844.mount: Deactivated successfully.
Aug 13 00:02:40.282063 containerd[1757]: time="2025-08-13T00:02:40.282001251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:40.286615 containerd[1757]: time="2025-08-13T00:02:40.286567280Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247183"
Aug 13 00:02:40.289983 containerd[1757]: time="2025-08-13T00:02:40.289930202Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:40.296557 containerd[1757]: time="2025-08-13T00:02:40.296506243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:02:40.297763 containerd[1757]: time="2025-08-13T00:02:40.297615650Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 11.635668267s"
Aug 13 00:02:40.297763 containerd[1757]: time="2025-08-13T00:02:40.297652651Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Aug 13 00:02:41.618647 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Aug 13 00:02:41.629793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:02:41.781261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:02:41.790399 (kubelet)[2926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:02:42.482335 kubelet[2926]: E0813 00:02:42.482278 2926 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:02:42.485613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:02:42.485825 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:02:42.487128 systemd[1]: kubelet.service: Consumed 181ms CPU time, 109.7M memory peak.
Aug 13 00:02:43.879529 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:02:43.879974 systemd[1]: kubelet.service: Consumed 181ms CPU time, 109.7M memory peak.
Aug 13 00:02:43.885391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:02:43.912048 systemd[1]: Reload requested from client PID 2940 ('systemctl') (unit session-9.scope)...
Aug 13 00:02:43.912079 systemd[1]: Reloading...
Aug 13 00:02:44.077117 zram_generator::config[2988]: No configuration found.
Aug 13 00:02:44.197806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 00:02:44.315231 systemd[1]: Reloading finished in 402 ms. Aug 13 00:02:44.376836 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:02:44.380634 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:02:44.380877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:02:44.380938 systemd[1]: kubelet.service: Consumed 131ms CPU time, 98.4M memory peak. Aug 13 00:02:44.386414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:02:48.646661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:02:48.652155 (kubelet)[3059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:02:48.690064 kubelet[3059]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:02:48.690064 kubelet[3059]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:02:48.690064 kubelet[3059]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:02:48.690542 kubelet[3059]: I0813 00:02:48.690121 3059 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:02:49.572261 kubelet[3059]: I0813 00:02:49.572214 3059 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:02:49.572261 kubelet[3059]: I0813 00:02:49.572245 3059 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:02:49.572523 kubelet[3059]: I0813 00:02:49.572502 3059 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:02:49.597164 kubelet[3059]: E0813 00:02:49.597122 3059 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:02:49.597984 kubelet[3059]: I0813 00:02:49.597820 3059 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:02:49.607171 kubelet[3059]: E0813 00:02:49.607120 3059 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:02:49.607171 kubelet[3059]: I0813 00:02:49.607170 3059 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:02:49.611306 kubelet[3059]: I0813 00:02:49.611282 3059 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:02:49.611574 kubelet[3059]: I0813 00:02:49.611546 3059 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:02:49.611749 kubelet[3059]: I0813 00:02:49.611571 3059 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-a-03132a7374","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:02:49.611904 kubelet[3059]: I0813 00:02:49.611756 3059 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:02:49.611904 kubelet[3059]: I0813 00:02:49.611770 3059 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:02:49.611979 kubelet[3059]: I0813 00:02:49.611924 3059 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:02:49.614995 kubelet[3059]: I0813 00:02:49.614873 3059 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:02:49.614995 kubelet[3059]: I0813 00:02:49.614901 3059 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:02:49.614995 kubelet[3059]: I0813 00:02:49.614928 3059 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:02:49.617023 kubelet[3059]: I0813 00:02:49.616730 3059 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:02:49.623259 kubelet[3059]: E0813 00:02:49.622951 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-a-03132a7374&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:02:49.623457 kubelet[3059]: E0813 00:02:49.623425 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Aug 13 00:02:49.624334 kubelet[3059]: I0813 00:02:49.623536 3059 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:02:49.624334 kubelet[3059]: I0813 00:02:49.624260 3059 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:02:49.625437 kubelet[3059]: W0813 00:02:49.625405 3059 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:02:49.628960 kubelet[3059]: I0813 00:02:49.628936 3059 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:02:49.629036 kubelet[3059]: I0813 00:02:49.628990 3059 server.go:1289] "Started kubelet" Aug 13 00:02:49.631564 kubelet[3059]: I0813 00:02:49.630781 3059 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:02:49.632746 kubelet[3059]: I0813 00:02:49.632701 3059 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:02:49.633784 kubelet[3059]: I0813 00:02:49.633759 3059 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:02:49.637834 kubelet[3059]: I0813 00:02:49.637776 3059 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:02:49.638051 kubelet[3059]: I0813 00:02:49.638029 3059 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:02:49.641673 kubelet[3059]: I0813 00:02:49.641653 3059 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:02:49.641927 kubelet[3059]: I0813 00:02:49.641904 3059 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:02:49.642115 kubelet[3059]: E0813 00:02:49.642079 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:49.644478 kubelet[3059]: E0813 00:02:49.643006 3059 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-a-03132a7374.185b2aa564238563 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-a-03132a7374,UID:ci-4230.2.2-a-03132a7374,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-a-03132a7374,},FirstTimestamp:2025-08-13 00:02:49.628960099 +0000 UTC m=+0.973061137,LastTimestamp:2025-08-13 00:02:49.628960099 +0000 UTC m=+0.973061137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-a-03132a7374,}" Aug 13 00:02:49.644606 kubelet[3059]: E0813 00:02:49.644585 3059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-03132a7374?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Aug 13 00:02:49.645983 kubelet[3059]: I0813 00:02:49.645079 3059 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:02:49.645983 
kubelet[3059]: I0813 00:02:49.645176 3059 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:02:49.646520 kubelet[3059]: I0813 00:02:49.646506 3059 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:02:49.646700 kubelet[3059]: I0813 00:02:49.646689 3059 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:02:49.647129 kubelet[3059]: E0813 00:02:49.647105 3059 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:02:49.647245 kubelet[3059]: I0813 00:02:49.647228 3059 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:02:49.651191 kubelet[3059]: E0813 00:02:49.651159 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:02:49.665396 kubelet[3059]: I0813 00:02:49.665372 3059 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:02:49.665396 kubelet[3059]: I0813 00:02:49.665393 3059 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:02:49.665542 kubelet[3059]: I0813 00:02:49.665413 3059 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:02:49.742942 kubelet[3059]: E0813 00:02:49.742804 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:49.843253 kubelet[3059]: E0813 00:02:49.843138 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:49.845631 kubelet[3059]: E0813 00:02:49.845583 3059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-03132a7374?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Aug 13 00:02:49.944053 kubelet[3059]: E0813 00:02:49.944000 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.044671 kubelet[3059]: E0813 00:02:50.044622 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.144970 kubelet[3059]: E0813 00:02:50.144923 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.245606 kubelet[3059]: E0813 00:02:50.245552 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.247022 kubelet[3059]: E0813 00:02:50.246985 3059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-03132a7374?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Aug 13 00:02:50.346529 kubelet[3059]: E0813 00:02:50.346476 3059 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.447497 kubelet[3059]: E0813 00:02:50.447350 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.548366 kubelet[3059]: E0813 00:02:50.548317 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.648755 kubelet[3059]: E0813 00:02:50.648713 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.699783 kubelet[3059]: E0813 00:02:50.699659 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-a-03132a7374&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:02:50.749547 kubelet[3059]: E0813 00:02:50.749496 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.850288 kubelet[3059]: E0813 00:02:50.850239 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.870123 kubelet[3059]: E0813 00:02:50.870065 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:02:50.951063 kubelet[3059]: E0813 00:02:50.950931 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:50.953404 kubelet[3059]: E0813 00:02:50.953373 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:02:51.047946 kubelet[3059]: E0813 00:02:51.047843 3059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-03132a7374?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="1.6s" Aug 13 00:02:51.051887 kubelet[3059]: E0813 00:02:51.051854 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:51.152539 kubelet[3059]: E0813 00:02:51.152485 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:51.253644 kubelet[3059]: E0813 00:02:51.253520 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:51.354583 kubelet[3059]: E0813 00:02:51.354530 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:51.455279 kubelet[3059]: E0813 00:02:51.455230 3059 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:51.555620 kubelet[3059]: E0813 00:02:51.555441 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:51.608581 kubelet[3059]: E0813 00:02:51.608476 3059 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-a-03132a7374.185b2aa564238563 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-a-03132a7374,UID:ci-4230.2.2-a-03132a7374,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-a-03132a7374,},FirstTimestamp:2025-08-13 00:02:49.628960099 +0000 UTC m=+0.973061137,LastTimestamp:2025-08-13 00:02:49.628960099 +0000 UTC m=+0.973061137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-a-03132a7374,}" Aug 13 00:02:51.656178 kubelet[3059]: E0813 00:02:51.656129 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:51.701502 kubelet[3059]: E0813 00:02:51.701456 3059 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:02:51.756677 kubelet[3059]: E0813 00:02:51.756568 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:51.857568 kubelet[3059]: E0813 00:02:51.857447 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:51.958343 kubelet[3059]: E0813 00:02:51.958294 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:52.059162 kubelet[3059]: E0813 00:02:52.059116 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:52.159660 kubelet[3059]: E0813 00:02:52.159603 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:52.260652 kubelet[3059]: E0813 00:02:52.260597 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:52.345764 kubelet[3059]: I0813 00:02:52.345700 3059 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:02:52.349416 kubelet[3059]: I0813 00:02:52.349189 3059 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:02:52.349416 kubelet[3059]: I0813 00:02:52.349215 3059 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:02:52.349416 kubelet[3059]: I0813 00:02:52.349255 3059 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:02:52.349416 kubelet[3059]: I0813 00:02:52.349267 3059 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:02:52.349416 kubelet[3059]: E0813 00:02:52.349315 3059 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:02:52.350880 kubelet[3059]: E0813 00:02:52.350632 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:02:52.360839 kubelet[3059]: E0813 00:02:52.360812 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:52.657329 kubelet[3059]: E0813 00:02:52.449590 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:02:52.657329 kubelet[3059]: E0813 00:02:52.461854 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:52.657329 kubelet[3059]: E0813 00:02:52.562745 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:52.657329 kubelet[3059]: E0813 00:02:52.588586 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:02:52.657329 kubelet[3059]: E0813 00:02:52.648871 3059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-03132a7374?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="3.2s" Aug 13 00:02:52.657329 kubelet[3059]: E0813 00:02:52.650009 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:02:52.663413 kubelet[3059]: E0813 00:02:52.663379 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:52.764302 kubelet[3059]: E0813 00:02:52.764255 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:52.765430 kubelet[3059]: I0813 00:02:52.765403 3059 policy_none.go:49] "None policy: Start" Aug 13 00:02:52.765530 kubelet[3059]: I0813 00:02:52.765442 3059 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:02:52.765530 kubelet[3059]: I0813 00:02:52.765461 3059 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:02:52.865152 kubelet[3059]: E0813 00:02:52.865106 3059 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:52.902992 kubelet[3059]: E0813 00:02:52.902949 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-a-03132a7374&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:02:52.966107 kubelet[3059]: E0813 00:02:52.965974 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:53.031780 kubelet[3059]: E0813 00:02:53.031736 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.050681 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.066885 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.167572 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.268411 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.369257 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.470079 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.571162 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.671992 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.758534 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.772952 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.850758 3059 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:02:53.951784 kubelet[3059]: E0813 00:02:53.874122 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 
13 00:02:53.975069 kubelet[3059]: E0813 00:02:53.975017 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:54.075919 kubelet[3059]: E0813 00:02:54.075864 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:54.176626 kubelet[3059]: E0813 00:02:54.176570 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:54.277625 kubelet[3059]: E0813 00:02:54.277492 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:54.366680 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:02:54.375115 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:02:54.378237 kubelet[3059]: E0813 00:02:54.378210 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:54.383414 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:02:54.385638 kubelet[3059]: E0813 00:02:54.384986 3059 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:02:54.385638 kubelet[3059]: I0813 00:02:54.385216 3059 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:02:54.385638 kubelet[3059]: I0813 00:02:54.385231 3059 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:02:54.385638 kubelet[3059]: I0813 00:02:54.385500 3059 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:02:54.387242 kubelet[3059]: E0813 00:02:54.387221 3059 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:02:54.387379 kubelet[3059]: E0813 00:02:54.387364 3059 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:02:54.487759 kubelet[3059]: I0813 00:02:54.487722 3059 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:54.488134 kubelet[3059]: E0813 00:02:54.488084 3059 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:54.689700 kubelet[3059]: I0813 00:02:54.689671 3059 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:54.690044 kubelet[3059]: E0813 00:02:54.690015 3059 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:55.092644 kubelet[3059]: I0813 00:02:55.092538 3059 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:55.093054 kubelet[3059]: E0813 00:02:55.092934 3059 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:55.589388 kubelet[3059]: I0813 00:02:55.589320 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af4835bcfb93f443150e24ef56a5bfa3-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-a-03132a7374\" (UID: \"af4835bcfb93f443150e24ef56a5bfa3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-03132a7374" Aug 13 00:02:55.589388 kubelet[3059]: I0813 00:02:55.589376 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af4835bcfb93f443150e24ef56a5bfa3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-a-03132a7374\" (UID: \"af4835bcfb93f443150e24ef56a5bfa3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-03132a7374" Aug 13 00:02:55.589388 kubelet[3059]: I0813 00:02:55.589408 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af4835bcfb93f443150e24ef56a5bfa3-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-a-03132a7374\" (UID: \"af4835bcfb93f443150e24ef56a5bfa3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-03132a7374" Aug 13 00:02:55.775536 kubelet[3059]: E0813 00:02:55.775493 3059 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:02:55.849984 kubelet[3059]: E0813 00:02:55.849866 3059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-a-03132a7374?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection 
refused" interval="6.4s" Aug 13 00:02:55.894747 kubelet[3059]: I0813 00:02:55.894704 3059 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:55.895074 kubelet[3059]: E0813 00:02:55.895046 3059 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:56.514061 kubelet[3059]: E0813 00:02:56.513996 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:02:56.770983 kubelet[3059]: E0813 00:02:56.770862 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:02:57.496970 kubelet[3059]: I0813 00:02:57.496935 3059 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:57.497322 kubelet[3059]: E0813 00:02:57.497291 3059 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:57.964290 kubelet[3059]: E0813 00:02:57.964241 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:02:58.861171 kubelet[3059]: E0813 00:02:58.612439 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-a-03132a7374&limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:02:58.908607 systemd[1]: Created slice kubepods-burstable-podaf4835bcfb93f443150e24ef56a5bfa3.slice - libcontainer container kubepods-burstable-podaf4835bcfb93f443150e24ef56a5bfa3.slice. 
Aug 13 00:02:58.918758 kubelet[3059]: E0813 00:02:58.918728 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:58.919687 containerd[1757]: time="2025-08-13T00:02:58.919646527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-a-03132a7374,Uid:af4835bcfb93f443150e24ef56a5bfa3,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:59.013354 kubelet[3059]: I0813 00:02:59.013290 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/348ba8ea1ed3e708d20561bf9d7a5681-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-a-03132a7374\" (UID: \"348ba8ea1ed3e708d20561bf9d7a5681\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:02:59.013354 kubelet[3059]: I0813 00:02:59.013354 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/348ba8ea1ed3e708d20561bf9d7a5681-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-03132a7374\" (UID: \"348ba8ea1ed3e708d20561bf9d7a5681\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:02:59.013797 kubelet[3059]: I0813 00:02:59.013382 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/348ba8ea1ed3e708d20561bf9d7a5681-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-a-03132a7374\" (UID: \"348ba8ea1ed3e708d20561bf9d7a5681\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:02:59.013797 kubelet[3059]: I0813 00:02:59.013401 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/348ba8ea1ed3e708d20561bf9d7a5681-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-03132a7374\" (UID: \"348ba8ea1ed3e708d20561bf9d7a5681\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:02:59.013797 kubelet[3059]: I0813 00:02:59.013425 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/348ba8ea1ed3e708d20561bf9d7a5681-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-a-03132a7374\" (UID: \"348ba8ea1ed3e708d20561bf9d7a5681\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:02:59.166728 systemd[1]: Created slice kubepods-burstable-pod348ba8ea1ed3e708d20561bf9d7a5681.slice - libcontainer container kubepods-burstable-pod348ba8ea1ed3e708d20561bf9d7a5681.slice. 
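Every kubelet entry here carries the klog header: a severity letter (I for info, E for error), the date as MMDD, the wall-clock time, the emitting PID (3059 for this kubelet instance), and the source file and line, followed by a structured message. A small parser for that header, with the pattern inferred from the lines above rather than taken from klog itself:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches headers such as `E0813 00:02:58.918728 3059 kubelet.go:3305]`.
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+):(\d+)\]`)

    func main() {
        line := `E0813 00:02:58.918728 3059 kubelet.go:3305] "No need to create a mirror pod..."`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog-style line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }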
Aug 13 00:02:59.169377 kubelet[3059]: E0813 00:02:59.168898 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:59.169978 containerd[1757]: time="2025-08-13T00:02:59.169675727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-a-03132a7374,Uid:348ba8ea1ed3e708d20561bf9d7a5681,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:59.176800 systemd[1]: Created slice kubepods-burstable-podd07663613339dfdead6ab23d9a6ff817.slice - libcontainer container kubepods-burstable-podd07663613339dfdead6ab23d9a6ff817.slice. Aug 13 00:02:59.179058 kubelet[3059]: E0813 00:02:59.178881 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:02:59.315789 kubelet[3059]: I0813 00:02:59.315745 3059 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d07663613339dfdead6ab23d9a6ff817-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-a-03132a7374\" (UID: \"d07663613339dfdead6ab23d9a6ff817\") " pod="kube-system/kube-scheduler-ci-4230.2.2-a-03132a7374" Aug 13 00:02:59.480129 containerd[1757]: time="2025-08-13T00:02:59.479987278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-a-03132a7374,Uid:d07663613339dfdead6ab23d9a6ff817,Namespace:kube-system,Attempt:0,}" Aug 13 00:02:59.708260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2244146001.mount: Deactivated successfully. Aug 13 00:02:59.777169 containerd[1757]: time="2025-08-13T00:02:59.776756183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:02:59.797408 containerd[1757]: time="2025-08-13T00:02:59.797233104Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Aug 13 00:02:59.802643 containerd[1757]: time="2025-08-13T00:02:59.802599662Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:02:59.810548 containerd[1757]: time="2025-08-13T00:02:59.810503348Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:02:59.824711 containerd[1757]: time="2025-08-13T00:02:59.824390998Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:02:59.833326 containerd[1757]: time="2025-08-13T00:02:59.833286194Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:02:59.838633 containerd[1757]: time="2025-08-13T00:02:59.838593551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 
00:02:59.839395 containerd[1757]: time="2025-08-13T00:02:59.839361559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 669.585731ms" Aug 13 00:02:59.841963 containerd[1757]: time="2025-08-13T00:02:59.841911387Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:02:59.849198 containerd[1757]: time="2025-08-13T00:02:59.849163765Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 929.408937ms" Aug 13 00:02:59.887488 containerd[1757]: time="2025-08-13T00:02:59.887440678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 407.323898ms" Aug 13 00:03:00.613834 kubelet[3059]: E0813 00:03:00.613789 3059 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:03:00.698934 kubelet[3059]: I0813 00:03:00.698896 3059 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:00.699287 kubelet[3059]: E0813 00:03:00.699255 3059 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:01.016775 containerd[1757]: time="2025-08-13T00:03:01.012417127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:01.016775 containerd[1757]: time="2025-08-13T00:03:01.016497871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:01.016775 containerd[1757]: time="2025-08-13T00:03:01.016517271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:01.016775 containerd[1757]: time="2025-08-13T00:03:01.016634973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:01.019127 containerd[1757]: time="2025-08-13T00:03:01.018866797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:01.019127 containerd[1757]: time="2025-08-13T00:03:01.018918497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:01.019127 containerd[1757]: time="2025-08-13T00:03:01.018939097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:01.023759 containerd[1757]: time="2025-08-13T00:03:01.023201444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:01.038303 containerd[1757]: time="2025-08-13T00:03:01.035054372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:01.038303 containerd[1757]: time="2025-08-13T00:03:01.035145472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:01.038303 containerd[1757]: time="2025-08-13T00:03:01.035166473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:01.038303 containerd[1757]: time="2025-08-13T00:03:01.035287674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:01.060551 systemd[1]: Started cri-containerd-67f6d7893ab1c32fd8792fee2a1929b3a0e688e26ac3382804587b5493f164c6.scope - libcontainer container 67f6d7893ab1c32fd8792fee2a1929b3a0e688e26ac3382804587b5493f164c6. Aug 13 00:03:01.072715 systemd[1]: Started cri-containerd-5bd84e872ade2d8cec09b562d49abd775ff364affd7d895297eeeeeb95f30f93.scope - libcontainer container 5bd84e872ade2d8cec09b562d49abd775ff364affd7d895297eeeeeb95f30f93. Aug 13 00:03:01.084943 systemd[1]: Started cri-containerd-e3667a1ccfae985da38dcf4e53067940a0b4d4c2563716085ff8470faa81b956.scope - libcontainer container e3667a1ccfae985da38dcf4e53067940a0b4d4c2563716085ff8470faa81b956. 
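The three "Pulled image registry.k8s.io/pause:3.8" records report an identical blob size of 311286 bytes but pull times of 669.585731ms, 929.408937ms and 407.323898ms, and only the first "stop pulling" event shows nonzero bytes read (312064); the later "bytes read=0" events suggest the concurrent sandbox pulls were satisfied from the local content store. A quick check of what those figures imply, with the values copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const size = 311286 // bytes, the repo-digest size reported above
        for _, s := range []string{"669.585731ms", "929.408937ms", "407.323898ms"} {
            d, err := time.ParseDuration(s)
            if err != nil {
                panic(err)
            }
            fmt.Printf("reported pull in %-13v => ~%.0f KiB/s effective\n",
                d, float64(size)/d.Seconds()/1024)
        }
    }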
Aug 13 00:03:01.138683 containerd[1757]: time="2025-08-13T00:03:01.138571789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-a-03132a7374,Uid:348ba8ea1ed3e708d20561bf9d7a5681,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bd84e872ade2d8cec09b562d49abd775ff364affd7d895297eeeeeb95f30f93\"" Aug 13 00:03:01.157612 containerd[1757]: time="2025-08-13T00:03:01.157349292Z" level=info msg="CreateContainer within sandbox \"5bd84e872ade2d8cec09b562d49abd775ff364affd7d895297eeeeeb95f30f93\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:03:01.162997 containerd[1757]: time="2025-08-13T00:03:01.161933242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-a-03132a7374,Uid:d07663613339dfdead6ab23d9a6ff817,Namespace:kube-system,Attempt:0,} returns sandbox id \"67f6d7893ab1c32fd8792fee2a1929b3a0e688e26ac3382804587b5493f164c6\"" Aug 13 00:03:01.167427 containerd[1757]: time="2025-08-13T00:03:01.167328400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-a-03132a7374,Uid:af4835bcfb93f443150e24ef56a5bfa3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3667a1ccfae985da38dcf4e53067940a0b4d4c2563716085ff8470faa81b956\"" Aug 13 00:03:01.172913 containerd[1757]: time="2025-08-13T00:03:01.172789959Z" level=info msg="CreateContainer within sandbox \"67f6d7893ab1c32fd8792fee2a1929b3a0e688e26ac3382804587b5493f164c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:03:01.182702 containerd[1757]: time="2025-08-13T00:03:01.182682266Z" level=info msg="CreateContainer within sandbox \"e3667a1ccfae985da38dcf4e53067940a0b4d4c2563716085ff8470faa81b956\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:03:01.221412 containerd[1757]: time="2025-08-13T00:03:01.221373884Z" level=info msg="CreateContainer within sandbox \"5bd84e872ade2d8cec09b562d49abd775ff364affd7d895297eeeeeb95f30f93\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"636de5c54c6807a3fa0c78639d91ec1f294a985350313a58b946ccdfd84e42cc\"" Aug 13 00:03:01.222211 containerd[1757]: time="2025-08-13T00:03:01.222178592Z" level=info msg="StartContainer for \"636de5c54c6807a3fa0c78639d91ec1f294a985350313a58b946ccdfd84e42cc\"" Aug 13 00:03:01.252241 systemd[1]: Started cri-containerd-636de5c54c6807a3fa0c78639d91ec1f294a985350313a58b946ccdfd84e42cc.scope - libcontainer container 636de5c54c6807a3fa0c78639d91ec1f294a985350313a58b946ccdfd84e42cc. 
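For each static pod the same CRI ordering plays out: RunPodSandbox returns a sandbox ID (5bd84e87… for the controller manager above), CreateContainer is issued within that sandbox and returns a container ID (636de5c5…), and StartContainer then runs it. A sketch of that call order against a deliberately minimal, hand-rolled interface; the real client lives in k8s.io/cri-api and its method signatures are richer than this:

    package main

    import "fmt"

    // RuntimeService is a simplified stand-in for the CRI runtime client;
    // the method shapes are illustrative, not the real API.
    type RuntimeService interface {
        RunPodSandbox(podName string) (sandboxID string, err error)
        CreateContainer(sandboxID, containerName string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    // startStaticPod mirrors the sandbox -> create -> start sequence in the log.
    func startStaticPod(rt RuntimeService, pod, container string) error {
        sandboxID, err := rt.RunPodSandbox(pod)
        if err != nil {
            return fmt.Errorf("RunPodSandbox %s: %w", pod, err)
        }
        containerID, err := rt.CreateContainer(sandboxID, container)
        if err != nil {
            return fmt.Errorf("CreateContainer %s: %w", container, err)
        }
        return rt.StartContainer(containerID)
    }

    func main() { fmt.Println("startStaticPod encodes the call order shown in the log") }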
Aug 13 00:03:01.283301 containerd[1757]: time="2025-08-13T00:03:01.281948238Z" level=info msg="CreateContainer within sandbox \"67f6d7893ab1c32fd8792fee2a1929b3a0e688e26ac3382804587b5493f164c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"466536f1312d02dcc20b1c1644c77c320ecf1dbfd6f087901e050ad4d62dc387\"" Aug 13 00:03:01.283301 containerd[1757]: time="2025-08-13T00:03:01.283224551Z" level=info msg="StartContainer for \"466536f1312d02dcc20b1c1644c77c320ecf1dbfd6f087901e050ad4d62dc387\"" Aug 13 00:03:01.298071 containerd[1757]: time="2025-08-13T00:03:01.298017011Z" level=info msg="CreateContainer within sandbox \"e3667a1ccfae985da38dcf4e53067940a0b4d4c2563716085ff8470faa81b956\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"776f3494558cefe7f5911b59836036c80d9795f28db96496c461cefd0f35d2f9\"" Aug 13 00:03:01.299433 containerd[1757]: time="2025-08-13T00:03:01.299356826Z" level=info msg="StartContainer for \"776f3494558cefe7f5911b59836036c80d9795f28db96496c461cefd0f35d2f9\"" Aug 13 00:03:01.318125 containerd[1757]: time="2025-08-13T00:03:01.315556501Z" level=info msg="StartContainer for \"636de5c54c6807a3fa0c78639d91ec1f294a985350313a58b946ccdfd84e42cc\" returns successfully" Aug 13 00:03:01.355318 systemd[1]: Started cri-containerd-466536f1312d02dcc20b1c1644c77c320ecf1dbfd6f087901e050ad4d62dc387.scope - libcontainer container 466536f1312d02dcc20b1c1644c77c320ecf1dbfd6f087901e050ad4d62dc387. Aug 13 00:03:01.357465 systemd[1]: Started cri-containerd-776f3494558cefe7f5911b59836036c80d9795f28db96496c461cefd0f35d2f9.scope - libcontainer container 776f3494558cefe7f5911b59836036c80d9795f28db96496c461cefd0f35d2f9. Aug 13 00:03:01.382331 kubelet[3059]: E0813 00:03:01.382292 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:01.482205 containerd[1757]: time="2025-08-13T00:03:01.482157500Z" level=info msg="StartContainer for \"466536f1312d02dcc20b1c1644c77c320ecf1dbfd6f087901e050ad4d62dc387\" returns successfully" Aug 13 00:03:01.483513 containerd[1757]: time="2025-08-13T00:03:01.482329402Z" level=info msg="StartContainer for \"776f3494558cefe7f5911b59836036c80d9795f28db96496c461cefd0f35d2f9\" returns successfully" Aug 13 00:03:02.390119 kubelet[3059]: E0813 00:03:02.390015 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:02.393602 kubelet[3059]: E0813 00:03:02.393359 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:03.396629 kubelet[3059]: E0813 00:03:03.396419 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:03.397415 kubelet[3059]: E0813 00:03:03.397256 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:04.050170 kubelet[3059]: E0813 00:03:04.050124 3059 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-a-03132a7374\" not found" 
node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:04.388283 kubelet[3059]: E0813 00:03:04.388133 3059 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:03:04.396834 kubelet[3059]: E0813 00:03:04.396801 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:04.398328 kubelet[3059]: E0813 00:03:04.398164 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:04.416825 kubelet[3059]: E0813 00:03:04.416777 3059 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.2-a-03132a7374" not found Aug 13 00:03:04.768945 kubelet[3059]: E0813 00:03:04.768819 3059 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.2-a-03132a7374" not found Aug 13 00:03:05.213243 kubelet[3059]: E0813 00:03:05.213203 3059 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.2-a-03132a7374" not found Aug 13 00:03:06.129901 kubelet[3059]: E0813 00:03:06.129860 3059 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4230.2.2-a-03132a7374" not found Aug 13 00:03:07.102238 kubelet[3059]: I0813 00:03:07.102198 3059 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:07.108039 kubelet[3059]: I0813 00:03:07.107802 3059 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:07.108039 kubelet[3059]: E0813 00:03:07.107836 3059 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.2-a-03132a7374\": node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:03:07.115677 kubelet[3059]: E0813 00:03:07.115640 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:03:07.216778 kubelet[3059]: E0813 00:03:07.216715 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:03:07.316897 kubelet[3059]: E0813 00:03:07.316832 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:03:07.417839 kubelet[3059]: E0813 00:03:07.417709 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:03:07.486580 systemd[1]: Reload requested from client PID 3344 ('systemctl') (unit session-9.scope)... Aug 13 00:03:07.486598 systemd[1]: Reloading... Aug 13 00:03:07.518716 kubelet[3059]: E0813 00:03:07.518679 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:03:07.615121 zram_generator::config[3391]: No configuration found. 
Aug 13 00:03:07.619517 kubelet[3059]: E0813 00:03:07.619462 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:03:07.720352 kubelet[3059]: E0813 00:03:07.719990 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:03:07.753423 kubelet[3059]: E0813 00:03:07.752031 3059 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-a-03132a7374\" not found" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:07.753326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:03:07.821012 kubelet[3059]: E0813 00:03:07.820972 3059 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-a-03132a7374\" not found" Aug 13 00:03:07.887196 systemd[1]: Reloading finished in 400 ms. Aug 13 00:03:07.918590 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:03:07.930933 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:03:07.931207 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:03:07.931274 systemd[1]: kubelet.service: Consumed 1.420s CPU time, 131.5M memory peak. Aug 13 00:03:07.938354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:03:08.224379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:03:08.233464 (kubelet)[3458]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:03:08.278155 kubelet[3458]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:03:08.278155 kubelet[3458]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:03:08.278155 kubelet[3458]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
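The restarted kubelet (PID 3458) immediately warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated; for the first and last of those, the stated remedy is to move the value into the file passed via --config. A hedged sketch of such a config subset, emitted as JSON from Go: containerRuntimeEndpoint and volumePluginDir are real KubeletConfiguration field names, but the struct below is a hand-written simplification of the API type and the values are assumptions, not taken from this host:

    package main

    import (
        "encoding/json"
        "os"
    )

    // kubeletConfigSubset models only the two deprecated flags that have
    // config-file counterparts; the real KubeletConfiguration has many more fields.
    type kubeletConfigSubset struct {
        Kind                     string `json:"kind"`
        APIVersion               string `json:"apiVersion"`
        ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
        VolumePluginDir          string `json:"volumePluginDir"`
    }

    func main() {
        cfg := kubeletConfigSubset{
            Kind:                     "KubeletConfiguration",
            APIVersion:               "kubelet.config.k8s.io/v1beta1",
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock", // assumed socket path
            VolumePluginDir:          "/var/lib/kubelet/volumeplugins",         // assumed directory
        }
        enc := json.NewEncoder(os.Stdout)
        enc.SetIndent("", "  ")
        _ = enc.Encode(cfg)
    }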
Aug 13 00:03:08.278155 kubelet[3458]: I0813 00:03:08.277361 3458 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:03:08.284251 kubelet[3458]: I0813 00:03:08.284215 3458 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:03:08.284251 kubelet[3458]: I0813 00:03:08.284244 3458 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:03:08.284597 kubelet[3458]: I0813 00:03:08.284481 3458 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:03:08.285622 kubelet[3458]: I0813 00:03:08.285597 3458 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 00:03:08.288423 kubelet[3458]: I0813 00:03:08.287794 3458 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:03:08.293636 kubelet[3458]: E0813 00:03:08.293595 3458 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:03:08.293747 kubelet[3458]: I0813 00:03:08.293735 3458 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:03:08.298344 kubelet[3458]: I0813 00:03:08.298315 3458 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:03:08.298574 kubelet[3458]: I0813 00:03:08.298551 3458 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:03:08.298728 kubelet[3458]: I0813 00:03:08.298574 3458 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-a-03132a7374","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:03:08.298857 kubelet[3458]: I0813 00:03:08.298738 3458 topology_manager.go:138] 
"Creating topology manager with none policy" Aug 13 00:03:08.298857 kubelet[3458]: I0813 00:03:08.298752 3458 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:03:08.298857 kubelet[3458]: I0813 00:03:08.298801 3458 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:03:08.299192 kubelet[3458]: I0813 00:03:08.298955 3458 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:03:08.299192 kubelet[3458]: I0813 00:03:08.298992 3458 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:03:08.299192 kubelet[3458]: I0813 00:03:08.299018 3458 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:03:08.299192 kubelet[3458]: I0813 00:03:08.299036 3458 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:03:08.301471 kubelet[3458]: I0813 00:03:08.300887 3458 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Aug 13 00:03:08.302637 kubelet[3458]: I0813 00:03:08.302040 3458 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:03:08.310714 kubelet[3458]: I0813 00:03:08.310700 3458 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:03:08.311999 kubelet[3458]: I0813 00:03:08.310904 3458 server.go:1289] "Started kubelet" Aug 13 00:03:08.315218 kubelet[3458]: I0813 00:03:08.315202 3458 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:03:08.326307 kubelet[3458]: I0813 00:03:08.326174 3458 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:03:08.328970 kubelet[3458]: I0813 00:03:08.328930 3458 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:03:08.329542 kubelet[3458]: I0813 00:03:08.329462 3458 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:03:08.331711 kubelet[3458]: I0813 00:03:08.331682 3458 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:03:08.333469 kubelet[3458]: I0813 00:03:08.333428 3458 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:03:08.333700 kubelet[3458]: I0813 00:03:08.333687 3458 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:03:08.333918 kubelet[3458]: I0813 00:03:08.333907 3458 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:03:08.337594 kubelet[3458]: I0813 00:03:08.337570 3458 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:03:08.338611 kubelet[3458]: I0813 00:03:08.337994 3458 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:03:08.341569 kubelet[3458]: E0813 00:03:08.341550 3458 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:03:08.341843 kubelet[3458]: I0813 00:03:08.341757 3458 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Aug 13 00:03:08.342632 kubelet[3458]: I0813 00:03:08.341887 3458 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:03:08.342843 kubelet[3458]: I0813 00:03:08.342828 3458 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:03:08.346359 kubelet[3458]: I0813 00:03:08.344798 3458 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:03:08.346453 kubelet[3458]: I0813 00:03:08.346438 3458 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:03:08.347392 kubelet[3458]: I0813 00:03:08.347151 3458 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:03:08.347392 kubelet[3458]: I0813 00:03:08.347173 3458 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:03:08.347392 kubelet[3458]: E0813 00:03:08.347261 3458 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:03:08.390918 kubelet[3458]: I0813 00:03:08.390893 3458 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:03:08.391250 kubelet[3458]: I0813 00:03:08.391068 3458 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:03:08.391250 kubelet[3458]: I0813 00:03:08.391250 3458 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:03:08.391423 kubelet[3458]: I0813 00:03:08.391400 3458 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:03:08.391473 kubelet[3458]: I0813 00:03:08.391416 3458 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:03:08.391473 kubelet[3458]: I0813 00:03:08.391437 3458 policy_none.go:49] "None policy: Start" Aug 13 00:03:08.391473 kubelet[3458]: I0813 00:03:08.391460 3458 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:03:08.391473 kubelet[3458]: I0813 00:03:08.391473 3458 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:03:08.391691 kubelet[3458]: I0813 00:03:08.391593 3458 state_mem.go:75] "Updated machine memory state" Aug 13 00:03:08.395487 kubelet[3458]: E0813 00:03:08.395460 3458 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:03:08.396566 kubelet[3458]: I0813 00:03:08.396454 3458 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:03:08.396566 kubelet[3458]: I0813 00:03:08.396471 3458 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:03:08.396767 kubelet[3458]: I0813 00:03:08.396708 3458 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:03:08.399117 kubelet[3458]: E0813 00:03:08.399005 3458 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:03:08.448069 kubelet[3458]: I0813 00:03:08.448031 3458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.448393 kubelet[3458]: I0813 00:03:08.448370 3458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.722594 kubelet[3458]: I0813 00:03:08.722545 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af4835bcfb93f443150e24ef56a5bfa3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-a-03132a7374\" (UID: \"af4835bcfb93f443150e24ef56a5bfa3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.722594 kubelet[3458]: I0813 00:03:08.722597 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/348ba8ea1ed3e708d20561bf9d7a5681-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-a-03132a7374\" (UID: \"348ba8ea1ed3e708d20561bf9d7a5681\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.722801 kubelet[3458]: I0813 00:03:08.722624 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/348ba8ea1ed3e708d20561bf9d7a5681-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-03132a7374\" (UID: \"348ba8ea1ed3e708d20561bf9d7a5681\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.722801 kubelet[3458]: I0813 00:03:08.722644 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/348ba8ea1ed3e708d20561bf9d7a5681-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-a-03132a7374\" (UID: \"348ba8ea1ed3e708d20561bf9d7a5681\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.722801 kubelet[3458]: I0813 00:03:08.722664 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/348ba8ea1ed3e708d20561bf9d7a5681-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-a-03132a7374\" (UID: \"348ba8ea1ed3e708d20561bf9d7a5681\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.722801 kubelet[3458]: I0813 00:03:08.722684 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/348ba8ea1ed3e708d20561bf9d7a5681-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-a-03132a7374\" (UID: \"348ba8ea1ed3e708d20561bf9d7a5681\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.722801 kubelet[3458]: I0813 00:03:08.722705 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d07663613339dfdead6ab23d9a6ff817-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-a-03132a7374\" (UID: \"d07663613339dfdead6ab23d9a6ff817\") " pod="kube-system/kube-scheduler-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.722982 kubelet[3458]: I0813 00:03:08.722723 3458 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af4835bcfb93f443150e24ef56a5bfa3-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-a-03132a7374\" (UID: \"af4835bcfb93f443150e24ef56a5bfa3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.722982 kubelet[3458]: I0813 00:03:08.722743 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af4835bcfb93f443150e24ef56a5bfa3-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-a-03132a7374\" (UID: \"af4835bcfb93f443150e24ef56a5bfa3\") " pod="kube-system/kube-apiserver-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.722982 kubelet[3458]: I0813 00:03:08.448031 3458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.728118 kubelet[3458]: I0813 00:03:08.727237 3458 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.759811 kubelet[3458]: I0813 00:03:08.759754 3458 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 00:03:08.761576 kubelet[3458]: I0813 00:03:08.761052 3458 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 00:03:08.761576 kubelet[3458]: I0813 00:03:08.761338 3458 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 00:03:08.762830 kubelet[3458]: I0813 00:03:08.762477 3458 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.762830 kubelet[3458]: I0813 00:03:08.762573 3458 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-a-03132a7374" Aug 13 00:03:08.832692 sudo[3494]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Aug 13 00:03:08.833064 sudo[3494]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Aug 13 00:03:09.305926 kubelet[3458]: I0813 00:03:09.305525 3458 apiserver.go:52] "Watching apiserver" Aug 13 00:03:09.334880 kubelet[3458]: I0813 00:03:09.334753 3458 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:03:09.359920 sudo[3494]: pam_unix(sudo:session): session closed for user root Aug 13 00:03:09.375731 kubelet[3458]: I0813 00:03:09.375496 3458 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-a-03132a7374" Aug 13 00:03:09.389851 kubelet[3458]: I0813 00:03:09.389812 3458 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Aug 13 00:03:09.390011 kubelet[3458]: E0813 00:03:09.389894 3458 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-a-03132a7374\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-a-03132a7374" Aug 13 00:03:09.416445 kubelet[3458]: I0813 00:03:09.416256 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-a-03132a7374" 
podStartSLOduration=1.416236941 podStartE2EDuration="1.416236941s" podCreationTimestamp="2025-08-13 00:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:09.41616464 +0000 UTC m=+1.178476369" watchObservedRunningTime="2025-08-13 00:03:09.416236941 +0000 UTC m=+1.178548670" Aug 13 00:03:09.416887 kubelet[3458]: I0813 00:03:09.416663 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-a-03132a7374" podStartSLOduration=1.416649945 podStartE2EDuration="1.416649945s" podCreationTimestamp="2025-08-13 00:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:09.404633612 +0000 UTC m=+1.166945241" watchObservedRunningTime="2025-08-13 00:03:09.416649945 +0000 UTC m=+1.178961574" Aug 13 00:03:09.428023 kubelet[3458]: I0813 00:03:09.427879 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-a-03132a7374" podStartSLOduration=1.427866769 podStartE2EDuration="1.427866769s" podCreationTimestamp="2025-08-13 00:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:09.427834669 +0000 UTC m=+1.190146298" watchObservedRunningTime="2025-08-13 00:03:09.427866769 +0000 UTC m=+1.190178398" Aug 13 00:03:10.874687 sudo[2439]: pam_unix(sudo:session): session closed for user root Aug 13 00:03:10.983639 sshd[2438]: Connection closed by 10.200.16.10 port 39466 Aug 13 00:03:10.984341 sshd-session[2436]: pam_unix(sshd:session): session closed for user core Aug 13 00:03:10.988406 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:39466.service: Deactivated successfully. Aug 13 00:03:10.990595 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:03:10.990815 systemd[1]: session-9.scope: Consumed 5.537s CPU time, 266.3M memory peak. Aug 13 00:03:10.992331 systemd-logind[1721]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:03:10.993355 systemd-logind[1721]: Removed session 9. Aug 13 00:03:11.990762 kubelet[3458]: I0813 00:03:11.990708 3458 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:03:11.991565 kubelet[3458]: I0813 00:03:11.991418 3458 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:03:11.991615 containerd[1757]: time="2025-08-13T00:03:11.991184393Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:03:13.020487 systemd[1]: Created slice kubepods-burstable-pod8acdc579_7fac_4ced_8ae9_d5d94e65de08.slice - libcontainer container kubepods-burstable-pod8acdc579_7fac_4ced_8ae9_d5d94e65de08.slice. Aug 13 00:03:13.029432 systemd[1]: Created slice kubepods-besteffort-pod029d1b4b_52d9_4702_ba63_92cf4e0143db.slice - libcontainer container kubepods-besteffort-pod029d1b4b_52d9_4702_ba63_92cf4e0143db.slice. 
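The pod_startup_latency_tracker figures are straightforward to verify by hand: podStartSLOduration is effectively observedRunningTime minus podCreationTimestamp, with the pull timestamps pinned to the zero time because no image was pulled for these static pods. Recomputing the kube-controller-manager entry from its logged timestamps (the timestamp layout matches Go's default time.Time formatting; the leftover tens of microseconds against the logged 1.416236941s come from internal monotonic-clock bookkeeping, the m=+… offsets):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-08-13 00:03:08 +0000 UTC")
        if err != nil {
            panic(err)
        }
        running, err := time.Parse(layout, "2025-08-13 00:03:09.41616464 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(running.Sub(created)) // 1.41616464s, in line with the logged SLO duration
    }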
Aug 13 00:03:13.153740 kubelet[3458]: I0813 00:03:13.153695 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-xtables-lock\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.153740 kubelet[3458]: I0813 00:03:13.153745 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-config-path\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154268 kubelet[3458]: I0813 00:03:13.153769 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-hostproc\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154268 kubelet[3458]: I0813 00:03:13.153789 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-cgroup\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154268 kubelet[3458]: I0813 00:03:13.153807 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-etc-cni-netd\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154268 kubelet[3458]: I0813 00:03:13.153826 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-host-proc-sys-net\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154268 kubelet[3458]: I0813 00:03:13.153847 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-host-proc-sys-kernel\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154268 kubelet[3458]: I0813 00:03:13.153873 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/029d1b4b-52d9-4702-ba63-92cf4e0143db-lib-modules\") pod \"kube-proxy-dqq82\" (UID: \"029d1b4b-52d9-4702-ba63-92cf4e0143db\") " pod="kube-system/kube-proxy-dqq82" Aug 13 00:03:13.154493 kubelet[3458]: I0813 00:03:13.153896 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cni-path\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154493 kubelet[3458]: I0813 00:03:13.153933 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/8acdc579-7fac-4ced-8ae9-d5d94e65de08-clustermesh-secrets\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154493 kubelet[3458]: I0813 00:03:13.153953 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8acdc579-7fac-4ced-8ae9-d5d94e65de08-hubble-tls\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154493 kubelet[3458]: I0813 00:03:13.153974 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj2kx\" (UniqueName: \"kubernetes.io/projected/8acdc579-7fac-4ced-8ae9-d5d94e65de08-kube-api-access-sj2kx\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154493 kubelet[3458]: I0813 00:03:13.153996 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-run\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154493 kubelet[3458]: I0813 00:03:13.154026 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/029d1b4b-52d9-4702-ba63-92cf4e0143db-kube-proxy\") pod \"kube-proxy-dqq82\" (UID: \"029d1b4b-52d9-4702-ba63-92cf4e0143db\") " pod="kube-system/kube-proxy-dqq82" Aug 13 00:03:13.154700 kubelet[3458]: I0813 00:03:13.154046 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/029d1b4b-52d9-4702-ba63-92cf4e0143db-xtables-lock\") pod \"kube-proxy-dqq82\" (UID: \"029d1b4b-52d9-4702-ba63-92cf4e0143db\") " pod="kube-system/kube-proxy-dqq82" Aug 13 00:03:13.154700 kubelet[3458]: I0813 00:03:13.154066 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpqhx\" (UniqueName: \"kubernetes.io/projected/029d1b4b-52d9-4702-ba63-92cf4e0143db-kube-api-access-lpqhx\") pod \"kube-proxy-dqq82\" (UID: \"029d1b4b-52d9-4702-ba63-92cf4e0143db\") " pod="kube-system/kube-proxy-dqq82" Aug 13 00:03:13.154700 kubelet[3458]: I0813 00:03:13.154109 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-bpf-maps\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.154700 kubelet[3458]: I0813 00:03:13.154132 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-lib-modules\") pod \"cilium-hjbbb\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") " pod="kube-system/cilium-hjbbb" Aug 13 00:03:13.244762 systemd[1]: Created slice kubepods-besteffort-pod3785c4fd_c45b_49dc_ae6f_3226f2ec9bdb.slice - libcontainer container kubepods-besteffort-pod3785c4fd_c45b_49dc_ae6f_3226f2ec9bdb.slice. 
Aug 13 00:03:13.328739 containerd[1757]: time="2025-08-13T00:03:13.328122540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjbbb,Uid:8acdc579-7fac-4ced-8ae9-d5d94e65de08,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:13.337020 containerd[1757]: time="2025-08-13T00:03:13.336978538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqq82,Uid:029d1b4b-52d9-4702-ba63-92cf4e0143db,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:13.355702 kubelet[3458]: I0813 00:03:13.355621 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcg4l\" (UniqueName: \"kubernetes.io/projected/3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb-kube-api-access-tcg4l\") pod \"cilium-operator-6c4d7847fc-cpvpr\" (UID: \"3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb\") " pod="kube-system/cilium-operator-6c4d7847fc-cpvpr" Aug 13 00:03:13.355702 kubelet[3458]: I0813 00:03:13.355690 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cpvpr\" (UID: \"3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb\") " pod="kube-system/cilium-operator-6c4d7847fc-cpvpr" Aug 13 00:03:13.387706 containerd[1757]: time="2025-08-13T00:03:13.387601096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:13.387706 containerd[1757]: time="2025-08-13T00:03:13.387652497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:13.387706 containerd[1757]: time="2025-08-13T00:03:13.387668897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:13.388157 containerd[1757]: time="2025-08-13T00:03:13.387750698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:13.409690 containerd[1757]: time="2025-08-13T00:03:13.409610039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:13.409911 containerd[1757]: time="2025-08-13T00:03:13.409875542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:13.410047 containerd[1757]: time="2025-08-13T00:03:13.410020243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:13.410443 containerd[1757]: time="2025-08-13T00:03:13.410380747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:13.422300 systemd[1]: Started cri-containerd-55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f.scope - libcontainer container 55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f. Aug 13 00:03:13.438273 systemd[1]: Started cri-containerd-68b5ccbced38470227bc5c56763b73769bede94daac99ec4e70a2fd5696c3c95.scope - libcontainer container 68b5ccbced38470227bc5c56763b73769bede94daac99ec4e70a2fd5696c3c95. 
Aug 13 00:03:13.470708 containerd[1757]: time="2025-08-13T00:03:13.470267808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hjbbb,Uid:8acdc579-7fac-4ced-8ae9-d5d94e65de08,Namespace:kube-system,Attempt:0,} returns sandbox id \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\"" Aug 13 00:03:13.474041 containerd[1757]: time="2025-08-13T00:03:13.473718746Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Aug 13 00:03:13.484366 containerd[1757]: time="2025-08-13T00:03:13.484256662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dqq82,Uid:029d1b4b-52d9-4702-ba63-92cf4e0143db,Namespace:kube-system,Attempt:0,} returns sandbox id \"68b5ccbced38470227bc5c56763b73769bede94daac99ec4e70a2fd5696c3c95\"" Aug 13 00:03:13.494225 containerd[1757]: time="2025-08-13T00:03:13.494174672Z" level=info msg="CreateContainer within sandbox \"68b5ccbced38470227bc5c56763b73769bede94daac99ec4e70a2fd5696c3c95\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:03:13.541513 containerd[1757]: time="2025-08-13T00:03:13.541464093Z" level=info msg="CreateContainer within sandbox \"68b5ccbced38470227bc5c56763b73769bede94daac99ec4e70a2fd5696c3c95\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"09228e47656a69e5ab473a3b320669f0ab5caf4b869029ce06e1acdd889a5a05\"" Aug 13 00:03:13.542867 containerd[1757]: time="2025-08-13T00:03:13.542258202Z" level=info msg="StartContainer for \"09228e47656a69e5ab473a3b320669f0ab5caf4b869029ce06e1acdd889a5a05\"" Aug 13 00:03:13.548931 containerd[1757]: time="2025-08-13T00:03:13.548875275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cpvpr,Uid:3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:13.571273 systemd[1]: Started cri-containerd-09228e47656a69e5ab473a3b320669f0ab5caf4b869029ce06e1acdd889a5a05.scope - libcontainer container 09228e47656a69e5ab473a3b320669f0ab5caf4b869029ce06e1acdd889a5a05. Aug 13 00:03:13.606267 containerd[1757]: time="2025-08-13T00:03:13.605891504Z" level=info msg="StartContainer for \"09228e47656a69e5ab473a3b320669f0ab5caf4b869029ce06e1acdd889a5a05\" returns successfully" Aug 13 00:03:13.625878 containerd[1757]: time="2025-08-13T00:03:13.625792323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:13.626030 containerd[1757]: time="2025-08-13T00:03:13.625893924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:13.626030 containerd[1757]: time="2025-08-13T00:03:13.625939625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:13.626178 containerd[1757]: time="2025-08-13T00:03:13.626139627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:13.650303 systemd[1]: Started cri-containerd-3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7.scope - libcontainer container 3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7. 
Aug 13 00:03:13.700508 containerd[1757]: time="2025-08-13T00:03:13.700461647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cpvpr,Uid:3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7\"" Aug 13 00:03:14.490931 kubelet[3458]: I0813 00:03:14.490864 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dqq82" podStartSLOduration=2.490845863 podStartE2EDuration="2.490845863s" podCreationTimestamp="2025-08-13 00:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:14.402004183 +0000 UTC m=+6.164315812" watchObservedRunningTime="2025-08-13 00:03:14.490845863 +0000 UTC m=+6.253157592" Aug 13 00:03:19.398952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242543180.mount: Deactivated successfully. Aug 13 00:03:21.723913 containerd[1757]: time="2025-08-13T00:03:21.723857861Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:03:21.728985 containerd[1757]: time="2025-08-13T00:03:21.728921617Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Aug 13 00:03:21.733519 containerd[1757]: time="2025-08-13T00:03:21.733466167Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:03:21.735669 containerd[1757]: time="2025-08-13T00:03:21.734963083Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.261202737s" Aug 13 00:03:21.735669 containerd[1757]: time="2025-08-13T00:03:21.735001484Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Aug 13 00:03:21.739572 containerd[1757]: time="2025-08-13T00:03:21.739519434Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Aug 13 00:03:21.755607 containerd[1757]: time="2025-08-13T00:03:21.755579611Z" level=info msg="CreateContainer within sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 13 00:03:21.824956 containerd[1757]: time="2025-08-13T00:03:21.824907477Z" level=info msg="CreateContainer within sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\"" Aug 13 00:03:21.826600 containerd[1757]: time="2025-08-13T00:03:21.825600985Z" level=info msg="StartContainer for 
\"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\"" Aug 13 00:03:21.868283 systemd[1]: Started cri-containerd-684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa.scope - libcontainer container 684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa. Aug 13 00:03:21.903843 containerd[1757]: time="2025-08-13T00:03:21.903732948Z" level=info msg="StartContainer for \"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\" returns successfully" Aug 13 00:03:21.905297 systemd[1]: cri-containerd-684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa.scope: Deactivated successfully. Aug 13 00:03:22.806320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa-rootfs.mount: Deactivated successfully. Aug 13 00:03:25.587110 containerd[1757]: time="2025-08-13T00:03:25.585583117Z" level=info msg="shim disconnected" id=684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa namespace=k8s.io Aug 13 00:03:25.587110 containerd[1757]: time="2025-08-13T00:03:25.585642118Z" level=warning msg="cleaning up after shim disconnected" id=684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa namespace=k8s.io Aug 13 00:03:25.587110 containerd[1757]: time="2025-08-13T00:03:25.585652618Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:03:25.600174 containerd[1757]: time="2025-08-13T00:03:25.600128278Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:03:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 00:03:26.198903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4010284656.mount: Deactivated successfully. Aug 13 00:03:26.426909 containerd[1757]: time="2025-08-13T00:03:26.426671008Z" level=info msg="CreateContainer within sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 13 00:03:26.489719 containerd[1757]: time="2025-08-13T00:03:26.489207098Z" level=info msg="CreateContainer within sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\"" Aug 13 00:03:26.490168 containerd[1757]: time="2025-08-13T00:03:26.490136209Z" level=info msg="StartContainer for \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\"" Aug 13 00:03:26.565626 systemd[1]: Started cri-containerd-0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f.scope - libcontainer container 0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f. Aug 13 00:03:26.618381 containerd[1757]: time="2025-08-13T00:03:26.618330825Z" level=info msg="StartContainer for \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\" returns successfully" Aug 13 00:03:26.632658 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:03:26.632996 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:03:26.634259 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:03:26.640910 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Aug 13 00:03:26.643817 systemd[1]: cri-containerd-0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f.scope: Deactivated successfully. Aug 13 00:03:26.673897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:03:26.829534 containerd[1757]: time="2025-08-13T00:03:26.828885850Z" level=info msg="shim disconnected" id=0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f namespace=k8s.io Aug 13 00:03:26.829534 containerd[1757]: time="2025-08-13T00:03:26.829304755Z" level=warning msg="cleaning up after shim disconnected" id=0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f namespace=k8s.io Aug 13 00:03:26.829534 containerd[1757]: time="2025-08-13T00:03:26.829325755Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:03:27.181966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f-rootfs.mount: Deactivated successfully. Aug 13 00:03:27.204702 containerd[1757]: time="2025-08-13T00:03:27.204652201Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:03:27.206905 containerd[1757]: time="2025-08-13T00:03:27.206842825Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Aug 13 00:03:27.210052 containerd[1757]: time="2025-08-13T00:03:27.210015960Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:03:27.211734 containerd[1757]: time="2025-08-13T00:03:27.211620078Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 5.471906442s" Aug 13 00:03:27.211734 containerd[1757]: time="2025-08-13T00:03:27.211657678Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Aug 13 00:03:27.223462 containerd[1757]: time="2025-08-13T00:03:27.223430508Z" level=info msg="CreateContainer within sandbox \"3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Aug 13 00:03:27.261774 containerd[1757]: time="2025-08-13T00:03:27.261731332Z" level=info msg="CreateContainer within sandbox \"3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\"" Aug 13 00:03:27.263682 containerd[1757]: time="2025-08-13T00:03:27.262243637Z" level=info msg="StartContainer for \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\"" Aug 13 00:03:27.295272 systemd[1]: Started cri-containerd-dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc.scope - libcontainer container dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc.
Aug 13 00:03:27.326742 containerd[1757]: time="2025-08-13T00:03:27.326563348Z" level=info msg="StartContainer for \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\" returns successfully" Aug 13 00:03:27.443237 containerd[1757]: time="2025-08-13T00:03:27.443120735Z" level=info msg="CreateContainer within sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 13 00:03:27.497525 kubelet[3458]: I0813 00:03:27.496775 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cpvpr" podStartSLOduration=0.986301205 podStartE2EDuration="14.496752928s" podCreationTimestamp="2025-08-13 00:03:13 +0000 UTC" firstStartedPulling="2025-08-13 00:03:13.702022564 +0000 UTC m=+5.464334193" lastFinishedPulling="2025-08-13 00:03:27.212474287 +0000 UTC m=+18.974785916" observedRunningTime="2025-08-13 00:03:27.452845843 +0000 UTC m=+19.215157572" watchObservedRunningTime="2025-08-13 00:03:27.496752928 +0000 UTC m=+19.259064557" Aug 13 00:03:27.501122 containerd[1757]: time="2025-08-13T00:03:27.499078153Z" level=info msg="CreateContainer within sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\"" Aug 13 00:03:27.501122 containerd[1757]: time="2025-08-13T00:03:27.499828062Z" level=info msg="StartContainer for \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\"" Aug 13 00:03:27.544344 systemd[1]: Started cri-containerd-51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743.scope - libcontainer container 51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743. Aug 13 00:03:27.598176 containerd[1757]: time="2025-08-13T00:03:27.598115847Z" level=info msg="StartContainer for \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\" returns successfully" Aug 13 00:03:27.601927 systemd[1]: cri-containerd-51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743.scope: Deactivated successfully.
Aug 13 00:03:27.927877 containerd[1757]: time="2025-08-13T00:03:27.927621487Z" level=info msg="shim disconnected" id=51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743 namespace=k8s.io Aug 13 00:03:27.927877 containerd[1757]: time="2025-08-13T00:03:27.927685388Z" level=warning msg="cleaning up after shim disconnected" id=51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743 namespace=k8s.io Aug 13 00:03:27.927877 containerd[1757]: time="2025-08-13T00:03:27.927695488Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:03:28.438522 containerd[1757]: time="2025-08-13T00:03:28.438352995Z" level=info msg="CreateContainer within sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 13 00:03:28.490315 containerd[1757]: time="2025-08-13T00:03:28.490263478Z" level=info msg="CreateContainer within sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\"" Aug 13 00:03:28.490825 containerd[1757]: time="2025-08-13T00:03:28.490744183Z" level=info msg="StartContainer for \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\"" Aug 13 00:03:28.525258 systemd[1]: Started cri-containerd-98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1.scope - libcontainer container 98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1. Aug 13 00:03:28.548059 systemd[1]: cri-containerd-98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1.scope: Deactivated successfully. Aug 13 00:03:28.556637 containerd[1757]: time="2025-08-13T00:03:28.555920114Z" level=info msg="StartContainer for \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\" returns successfully" Aug 13 00:03:28.589536 containerd[1757]: time="2025-08-13T00:03:28.589468991Z" level=info msg="shim disconnected" id=98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1 namespace=k8s.io Aug 13 00:03:28.589536 containerd[1757]: time="2025-08-13T00:03:28.589533292Z" level=warning msg="cleaning up after shim disconnected" id=98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1 namespace=k8s.io Aug 13 00:03:28.589536 containerd[1757]: time="2025-08-13T00:03:28.589544292Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:03:29.181934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1-rootfs.mount: Deactivated successfully. 
Aug 13 00:03:29.457765 containerd[1757]: time="2025-08-13T00:03:29.457641133Z" level=info msg="CreateContainer within sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 13 00:03:29.519703 containerd[1757]: time="2025-08-13T00:03:29.519654829Z" level=info msg="CreateContainer within sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\"" Aug 13 00:03:29.520586 containerd[1757]: time="2025-08-13T00:03:29.520235836Z" level=info msg="StartContainer for \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\"" Aug 13 00:03:29.553543 systemd[1]: Started cri-containerd-5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea.scope - libcontainer container 5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea. Aug 13 00:03:29.595946 containerd[1757]: time="2025-08-13T00:03:29.595891885Z" level=info msg="StartContainer for \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\" returns successfully" Aug 13 00:03:29.709171 kubelet[3458]: I0813 00:03:29.709066 3458 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:03:29.768787 systemd[1]: Created slice kubepods-burstable-pod2c5d5fb9_7d6b_4ad5_8c1e_4413a1cb79b0.slice - libcontainer container kubepods-burstable-pod2c5d5fb9_7d6b_4ad5_8c1e_4413a1cb79b0.slice. Aug 13 00:03:29.778270 systemd[1]: Created slice kubepods-burstable-pod4d5bd837_87f4_4d84_a09c_97fae7b1255d.slice - libcontainer container kubepods-burstable-pod4d5bd837_87f4_4d84_a09c_97fae7b1255d.slice. Aug 13 00:03:29.864616 kubelet[3458]: I0813 00:03:29.864561 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d5bd837-87f4-4d84-a09c-97fae7b1255d-config-volume\") pod \"coredns-674b8bbfcf-lrvmg\" (UID: \"4d5bd837-87f4-4d84-a09c-97fae7b1255d\") " pod="kube-system/coredns-674b8bbfcf-lrvmg" Aug 13 00:03:29.864616 kubelet[3458]: I0813 00:03:29.864619 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c5d5fb9-7d6b-4ad5-8c1e-4413a1cb79b0-config-volume\") pod \"coredns-674b8bbfcf-w5d2k\" (UID: \"2c5d5fb9-7d6b-4ad5-8c1e-4413a1cb79b0\") " pod="kube-system/coredns-674b8bbfcf-w5d2k" Aug 13 00:03:29.864829 kubelet[3458]: I0813 00:03:29.864647 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjk85\" (UniqueName: \"kubernetes.io/projected/4d5bd837-87f4-4d84-a09c-97fae7b1255d-kube-api-access-rjk85\") pod \"coredns-674b8bbfcf-lrvmg\" (UID: \"4d5bd837-87f4-4d84-a09c-97fae7b1255d\") " pod="kube-system/coredns-674b8bbfcf-lrvmg" Aug 13 00:03:29.864829 kubelet[3458]: I0813 00:03:29.864673 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgs5p\" (UniqueName: \"kubernetes.io/projected/2c5d5fb9-7d6b-4ad5-8c1e-4413a1cb79b0-kube-api-access-hgs5p\") pod \"coredns-674b8bbfcf-w5d2k\" (UID: \"2c5d5fb9-7d6b-4ad5-8c1e-4413a1cb79b0\") " pod="kube-system/coredns-674b8bbfcf-w5d2k" Aug 13 00:03:30.075178 containerd[1757]: time="2025-08-13T00:03:30.074976261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w5d2k,Uid:2c5d5fb9-7d6b-4ad5-8c1e-4413a1cb79b0,Namespace:kube-system,Attempt:0,}"
Aug 13 00:03:30.083162 containerd[1757]: time="2025-08-13T00:03:30.082850449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lrvmg,Uid:4d5bd837-87f4-4d84-a09c-97fae7b1255d,Namespace:kube-system,Attempt:0,}" Aug 13 00:03:30.465726 kubelet[3458]: I0813 00:03:30.465278 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hjbbb" podStartSLOduration=10.20101987 podStartE2EDuration="18.46525764s" podCreationTimestamp="2025-08-13 00:03:12 +0000 UTC" firstStartedPulling="2025-08-13 00:03:13.472978238 +0000 UTC m=+5.235289967" lastFinishedPulling="2025-08-13 00:03:21.737216108 +0000 UTC m=+13.499527737" observedRunningTime="2025-08-13 00:03:30.463759823 +0000 UTC m=+22.226071552" watchObservedRunningTime="2025-08-13 00:03:30.46525764 +0000 UTC m=+22.227569369" Aug 13 00:03:31.760895 systemd-networkd[1466]: cilium_host: Link UP Aug 13 00:03:31.761083 systemd-networkd[1466]: cilium_net: Link UP Aug 13 00:03:31.761917 systemd-networkd[1466]: cilium_net: Gained carrier Aug 13 00:03:31.762862 systemd-networkd[1466]: cilium_host: Gained carrier Aug 13 00:03:31.763536 systemd-networkd[1466]: cilium_host: Gained IPv6LL Aug 13 00:03:31.927513 systemd-networkd[1466]: cilium_vxlan: Link UP Aug 13 00:03:31.927524 systemd-networkd[1466]: cilium_vxlan: Gained carrier Aug 13 00:03:31.970273 systemd-networkd[1466]: cilium_net: Gained IPv6LL Aug 13 00:03:32.189126 kernel: NET: Registered PF_ALG protocol family Aug 13 00:03:32.955394 systemd-networkd[1466]: lxc_health: Link UP Aug 13 00:03:32.956164 systemd-networkd[1466]: lxc_health: Gained carrier Aug 13 00:03:33.192212 kernel: eth0: renamed from tmpf720e Aug 13 00:03:33.197758 systemd-networkd[1466]: lxce1546070a08f: Link UP Aug 13 00:03:33.206449 kernel: eth0: renamed from tmp4640c Aug 13 00:03:33.212889 systemd-networkd[1466]: lxc949c326c8a66: Link UP Aug 13 00:03:33.213263 systemd-networkd[1466]: lxce1546070a08f: Gained carrier Aug 13 00:03:33.219740 systemd-networkd[1466]: lxc949c326c8a66: Gained carrier Aug 13 00:03:33.810297 systemd-networkd[1466]: cilium_vxlan: Gained IPv6LL Aug 13 00:03:34.322396 systemd-networkd[1466]: lxc949c326c8a66: Gained IPv6LL Aug 13 00:03:34.706256 systemd-networkd[1466]: lxc_health: Gained IPv6LL Aug 13 00:03:35.026386 systemd-networkd[1466]: lxce1546070a08f: Gained IPv6LL Aug 13 00:03:35.720253 kubelet[3458]: I0813 00:03:35.719534 3458 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:03:37.099366 containerd[1757]: time="2025-08-13T00:03:37.099007111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:37.099366 containerd[1757]: time="2025-08-13T00:03:37.099085512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:37.099366 containerd[1757]: time="2025-08-13T00:03:37.099156613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:37.099366 containerd[1757]: time="2025-08-13T00:03:37.099261214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:03:37.100420 containerd[1757]: time="2025-08-13T00:03:37.100124626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:03:37.100420 containerd[1757]: time="2025-08-13T00:03:37.100197427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:03:37.100420 containerd[1757]: time="2025-08-13T00:03:37.100216527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:37.100420 containerd[1757]: time="2025-08-13T00:03:37.100329429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:03:37.140648 systemd[1]: Started cri-containerd-4640c7aa485e37e09898bf63a47dfdfb605db1e6c6f1d98eb40fe3a56458dd97.scope - libcontainer container 4640c7aa485e37e09898bf63a47dfdfb605db1e6c6f1d98eb40fe3a56458dd97. Aug 13 00:03:37.165290 systemd[1]: Started cri-containerd-f720e60510dd75eeb1a65782888b27ee7f619d1380a099aac568a24b3e804295.scope - libcontainer container f720e60510dd75eeb1a65782888b27ee7f619d1380a099aac568a24b3e804295. Aug 13 00:03:37.217764 containerd[1757]: time="2025-08-13T00:03:37.217704634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lrvmg,Uid:4d5bd837-87f4-4d84-a09c-97fae7b1255d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4640c7aa485e37e09898bf63a47dfdfb605db1e6c6f1d98eb40fe3a56458dd97\"" Aug 13 00:03:37.228732 containerd[1757]: time="2025-08-13T00:03:37.228556583Z" level=info msg="CreateContainer within sandbox \"4640c7aa485e37e09898bf63a47dfdfb605db1e6c6f1d98eb40fe3a56458dd97\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:03:37.247339 containerd[1757]: time="2025-08-13T00:03:37.247070736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w5d2k,Uid:2c5d5fb9-7d6b-4ad5-8c1e-4413a1cb79b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f720e60510dd75eeb1a65782888b27ee7f619d1380a099aac568a24b3e804295\"" Aug 13 00:03:37.258225 containerd[1757]: time="2025-08-13T00:03:37.257974885Z" level=info msg="CreateContainer within sandbox \"f720e60510dd75eeb1a65782888b27ee7f619d1380a099aac568a24b3e804295\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:03:37.284117 containerd[1757]: time="2025-08-13T00:03:37.282295418Z" level=info msg="CreateContainer within sandbox \"4640c7aa485e37e09898bf63a47dfdfb605db1e6c6f1d98eb40fe3a56458dd97\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a044a001218816e51e13212df8a5c2db031ad4d9d26bb60aab5a578cc0779ab\"" Aug 13 00:03:37.284117 containerd[1757]: time="2025-08-13T00:03:37.283463534Z" level=info msg="StartContainer for \"9a044a001218816e51e13212df8a5c2db031ad4d9d26bb60aab5a578cc0779ab\"" Aug 13 00:03:37.335244 systemd[1]: Started cri-containerd-9a044a001218816e51e13212df8a5c2db031ad4d9d26bb60aab5a578cc0779ab.scope - libcontainer container 9a044a001218816e51e13212df8a5c2db031ad4d9d26bb60aab5a578cc0779ab.
Aug 13 00:03:37.342453 containerd[1757]: time="2025-08-13T00:03:37.342275638Z" level=info msg="CreateContainer within sandbox \"f720e60510dd75eeb1a65782888b27ee7f619d1380a099aac568a24b3e804295\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"788cc6a5bcd76f10d8d7f7b98953c2ce4dc8e5e5d82ec176370688181995cbe3\"" Aug 13 00:03:37.343063 containerd[1757]: time="2025-08-13T00:03:37.343027048Z" level=info msg="StartContainer for \"788cc6a5bcd76f10d8d7f7b98953c2ce4dc8e5e5d82ec176370688181995cbe3\"" Aug 13 00:03:37.378202 containerd[1757]: time="2025-08-13T00:03:37.377721523Z" level=info msg="StartContainer for \"9a044a001218816e51e13212df8a5c2db031ad4d9d26bb60aab5a578cc0779ab\" returns successfully" Aug 13 00:03:37.389291 systemd[1]: Started cri-containerd-788cc6a5bcd76f10d8d7f7b98953c2ce4dc8e5e5d82ec176370688181995cbe3.scope - libcontainer container 788cc6a5bcd76f10d8d7f7b98953c2ce4dc8e5e5d82ec176370688181995cbe3. Aug 13 00:03:37.432240 containerd[1757]: time="2025-08-13T00:03:37.432065266Z" level=info msg="StartContainer for \"788cc6a5bcd76f10d8d7f7b98953c2ce4dc8e5e5d82ec176370688181995cbe3\" returns successfully" Aug 13 00:03:37.491662 kubelet[3458]: I0813 00:03:37.491124 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lrvmg" podStartSLOduration=24.490995572 podStartE2EDuration="24.490995572s" podCreationTimestamp="2025-08-13 00:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:37.488859342 +0000 UTC m=+29.251170971" watchObservedRunningTime="2025-08-13 00:03:37.490995572 +0000 UTC m=+29.253307201" Aug 13 00:03:37.511158 kubelet[3458]: I0813 00:03:37.511072 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w5d2k" podStartSLOduration=24.511053046 podStartE2EDuration="24.511053046s" podCreationTimestamp="2025-08-13 00:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:03:37.509661627 +0000 UTC m=+29.271973356" watchObservedRunningTime="2025-08-13 00:03:37.511053046 +0000 UTC m=+29.273364675" Aug 13 00:04:14.076023 update_engine[1722]: I20250813 00:04:14.075956 1722 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 00:04:14.076023 update_engine[1722]: I20250813 00:04:14.076012 1722 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 00:04:14.076666 update_engine[1722]: I20250813 00:04:14.076264 1722 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 00:04:14.076865 update_engine[1722]: I20250813 00:04:14.076833 1722 omaha_request_params.cc:62] Current group set to stable Aug 13 00:04:14.077011 update_engine[1722]: I20250813 00:04:14.076977 1722 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 00:04:14.077011 update_engine[1722]: I20250813 00:04:14.076995 1722 update_attempter.cc:643] Scheduling an action processor start. 
Aug 13 00:04:14.077112 update_engine[1722]: I20250813 00:04:14.077016 1722 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 00:04:14.077112 update_engine[1722]: I20250813 00:04:14.077054 1722 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 00:04:14.077190 update_engine[1722]: I20250813 00:04:14.077147 1722 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 00:04:14.077190 update_engine[1722]: I20250813 00:04:14.077159 1722 omaha_request_action.cc:272] Request: Aug 13 00:04:14.077190 update_engine[1722]: Aug 13 00:04:14.077190 update_engine[1722]: Aug 13 00:04:14.077190 update_engine[1722]: Aug 13 00:04:14.077190 update_engine[1722]: Aug 13 00:04:14.077190 update_engine[1722]: Aug 13 00:04:14.077190 update_engine[1722]: Aug 13 00:04:14.077190 update_engine[1722]: Aug 13 00:04:14.077190 update_engine[1722]: Aug 13 00:04:14.077190 update_engine[1722]: I20250813 00:04:14.077168 1722 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:04:14.077943 locksmithd[1783]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 00:04:14.078826 update_engine[1722]: I20250813 00:04:14.078799 1722 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:04:14.079230 update_engine[1722]: I20250813 00:04:14.079196 1722 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:04:14.093909 update_engine[1722]: E20250813 00:04:14.093858 1722 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:04:14.093996 update_engine[1722]: I20250813 00:04:14.093956 1722 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 00:04:24.057390 update_engine[1722]: I20250813 00:04:24.057305 1722 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:04:24.057863 update_engine[1722]: I20250813 00:04:24.057623 1722 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:04:24.057969 update_engine[1722]: I20250813 00:04:24.057937 1722 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:04:24.094554 update_engine[1722]: E20250813 00:04:24.094477 1722 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:04:24.094705 update_engine[1722]: I20250813 00:04:24.094581 1722 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 00:04:34.054274 update_engine[1722]: I20250813 00:04:34.054188 1722 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:04:34.054751 update_engine[1722]: I20250813 00:04:34.054517 1722 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:04:34.054878 update_engine[1722]: I20250813 00:04:34.054843 1722 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 00:04:34.085240 update_engine[1722]: E20250813 00:04:34.085165 1722 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:04:34.085409 update_engine[1722]: I20250813 00:04:34.085275 1722 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 00:04:44.055245 update_engine[1722]: I20250813 00:04:44.055159 1722 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:04:44.055706 update_engine[1722]: I20250813 00:04:44.055460 1722 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:04:44.055812 update_engine[1722]: I20250813 00:04:44.055779 1722 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 00:04:44.066538 update_engine[1722]: E20250813 00:04:44.066439 1722 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:04:44.066733 update_engine[1722]: I20250813 00:04:44.066567 1722 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 00:04:44.066733 update_engine[1722]: I20250813 00:04:44.066586 1722 omaha_request_action.cc:617] Omaha request response: Aug 13 00:04:44.066733 update_engine[1722]: E20250813 00:04:44.066684 1722 omaha_request_action.cc:636] Omaha request network transfer failed. Aug 13 00:04:44.066733 update_engine[1722]: I20250813 00:04:44.066730 1722 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 13 00:04:44.066893 update_engine[1722]: I20250813 00:04:44.066740 1722 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 00:04:44.066893 update_engine[1722]: I20250813 00:04:44.066746 1722 update_attempter.cc:306] Processing Done. Aug 13 00:04:44.066893 update_engine[1722]: E20250813 00:04:44.066767 1722 update_attempter.cc:619] Update failed. Aug 13 00:04:44.066893 update_engine[1722]: I20250813 00:04:44.066776 1722 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 13 00:04:44.066893 update_engine[1722]: I20250813 00:04:44.066784 1722 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 13 00:04:44.066893 update_engine[1722]: I20250813 00:04:44.066865 1722 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Aug 13 00:04:44.067120 update_engine[1722]: I20250813 00:04:44.066965 1722 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 00:04:44.067120 update_engine[1722]: I20250813 00:04:44.066996 1722 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 00:04:44.067120 update_engine[1722]: I20250813 00:04:44.067008 1722 omaha_request_action.cc:272] Request: Aug 13 00:04:44.067120 update_engine[1722]: Aug 13 00:04:44.067120 update_engine[1722]: Aug 13 00:04:44.067120 update_engine[1722]: Aug 13 00:04:44.067120 update_engine[1722]: Aug 13 00:04:44.067120 update_engine[1722]: Aug 13 00:04:44.067120 update_engine[1722]: Aug 13 00:04:44.067120 update_engine[1722]: I20250813 00:04:44.067018 1722 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 00:04:44.067456 update_engine[1722]: I20250813 00:04:44.067257 1722 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 00:04:44.067607 update_engine[1722]: I20250813 00:04:44.067543 1722 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 00:04:44.067879 locksmithd[1783]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 13 00:04:44.088675 update_engine[1722]: E20250813 00:04:44.088604 1722 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 00:04:44.088877 update_engine[1722]: I20250813 00:04:44.088703 1722 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 00:04:44.088877 update_engine[1722]: I20250813 00:04:44.088716 1722 omaha_request_action.cc:617] Omaha request response: Aug 13 00:04:44.088877 update_engine[1722]: I20250813 00:04:44.088726 1722 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 00:04:44.088877 update_engine[1722]: I20250813 00:04:44.088733 1722 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 00:04:44.088877 update_engine[1722]: I20250813 00:04:44.088742 1722 update_attempter.cc:306] Processing Done. Aug 13 00:04:44.088877 update_engine[1722]: I20250813 00:04:44.088750 1722 update_attempter.cc:310] Error event sent. Aug 13 00:04:44.088877 update_engine[1722]: I20250813 00:04:44.088764 1722 update_check_scheduler.cc:74] Next update check in 42m34s Aug 13 00:04:44.089242 locksmithd[1783]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Aug 13 00:05:17.120454 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:60220.service - OpenSSH per-connection server daemon (10.200.16.10:60220). Aug 13 00:05:17.745555 sshd[4850]: Accepted publickey for core from 10.200.16.10 port 60220 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:05:17.747018 sshd-session[4850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:17.751592 systemd-logind[1721]: New session 10 of user core. Aug 13 00:05:17.759269 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:05:18.272965 sshd[4852]: Connection closed by 10.200.16.10 port 60220 Aug 13 00:05:18.272400 sshd-session[4850]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:18.276418 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:60220.service: Deactivated successfully. Aug 13 00:05:18.278724 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:05:18.279597 systemd-logind[1721]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:05:18.280700 systemd-logind[1721]: Removed session 10. Aug 13 00:05:23.388412 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:45356.service - OpenSSH per-connection server daemon (10.200.16.10:45356). Aug 13 00:05:24.014594 sshd[4864]: Accepted publickey for core from 10.200.16.10 port 45356 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:05:24.016189 sshd-session[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:24.020848 systemd-logind[1721]: New session 11 of user core. Aug 13 00:05:24.030256 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:05:24.525579 sshd[4866]: Connection closed by 10.200.16.10 port 45356 Aug 13 00:05:24.526330 sshd-session[4864]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:24.529249 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:45356.service: Deactivated successfully. Aug 13 00:05:24.531588 systemd[1]: session-11.scope: Deactivated successfully. 
Aug 13 00:05:24.533244 systemd-logind[1721]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:05:24.534304 systemd-logind[1721]: Removed session 11. Aug 13 00:05:29.645428 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:45372.service - OpenSSH per-connection server daemon (10.200.16.10:45372). Aug 13 00:05:30.270801 sshd[4879]: Accepted publickey for core from 10.200.16.10 port 45372 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:05:30.272332 sshd-session[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:30.276671 systemd-logind[1721]: New session 12 of user core. Aug 13 00:05:30.281581 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:05:30.774681 sshd[4881]: Connection closed by 10.200.16.10 port 45372 Aug 13 00:05:30.775431 sshd-session[4879]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:30.779277 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:45372.service: Deactivated successfully. Aug 13 00:05:30.781372 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:05:30.782265 systemd-logind[1721]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:05:30.783385 systemd-logind[1721]: Removed session 12. Aug 13 00:05:35.896430 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:45040.service - OpenSSH per-connection server daemon (10.200.16.10:45040). Aug 13 00:05:36.521454 sshd[4895]: Accepted publickey for core from 10.200.16.10 port 45040 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:05:36.522922 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:36.527180 systemd-logind[1721]: New session 13 of user core. Aug 13 00:05:36.535265 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:05:37.019796 sshd[4897]: Connection closed by 10.200.16.10 port 45040 Aug 13 00:05:37.020557 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:37.023412 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:45040.service: Deactivated successfully. Aug 13 00:05:37.025717 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:05:37.027355 systemd-logind[1721]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:05:37.028696 systemd-logind[1721]: Removed session 13. Aug 13 00:05:42.136427 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:46030.service - OpenSSH per-connection server daemon (10.200.16.10:46030). Aug 13 00:05:42.763007 sshd[4911]: Accepted publickey for core from 10.200.16.10 port 46030 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:05:42.764500 sshd-session[4911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:42.768797 systemd-logind[1721]: New session 14 of user core. Aug 13 00:05:42.779284 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:05:43.262039 sshd[4913]: Connection closed by 10.200.16.10 port 46030 Aug 13 00:05:43.263136 sshd-session[4911]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:43.267002 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:46030.service: Deactivated successfully. Aug 13 00:05:43.269063 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:05:43.269983 systemd-logind[1721]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:05:43.270994 systemd-logind[1721]: Removed session 14. 
Aug 13 00:05:48.378394 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:46032.service - OpenSSH per-connection server daemon (10.200.16.10:46032). Aug 13 00:05:49.003589 sshd[4928]: Accepted publickey for core from 10.200.16.10 port 46032 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:05:49.005041 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:49.014053 systemd-logind[1721]: New session 15 of user core. Aug 13 00:05:49.017833 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:05:49.504662 sshd[4933]: Connection closed by 10.200.16.10 port 46032 Aug 13 00:05:49.505410 sshd-session[4928]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:49.509207 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:46032.service: Deactivated successfully. Aug 13 00:05:49.511713 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:05:49.512596 systemd-logind[1721]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:05:49.513758 systemd-logind[1721]: Removed session 15. Aug 13 00:05:54.622387 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:42802.service - OpenSSH per-connection server daemon (10.200.16.10:42802). Aug 13 00:05:55.246933 sshd[4946]: Accepted publickey for core from 10.200.16.10 port 42802 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:05:55.248391 sshd-session[4946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:55.253131 systemd-logind[1721]: New session 16 of user core. Aug 13 00:05:55.257584 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:05:55.752644 sshd[4948]: Connection closed by 10.200.16.10 port 42802 Aug 13 00:05:55.753409 sshd-session[4946]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:55.757481 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:42802.service: Deactivated successfully. Aug 13 00:05:55.759550 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:05:55.760486 systemd-logind[1721]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:05:55.761534 systemd-logind[1721]: Removed session 16. Aug 13 00:05:55.867616 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:42814.service - OpenSSH per-connection server daemon (10.200.16.10:42814). Aug 13 00:05:56.494960 sshd[4961]: Accepted publickey for core from 10.200.16.10 port 42814 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:05:56.496458 sshd-session[4961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:56.501052 systemd-logind[1721]: New session 17 of user core. Aug 13 00:05:56.506251 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:05:57.037275 sshd[4963]: Connection closed by 10.200.16.10 port 42814 Aug 13 00:05:57.038196 sshd-session[4961]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:57.041262 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:42814.service: Deactivated successfully. Aug 13 00:05:57.043497 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:05:57.045084 systemd-logind[1721]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:05:57.046481 systemd-logind[1721]: Removed session 17. Aug 13 00:05:57.154407 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:42820.service - OpenSSH per-connection server daemon (10.200.16.10:42820). 
Aug 13 00:05:57.780643 sshd[4973]: Accepted publickey for core from 10.200.16.10 port 42820 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:05:57.782060 sshd-session[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:05:57.786524 systemd-logind[1721]: New session 18 of user core. Aug 13 00:05:57.798241 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:05:58.284613 sshd[4975]: Connection closed by 10.200.16.10 port 42820 Aug 13 00:05:58.285408 sshd-session[4973]: pam_unix(sshd:session): session closed for user core Aug 13 00:05:58.289709 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:42820.service: Deactivated successfully. Aug 13 00:05:58.291838 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:05:58.292729 systemd-logind[1721]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:05:58.293770 systemd-logind[1721]: Removed session 18. Aug 13 00:06:03.400434 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:47290.service - OpenSSH per-connection server daemon (10.200.16.10:47290). Aug 13 00:06:04.027279 sshd[4988]: Accepted publickey for core from 10.200.16.10 port 47290 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:06:04.028780 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:06:04.033153 systemd-logind[1721]: New session 19 of user core. Aug 13 00:06:04.039239 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:06:04.536415 sshd[4990]: Connection closed by 10.200.16.10 port 47290 Aug 13 00:06:04.537221 sshd-session[4988]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:04.540199 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:47290.service: Deactivated successfully. Aug 13 00:06:04.542588 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:06:04.544039 systemd-logind[1721]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:06:04.545372 systemd-logind[1721]: Removed session 19. Aug 13 00:06:09.650402 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:47298.service - OpenSSH per-connection server daemon (10.200.16.10:47298). Aug 13 00:06:10.277247 sshd[5005]: Accepted publickey for core from 10.200.16.10 port 47298 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:06:10.278705 sshd-session[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:06:10.283011 systemd-logind[1721]: New session 20 of user core. Aug 13 00:06:10.290285 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:06:10.780339 sshd[5007]: Connection closed by 10.200.16.10 port 47298 Aug 13 00:06:10.781050 sshd-session[5005]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:10.784179 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:47298.service: Deactivated successfully. Aug 13 00:06:10.786445 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:06:10.787907 systemd-logind[1721]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:06:10.789176 systemd-logind[1721]: Removed session 20. Aug 13 00:06:10.896398 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:40886.service - OpenSSH per-connection server daemon (10.200.16.10:40886). 
Aug 13 00:06:11.520774 sshd[5019]: Accepted publickey for core from 10.200.16.10 port 40886 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:06:11.522276 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:06:11.526507 systemd-logind[1721]: New session 21 of user core. Aug 13 00:06:11.535257 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:06:12.081055 sshd[5021]: Connection closed by 10.200.16.10 port 40886 Aug 13 00:06:12.081902 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:12.085678 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:40886.service: Deactivated successfully. Aug 13 00:06:12.087958 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:06:12.089056 systemd-logind[1721]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:06:12.090231 systemd-logind[1721]: Removed session 21. Aug 13 00:06:12.192324 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:40902.service - OpenSSH per-connection server daemon (10.200.16.10:40902). Aug 13 00:06:12.827553 sshd[5030]: Accepted publickey for core from 10.200.16.10 port 40902 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:06:12.828986 sshd-session[5030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:06:12.833316 systemd-logind[1721]: New session 22 of user core. Aug 13 00:06:12.841242 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:06:13.727551 sshd[5032]: Connection closed by 10.200.16.10 port 40902 Aug 13 00:06:13.728379 sshd-session[5030]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:13.732290 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:40902.service: Deactivated successfully. Aug 13 00:06:13.734613 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:06:13.735515 systemd-logind[1721]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:06:13.736948 systemd-logind[1721]: Removed session 22. Aug 13 00:06:13.847386 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:40910.service - OpenSSH per-connection server daemon (10.200.16.10:40910). Aug 13 00:06:14.472880 sshd[5052]: Accepted publickey for core from 10.200.16.10 port 40910 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk Aug 13 00:06:14.474357 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:06:14.478734 systemd-logind[1721]: New session 23 of user core. Aug 13 00:06:14.488236 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:06:15.088900 sshd[5054]: Connection closed by 10.200.16.10 port 40910 Aug 13 00:06:15.089798 sshd-session[5052]: pam_unix(sshd:session): session closed for user core Aug 13 00:06:15.092884 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:40910.service: Deactivated successfully. Aug 13 00:06:15.095309 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:06:15.096976 systemd-logind[1721]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:06:15.098084 systemd-logind[1721]: Removed session 23. Aug 13 00:06:15.204406 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:40926.service - OpenSSH per-connection server daemon (10.200.16.10:40926). 
Aug 13 00:06:15.832288 sshd[5064]: Accepted publickey for core from 10.200.16.10 port 40926 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:06:15.833712 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:06:15.838050 systemd-logind[1721]: New session 24 of user core.
Aug 13 00:06:15.846240 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:06:16.330708 sshd[5066]: Connection closed by 10.200.16.10 port 40926
Aug 13 00:06:16.331455 sshd-session[5064]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:16.334431 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:40926.service: Deactivated successfully.
Aug 13 00:06:16.336804 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:06:16.338393 systemd-logind[1721]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:06:16.339567 systemd-logind[1721]: Removed session 24.
Aug 13 00:06:21.446464 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:39026.service - OpenSSH per-connection server daemon (10.200.16.10:39026).
Aug 13 00:06:22.073137 sshd[5080]: Accepted publickey for core from 10.200.16.10 port 39026 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:06:22.074627 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:06:22.079219 systemd-logind[1721]: New session 25 of user core.
Aug 13 00:06:22.093338 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:06:22.571937 sshd[5082]: Connection closed by 10.200.16.10 port 39026
Aug 13 00:06:22.572695 sshd-session[5080]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:22.576590 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:39026.service: Deactivated successfully.
Aug 13 00:06:22.578727 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:06:22.579563 systemd-logind[1721]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:06:22.580766 systemd-logind[1721]: Removed session 25.
Aug 13 00:06:27.687403 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:39038.service - OpenSSH per-connection server daemon (10.200.16.10:39038).
Aug 13 00:06:28.314643 sshd[5098]: Accepted publickey for core from 10.200.16.10 port 39038 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:06:28.316158 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:06:28.320491 systemd-logind[1721]: New session 26 of user core.
Aug 13 00:06:28.324284 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 00:06:28.811778 sshd[5100]: Connection closed by 10.200.16.10 port 39038
Aug 13 00:06:28.812530 sshd-session[5098]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:28.816340 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:39038.service: Deactivated successfully.
Aug 13 00:06:28.818610 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:06:28.819480 systemd-logind[1721]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:06:28.820499 systemd-logind[1721]: Removed session 26.
Aug 13 00:06:28.927806 systemd[1]: Started sshd@24-10.200.8.39:22-10.200.16.10:39048.service - OpenSSH per-connection server daemon (10.200.16.10:39048).
Aug 13 00:06:29.553038 sshd[5112]: Accepted publickey for core from 10.200.16.10 port 39048 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:06:29.554552 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:06:29.559173 systemd-logind[1721]: New session 27 of user core.
Aug 13 00:06:29.564262 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 00:06:31.244850 systemd[1]: run-containerd-runc-k8s.io-5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea-runc.8j1wBs.mount: Deactivated successfully.
Aug 13 00:06:31.247579 containerd[1757]: time="2025-08-13T00:06:31.247441670Z" level=info msg="StopContainer for \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\" with timeout 30 (s)"
Aug 13 00:06:31.249835 containerd[1757]: time="2025-08-13T00:06:31.248371581Z" level=info msg="Stop container \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\" with signal terminated"
Aug 13 00:06:31.283602 containerd[1757]: time="2025-08-13T00:06:31.283542117Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 00:06:31.305505 containerd[1757]: time="2025-08-13T00:06:31.305461988Z" level=info msg="StopContainer for \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\" with timeout 2 (s)"
Aug 13 00:06:31.306301 containerd[1757]: time="2025-08-13T00:06:31.306269598Z" level=info msg="Stop container \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\" with signal terminated"
Aug 13 00:06:31.329195 systemd-networkd[1466]: lxc_health: Link DOWN
Aug 13 00:06:31.329205 systemd-networkd[1466]: lxc_health: Lost carrier
Aug 13 00:06:31.352971 systemd[1]: cri-containerd-5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea.scope: Deactivated successfully.
Aug 13 00:06:31.354239 systemd[1]: cri-containerd-5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea.scope: Consumed 7.505s CPU time, 124.8M memory peak, 136K read from disk, 13.3M written to disk.
Aug 13 00:06:31.366742 systemd[1]: cri-containerd-dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc.scope: Deactivated successfully.
Aug 13 00:06:31.391829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea-rootfs.mount: Deactivated successfully.
Aug 13 00:06:31.400594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc-rootfs.mount: Deactivated successfully.
Aug 13 00:06:31.480561 containerd[1757]: time="2025-08-13T00:06:31.480074349Z" level=info msg="shim disconnected" id=5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea namespace=k8s.io
Aug 13 00:06:31.480561 containerd[1757]: time="2025-08-13T00:06:31.480202151Z" level=warning msg="cleaning up after shim disconnected" id=5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea namespace=k8s.io
Aug 13 00:06:31.480561 containerd[1757]: time="2025-08-13T00:06:31.480215151Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:06:31.480561 containerd[1757]: time="2025-08-13T00:06:31.480332652Z" level=info msg="shim disconnected" id=dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc namespace=k8s.io
Aug 13 00:06:31.480561 containerd[1757]: time="2025-08-13T00:06:31.480379553Z" level=warning msg="cleaning up after shim disconnected" id=dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc namespace=k8s.io
Aug 13 00:06:31.480561 containerd[1757]: time="2025-08-13T00:06:31.480389253Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:06:31.498214 containerd[1757]: time="2025-08-13T00:06:31.497338963Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:06:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 13 00:06:31.506831 containerd[1757]: time="2025-08-13T00:06:31.506797080Z" level=info msg="StopContainer for \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\" returns successfully"
Aug 13 00:06:31.507522 containerd[1757]: time="2025-08-13T00:06:31.507484788Z" level=info msg="StopPodSandbox for \"3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7\""
Aug 13 00:06:31.507650 containerd[1757]: time="2025-08-13T00:06:31.507525089Z" level=info msg="Container to stop \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:06:31.508150 containerd[1757]: time="2025-08-13T00:06:31.507995795Z" level=info msg="StopContainer for \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\" returns successfully"
Aug 13 00:06:31.508765 containerd[1757]: time="2025-08-13T00:06:31.508540701Z" level=info msg="StopPodSandbox for \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\""
Aug 13 00:06:31.508765 containerd[1757]: time="2025-08-13T00:06:31.508581902Z" level=info msg="Container to stop \"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:06:31.508765 containerd[1757]: time="2025-08-13T00:06:31.508619202Z" level=info msg="Container to stop \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:06:31.508765 containerd[1757]: time="2025-08-13T00:06:31.508631202Z" level=info msg="Container to stop \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:06:31.508765 containerd[1757]: time="2025-08-13T00:06:31.508645503Z" level=info msg="Container to stop \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:06:31.508765 containerd[1757]: time="2025-08-13T00:06:31.508659103Z" level=info msg="Container to stop \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 00:06:31.514118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7-shm.mount: Deactivated successfully.
Aug 13 00:06:31.514292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f-shm.mount: Deactivated successfully.
Aug 13 00:06:31.518360 systemd[1]: cri-containerd-55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f.scope: Deactivated successfully.
Aug 13 00:06:31.528811 systemd[1]: cri-containerd-3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7.scope: Deactivated successfully.
Aug 13 00:06:31.570407 containerd[1757]: time="2025-08-13T00:06:31.570182864Z" level=info msg="shim disconnected" id=55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f namespace=k8s.io
Aug 13 00:06:31.570407 containerd[1757]: time="2025-08-13T00:06:31.570245365Z" level=warning msg="cleaning up after shim disconnected" id=55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f namespace=k8s.io
Aug 13 00:06:31.570407 containerd[1757]: time="2025-08-13T00:06:31.570258765Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:06:31.572448 containerd[1757]: time="2025-08-13T00:06:31.572205689Z" level=info msg="shim disconnected" id=3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7 namespace=k8s.io
Aug 13 00:06:31.572448 containerd[1757]: time="2025-08-13T00:06:31.572252790Z" level=warning msg="cleaning up after shim disconnected" id=3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7 namespace=k8s.io
Aug 13 00:06:31.572448 containerd[1757]: time="2025-08-13T00:06:31.572263890Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:06:31.594679 containerd[1757]: time="2025-08-13T00:06:31.594513265Z" level=info msg="TearDown network for sandbox \"3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7\" successfully"
Aug 13 00:06:31.594679 containerd[1757]: time="2025-08-13T00:06:31.594548166Z" level=info msg="StopPodSandbox for \"3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7\" returns successfully"
Aug 13 00:06:31.596674 containerd[1757]: time="2025-08-13T00:06:31.596569091Z" level=info msg="TearDown network for sandbox \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" successfully"
Aug 13 00:06:31.596674 containerd[1757]: time="2025-08-13T00:06:31.596597991Z" level=info msg="StopPodSandbox for \"55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f\" returns successfully"
Aug 13 00:06:31.644708 kubelet[3458]: I0813 00:06:31.644664 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-xtables-lock\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.644708 kubelet[3458]: I0813 00:06:31.644704 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-host-proc-sys-kernel\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645312 kubelet[3458]: I0813 00:06:31.644724 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-bpf-maps\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645312 kubelet[3458]: I0813 00:06:31.644752 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8acdc579-7fac-4ced-8ae9-d5d94e65de08-hubble-tls\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645312 kubelet[3458]: I0813 00:06:31.644770 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-cgroup\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645312 kubelet[3458]: I0813 00:06:31.644787 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-etc-cni-netd\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645312 kubelet[3458]: I0813 00:06:31.644805 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-run\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645312 kubelet[3458]: I0813 00:06:31.644833 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb-cilium-config-path\") pod \"3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb\" (UID: \"3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb\") "
Aug 13 00:06:31.645569 kubelet[3458]: I0813 00:06:31.644856 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-config-path\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645569 kubelet[3458]: I0813 00:06:31.644878 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cni-path\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645569 kubelet[3458]: I0813 00:06:31.644900 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj2kx\" (UniqueName: \"kubernetes.io/projected/8acdc579-7fac-4ced-8ae9-d5d94e65de08-kube-api-access-sj2kx\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645569 kubelet[3458]: I0813 00:06:31.644919 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-hostproc\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645569 kubelet[3458]: I0813 00:06:31.644942 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-host-proc-sys-net\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645569 kubelet[3458]: I0813 00:06:31.644962 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-lib-modules\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645808 kubelet[3458]: I0813 00:06:31.644987 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8acdc579-7fac-4ced-8ae9-d5d94e65de08-clustermesh-secrets\") pod \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\" (UID: \"8acdc579-7fac-4ced-8ae9-d5d94e65de08\") "
Aug 13 00:06:31.645808 kubelet[3458]: I0813 00:06:31.645011 3458 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcg4l\" (UniqueName: \"kubernetes.io/projected/3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb-kube-api-access-tcg4l\") pod \"3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb\" (UID: \"3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb\") "
Aug 13 00:06:31.650033 kubelet[3458]: I0813 00:06:31.648841 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb" (UID: "3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:06:31.650033 kubelet[3458]: I0813 00:06:31.648925 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:06:31.650033 kubelet[3458]: I0813 00:06:31.648954 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:06:31.650033 kubelet[3458]: I0813 00:06:31.648977 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:06:31.650033 kubelet[3458]: I0813 00:06:31.649759 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cni-path" (OuterVolumeSpecName: "cni-path") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:06:31.650363 kubelet[3458]: I0813 00:06:31.649809 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:06:31.650363 kubelet[3458]: I0813 00:06:31.649835 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:06:31.650363 kubelet[3458]: I0813 00:06:31.649856 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:06:31.650363 kubelet[3458]: I0813 00:06:31.649923 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb-kube-api-access-tcg4l" (OuterVolumeSpecName: "kube-api-access-tcg4l") pod "3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb" (UID: "3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb"). InnerVolumeSpecName "kube-api-access-tcg4l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:06:31.650363 kubelet[3458]: I0813 00:06:31.649960 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:06:31.650573 kubelet[3458]: I0813 00:06:31.649980 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-hostproc" (OuterVolumeSpecName: "hostproc") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:06:31.650573 kubelet[3458]: I0813 00:06:31.650003 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Aug 13 00:06:31.653875 kubelet[3458]: I0813 00:06:31.653814 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8acdc579-7fac-4ced-8ae9-d5d94e65de08-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:06:31.654694 kubelet[3458]: I0813 00:06:31.654661 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Aug 13 00:06:31.655906 kubelet[3458]: I0813 00:06:31.655880 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8acdc579-7fac-4ced-8ae9-d5d94e65de08-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Aug 13 00:06:31.656029 kubelet[3458]: I0813 00:06:31.655999 3458 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8acdc579-7fac-4ced-8ae9-d5d94e65de08-kube-api-access-sj2kx" (OuterVolumeSpecName: "kube-api-access-sj2kx") pod "8acdc579-7fac-4ced-8ae9-d5d94e65de08" (UID: "8acdc579-7fac-4ced-8ae9-d5d94e65de08"). InnerVolumeSpecName "kube-api-access-sj2kx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Aug 13 00:06:31.745238 kubelet[3458]: I0813 00:06:31.745199 3458 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8acdc579-7fac-4ced-8ae9-d5d94e65de08-hubble-tls\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745238 kubelet[3458]: I0813 00:06:31.745231 3458 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-cgroup\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745238 kubelet[3458]: I0813 00:06:31.745244 3458 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-etc-cni-netd\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745238 kubelet[3458]: I0813 00:06:31.745257 3458 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-run\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745514 kubelet[3458]: I0813 00:06:31.745268 3458 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb-cilium-config-path\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745514 kubelet[3458]: I0813 00:06:31.745280 3458 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cilium-config-path\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745514 kubelet[3458]: I0813 00:06:31.745291 3458 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-cni-path\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745514 kubelet[3458]: I0813 00:06:31.745301 3458 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sj2kx\" (UniqueName: \"kubernetes.io/projected/8acdc579-7fac-4ced-8ae9-d5d94e65de08-kube-api-access-sj2kx\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745514 kubelet[3458]: I0813 00:06:31.745312 3458 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-hostproc\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745514 kubelet[3458]: I0813 00:06:31.745322 3458 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-host-proc-sys-net\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745514 kubelet[3458]: I0813 00:06:31.745333 3458 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-lib-modules\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745514 kubelet[3458]: I0813 00:06:31.745343 3458 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8acdc579-7fac-4ced-8ae9-d5d94e65de08-clustermesh-secrets\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745756 kubelet[3458]: I0813 00:06:31.745357 3458 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tcg4l\" (UniqueName: \"kubernetes.io/projected/3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb-kube-api-access-tcg4l\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745756 kubelet[3458]: I0813 00:06:31.745368 3458 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-xtables-lock\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745756 kubelet[3458]: I0813 00:06:31.745379 3458 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-host-proc-sys-kernel\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.745756 kubelet[3458]: I0813 00:06:31.745390 3458 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8acdc579-7fac-4ced-8ae9-d5d94e65de08-bpf-maps\") on node \"ci-4230.2.2-a-03132a7374\" DevicePath \"\""
Aug 13 00:06:31.793296 kubelet[3458]: I0813 00:06:31.793181 3458 scope.go:117] "RemoveContainer" containerID="5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea"
Aug 13 00:06:31.796115 containerd[1757]: time="2025-08-13T00:06:31.795719955Z" level=info msg="RemoveContainer for \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\""
Aug 13 00:06:31.800507 systemd[1]: Removed slice kubepods-burstable-pod8acdc579_7fac_4ced_8ae9_d5d94e65de08.slice - libcontainer container kubepods-burstable-pod8acdc579_7fac_4ced_8ae9_d5d94e65de08.slice.
Aug 13 00:06:31.800656 systemd[1]: kubepods-burstable-pod8acdc579_7fac_4ced_8ae9_d5d94e65de08.slice: Consumed 7.595s CPU time, 125.2M memory peak, 136K read from disk, 13.3M written to disk.
Aug 13 00:06:31.807624 systemd[1]: Removed slice kubepods-besteffort-pod3785c4fd_c45b_49dc_ae6f_3226f2ec9bdb.slice - libcontainer container kubepods-besteffort-pod3785c4fd_c45b_49dc_ae6f_3226f2ec9bdb.slice.
Aug 13 00:06:31.813298 containerd[1757]: time="2025-08-13T00:06:31.813261673Z" level=info msg="RemoveContainer for \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\" returns successfully"
Aug 13 00:06:31.813606 kubelet[3458]: I0813 00:06:31.813583 3458 scope.go:117] "RemoveContainer" containerID="98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1"
Aug 13 00:06:31.814789 containerd[1757]: time="2025-08-13T00:06:31.814524288Z" level=info msg="RemoveContainer for \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\""
Aug 13 00:06:31.822878 containerd[1757]: time="2025-08-13T00:06:31.822836491Z" level=info msg="RemoveContainer for \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\" returns successfully"
Aug 13 00:06:31.823104 kubelet[3458]: I0813 00:06:31.823045 3458 scope.go:117] "RemoveContainer" containerID="51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743"
Aug 13 00:06:31.824768 containerd[1757]: time="2025-08-13T00:06:31.824506212Z" level=info msg="RemoveContainer for \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\""
Aug 13 00:06:31.831424 containerd[1757]: time="2025-08-13T00:06:31.831391397Z" level=info msg="RemoveContainer for \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\" returns successfully"
Aug 13 00:06:31.831859 kubelet[3458]: I0813 00:06:31.831733 3458 scope.go:117] "RemoveContainer" containerID="0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f"
Aug 13 00:06:31.833399 containerd[1757]: time="2025-08-13T00:06:31.833363921Z" level=info msg="RemoveContainer for \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\""
Aug 13 00:06:31.841327 containerd[1757]: time="2025-08-13T00:06:31.841297020Z" level=info msg="RemoveContainer for \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\" returns successfully"
Aug 13 00:06:31.841536 kubelet[3458]: I0813 00:06:31.841475 3458 scope.go:117] "RemoveContainer" containerID="684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa"
Aug 13 00:06:31.842551 containerd[1757]: time="2025-08-13T00:06:31.842511635Z" level=info msg="RemoveContainer for \"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\""
Aug 13 00:06:31.851151 containerd[1757]: time="2025-08-13T00:06:31.851118141Z" level=info msg="RemoveContainer for \"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\" returns successfully"
Aug 13 00:06:31.851370 kubelet[3458]: I0813 00:06:31.851285 3458 scope.go:117] "RemoveContainer" containerID="5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea"
Aug 13 00:06:31.851507 containerd[1757]: time="2025-08-13T00:06:31.851475046Z" level=error msg="ContainerStatus for \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\": not found"
Aug 13 00:06:31.851670 kubelet[3458]: E0813 00:06:31.851634 3458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\": not found" containerID="5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea"
Aug 13 00:06:31.851733 kubelet[3458]: I0813 00:06:31.851668 3458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea"} err="failed to get container status \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\": rpc error: code = NotFound desc = an error occurred when try to find container \"5e223b02d45ef2fb9adcdd1ddb44def7e9e018bdc342ce748b908edbd190ccea\": not found"
Aug 13 00:06:31.851733 kubelet[3458]: I0813 00:06:31.851714 3458 scope.go:117] "RemoveContainer" containerID="98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1"
Aug 13 00:06:31.851976 containerd[1757]: time="2025-08-13T00:06:31.851902451Z" level=error msg="ContainerStatus for \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\": not found"
Aug 13 00:06:31.852059 kubelet[3458]: E0813 00:06:31.852040 3458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\": not found" containerID="98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1"
Aug 13 00:06:31.852137 kubelet[3458]: I0813 00:06:31.852068 3458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1"} err="failed to get container status \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"98b2ab098d06f904a3e6aa178882896b0737c424a1ff473dfc4cb7aef708a8f1\": not found"
Aug 13 00:06:31.852137 kubelet[3458]: I0813 00:06:31.852103 3458 scope.go:117] "RemoveContainer" containerID="51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743"
Aug 13 00:06:31.852347 containerd[1757]: time="2025-08-13T00:06:31.852261755Z" level=error msg="ContainerStatus for \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\": not found"
Aug 13 00:06:31.852415 kubelet[3458]: E0813 00:06:31.852382 3458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\": not found" containerID="51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743"
Aug 13 00:06:31.852415 kubelet[3458]: I0813 00:06:31.852406 3458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743"} err="failed to get container status \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\": rpc error: code = NotFound desc = an error occurred when try to find container \"51f710ce70818b0c75e9dd4bee4716596dfa33059cb69fa198a433b7cc0a8743\": not found"
Aug 13 00:06:31.852503 kubelet[3458]: I0813 00:06:31.852425 3458 scope.go:117] "RemoveContainer" containerID="0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f"
Aug 13 00:06:31.852609 containerd[1757]: time="2025-08-13T00:06:31.852577459Z" level=error msg="ContainerStatus for \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\": not found"
Aug 13 00:06:31.852742 kubelet[3458]: E0813 00:06:31.852709 3458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\": not found" containerID="0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f"
Aug 13 00:06:31.852800 kubelet[3458]: I0813 00:06:31.852755 3458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f"} err="failed to get container status \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0dbefbcc945379e5a31534626e6e24edbb0ab4dcd62d4a173946a08280e5892f\": not found"
Aug 13 00:06:31.852800 kubelet[3458]: I0813 00:06:31.852775 3458 scope.go:117] "RemoveContainer" containerID="684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa"
Aug 13 00:06:31.853015 containerd[1757]: time="2025-08-13T00:06:31.852950164Z" level=error msg="ContainerStatus for \"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\": not found"
Aug 13 00:06:31.853165 kubelet[3458]: E0813 00:06:31.853131 3458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\": not found" containerID="684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa"
Aug 13 00:06:31.853238 kubelet[3458]: I0813 00:06:31.853169 3458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa"} err="failed to get container status \"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"684ed1fa27bef3ab97f9e0d9fbf67ca57098662975d44db4d3a6319fa4cbeeaa\": not found"
Aug 13 00:06:31.853238 kubelet[3458]: I0813 00:06:31.853188 3458 scope.go:117] "RemoveContainer" containerID="dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc"
Aug 13 00:06:31.854581 containerd[1757]: time="2025-08-13T00:06:31.854197279Z" level=info msg="RemoveContainer for \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\""
Aug 13 00:06:31.863724 containerd[1757]: time="2025-08-13T00:06:31.863688897Z" level=info msg="RemoveContainer for \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\" returns successfully"
Aug 13 00:06:31.863957 kubelet[3458]: I0813 00:06:31.863851 3458 scope.go:117] "RemoveContainer" containerID="dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc"
Aug 13 00:06:31.864140 containerd[1757]: time="2025-08-13T00:06:31.864057101Z" level=error msg="ContainerStatus for \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\": not found"
Aug 13 00:06:31.864231 kubelet[3458]: E0813 00:06:31.864209 3458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\": not found" containerID="dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc"
Aug 13 00:06:31.864291 kubelet[3458]: I0813 00:06:31.864234 3458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc"} err="failed to get container status \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd0b65712953b70d29e95de54224763a0e30f2447c27505119b8a9d53a9946dc\": not found"
Aug 13 00:06:32.227624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3951e7bf4fa56a6cb56fbfa93169d76a958920e41ac9e5508e7ddb01b9fdcab7-rootfs.mount: Deactivated successfully.
Aug 13 00:06:32.227750 systemd[1]: var-lib-kubelet-pods-3785c4fd\x2dc45b\x2d49dc\x2dae6f\x2d3226f2ec9bdb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtcg4l.mount: Deactivated successfully.
Aug 13 00:06:32.227851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55f8fb29d7a2b03a97eea312899fc38ac3291010d8b8fb8801af97a42b89d68f-rootfs.mount: Deactivated successfully.
Aug 13 00:06:32.227930 systemd[1]: var-lib-kubelet-pods-8acdc579\x2d7fac\x2d4ced\x2d8ae9\x2dd5d94e65de08-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsj2kx.mount: Deactivated successfully.
Aug 13 00:06:32.228015 systemd[1]: var-lib-kubelet-pods-8acdc579\x2d7fac\x2d4ced\x2d8ae9\x2dd5d94e65de08-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Aug 13 00:06:32.228115 systemd[1]: var-lib-kubelet-pods-8acdc579\x2d7fac\x2d4ced\x2d8ae9\x2dd5d94e65de08-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Aug 13 00:06:32.350543 kubelet[3458]: I0813 00:06:32.350492 3458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb" path="/var/lib/kubelet/pods/3785c4fd-c45b-49dc-ae6f-3226f2ec9bdb/volumes"
Aug 13 00:06:32.350960 kubelet[3458]: I0813 00:06:32.350928 3458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8acdc579-7fac-4ced-8ae9-d5d94e65de08" path="/var/lib/kubelet/pods/8acdc579-7fac-4ced-8ae9-d5d94e65de08/volumes"
Aug 13 00:06:33.258780 sshd[5114]: Connection closed by 10.200.16.10 port 39048
Aug 13 00:06:33.259666 sshd-session[5112]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:33.263852 systemd[1]: sshd@24-10.200.8.39:22-10.200.16.10:39048.service: Deactivated successfully.
Aug 13 00:06:33.265904 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:06:33.267453 systemd-logind[1721]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:06:33.268482 systemd-logind[1721]: Removed session 27.
Aug 13 00:06:33.374430 systemd[1]: Started sshd@25-10.200.8.39:22-10.200.16.10:41856.service - OpenSSH per-connection server daemon (10.200.16.10:41856).
Aug 13 00:06:33.452420 kubelet[3458]: E0813 00:06:33.452347 3458 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:06:34.000053 sshd[5276]: Accepted publickey for core from 10.200.16.10 port 41856 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:06:34.001548 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:06:34.006612 systemd-logind[1721]: New session 28 of user core.
Aug 13 00:06:34.012237 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 00:06:34.881136 systemd[1]: Created slice kubepods-burstable-pod2f542fb5_43e9_4aba_ac1a_e0b9a70394b5.slice - libcontainer container kubepods-burstable-pod2f542fb5_43e9_4aba_ac1a_e0b9a70394b5.slice.
Aug 13 00:06:34.890684 kubelet[3458]: E0813 00:06:34.890163 3458 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230.2.2-a-03132a7374\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-a-03132a7374' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret"
Aug 13 00:06:34.890684 kubelet[3458]: E0813 00:06:34.890254 3458 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230.2.2-a-03132a7374\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-a-03132a7374' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-ipsec-keys\"" type="*v1.Secret"
Aug 13 00:06:34.890684 kubelet[3458]: E0813 00:06:34.890316 3458 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230.2.2-a-03132a7374\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-a-03132a7374' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap"
Aug 13 00:06:34.890684 kubelet[3458]: I0813 00:06:34.890421 3458 status_manager.go:895] "Failed to get status for pod" podUID="2f542fb5-43e9-4aba-ac1a-e0b9a70394b5" pod="kube-system/cilium-6jnmp" err="pods \"cilium-6jnmp\" is forbidden: User \"system:node:ci-4230.2.2-a-03132a7374\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230.2.2-a-03132a7374' and this object"
Aug 13 00:06:34.936668 sshd[5278]: Connection closed by 10.200.16.10 port 41856
Aug 13 00:06:34.937343 sshd-session[5276]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:34.940392 systemd[1]: sshd@25-10.200.8.39:22-10.200.16.10:41856.service: Deactivated successfully.
Aug 13 00:06:34.942850 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:06:34.944654 systemd-logind[1721]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:06:34.945813 systemd-logind[1721]: Removed session 28.
Aug 13 00:06:35.055417 systemd[1]: Started sshd@26-10.200.8.39:22-10.200.16.10:41866.service - OpenSSH per-connection server daemon (10.200.16.10:41866).
Aug 13 00:06:35.066122 kubelet[3458]: I0813 00:06:35.065332 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-cni-path\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066122 kubelet[3458]: I0813 00:06:35.065380 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-cilium-config-path\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066122 kubelet[3458]: I0813 00:06:35.065402 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffpjt\" (UniqueName: \"kubernetes.io/projected/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-kube-api-access-ffpjt\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066122 kubelet[3458]: I0813 00:06:35.065423 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-etc-cni-netd\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066122 kubelet[3458]: I0813 00:06:35.065446 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-clustermesh-secrets\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066351 kubelet[3458]: I0813 00:06:35.065465 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-cilium-ipsec-secrets\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066351 kubelet[3458]: I0813 00:06:35.065489 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-xtables-lock\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066351 kubelet[3458]: I0813 00:06:35.065509 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-cilium-cgroup\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066351 kubelet[3458]: I0813 00:06:35.065531 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-hubble-tls\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066351 kubelet[3458]: I0813 00:06:35.065551 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-cilium-run\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066351 kubelet[3458]: I0813 00:06:35.065573 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-bpf-maps\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066488 kubelet[3458]: I0813 00:06:35.065592 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-lib-modules\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066488 kubelet[3458]: I0813 00:06:35.065612 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-hostproc\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066488 kubelet[3458]: I0813 00:06:35.065637 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-host-proc-sys-kernel\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.066488 kubelet[3458]: I0813 00:06:35.065659 3458 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-host-proc-sys-net\") pod \"cilium-6jnmp\" (UID: \"2f542fb5-43e9-4aba-ac1a-e0b9a70394b5\") " pod="kube-system/cilium-6jnmp"
Aug 13 00:06:35.683257 sshd[5288]: Accepted publickey for core from 10.200.16.10 port 41866 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:06:35.684722 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:06:35.689124 systemd-logind[1721]: New session 29 of user core.
Aug 13 00:06:35.698234 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 00:06:36.125073 sshd[5292]: Connection closed by 10.200.16.10 port 41866
Aug 13 00:06:36.125791 sshd-session[5288]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:36.128839 systemd[1]: sshd@26-10.200.8.39:22-10.200.16.10:41866.service: Deactivated successfully.
Aug 13 00:06:36.131276 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 00:06:36.133164 systemd-logind[1721]: Session 29 logged out. Waiting for processes to exit.
Aug 13 00:06:36.134546 systemd-logind[1721]: Removed session 29.
Aug 13 00:06:36.167075 kubelet[3458]: E0813 00:06:36.167023 3458 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:06:36.167928 kubelet[3458]: E0813 00:06:36.167178 3458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-cilium-config-path podName:2f542fb5-43e9-4aba-ac1a-e0b9a70394b5 nodeName:}" failed. No retries permitted until 2025-08-13 00:06:36.667151752 +0000 UTC m=+208.429463481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-cilium-config-path") pod "cilium-6jnmp" (UID: "2f542fb5-43e9-4aba-ac1a-e0b9a70394b5") : failed to sync configmap cache: timed out waiting for the condition
Aug 13 00:06:36.167928 kubelet[3458]: E0813 00:06:36.167179 3458 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Aug 13 00:06:36.167928 kubelet[3458]: E0813 00:06:36.167221 3458 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-clustermesh-secrets podName:2f542fb5-43e9-4aba-ac1a-e0b9a70394b5 nodeName:}" failed. No retries permitted until 2025-08-13 00:06:36.667211553 +0000 UTC m=+208.429523182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/2f542fb5-43e9-4aba-ac1a-e0b9a70394b5-clustermesh-secrets") pod "cilium-6jnmp" (UID: "2f542fb5-43e9-4aba-ac1a-e0b9a70394b5") : failed to sync secret cache: timed out waiting for the condition
Aug 13 00:06:36.242435 systemd[1]: Started sshd@27-10.200.8.39:22-10.200.16.10:41878.service - OpenSSH per-connection server daemon (10.200.16.10:41878).
Aug 13 00:06:36.686555 containerd[1757]: time="2025-08-13T00:06:36.686502831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6jnmp,Uid:2f542fb5-43e9-4aba-ac1a-e0b9a70394b5,Namespace:kube-system,Attempt:0,}"
Aug 13 00:06:36.738778 containerd[1757]: time="2025-08-13T00:06:36.738300357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:06:36.738778 containerd[1757]: time="2025-08-13T00:06:36.738369958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:06:36.738778 containerd[1757]: time="2025-08-13T00:06:36.738386458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:06:36.738778 containerd[1757]: time="2025-08-13T00:06:36.738532760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:06:36.762961 systemd[1]: run-containerd-runc-k8s.io-1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3-runc.y7UivE.mount: Deactivated successfully.
Aug 13 00:06:36.772285 systemd[1]: Started cri-containerd-1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3.scope - libcontainer container 1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3.
Aug 13 00:06:36.794831 containerd[1757]: time="2025-08-13T00:06:36.794789640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6jnmp,Uid:2f542fb5-43e9-4aba-ac1a-e0b9a70394b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\""
Aug 13 00:06:36.811122 containerd[1757]: time="2025-08-13T00:06:36.811046437Z" level=info msg="CreateContainer within sandbox \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 13 00:06:36.847175 containerd[1757]: time="2025-08-13T00:06:36.847128973Z" level=info msg="CreateContainer within sandbox \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e66b60b1d222c3ff587ec95d4dadc17ca11900f503043c1a254c2b805078ccb0\""
Aug 13 00:06:36.847747 containerd[1757]: time="2025-08-13T00:06:36.847718480Z" level=info msg="StartContainer for \"e66b60b1d222c3ff587ec95d4dadc17ca11900f503043c1a254c2b805078ccb0\""
Aug 13 00:06:36.875816 sshd[5300]: Accepted publickey for core from 10.200.16.10 port 41878 ssh2: RSA SHA256:kRoPe1+JBYyOI9tKM+bCs+uwHuZQVr4SuVZUnAhtmfk
Aug 13 00:06:36.878054 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:06:36.878781 systemd[1]: Started cri-containerd-e66b60b1d222c3ff587ec95d4dadc17ca11900f503043c1a254c2b805078ccb0.scope - libcontainer container e66b60b1d222c3ff587ec95d4dadc17ca11900f503043c1a254c2b805078ccb0.
Aug 13 00:06:36.888942 systemd-logind[1721]: New session 30 of user core.
Aug 13 00:06:36.896364 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 13 00:06:36.919362 containerd[1757]: time="2025-08-13T00:06:36.919310646Z" level=info msg="StartContainer for \"e66b60b1d222c3ff587ec95d4dadc17ca11900f503043c1a254c2b805078ccb0\" returns successfully"
Aug 13 00:06:36.925735 systemd[1]: cri-containerd-e66b60b1d222c3ff587ec95d4dadc17ca11900f503043c1a254c2b805078ccb0.scope: Deactivated successfully.
Aug 13 00:06:37.015982 containerd[1757]: time="2025-08-13T00:06:37.015768812Z" level=info msg="shim disconnected" id=e66b60b1d222c3ff587ec95d4dadc17ca11900f503043c1a254c2b805078ccb0 namespace=k8s.io
Aug 13 00:06:37.015982 containerd[1757]: time="2025-08-13T00:06:37.015840613Z" level=warning msg="cleaning up after shim disconnected" id=e66b60b1d222c3ff587ec95d4dadc17ca11900f503043c1a254c2b805078ccb0 namespace=k8s.io
Aug 13 00:06:37.015982 containerd[1757]: time="2025-08-13T00:06:37.015854413Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:06:37.827181 containerd[1757]: time="2025-08-13T00:06:37.826832218Z" level=info msg="CreateContainer within sandbox \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 13 00:06:37.880705 containerd[1757]: time="2025-08-13T00:06:37.880599368Z" level=info msg="CreateContainer within sandbox \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7c4f1c78fe1cdf8ade28e850ed89b50648595ca0fd95ff59a1bd60fe92e399ab\""
Aug 13 00:06:37.882150 containerd[1757]: time="2025-08-13T00:06:37.881263576Z" level=info msg="StartContainer for \"7c4f1c78fe1cdf8ade28e850ed89b50648595ca0fd95ff59a1bd60fe92e399ab\""
Aug 13 00:06:37.927518 systemd[1]: Started cri-containerd-7c4f1c78fe1cdf8ade28e850ed89b50648595ca0fd95ff59a1bd60fe92e399ab.scope - libcontainer container 7c4f1c78fe1cdf8ade28e850ed89b50648595ca0fd95ff59a1bd60fe92e399ab.
Aug 13 00:06:37.958243 containerd[1757]: time="2025-08-13T00:06:37.958161505Z" level=info msg="StartContainer for \"7c4f1c78fe1cdf8ade28e850ed89b50648595ca0fd95ff59a1bd60fe92e399ab\" returns successfully"
Aug 13 00:06:37.962295 systemd[1]: cri-containerd-7c4f1c78fe1cdf8ade28e850ed89b50648595ca0fd95ff59a1bd60fe92e399ab.scope: Deactivated successfully.
Aug 13 00:06:37.983695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c4f1c78fe1cdf8ade28e850ed89b50648595ca0fd95ff59a1bd60fe92e399ab-rootfs.mount: Deactivated successfully.
Aug 13 00:06:37.994558 containerd[1757]: time="2025-08-13T00:06:37.994491245Z" level=info msg="shim disconnected" id=7c4f1c78fe1cdf8ade28e850ed89b50648595ca0fd95ff59a1bd60fe92e399ab namespace=k8s.io
Aug 13 00:06:37.994558 containerd[1757]: time="2025-08-13T00:06:37.994555045Z" level=warning msg="cleaning up after shim disconnected" id=7c4f1c78fe1cdf8ade28e850ed89b50648595ca0fd95ff59a1bd60fe92e399ab namespace=k8s.io
Aug 13 00:06:37.994771 containerd[1757]: time="2025-08-13T00:06:37.994566146Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:06:38.453857 kubelet[3458]: E0813 00:06:38.453799 3458 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Aug 13 00:06:38.828188 containerd[1757]: time="2025-08-13T00:06:38.828063723Z" level=info msg="CreateContainer within sandbox \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 13 00:06:38.876628 containerd[1757]: time="2025-08-13T00:06:38.876465908Z" level=info msg="CreateContainer within sandbox \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c856ce32f0f861a7760231af767d99bd517df0238cddb22f34be8e4129cc9466\""
Aug 13 00:06:38.877367 containerd[1757]: time="2025-08-13T00:06:38.877332418Z" level=info msg="StartContainer for \"c856ce32f0f861a7760231af767d99bd517df0238cddb22f34be8e4129cc9466\""
Aug 13 00:06:38.936114 systemd[1]: Started cri-containerd-c856ce32f0f861a7760231af767d99bd517df0238cddb22f34be8e4129cc9466.scope - libcontainer container c856ce32f0f861a7760231af767d99bd517df0238cddb22f34be8e4129cc9466.
Aug 13 00:06:38.972871 systemd[1]: cri-containerd-c856ce32f0f861a7760231af767d99bd517df0238cddb22f34be8e4129cc9466.scope: Deactivated successfully.
Aug 13 00:06:38.974192 containerd[1757]: time="2025-08-13T00:06:38.974140289Z" level=info msg="StartContainer for \"c856ce32f0f861a7760231af767d99bd517df0238cddb22f34be8e4129cc9466\" returns successfully"
Aug 13 00:06:39.003437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c856ce32f0f861a7760231af767d99bd517df0238cddb22f34be8e4129cc9466-rootfs.mount: Deactivated successfully.
Aug 13 00:06:39.017175 containerd[1757]: time="2025-08-13T00:06:39.015570990Z" level=info msg="shim disconnected" id=c856ce32f0f861a7760231af767d99bd517df0238cddb22f34be8e4129cc9466 namespace=k8s.io
Aug 13 00:06:39.017175 containerd[1757]: time="2025-08-13T00:06:39.015643791Z" level=warning msg="cleaning up after shim disconnected" id=c856ce32f0f861a7760231af767d99bd517df0238cddb22f34be8e4129cc9466 namespace=k8s.io
Aug 13 00:06:39.017175 containerd[1757]: time="2025-08-13T00:06:39.015654291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:06:39.829738 containerd[1757]: time="2025-08-13T00:06:39.829616232Z" level=info msg="CreateContainer within sandbox \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 13 00:06:39.860884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3809415752.mount: Deactivated successfully.
Aug 13 00:06:39.872982 containerd[1757]: time="2025-08-13T00:06:39.872924255Z" level=info msg="CreateContainer within sandbox \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ea30c2f0ba67c93f2457c498feb72c84d2c243583f1e83dfd43cff9b2ec50b8f\""
Aug 13 00:06:39.874872 containerd[1757]: time="2025-08-13T00:06:39.874189370Z" level=info msg="StartContainer for \"ea30c2f0ba67c93f2457c498feb72c84d2c243583f1e83dfd43cff9b2ec50b8f\""
Aug 13 00:06:39.925307 systemd[1]: Started cri-containerd-ea30c2f0ba67c93f2457c498feb72c84d2c243583f1e83dfd43cff9b2ec50b8f.scope - libcontainer container ea30c2f0ba67c93f2457c498feb72c84d2c243583f1e83dfd43cff9b2ec50b8f.
Aug 13 00:06:40.004815 systemd[1]: cri-containerd-ea30c2f0ba67c93f2457c498feb72c84d2c243583f1e83dfd43cff9b2ec50b8f.scope: Deactivated successfully.
Aug 13 00:06:40.013317 containerd[1757]: time="2025-08-13T00:06:40.012826747Z" level=info msg="StartContainer for \"ea30c2f0ba67c93f2457c498feb72c84d2c243583f1e83dfd43cff9b2ec50b8f\" returns successfully"
Aug 13 00:06:40.031720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea30c2f0ba67c93f2457c498feb72c84d2c243583f1e83dfd43cff9b2ec50b8f-rootfs.mount: Deactivated successfully.
Aug 13 00:06:40.045846 containerd[1757]: time="2025-08-13T00:06:40.045783345Z" level=info msg="shim disconnected" id=ea30c2f0ba67c93f2457c498feb72c84d2c243583f1e83dfd43cff9b2ec50b8f namespace=k8s.io
Aug 13 00:06:40.046031 containerd[1757]: time="2025-08-13T00:06:40.045881246Z" level=warning msg="cleaning up after shim disconnected" id=ea30c2f0ba67c93f2457c498feb72c84d2c243583f1e83dfd43cff9b2ec50b8f namespace=k8s.io
Aug 13 00:06:40.046031 containerd[1757]: time="2025-08-13T00:06:40.045897846Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:06:40.834691 containerd[1757]: time="2025-08-13T00:06:40.834642407Z" level=info msg="CreateContainer within sandbox \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 13 00:06:40.891155 containerd[1757]: time="2025-08-13T00:06:40.891083271Z" level=info msg="CreateContainer within sandbox \"1c3a0fc7acb06560d381cbcae30cbadc71c6f0e9275d19b9475975a54596c6a3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"be03cf4b1f7c3651c13f4c5782f976e9c81b74bb79fc63ea7592612543ea8911\""
Aug 13 00:06:40.892000 containerd[1757]: time="2025-08-13T00:06:40.891928781Z" level=info msg="StartContainer for \"be03cf4b1f7c3651c13f4c5782f976e9c81b74bb79fc63ea7592612543ea8911\""
Aug 13 00:06:40.935237 systemd[1]: Started cri-containerd-be03cf4b1f7c3651c13f4c5782f976e9c81b74bb79fc63ea7592612543ea8911.scope - libcontainer container be03cf4b1f7c3651c13f4c5782f976e9c81b74bb79fc63ea7592612543ea8911.
Aug 13 00:06:40.966992 containerd[1757]: time="2025-08-13T00:06:40.966949063Z" level=info msg="StartContainer for \"be03cf4b1f7c3651c13f4c5782f976e9c81b74bb79fc63ea7592612543ea8911\" returns successfully"
Aug 13 00:06:41.397209 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Aug 13 00:06:41.853708 kubelet[3458]: I0813 00:06:41.853472 3458 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6jnmp" podStartSLOduration=7.8534523830000005 podStartE2EDuration="7.853452383s" podCreationTimestamp="2025-08-13 00:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:06:41.85316678 +0000 UTC m=+213.615478409" watchObservedRunningTime="2025-08-13 00:06:41.853452383 +0000 UTC m=+213.615764112"
Aug 13 00:06:43.236685 kubelet[3458]: I0813 00:06:43.236596 3458 setters.go:618] "Node became not ready" node="ci-4230.2.2-a-03132a7374" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-13T00:06:43Z","lastTransitionTime":"2025-08-13T00:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Aug 13 00:06:43.578575 systemd[1]: run-containerd-runc-k8s.io-be03cf4b1f7c3651c13f4c5782f976e9c81b74bb79fc63ea7592612543ea8911-runc.0mlf0V.mount: Deactivated successfully.
Aug 13 00:06:44.380394 systemd-networkd[1466]: lxc_health: Link UP
Aug 13 00:06:44.381520 systemd-networkd[1466]: lxc_health: Gained carrier
Aug 13 00:06:45.746366 systemd-networkd[1466]: lxc_health: Gained IPv6LL
Aug 13 00:06:45.784793 systemd[1]: run-containerd-runc-k8s.io-be03cf4b1f7c3651c13f4c5782f976e9c81b74bb79fc63ea7592612543ea8911-runc.5pROTq.mount: Deactivated successfully.
Aug 13 00:06:48.023059 systemd[1]: run-containerd-runc-k8s.io-be03cf4b1f7c3651c13f4c5782f976e9c81b74bb79fc63ea7592612543ea8911-runc.6E5yRy.mount: Deactivated successfully.
Aug 13 00:06:48.089835 kubelet[3458]: E0813 00:06:48.089732 3458 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38250->127.0.0.1:34683: write tcp 127.0.0.1:38250->127.0.0.1:34683: write: broken pipe
Aug 13 00:06:50.300264 sshd[5370]: Connection closed by 10.200.16.10 port 41878
Aug 13 00:06:50.301038 sshd-session[5300]: pam_unix(sshd:session): session closed for user core
Aug 13 00:06:50.304375 systemd[1]: sshd@27-10.200.8.39:22-10.200.16.10:41878.service: Deactivated successfully.
Aug 13 00:06:50.306761 systemd[1]: session-30.scope: Deactivated successfully.
Aug 13 00:06:50.308480 systemd-logind[1721]: Session 30 logged out. Waiting for processes to exit.
Aug 13 00:06:50.310266 systemd-logind[1721]: Removed session 30.