Jun 21 06:10:14.975671 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025
Jun 21 06:10:14.975702 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1
Jun 21 06:10:14.975712 kernel: BIOS-provided physical RAM map:
Jun 21 06:10:14.975723 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 21 06:10:14.975730 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 21 06:10:14.975738 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 21 06:10:14.975746 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jun 21 06:10:14.975754 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jun 21 06:10:14.975762 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 21 06:10:14.976810 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 21 06:10:14.976823 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jun 21 06:10:14.976831 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 21 06:10:14.976842 kernel: NX (Execute Disable) protection: active
Jun 21 06:10:14.976850 kernel: APIC: Static calls initialized
Jun 21 06:10:14.976859 kernel: SMBIOS 3.0.0 present.
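As a quick sanity check on the e820 map above, the three "usable" ranges can be summed to recover the guest's RAM size. This is a minimal sketch; the ranges are transcribed from the log and the helper name is our own.

```python
# "usable" ranges from the BIOS-e820 lines above (inclusive start/end).
E820_USABLE = [
    (0x0000000000000000, 0x000000000009fbff),
    (0x0000000000100000, 0x00000000bffdcfff),
    (0x0000000100000000, 0x000000013fffffff),
]

def usable_bytes(ranges):
    # e820 end addresses are inclusive, hence the +1 per range.
    return sum(end - start + 1 for start, end in ranges)

total = usable_bytes(E820_USABLE)
print(f"{total} bytes ≈ {total // 2**20} MiB")  # just under 4 GiB
```

The result is consistent with the roughly 4 GiB total the kernel reports later in the "Memory:" line.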
Jun 21 06:10:14.976867 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jun 21 06:10:14.976875 kernel: DMI: Memory slots populated: 1/1
Jun 21 06:10:14.976885 kernel: Hypervisor detected: KVM
Jun 21 06:10:14.976893 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 21 06:10:14.976901 kernel: kvm-clock: using sched offset of 4939767300 cycles
Jun 21 06:10:14.976910 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 21 06:10:14.976919 kernel: tsc: Detected 1996.249 MHz processor
Jun 21 06:10:14.976927 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 21 06:10:14.976936 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 21 06:10:14.976945 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jun 21 06:10:14.976953 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 21 06:10:14.976964 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 21 06:10:14.976973 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jun 21 06:10:14.976981 kernel: ACPI: Early table checksum verification disabled
Jun 21 06:10:14.976989 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jun 21 06:10:14.976997 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 06:10:14.977006 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 06:10:14.977014 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 06:10:14.977022 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jun 21 06:10:14.977030 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 06:10:14.977040 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 21 06:10:14.977049 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jun 21 06:10:14.977057 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jun 21 06:10:14.977065 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jun 21 06:10:14.977073 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jun 21 06:10:14.977085 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jun 21 06:10:14.977094 kernel: No NUMA configuration found
Jun 21 06:10:14.977104 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jun 21 06:10:14.977113 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff]
Jun 21 06:10:14.977121 kernel: Zone ranges:
Jun 21 06:10:14.977131 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 21 06:10:14.977140 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 21 06:10:14.977148 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jun 21 06:10:14.977157 kernel: Device empty
Jun 21 06:10:14.977165 kernel: Movable zone start for each node
Jun 21 06:10:14.977176 kernel: Early memory node ranges
Jun 21 06:10:14.977185 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 21 06:10:14.977193 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jun 21 06:10:14.977202 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jun 21 06:10:14.977211 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jun 21 06:10:14.977219 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 21 06:10:14.977228 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 21 06:10:14.977237 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jun 21 06:10:14.977246 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 21 06:10:14.977256 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 21 06:10:14.977265 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 21 06:10:14.977274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 21 06:10:14.977283 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 21 06:10:14.977309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 21 06:10:14.977318 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 21 06:10:14.977327 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 21 06:10:14.977336 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 21 06:10:14.977344 kernel: CPU topo: Max. logical packages: 2
Jun 21 06:10:14.977355 kernel: CPU topo: Max. logical dies: 2
Jun 21 06:10:14.977364 kernel: CPU topo: Max. dies per package: 1
Jun 21 06:10:14.977373 kernel: CPU topo: Max. threads per core: 1
Jun 21 06:10:14.977382 kernel: CPU topo: Num. cores per package: 1
Jun 21 06:10:14.977390 kernel: CPU topo: Num. threads per package: 1
Jun 21 06:10:14.977399 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jun 21 06:10:14.977407 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 21 06:10:14.977416 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jun 21 06:10:14.977425 kernel: Booting paravirtualized kernel on KVM
Jun 21 06:10:14.977436 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 21 06:10:14.977446 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 21 06:10:14.977455 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jun 21 06:10:14.977464 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jun 21 06:10:14.977472 kernel: pcpu-alloc: [0] 0 1
Jun 21 06:10:14.977481 kernel: kvm-guest: PV spinlocks disabled, no host support
Jun 21 06:10:14.977494 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1
Jun 21 06:10:14.977504 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 21 06:10:14.977516 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 21 06:10:14.977525 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 21 06:10:14.977534 kernel: Fallback order for Node 0: 0
Jun 21 06:10:14.977544 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jun 21 06:10:14.977553 kernel: Policy zone: Normal
Jun 21 06:10:14.977562 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 21 06:10:14.977571 kernel: software IO TLB: area num 2.
Jun 21 06:10:14.977581 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 21 06:10:14.977590 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 21 06:10:14.977601 kernel: ftrace: allocated 157 pages with 5 groups
Jun 21 06:10:14.977611 kernel: Dynamic Preempt: voluntary
Jun 21 06:10:14.977621 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 21 06:10:14.977632 kernel: rcu: RCU event tracing is enabled.
Jun 21 06:10:14.977643 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 21 06:10:14.977653 kernel: Trampoline variant of Tasks RCU enabled.
Jun 21 06:10:14.977663 kernel: Rude variant of Tasks RCU enabled.
Jun 21 06:10:14.977673 kernel: Tracing variant of Tasks RCU enabled.
Jun 21 06:10:14.977683 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 21 06:10:14.977692 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 21 06:10:14.977704 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
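The kernel command line echoed above can be tokenized the way /proc/cmdline is usually handled: whitespace-separated tokens, each either a bare switch or key=value, with some keys (like console=) allowed to repeat. A minimal sketch, using the parameters from this log; parse_cmdline is our own helper, not a kernel or Flatcar API.

```python
# Command line as echoed in the log above.
CMDLINE = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
    "console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack "
    "flatcar.autologin "
    "verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1"
)

def parse_cmdline(line):
    params = {}
    for tok in line.split():
        # Split only on the first '='; values like LABEL=ROOT keep their own '='.
        key, sep, value = tok.partition("=")
        # Parameters may repeat (e.g. console=), so keep every occurrence.
        params.setdefault(key, []).append(value if sep else None)
    return params

params = parse_cmdline(CMDLINE)
print(params["root"])     # ['LABEL=ROOT']
print(params["console"])  # ['ttyS0,115200n8', 'tty0']
```

Both consoles are kept in order, which matches the kernel registering tty0 and ttyS0 as legacy consoles later in the log.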
Jun 21 06:10:14.977714 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 21 06:10:14.977723 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 21 06:10:14.977733 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 21 06:10:14.977743 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 21 06:10:14.977752 kernel: Console: colour VGA+ 80x25
Jun 21 06:10:14.977761 kernel: printk: legacy console [tty0] enabled
Jun 21 06:10:14.978812 kernel: printk: legacy console [ttyS0] enabled
Jun 21 06:10:14.978828 kernel: ACPI: Core revision 20240827
Jun 21 06:10:14.978836 kernel: APIC: Switch to symmetric I/O mode setup
Jun 21 06:10:14.978846 kernel: x2apic enabled
Jun 21 06:10:14.978855 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 21 06:10:14.978864 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 21 06:10:14.978873 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jun 21 06:10:14.978889 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jun 21 06:10:14.978900 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 21 06:10:14.978910 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 21 06:10:14.978919 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 21 06:10:14.978929 kernel: Spectre V2 : Mitigation: Retpolines
Jun 21 06:10:14.978938 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 21 06:10:14.978949 kernel: Speculative Store Bypass: Vulnerable
Jun 21 06:10:14.978958 kernel: x86/fpu: x87 FPU will use FXSAVE
Jun 21 06:10:14.978967 kernel: Freeing SMP alternatives memory: 32K
Jun 21 06:10:14.978976 kernel: pid_max: default: 32768 minimum: 301
Jun 21 06:10:14.978986 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 21 06:10:14.978996 kernel: landlock: Up and running.
Jun 21 06:10:14.979006 kernel: SELinux: Initializing.
Jun 21 06:10:14.979015 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 21 06:10:14.979025 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 21 06:10:14.979034 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jun 21 06:10:14.979043 kernel: Performance Events: AMD PMU driver.
Jun 21 06:10:14.979052 kernel: ... version: 0
Jun 21 06:10:14.979061 kernel: ... bit width: 48
Jun 21 06:10:14.979070 kernel: ... generic registers: 4
Jun 21 06:10:14.979081 kernel: ... value mask: 0000ffffffffffff
Jun 21 06:10:14.979091 kernel: ... max period: 00007fffffffffff
Jun 21 06:10:14.979100 kernel: ... fixed-purpose events: 0
Jun 21 06:10:14.979109 kernel: ... event mask: 000000000000000f
Jun 21 06:10:14.979118 kernel: signal: max sigframe size: 1440
Jun 21 06:10:14.979127 kernel: rcu: Hierarchical SRCU implementation.
Jun 21 06:10:14.979136 kernel: rcu: Max phase no-delay instances is 400.
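The "3992.49 BogoMIPS (lpj=1996249)" figure above can be reproduced from lpj: the kernel derives BogoMIPS as lpj*HZ/500000 and prints it truncated to two decimals. A short sketch; HZ=1000 is an assumption about this kernel's configuration (it is consistent with lpj matching the 1996.249 MHz TSC in kHz).

```python
# lpj value from the "Calibrating delay loop" line above.
HZ = 1000       # assumed CONFIG_HZ for this kernel
lpj = 1996249

bogomips = lpj * HZ / 500000           # 3992.498
per_cpu = int(bogomips * 100) / 100    # truncate to 2 decimals, as the printout does
total = int(2 * bogomips * 100) / 100  # two CPUs are brought up later in the log

print(per_cpu, total)  # 3992.49 7984.99
```

The two-CPU total matches the "Total of 2 processors activated (7984.99 BogoMIPS)" line that follows.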
Jun 21 06:10:14.979146 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 21 06:10:14.979156 kernel: smp: Bringing up secondary CPUs ...
Jun 21 06:10:14.979166 kernel: smpboot: x86: Booting SMP configuration:
Jun 21 06:10:14.979176 kernel: .... node #0, CPUs: #1
Jun 21 06:10:14.979184 kernel: smp: Brought up 1 node, 2 CPUs
Jun 21 06:10:14.979194 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jun 21 06:10:14.979203 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 227296K reserved, 0K cma-reserved)
Jun 21 06:10:14.979213 kernel: devtmpfs: initialized
Jun 21 06:10:14.979222 kernel: x86/mm: Memory block size: 128MB
Jun 21 06:10:14.979232 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 21 06:10:14.979241 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 21 06:10:14.979252 kernel: pinctrl core: initialized pinctrl subsystem
Jun 21 06:10:14.979261 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 21 06:10:14.979270 kernel: audit: initializing netlink subsys (disabled)
Jun 21 06:10:14.979280 kernel: audit: type=2000 audit(1750486211.957:1): state=initialized audit_enabled=0 res=1
Jun 21 06:10:14.979289 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 21 06:10:14.979298 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 21 06:10:14.979307 kernel: cpuidle: using governor menu
Jun 21 06:10:14.979316 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 21 06:10:14.979325 kernel: dca service started, version 1.12.1
Jun 21 06:10:14.979335 kernel: PCI: Using configuration type 1 for base access
Jun 21 06:10:14.979345 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 21 06:10:14.979354 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 21 06:10:14.979363 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 21 06:10:14.979372 kernel: ACPI: Added _OSI(Module Device)
Jun 21 06:10:14.979382 kernel: ACPI: Added _OSI(Processor Device)
Jun 21 06:10:14.979391 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 21 06:10:14.979400 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 21 06:10:14.979409 kernel: ACPI: Interpreter enabled
Jun 21 06:10:14.979420 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 21 06:10:14.979429 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 21 06:10:14.979438 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 21 06:10:14.979448 kernel: PCI: Using E820 reservations for host bridge windows
Jun 21 06:10:14.979457 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 21 06:10:14.979466 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 21 06:10:14.979643 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jun 21 06:10:14.979734 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jun 21 06:10:14.981896 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jun 21 06:10:14.981936 kernel: acpiphp: Slot [3] registered
Jun 21 06:10:14.981950 kernel: acpiphp: Slot [4] registered
Jun 21 06:10:14.981964 kernel: acpiphp: Slot [5] registered
Jun 21 06:10:14.981977 kernel: acpiphp: Slot [6] registered
Jun 21 06:10:14.981992 kernel: acpiphp: Slot [7] registered
Jun 21 06:10:14.982003 kernel: acpiphp: Slot [8] registered
Jun 21 06:10:14.982014 kernel: acpiphp: Slot [9] registered
Jun 21 06:10:14.982025 kernel: acpiphp: Slot [10] registered
Jun 21 06:10:14.982041 kernel: acpiphp: Slot [11] registered
Jun 21 06:10:14.982052 kernel: acpiphp: Slot [12] registered
Jun 21 06:10:14.982063 kernel: acpiphp: Slot [13] registered
Jun 21 06:10:14.982074 kernel: acpiphp: Slot [14] registered
Jun 21 06:10:14.982085 kernel: acpiphp: Slot [15] registered
Jun 21 06:10:14.982096 kernel: acpiphp: Slot [16] registered
Jun 21 06:10:14.982107 kernel: acpiphp: Slot [17] registered
Jun 21 06:10:14.982118 kernel: acpiphp: Slot [18] registered
Jun 21 06:10:14.982129 kernel: acpiphp: Slot [19] registered
Jun 21 06:10:14.982142 kernel: acpiphp: Slot [20] registered
Jun 21 06:10:14.982154 kernel: acpiphp: Slot [21] registered
Jun 21 06:10:14.982165 kernel: acpiphp: Slot [22] registered
Jun 21 06:10:14.982176 kernel: acpiphp: Slot [23] registered
Jun 21 06:10:14.982187 kernel: acpiphp: Slot [24] registered
Jun 21 06:10:14.982197 kernel: acpiphp: Slot [25] registered
Jun 21 06:10:14.982208 kernel: acpiphp: Slot [26] registered
Jun 21 06:10:14.982219 kernel: acpiphp: Slot [27] registered
Jun 21 06:10:14.982230 kernel: acpiphp: Slot [28] registered
Jun 21 06:10:14.982241 kernel: acpiphp: Slot [29] registered
Jun 21 06:10:14.982255 kernel: acpiphp: Slot [30] registered
Jun 21 06:10:14.982266 kernel: acpiphp: Slot [31] registered
Jun 21 06:10:14.982277 kernel: PCI host bridge to bus 0000:00
Jun 21 06:10:14.982425 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 21 06:10:14.982532 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 21 06:10:14.982618 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 21 06:10:14.982699 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jun 21 06:10:14.982836 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jun 21 06:10:14.982926 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 21 06:10:14.983044 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jun 21 06:10:14.983155 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jun 21 06:10:14.983260 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jun 21 06:10:14.983356 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f]
Jun 21 06:10:14.983454 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jun 21 06:10:14.983548 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jun 21 06:10:14.983641 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jun 21 06:10:14.983733 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jun 21 06:10:14.984874 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jun 21 06:10:14.984969 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jun 21 06:10:14.985055 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jun 21 06:10:14.985157 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jun 21 06:10:14.985247 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jun 21 06:10:14.985353 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref]
Jun 21 06:10:14.985442 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jun 21 06:10:14.985531 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jun 21 06:10:14.985622 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 21 06:10:14.985730 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jun 21 06:10:14.986828 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
Jun 21 06:10:14.986932 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jun 21 06:10:14.987026 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref]
Jun 21 06:10:14.987119 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jun 21 06:10:14.987219 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jun 21 06:10:14.987312 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jun 21 06:10:14.987409 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jun 21 06:10:14.987500 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref]
Jun 21 06:10:14.987602 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jun 21 06:10:14.987695 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
Jun 21 06:10:14.988835 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jun 21 06:10:14.988951 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jun 21 06:10:14.989049 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f]
Jun 21 06:10:14.989149 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Jun 21 06:10:14.989241 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref]
Jun 21 06:10:14.989256 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 21 06:10:14.989267 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 21 06:10:14.989277 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 21 06:10:14.989287 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 21 06:10:14.989311 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 21 06:10:14.989321 kernel: iommu: Default domain type: Translated
Jun 21 06:10:14.989332 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 21 06:10:14.989346 kernel: PCI: Using ACPI for IRQ routing
Jun 21 06:10:14.989357 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 21 06:10:14.989367 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 21 06:10:14.989377 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jun 21 06:10:14.989475 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 21 06:10:14.989569 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 21 06:10:14.989663 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 21 06:10:14.989677 kernel: vgaarb: loaded
Jun 21 06:10:14.989688 kernel: clocksource: Switched to clocksource kvm-clock
Jun 21 06:10:14.989703 kernel: VFS: Disk quotas dquot_6.6.0
Jun 21 06:10:14.989714 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 21 06:10:14.989725 kernel: pnp: PnP ACPI init
Jun 21 06:10:14.990871 kernel: pnp 00:03: [dma 2]
Jun 21 06:10:14.990892 kernel: pnp: PnP ACPI: found 5 devices
Jun 21 06:10:14.990903 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 21 06:10:14.990913 kernel: NET: Registered PF_INET protocol family
Jun 21 06:10:14.990923 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 21 06:10:14.990938 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 21 06:10:14.990948 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 21 06:10:14.990958 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 21 06:10:14.990969 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 21 06:10:14.990979 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 21 06:10:14.990989 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 21 06:10:14.990999 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 21 06:10:14.991009 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 21 06:10:14.991019 kernel: NET: Registered PF_XDP protocol family
Jun 21 06:10:14.991107 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 21 06:10:14.991190 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 21 06:10:14.991270 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 21 06:10:14.991350 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jun 21 06:10:14.991430 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jun 21 06:10:14.991526 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 21 06:10:14.991621 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 21 06:10:14.991636 kernel: PCI: CLS 0 bytes, default 64
Jun 21 06:10:14.991649 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 21 06:10:14.991660 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jun 21 06:10:14.991670 kernel: Initialise system trusted keyrings
Jun 21 06:10:14.991680 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 21 06:10:14.991690 kernel: Key type asymmetric registered
Jun 21 06:10:14.991699 kernel: Asymmetric key parser 'x509' registered
Jun 21 06:10:14.991710 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 21 06:10:14.991720 kernel: io scheduler mq-deadline registered
Jun 21 06:10:14.991732 kernel: io scheduler kyber registered
Jun 21 06:10:14.991742 kernel: io scheduler bfq registered
Jun 21 06:10:14.991753 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 21 06:10:14.991763 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jun 21 06:10:14.993119 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 21 06:10:14.993132 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jun 21 06:10:14.993142 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 21 06:10:14.993153 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 21 06:10:14.993163 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 21 06:10:14.993173 kernel: random: crng init done
Jun 21 06:10:14.993188 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 21 06:10:14.993198 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 21 06:10:14.993208 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 21 06:10:14.993218 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 21 06:10:14.993354 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 21 06:10:14.993443 kernel: rtc_cmos 00:04: registered as rtc0
Jun 21 06:10:14.993527 kernel: rtc_cmos 00:04: setting system clock to 2025-06-21T06:10:14 UTC (1750486214)
Jun 21 06:10:14.993610 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jun 21 06:10:14.993628 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 21 06:10:14.993639 kernel: NET: Registered PF_INET6 protocol family
Jun 21 06:10:14.993649 kernel: Segment Routing with IPv6
Jun 21 06:10:14.993659 kernel: In-situ OAM (IOAM) with IPv6
Jun 21 06:10:14.993669 kernel: NET: Registered PF_PACKET protocol family
Jun 21 06:10:14.993679 kernel: Key type dns_resolver registered
Jun 21 06:10:14.993689 kernel: IPI shorthand broadcast: enabled
Jun 21 06:10:14.993699 kernel: sched_clock: Marking stable (3763194856, 192074078)->(4007842117, -52573183)
Jun 21 06:10:14.993711 kernel: registered taskstats version 1
Jun 21 06:10:14.993721 kernel: Loading compiled-in X.509 certificates
Jun 21 06:10:14.993731 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7'
Jun 21 06:10:14.993741 kernel: Demotion targets for Node 0: null
Jun 21 06:10:14.993752 kernel: Key type .fscrypt registered
Jun 21 06:10:14.993761 kernel: Key type fscrypt-provisioning registered
Jun 21 06:10:14.993788 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 21 06:10:14.993798 kernel: ima: Allocated hash algorithm: sha1
Jun 21 06:10:14.993808 kernel: ima: No architecture policies found
Jun 21 06:10:14.993820 kernel: clk: Disabling unused clocks
Jun 21 06:10:14.993830 kernel: Warning: unable to open an initial console.
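The rtc_cmos line above prints both a human-readable timestamp and its epoch value; the two can be cross-checked directly. A minimal sketch using the values from this log.

```python
# Epoch value from the "setting system clock" rtc_cmos line above.
from datetime import datetime, timezone

stamp = datetime.fromtimestamp(1750486214, tz=timezone.utc)
print(stamp.isoformat())  # 2025-06-21T06:10:14+00:00
```

The same epoch second (1750486214) also appears in the journald timestamps attached to the systemd messages later in the log.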
Jun 21 06:10:14.993841 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 21 06:10:14.993851 kernel: Write protecting the kernel read-only data: 24576k
Jun 21 06:10:14.993861 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 21 06:10:14.993871 kernel: Run /init as init process
Jun 21 06:10:14.993880 kernel: with arguments:
Jun 21 06:10:14.993890 kernel: /init
Jun 21 06:10:14.993900 kernel: with environment:
Jun 21 06:10:14.993911 kernel: HOME=/
Jun 21 06:10:14.993922 kernel: TERM=linux
Jun 21 06:10:14.993932 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 21 06:10:14.993943 systemd[1]: Successfully made /usr/ read-only.
Jun 21 06:10:14.993957 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 21 06:10:14.993969 systemd[1]: Detected virtualization kvm.
Jun 21 06:10:14.993980 systemd[1]: Detected architecture x86-64.
Jun 21 06:10:14.994000 systemd[1]: Running in initrd.
Jun 21 06:10:14.994012 systemd[1]: No hostname configured, using default hostname.
Jun 21 06:10:14.994023 systemd[1]: Hostname set to .
Jun 21 06:10:14.994034 systemd[1]: Initializing machine ID from VM UUID.
Jun 21 06:10:14.994045 systemd[1]: Queued start job for default target initrd.target.
Jun 21 06:10:14.994056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 06:10:14.994068 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 06:10:14.994080 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 21 06:10:14.994091 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 21 06:10:14.994102 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 21 06:10:14.994114 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 21 06:10:14.994126 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 21 06:10:14.994137 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 21 06:10:14.994150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 06:10:14.994161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 21 06:10:14.994172 systemd[1]: Reached target paths.target - Path Units.
Jun 21 06:10:14.994183 systemd[1]: Reached target slices.target - Slice Units.
Jun 21 06:10:14.994193 systemd[1]: Reached target swap.target - Swaps.
Jun 21 06:10:14.994204 systemd[1]: Reached target timers.target - Timer Units.
Jun 21 06:10:14.994215 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 21 06:10:14.994226 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 21 06:10:14.994239 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 21 06:10:14.994250 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 21 06:10:14.994261 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 06:10:14.994272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 21 06:10:14.994283 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 06:10:14.994293 systemd[1]: Reached target sockets.target - Socket Units.
Jun 21 06:10:14.994304 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 21 06:10:14.994315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 21 06:10:14.994326 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 21 06:10:14.994339 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 21 06:10:14.994350 systemd[1]: Starting systemd-fsck-usr.service...
Jun 21 06:10:14.994363 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 21 06:10:14.994376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 21 06:10:14.994387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 21 06:10:14.994400 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 21 06:10:14.994412 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 06:10:14.994450 systemd-journald[214]: Collecting audit messages is disabled.
Jun 21 06:10:14.994479 systemd[1]: Finished systemd-fsck-usr.service.
Jun 21 06:10:14.994492 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 21 06:10:14.994504 systemd-journald[214]: Journal started
Jun 21 06:10:14.994531 systemd-journald[214]: Runtime Journal (/run/log/journal/e409d4615ffe4690a831023a4aba4c3b) is 8M, max 78.5M, 70.5M free.
Jun 21 06:10:14.951970 systemd-modules-load[215]: Inserted module 'overlay'
Jun 21 06:10:15.044708 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 21 06:10:15.044752 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 21 06:10:15.044793 kernel: Bridge firewalling registered
Jun 21 06:10:15.000668 systemd-modules-load[215]: Inserted module 'br_netfilter'
Jun 21 06:10:15.045400 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 21 06:10:15.046911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 06:10:15.048460 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 21 06:10:15.054914 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 21 06:10:15.057992 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 21 06:10:15.065557 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 21 06:10:15.070934 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 21 06:10:15.082048 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 21 06:10:15.094480 systemd-tmpfiles[235]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 21 06:10:15.094819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 06:10:15.101004 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 21 06:10:15.104882 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 21 06:10:15.105645 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 21 06:10:15.109898 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 21 06:10:15.127862 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1
Jun 21 06:10:15.151092 systemd-resolved[250]: Positive Trust Anchors:
Jun 21 06:10:15.151872 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 21 06:10:15.151915 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 21 06:10:15.157836 systemd-resolved[250]: Defaulting to hostname 'linux'.
Jun 21 06:10:15.158857 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 21 06:10:15.159629 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 21 06:10:15.207917 kernel: SCSI subsystem initialized
Jun 21 06:10:15.218847 kernel: Loading iSCSI transport class v2.0-870.
Jun 21 06:10:15.231206 kernel: iscsi: registered transport (tcp)
Jun 21 06:10:15.254510 kernel: iscsi: registered transport (qla4xxx)
Jun 21 06:10:15.254652 kernel: QLogic iSCSI HBA Driver
Jun 21 06:10:15.282893 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 21 06:10:15.305170 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 06:10:15.306213 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 21 06:10:15.399942 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 21 06:10:15.405955 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 21 06:10:15.491882 kernel: raid6: sse2x4 gen() 5789 MB/s
Jun 21 06:10:15.509861 kernel: raid6: sse2x2 gen() 11415 MB/s
Jun 21 06:10:15.528707 kernel: raid6: sse2x1 gen() 9931 MB/s
Jun 21 06:10:15.528879 kernel: raid6: using algorithm sse2x2 gen() 11415 MB/s
Jun 21 06:10:15.547887 kernel: raid6: .... xor() 9429 MB/s, rmw enabled
Jun 21 06:10:15.547997 kernel: raid6: using ssse3x2 recovery algorithm
Jun 21 06:10:15.571441 kernel: xor: measuring software checksum speed
Jun 21 06:10:15.571538 kernel: prefetch64-sse : 17127 MB/sec
Jun 21 06:10:15.571955 kernel: generic_sse : 16814 MB/sec
Jun 21 06:10:15.574644 kernel: xor: using function: prefetch64-sse (17127 MB/sec)
Jun 21 06:10:15.778849 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 21 06:10:15.787592 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 21 06:10:15.792835 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 21 06:10:15.857715 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jun 21 06:10:15.870074 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 21 06:10:15.876944 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 21 06:10:15.909006 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Jun 21 06:10:15.946107 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 21 06:10:15.948901 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 21 06:10:16.012999 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 21 06:10:16.016906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 21 06:10:16.112805 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jun 21 06:10:16.115788 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jun 21 06:10:16.115817 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jun 21 06:10:16.136344 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 21 06:10:16.136412 kernel: GPT:17805311 != 20971519
Jun 21 06:10:16.136446 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 21 06:10:16.138105 kernel: GPT:17805311 != 20971519
Jun 21 06:10:16.140754 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 21 06:10:16.140792 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 21 06:10:16.154450 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 21 06:10:16.154591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 06:10:16.155909 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 21 06:10:16.157373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 21 06:10:16.162350 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 21 06:10:16.174832 kernel: libata version 3.00 loaded.
Jun 21 06:10:16.184912 kernel: ata_piix 0000:00:01.1: version 2.13
Jun 21 06:10:16.193792 kernel: scsi host0: ata_piix
Jun 21 06:10:16.203830 kernel: scsi host1: ata_piix
Jun 21 06:10:16.210011 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0
Jun 21 06:10:16.210029 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0
Jun 21 06:10:16.238449 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 21 06:10:16.256042 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 06:10:16.268062 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 21 06:10:16.284395 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 21 06:10:16.285111 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 21 06:10:16.297427 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 21 06:10:16.299153 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 21 06:10:16.373744 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 21 06:10:16.377237 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 21 06:10:16.378812 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 21 06:10:16.381650 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 21 06:10:16.386313 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 21 06:10:16.410863 disk-uuid[560]: Primary Header is updated.
Jun 21 06:10:16.410863 disk-uuid[560]: Secondary Entries is updated.
Jun 21 06:10:16.410863 disk-uuid[560]: Secondary Header is updated.
Jun 21 06:10:16.427543 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 21 06:10:16.443832 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 21 06:10:17.487888 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 21 06:10:17.491037 disk-uuid[567]: The operation has completed successfully.
Jun 21 06:10:17.571742 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 21 06:10:17.572632 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 21 06:10:17.622832 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 21 06:10:17.658419 sh[585]: Success
Jun 21 06:10:17.704045 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 21 06:10:17.704166 kernel: device-mapper: uevent: version 1.0.3
Jun 21 06:10:17.707497 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jun 21 06:10:17.733808 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3"
Jun 21 06:10:17.789980 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 21 06:10:17.793898 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 21 06:10:17.804459 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 21 06:10:17.819546 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jun 21 06:10:17.819650 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (598)
Jun 21 06:10:17.823403 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9
Jun 21 06:10:17.826052 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 21 06:10:17.828096 kernel: BTRFS info (device dm-0): using free-space-tree
Jun 21 06:10:17.844211 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 21 06:10:17.846303 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jun 21 06:10:17.847849 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 21 06:10:17.850996 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 21 06:10:17.852927 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 21 06:10:17.904814 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (636)
Jun 21 06:10:17.912118 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c
Jun 21 06:10:17.912182 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 21 06:10:17.912196 kernel: BTRFS info (device vda6): using free-space-tree
Jun 21 06:10:17.929886 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c
Jun 21 06:10:17.932234 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 21 06:10:17.937003 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 21 06:10:18.004098 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 21 06:10:18.010003 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 21 06:10:18.070216 systemd-networkd[767]: lo: Link UP
Jun 21 06:10:18.070227 systemd-networkd[767]: lo: Gained carrier
Jun 21 06:10:18.071343 systemd-networkd[767]: Enumeration completed
Jun 21 06:10:18.072248 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 21 06:10:18.072252 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 21 06:10:18.073595 systemd-networkd[767]: eth0: Link UP
Jun 21 06:10:18.073598 systemd-networkd[767]: eth0: Gained carrier
Jun 21 06:10:18.073607 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 21 06:10:18.076364 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 21 06:10:18.076973 systemd[1]: Reached target network.target - Network.
Jun 21 06:10:18.088970 systemd-networkd[767]: eth0: DHCPv4 address 172.24.4.3/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jun 21 06:10:18.155638 ignition[704]: Ignition 2.21.0
Jun 21 06:10:18.155654 ignition[704]: Stage: fetch-offline
Jun 21 06:10:18.155701 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Jun 21 06:10:18.155713 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 21 06:10:18.155872 ignition[704]: parsed url from cmdline: ""
Jun 21 06:10:18.159232 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 21 06:10:18.155877 ignition[704]: no config URL provided
Jun 21 06:10:18.155886 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Jun 21 06:10:18.161863 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 21 06:10:18.155895 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Jun 21 06:10:18.155901 ignition[704]: failed to fetch config: resource requires networking
Jun 21 06:10:18.156255 ignition[704]: Ignition finished successfully
Jun 21 06:10:18.190257 ignition[780]: Ignition 2.21.0
Jun 21 06:10:18.190268 ignition[780]: Stage: fetch
Jun 21 06:10:18.191147 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jun 21 06:10:18.191158 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 21 06:10:18.191232 ignition[780]: parsed url from cmdline: ""
Jun 21 06:10:18.191236 ignition[780]: no config URL provided
Jun 21 06:10:18.191241 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Jun 21 06:10:18.191248 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Jun 21 06:10:18.191343 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jun 21 06:10:18.191393 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jun 21 06:10:18.191438 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jun 21 06:10:18.525986 ignition[780]: GET result: OK
Jun 21 06:10:18.526338 ignition[780]: parsing config with SHA512: 5306ac6684695a14e18ebde9e9558afdddef3a192e942ebb605bdf27fdb9ca8a2e8367abf34b706d894a8be092eca104070cb8be86ec4dca44846810f257fee0
Jun 21 06:10:18.535457 unknown[780]: fetched base config from "system"
Jun 21 06:10:18.535495 unknown[780]: fetched base config from "system"
Jun 21 06:10:18.536758 ignition[780]: fetch: fetch complete
Jun 21 06:10:18.535517 unknown[780]: fetched user config from "openstack"
Jun 21 06:10:18.536849 ignition[780]: fetch: fetch passed
Jun 21 06:10:18.537002 ignition[780]: Ignition finished successfully
Jun 21 06:10:18.541819 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 21 06:10:18.546995 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 21 06:10:18.602140 ignition[787]: Ignition 2.21.0
Jun 21 06:10:18.602173 ignition[787]: Stage: kargs
Jun 21 06:10:18.602522 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jun 21 06:10:18.602548 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 21 06:10:18.607583 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 21 06:10:18.604737 ignition[787]: kargs: kargs passed
Jun 21 06:10:18.604881 ignition[787]: Ignition finished successfully
Jun 21 06:10:18.613971 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 21 06:10:18.670131 ignition[793]: Ignition 2.21.0
Jun 21 06:10:18.670156 ignition[793]: Stage: disks
Jun 21 06:10:18.670467 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jun 21 06:10:18.670490 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 21 06:10:18.673746 ignition[793]: disks: disks passed
Jun 21 06:10:18.677204 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 21 06:10:18.675135 ignition[793]: Ignition finished successfully
Jun 21 06:10:18.680612 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 21 06:10:18.681968 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 21 06:10:18.683766 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 21 06:10:18.685865 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 21 06:10:18.688303 systemd[1]: Reached target basic.target - Basic System.
Jun 21 06:10:18.691104 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 21 06:10:18.728545 systemd-fsck[802]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jun 21 06:10:18.744649 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 21 06:10:18.749989 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 21 06:10:18.946912 kernel: EXT4-fs (vda9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none.
Jun 21 06:10:18.946679 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 21 06:10:18.948257 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 21 06:10:18.951295 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 21 06:10:18.953880 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 21 06:10:18.961730 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 21 06:10:18.967961 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jun 21 06:10:18.970210 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 21 06:10:18.970247 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 21 06:10:18.977549 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 21 06:10:18.981976 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 21 06:10:19.002926 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (810)
Jun 21 06:10:19.002995 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c
Jun 21 06:10:19.003028 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 21 06:10:19.003058 kernel: BTRFS info (device vda6): using free-space-tree
Jun 21 06:10:19.008212 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 21 06:10:19.145587 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jun 21 06:10:19.153306 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jun 21 06:10:19.156408 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 21 06:10:19.163564 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jun 21 06:10:19.171098 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 21 06:10:19.204989 systemd-networkd[767]: eth0: Gained IPv6LL
Jun 21 06:10:19.274576 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 21 06:10:19.276691 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 21 06:10:19.277921 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 21 06:10:19.293270 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 21 06:10:19.296090 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c
Jun 21 06:10:19.316435 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 21 06:10:19.327067 ignition[928]: INFO : Ignition 2.21.0
Jun 21 06:10:19.327067 ignition[928]: INFO : Stage: mount
Jun 21 06:10:19.329968 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 21 06:10:19.329968 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 21 06:10:19.329968 ignition[928]: INFO : mount: mount passed
Jun 21 06:10:19.329968 ignition[928]: INFO : Ignition finished successfully
Jun 21 06:10:19.329383 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 21 06:10:20.194847 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 21 06:10:22.207842 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 21 06:10:26.221849 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 21 06:10:26.232330 coreos-metadata[812]: Jun 21 06:10:26.232 WARN failed to locate config-drive, using the metadata service API instead
Jun 21 06:10:26.273887 coreos-metadata[812]: Jun 21 06:10:26.273 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jun 21 06:10:26.289991 coreos-metadata[812]: Jun 21 06:10:26.289 INFO Fetch successful
Jun 21 06:10:26.291303 coreos-metadata[812]: Jun 21 06:10:26.291 INFO wrote hostname ci-4372-0-0-3-5f235c9307.novalocal to /sysroot/etc/hostname
Jun 21 06:10:26.294640 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jun 21 06:10:26.294904 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jun 21 06:10:26.301920 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 21 06:10:26.347474 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 21 06:10:26.379846 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (945)
Jun 21 06:10:26.389090 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c
Jun 21 06:10:26.389164 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 21 06:10:26.393358 kernel: BTRFS info (device vda6): using free-space-tree
Jun 21 06:10:26.406250 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 21 06:10:26.461977 ignition[963]: INFO : Ignition 2.21.0
Jun 21 06:10:26.461977 ignition[963]: INFO : Stage: files
Jun 21 06:10:26.465367 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 21 06:10:26.465367 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 21 06:10:26.471410 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Jun 21 06:10:26.471410 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 21 06:10:26.471410 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 21 06:10:26.477485 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 21 06:10:26.477485 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 21 06:10:26.477485 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 21 06:10:26.476369 unknown[963]: wrote ssh authorized keys file for user: core
Jun 21 06:10:26.484619 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 21 06:10:26.484619 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jun 21 06:10:26.556157 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 21 06:10:26.840839 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jun 21 06:10:26.840839 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 21 06:10:26.843026 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 21 06:10:27.510367 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 21 06:10:27.988879 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 21 06:10:27.988879 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 21 06:10:27.988879 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 21 06:10:27.988879 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 21 06:10:27.998373 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jun 21 06:10:28.530677 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 21 06:10:30.210595 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jun 21 06:10:30.210595 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 21 06:10:30.213996 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 21 06:10:30.222549 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 21 06:10:30.222549 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 21 06:10:30.222549 ignition[963]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jun 21 06:10:30.230758 ignition[963]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jun 21 06:10:30.230758 ignition[963]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 21 06:10:30.230758 ignition[963]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 21 06:10:30.230758 ignition[963]: INFO : files: files passed
Jun 21 06:10:30.230758 ignition[963]: INFO : Ignition finished successfully
Jun 21 06:10:30.225027 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 21 06:10:30.230938 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 21 06:10:30.235895 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 21 06:10:30.255288 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 21 06:10:30.255384 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 21 06:10:30.264579 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 06:10:30.265712 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 06:10:30.268227 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 21 06:10:30.270489 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 21 06:10:30.271510 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 21 06:10:30.274148 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 21 06:10:30.348427 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 21 06:10:30.348668 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 21 06:10:30.352337 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 21 06:10:30.355550 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 21 06:10:30.359043 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 21 06:10:30.361014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 21 06:10:30.403252 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 21 06:10:30.408318 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 21 06:10:30.447230 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 21 06:10:30.448748 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 21 06:10:30.452111 systemd[1]: Stopped target timers.target - Timer Units.
Jun 21 06:10:30.455005 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 21 06:10:30.455384 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 21 06:10:30.458509 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 21 06:10:30.460298 systemd[1]: Stopped target basic.target - Basic System.
Jun 21 06:10:30.463257 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 21 06:10:30.465909 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 21 06:10:30.468481 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 21 06:10:30.471423 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 21 06:10:30.474422 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 21 06:10:30.477384 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 21 06:10:30.480488 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 21 06:10:30.483247 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 21 06:10:30.486330 systemd[1]: Stopped target swap.target - Swaps.
Jun 21 06:10:30.488918 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 21 06:10:30.489317 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 21 06:10:30.492233 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 21 06:10:30.494007 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 06:10:30.496621 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 21 06:10:30.496954 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 06:10:30.499409 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 21 06:10:30.499688 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 21 06:10:30.503845 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 21 06:10:30.504270 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 21 06:10:30.507358 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 21 06:10:30.507716 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 21 06:10:30.514467 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 21 06:10:30.519247 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 21 06:10:30.523911 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 21 06:10:30.524362 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 21 06:10:30.530878 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 21 06:10:30.531289 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 21 06:10:30.541580 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 21 06:10:30.541677 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 21 06:10:30.565715 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 21 06:10:30.568807 ignition[1018]: INFO : Ignition 2.21.0
Jun 21 06:10:30.568807 ignition[1018]: INFO : Stage: umount
Jun 21 06:10:30.568807 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 21 06:10:30.568807 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 21 06:10:30.572471 ignition[1018]: INFO : umount: umount passed
Jun 21 06:10:30.572471 ignition[1018]: INFO : Ignition finished successfully
Jun 21 06:10:30.573199 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 21 06:10:30.573341 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 21 06:10:30.575302 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 21 06:10:30.575414 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 21 06:10:30.576723 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 21 06:10:30.576821 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 21 06:10:30.578314 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 21 06:10:30.578366 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 21 06:10:30.579408 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 21 06:10:30.579450 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 21 06:10:30.580497 systemd[1]: Stopped target network.target - Network.
Jun 21 06:10:30.581559 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 21 06:10:30.581628 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 21 06:10:30.582725 systemd[1]: Stopped target paths.target - Path Units.
Jun 21 06:10:30.583747 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 21 06:10:30.587958 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 06:10:30.588578 systemd[1]: Stopped target slices.target - Slice Units.
Jun 21 06:10:30.589915 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 21 06:10:30.591006 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 21 06:10:30.591046 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 21 06:10:30.592003 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 21 06:10:30.592034 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 21 06:10:30.593009 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 21 06:10:30.593061 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 21 06:10:30.594104 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 21 06:10:30.594152 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 21 06:10:30.598961 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 21 06:10:30.599010 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 21 06:10:30.600117 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 21 06:10:30.601186 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 21 06:10:30.606645 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 21 06:10:30.607091 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 21 06:10:30.610928 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 21 06:10:30.611172 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 21 06:10:30.611880 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 21 06:10:30.613654 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 21 06:10:30.614417 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 21 06:10:30.615060 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 21 06:10:30.615115 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 06:10:30.616977 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 21 06:10:30.619065 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 21 06:10:30.619117 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 21 06:10:30.620173 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 21 06:10:30.620214 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 21 06:10:30.623222 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 21 06:10:30.623271 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 21 06:10:30.624824 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 21 06:10:30.624871 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 21 06:10:30.626494 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 21 06:10:30.628386 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 21 06:10:30.628449 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 21 06:10:30.635419 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 21 06:10:30.636308 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 21 06:10:30.637955 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 21 06:10:30.638089 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 21 06:10:30.639638 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 21 06:10:30.639698 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 21 06:10:30.640537 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 21 06:10:30.640566 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 06:10:30.641721 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 21 06:10:30.641860 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 21 06:10:30.643510 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 21 06:10:30.643549 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 21 06:10:30.644755 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 21 06:10:30.644816 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 21 06:10:30.647884 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 21 06:10:30.648544 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 21 06:10:30.648591 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 06:10:30.651891 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 21 06:10:30.651948 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 06:10:30.654390 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 21 06:10:30.654431 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 21 06:10:30.655637 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 21 06:10:30.655676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 06:10:30.656470 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 21 06:10:30.656509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 21 06:10:30.659644 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 21 06:10:30.659694 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jun 21 06:10:30.659731 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 21 06:10:30.659800 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 21 06:10:30.666039 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 21 06:10:30.666160 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 21 06:10:30.667588 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 21 06:10:30.669529 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 21 06:10:30.690284 systemd[1]: Switching root.
Jun 21 06:10:30.730894 systemd-journald[214]: Journal stopped
Jun 21 06:10:32.567047 systemd-journald[214]: Received SIGTERM from PID 1 (systemd).
Jun 21 06:10:32.567129 kernel: SELinux: policy capability network_peer_controls=1
Jun 21 06:10:32.567148 kernel: SELinux: policy capability open_perms=1
Jun 21 06:10:32.567161 kernel: SELinux: policy capability extended_socket_class=1
Jun 21 06:10:32.567172 kernel: SELinux: policy capability always_check_network=0
Jun 21 06:10:32.567184 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 21 06:10:32.567201 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 21 06:10:32.567212 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 21 06:10:32.567227 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 21 06:10:32.567238 kernel: SELinux: policy capability userspace_initial_context=0
Jun 21 06:10:32.567252 systemd[1]: Successfully loaded SELinux policy in 62.696ms.
Jun 21 06:10:32.567273 kernel: audit: type=1403 audit(1750486231.377:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 21 06:10:32.567286 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 22.635ms.
Jun 21 06:10:32.567299 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 21 06:10:32.567311 systemd[1]: Detected virtualization kvm.
Jun 21 06:10:32.567324 systemd[1]: Detected architecture x86-64.
Jun 21 06:10:32.567336 systemd[1]: Detected first boot.
Jun 21 06:10:32.567348 systemd[1]: Hostname set to .
Jun 21 06:10:32.567361 systemd[1]: Initializing machine ID from VM UUID.
Jun 21 06:10:32.567373 zram_generator::config[1062]: No configuration found.
Jun 21 06:10:32.567387 kernel: Guest personality initialized and is inactive
Jun 21 06:10:32.567398 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 21 06:10:32.567410 kernel: Initialized host personality
Jun 21 06:10:32.567421 kernel: NET: Registered PF_VSOCK protocol family
Jun 21 06:10:32.567432 systemd[1]: Populated /etc with preset unit settings.
Jun 21 06:10:32.567446 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 21 06:10:32.567458 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 21 06:10:32.567473 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 21 06:10:32.567485 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 21 06:10:32.567502 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 21 06:10:32.567515 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 21 06:10:32.567526 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 21 06:10:32.567539 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 21 06:10:32.567551 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 21 06:10:32.567563 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 21 06:10:32.567577 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 21 06:10:32.567589 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 21 06:10:32.567601 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 21 06:10:32.567613 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 21 06:10:32.567626 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 21 06:10:32.567638 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 21 06:10:32.567652 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 21 06:10:32.567666 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 21 06:10:32.567678 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 21 06:10:32.567691 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 21 06:10:32.567707 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 21 06:10:32.567725 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 21 06:10:32.567741 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 21 06:10:32.567757 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 21 06:10:32.572562 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 21 06:10:32.572588 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 21 06:10:32.572601 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 21 06:10:32.572614 systemd[1]: Reached target slices.target - Slice Units.
Jun 21 06:10:32.572626 systemd[1]: Reached target swap.target - Swaps.
Jun 21 06:10:32.572638 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 21 06:10:32.572651 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 21 06:10:32.572663 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 21 06:10:32.572675 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 21 06:10:32.572688 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 21 06:10:32.572699 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 21 06:10:32.572714 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 21 06:10:32.572726 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 21 06:10:32.572738 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 21 06:10:32.572750 systemd[1]: Mounting media.mount - External Media Directory...
Jun 21 06:10:32.572763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 21 06:10:32.572826 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 21 06:10:32.572840 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 21 06:10:32.572852 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 21 06:10:32.572869 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 21 06:10:32.572881 systemd[1]: Reached target machines.target - Containers.
Jun 21 06:10:32.572893 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 21 06:10:32.572905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 21 06:10:32.572918 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 21 06:10:32.572932 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 21 06:10:32.572944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 21 06:10:32.572956 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 21 06:10:32.572968 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 21 06:10:32.572983 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 21 06:10:32.572995 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 21 06:10:32.573008 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 21 06:10:32.573020 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 21 06:10:32.573032 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 21 06:10:32.573045 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 21 06:10:32.573056 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 21 06:10:32.573070 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 21 06:10:32.573085 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 21 06:10:32.573099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 21 06:10:32.573112 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 21 06:10:32.573125 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 21 06:10:32.573138 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 21 06:10:32.573152 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 21 06:10:32.573165 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 21 06:10:32.573177 systemd[1]: Stopped verity-setup.service.
Jun 21 06:10:32.573190 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 21 06:10:32.573202 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 21 06:10:32.573216 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 21 06:10:32.573228 systemd[1]: Mounted media.mount - External Media Directory.
Jun 21 06:10:32.573240 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 21 06:10:32.573266 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 21 06:10:32.573279 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 21 06:10:32.573291 kernel: ACPI: bus type drm_connector registered
Jun 21 06:10:32.573303 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 21 06:10:32.573315 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 21 06:10:32.573327 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 21 06:10:32.573342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 21 06:10:32.573354 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 21 06:10:32.573366 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 21 06:10:32.573378 kernel: loop: module loaded
Jun 21 06:10:32.573390 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 21 06:10:32.573435 systemd-journald[1152]: Collecting audit messages is disabled.
Jun 21 06:10:32.573462 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 21 06:10:32.573477 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 21 06:10:32.573490 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 21 06:10:32.573502 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 21 06:10:32.573516 systemd-journald[1152]: Journal started
Jun 21 06:10:32.573543 systemd-journald[1152]: Runtime Journal (/run/log/journal/e409d4615ffe4690a831023a4aba4c3b) is 8M, max 78.5M, 70.5M free.
Jun 21 06:10:32.155812 systemd[1]: Queued start job for default target multi-user.target.
Jun 21 06:10:32.176101 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 21 06:10:32.176585 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 21 06:10:32.577693 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 21 06:10:32.581585 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 21 06:10:32.581159 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 21 06:10:32.582281 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 21 06:10:32.589810 kernel: fuse: init (API version 7.41)
Jun 21 06:10:32.591650 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 21 06:10:32.592556 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 21 06:10:32.592825 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 21 06:10:32.597555 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 21 06:10:32.603907 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 21 06:10:32.605903 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 21 06:10:32.606447 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 21 06:10:32.606484 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 21 06:10:32.610134 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 21 06:10:32.613893 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 21 06:10:32.615146 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 21 06:10:32.620997 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 21 06:10:32.625925 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 21 06:10:32.626513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 21 06:10:32.628867 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 21 06:10:32.629472 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 21 06:10:32.665108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 21 06:10:32.679731 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 21 06:10:32.686204 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 21 06:10:32.689815 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 21 06:10:32.690670 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 21 06:10:32.692242 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 21 06:10:32.693926 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 21 06:10:32.698354 systemd-journald[1152]: Time spent on flushing to /var/log/journal/e409d4615ffe4690a831023a4aba4c3b is 36.859ms for 979 entries.
Jun 21 06:10:32.698354 systemd-journald[1152]: System Journal (/var/log/journal/e409d4615ffe4690a831023a4aba4c3b) is 8M, max 584.8M, 576.8M free.
Jun 21 06:10:32.761229 systemd-journald[1152]: Received client request to flush runtime journal.
Jun 21 06:10:32.761285 kernel: loop0: detected capacity change from 0 to 146240
Jun 21 06:10:32.722471 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 21 06:10:32.723166 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 21 06:10:32.724839 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 21 06:10:32.739178 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 21 06:10:32.763200 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 21 06:10:32.817749 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Jun 21 06:10:32.819498 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Jun 21 06:10:32.829509 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 21 06:10:32.837995 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 21 06:10:32.845450 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 21 06:10:32.850875 kernel: loop1: detected capacity change from 0 to 221472
Jun 21 06:10:32.848439 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 21 06:10:32.920826 kernel: loop2: detected capacity change from 0 to 8
Jun 21 06:10:32.938804 kernel: loop3: detected capacity change from 0 to 113872
Jun 21 06:10:32.942375 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 21 06:10:32.948564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 21 06:10:33.007709 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Jun 21 06:10:33.008642 kernel: loop4: detected capacity change from 0 to 146240
Jun 21 06:10:33.008706 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Jun 21 06:10:33.015427 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 21 06:10:33.044806 kernel: loop5: detected capacity change from 0 to 221472
Jun 21 06:10:33.139212 kernel: loop6: detected capacity change from 0 to 8
Jun 21 06:10:33.143806 kernel: loop7: detected capacity change from 0 to 113872
Jun 21 06:10:33.176974 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jun 21 06:10:33.178287 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 21 06:10:33.179578 (sd-merge)[1225]: Merged extensions into '/usr'.
Jun 21 06:10:33.187928 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 21 06:10:33.188071 systemd[1]: Reloading...
Jun 21 06:10:33.356821 zram_generator::config[1255]: No configuration found.
Jun 21 06:10:33.599610 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 21 06:10:33.731723 systemd[1]: Reloading finished in 542 ms.
Jun 21 06:10:33.759549 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 21 06:10:33.770024 systemd[1]: Starting ensure-sysext.service...
Jun 21 06:10:33.776562 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 21 06:10:33.790870 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 21 06:10:33.795990 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 21 06:10:33.800270 systemd[1]: Reload requested from client PID 1307 ('systemctl') (unit ensure-sysext.service)...
Jun 21 06:10:33.800289 systemd[1]: Reloading...
Jun 21 06:10:33.854608 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 21 06:10:33.854655 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 21 06:10:33.855002 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 21 06:10:33.855236 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 21 06:10:33.856658 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 21 06:10:33.859998 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Jun 21 06:10:33.860060 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Jun 21 06:10:33.875077 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot.
Jun 21 06:10:33.875089 systemd-tmpfiles[1308]: Skipping /boot
Jun 21 06:10:33.888618 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 21 06:10:33.915765 zram_generator::config[1340]: No configuration found.
Jun 21 06:10:33.919726 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot.
Jun 21 06:10:33.920328 systemd-tmpfiles[1308]: Skipping /boot Jun 21 06:10:33.923674 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Jun 21 06:10:34.108274 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 06:10:34.240165 systemd[1]: Reloading finished in 439 ms. Jun 21 06:10:34.247923 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 06:10:34.250100 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 06:10:34.260228 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 06:10:34.272556 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 06:10:34.275658 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 06:10:34.282325 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 06:10:34.288051 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 06:10:34.294527 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 06:10:34.303005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 06:10:34.309967 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 06:10:34.316765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:10:34.318023 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 06:10:34.319661 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 21 06:10:34.329183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 06:10:34.337962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 06:10:34.338627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 06:10:34.338753 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 06:10:34.338904 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:10:34.350078 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 06:10:34.355782 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:10:34.356513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 06:10:34.356910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 06:10:34.357170 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 06:10:34.357572 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:10:34.370445 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 06:10:34.362097 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jun 21 06:10:34.369746 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 06:10:34.377227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:10:34.377511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 06:10:34.379937 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 06:10:34.380696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 06:10:34.380736 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 06:10:34.383460 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 06:10:34.384131 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:10:34.384534 systemd[1]: Finished ensure-sysext.service. Jun 21 06:10:34.410563 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 21 06:10:34.428720 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 06:10:34.430114 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 06:10:34.430915 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jun 21 06:10:34.472000 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 21 06:10:34.475931 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 21 06:10:34.473095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 06:10:34.473367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 06:10:34.480901 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 06:10:34.483367 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 06:10:34.485664 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 06:10:34.485868 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 21 06:10:34.486477 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 06:10:34.486675 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 06:10:34.487725 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 06:10:34.499309 augenrules[1471]: No rules Jun 21 06:10:34.500027 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 06:10:34.500527 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 06:10:34.504800 kernel: ACPI: button: Power Button [PWRF] Jun 21 06:10:34.521273 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 06:10:34.540177 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 06:10:34.542070 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 06:10:34.661604 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 06:10:34.663346 systemd[1]: Reached target time-set.target - System Time Set. 
Jun 21 06:10:34.678002 systemd-networkd[1432]: lo: Link UP Jun 21 06:10:34.678378 systemd-networkd[1432]: lo: Gained carrier Jun 21 06:10:34.683033 systemd-networkd[1432]: Enumeration completed Jun 21 06:10:34.683265 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 06:10:34.683663 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 06:10:34.683841 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 06:10:34.685298 systemd-networkd[1432]: eth0: Link UP Jun 21 06:10:34.685528 systemd-networkd[1432]: eth0: Gained carrier Jun 21 06:10:34.685572 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 06:10:34.686832 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 06:10:34.688074 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 06:10:34.698849 systemd-networkd[1432]: eth0: DHCPv4 address 172.24.4.3/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jun 21 06:10:34.699483 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Jun 21 06:10:34.705448 systemd-resolved[1433]: Positive Trust Anchors: Jun 21 06:10:34.705464 systemd-resolved[1433]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 06:10:34.705508 systemd-resolved[1433]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 06:10:34.711364 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 06:10:34.714907 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 06:10:34.729444 systemd-resolved[1433]: Using system hostname 'ci-4372-0-0-3-5f235c9307.novalocal'. Jun 21 06:10:34.736738 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 06:10:34.737395 systemd[1]: Reached target network.target - Network. Jun 21 06:10:34.738688 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 06:10:34.739245 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 06:10:34.740276 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 06:10:34.741857 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 06:10:34.742391 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 06:10:34.743752 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 06:10:34.744323 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jun 21 06:10:34.745607 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 06:10:34.746842 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 06:10:34.746879 systemd[1]: Reached target paths.target - Path Units. Jun 21 06:10:34.747351 systemd[1]: Reached target timers.target - Timer Units. Jun 21 06:10:34.748715 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 06:10:34.752248 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 21 06:10:34.756712 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 06:10:34.759225 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 06:10:34.759761 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 06:10:34.768960 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 06:10:34.770268 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 06:10:34.772208 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 06:10:34.773171 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 06:10:34.773905 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 06:10:34.782891 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 06:10:34.783425 systemd[1]: Reached target basic.target - Basic System. Jun 21 06:10:34.783993 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 06:10:34.784026 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jun 21 06:10:34.786866 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 06:10:34.789910 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 21 06:10:34.792070 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 06:10:34.795284 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 06:10:34.802508 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 06:10:34.804085 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 06:10:34.804600 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 06:10:34.813056 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 06:10:34.823883 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 06:10:34.826749 extend-filesystems[1515]: Found /dev/vda6 Jun 21 06:10:34.829804 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:10:34.829880 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 06:10:34.834798 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 06:10:34.922386 extend-filesystems[1515]: Found /dev/vda9 Jun 21 06:10:34.921857 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 06:10:34.930962 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 06:10:34.933758 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 06:10:34.936314 extend-filesystems[1515]: Checking size of /dev/vda9 Jun 21 06:10:34.938296 jq[1514]: false Jun 21 06:10:34.943208 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jun 21 06:10:34.949006 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 06:10:34.951282 oslogin_cache_refresh[1516]: Refreshing passwd entry cache Jun 21 06:10:34.952156 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Refreshing passwd entry cache Jun 21 06:10:34.954277 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 21 06:10:34.966054 oslogin_cache_refresh[1516]: Failure getting users, quitting Jun 21 06:10:34.968071 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Failure getting users, quitting Jun 21 06:10:34.968071 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 06:10:34.966076 oslogin_cache_refresh[1516]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 06:10:34.968518 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 06:10:34.973797 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Refreshing group entry cache Jun 21 06:10:34.972899 oslogin_cache_refresh[1516]: Refreshing group entry cache Jun 21 06:10:34.992940 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 21 06:10:34.980973 oslogin_cache_refresh[1516]: Failure getting groups, quitting Jun 21 06:10:34.993295 extend-filesystems[1515]: Resized partition /dev/vda9 Jun 21 06:10:34.994502 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Failure getting groups, quitting Jun 21 06:10:34.994502 google_oslogin_nss_cache[1516]: oslogin_cache_refresh[1516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 06:10:34.981081 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 06:10:34.980986 oslogin_cache_refresh[1516]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jun 21 06:10:34.994647 extend-filesystems[1544]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 06:10:34.989181 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 06:10:34.989432 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 06:10:34.992082 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 06:10:35.003900 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jun 21 06:10:34.992303 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 21 06:10:34.995802 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 06:10:34.996887 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 06:10:35.000343 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 06:10:35.001081 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 06:10:35.020238 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:10:35.038529 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jun 21 06:10:35.074347 jq[1534]: true Jun 21 06:10:35.081378 update_engine[1531]: I20250621 06:10:35.067575 1531 main.cc:92] Flatcar Update Engine starting Jun 21 06:10:35.087256 kernel: Console: switching to colour dummy device 80x25 Jun 21 06:10:35.089370 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 21 06:10:35.089411 kernel: [drm] features: -context_init Jun 21 06:10:35.092988 extend-filesystems[1544]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 21 06:10:35.092988 extend-filesystems[1544]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 21 06:10:35.092988 extend-filesystems[1544]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. 
Jun 21 06:10:35.093800 extend-filesystems[1515]: Resized filesystem in /dev/vda9 Jun 21 06:10:35.094467 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 06:10:35.094960 (ntainerd)[1556]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 06:10:35.095124 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 06:10:35.105931 jq[1558]: true Jun 21 06:10:35.113484 kernel: [drm] number of scanouts: 1 Jun 21 06:10:35.113538 kernel: [drm] number of cap sets: 0 Jun 21 06:10:35.116793 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jun 21 06:10:35.150508 dbus-daemon[1512]: [system] SELinux support is enabled Jun 21 06:10:35.162204 update_engine[1531]: I20250621 06:10:35.160926 1531 update_check_scheduler.cc:74] Next update check in 6m44s Jun 21 06:10:35.212116 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 06:10:35.215942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:10:35.241808 tar[1545]: linux-amd64/helm Jun 21 06:10:35.240549 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 06:10:35.242034 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 06:10:35.242202 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 21 06:10:35.243311 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 06:10:35.244943 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 06:10:35.245836 systemd[1]: Started update-engine.service - Update Engine. 
Jun 21 06:10:35.261446 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 06:10:35.274842 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 06:10:35.274998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:10:35.277276 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:10:35.283192 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:10:35.286901 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 06:10:35.383365 systemd-logind[1528]: New seat seat0. Jun 21 06:10:35.387573 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (Power Button) Jun 21 06:10:35.393076 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 06:10:35.394987 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 06:10:35.460812 bash[1593]: Updated "/home/core/.ssh/authorized_keys" Jun 21 06:10:35.447107 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 06:10:35.469308 systemd[1]: Starting sshkeys.service... Jun 21 06:10:35.490483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:10:35.491711 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 21 06:10:35.495334 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jun 21 06:10:35.530802 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:10:35.548672 locksmithd[1571]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 06:10:35.849503 containerd[1556]: time="2025-06-21T06:10:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 06:10:35.864641 containerd[1556]: time="2025-06-21T06:10:35.853550779Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 06:10:35.901320 containerd[1556]: time="2025-06-21T06:10:35.901264957Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="25.728µs" Jun 21 06:10:35.902589 containerd[1556]: time="2025-06-21T06:10:35.902568522Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 06:10:35.902979 containerd[1556]: time="2025-06-21T06:10:35.902961269Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 06:10:35.903250 containerd[1556]: time="2025-06-21T06:10:35.903231235Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 06:10:35.904055 containerd[1556]: time="2025-06-21T06:10:35.904037497Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 06:10:35.904146 containerd[1556]: time="2025-06-21T06:10:35.904130552Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 06:10:35.904375 containerd[1556]: time="2025-06-21T06:10:35.904353920Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 
21 06:10:35.904743 containerd[1556]: time="2025-06-21T06:10:35.904727170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 06:10:35.906375 containerd[1556]: time="2025-06-21T06:10:35.906349633Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 06:10:35.906462 containerd[1556]: time="2025-06-21T06:10:35.906446475Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 06:10:35.906546 containerd[1556]: time="2025-06-21T06:10:35.906530483Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 06:10:35.906640 containerd[1556]: time="2025-06-21T06:10:35.906623597Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 06:10:35.906946 containerd[1556]: time="2025-06-21T06:10:35.906914212Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 06:10:35.908133 containerd[1556]: time="2025-06-21T06:10:35.908113832Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 06:10:35.908350 containerd[1556]: time="2025-06-21T06:10:35.908329376Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 06:10:35.908790 containerd[1556]: time="2025-06-21T06:10:35.908726862Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 06:10:35.908882 
containerd[1556]: time="2025-06-21T06:10:35.908846416Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 06:10:35.909863 containerd[1556]: time="2025-06-21T06:10:35.909816305Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 06:10:35.910029 containerd[1556]: time="2025-06-21T06:10:35.910004759Z" level=info msg="metadata content store policy set" policy=shared Jun 21 06:10:35.921502 containerd[1556]: time="2025-06-21T06:10:35.921419607Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 06:10:35.921562 containerd[1556]: time="2025-06-21T06:10:35.921529123Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 06:10:35.921562 containerd[1556]: time="2025-06-21T06:10:35.921556674Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 06:10:35.924044 containerd[1556]: time="2025-06-21T06:10:35.923805322Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 06:10:35.924044 containerd[1556]: time="2025-06-21T06:10:35.923829226Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 06:10:35.924044 containerd[1556]: time="2025-06-21T06:10:35.923846068Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 06:10:35.924044 containerd[1556]: time="2025-06-21T06:10:35.923878890Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 06:10:35.924044 containerd[1556]: time="2025-06-21T06:10:35.923896983Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 06:10:35.924044 containerd[1556]: 
time="2025-06-21T06:10:35.923910128Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 06:10:35.924044 containerd[1556]: time="2025-06-21T06:10:35.923921169Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 06:10:35.924044 containerd[1556]: time="2025-06-21T06:10:35.923933081Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 06:10:35.924044 containerd[1556]: time="2025-06-21T06:10:35.923956806Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 06:10:35.924261 containerd[1556]: time="2025-06-21T06:10:35.924093191Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 06:10:35.924261 containerd[1556]: time="2025-06-21T06:10:35.924127175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 06:10:35.924261 containerd[1556]: time="2025-06-21T06:10:35.924145379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 06:10:35.924261 containerd[1556]: time="2025-06-21T06:10:35.924178852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 06:10:35.924261 containerd[1556]: time="2025-06-21T06:10:35.924203488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 06:10:35.924261 containerd[1556]: time="2025-06-21T06:10:35.924216342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 06:10:35.924261 containerd[1556]: time="2025-06-21T06:10:35.924228014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 06:10:35.924261 containerd[1556]: time="2025-06-21T06:10:35.924238955Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 21 06:10:35.924261 containerd[1556]: time="2025-06-21T06:10:35.924250707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 21 06:10:35.924261 containerd[1556]: time="2025-06-21T06:10:35.924262098Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 21 06:10:35.924486 containerd[1556]: time="2025-06-21T06:10:35.924274471Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 21 06:10:35.924899 containerd[1556]: time="2025-06-21T06:10:35.924365632Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 21 06:10:35.924899 containerd[1556]: time="2025-06-21T06:10:35.924621703Z" level=info msg="Start snapshots syncer"
Jun 21 06:10:35.924899 containerd[1556]: time="2025-06-21T06:10:35.924675534Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 21 06:10:35.932116 containerd[1556]: time="2025-06-21T06:10:35.930888977Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 21 06:10:35.932116 containerd[1556]: time="2025-06-21T06:10:35.931359700Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 21 06:10:35.932328 containerd[1556]: time="2025-06-21T06:10:35.931550377Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 21 06:10:35.932328 containerd[1556]: time="2025-06-21T06:10:35.931716359Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 21 06:10:35.932328 containerd[1556]: time="2025-06-21T06:10:35.931746826Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 21 06:10:35.932328 containerd[1556]: time="2025-06-21T06:10:35.931759890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 21 06:10:35.933002 containerd[1556]: time="2025-06-21T06:10:35.932982483Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 21 06:10:35.933075 containerd[1556]: time="2025-06-21T06:10:35.933059878Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 21 06:10:35.933136 containerd[1556]: time="2025-06-21T06:10:35.933122135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 21 06:10:35.933214 containerd[1556]: time="2025-06-21T06:10:35.933198468Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 21 06:10:35.933393 containerd[1556]: time="2025-06-21T06:10:35.933375811Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 21 06:10:35.933816 containerd[1556]: time="2025-06-21T06:10:35.933626832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 21 06:10:35.933816 containerd[1556]: time="2025-06-21T06:10:35.933649815Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 21 06:10:35.936126 containerd[1556]: time="2025-06-21T06:10:35.936103256Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 21 06:10:35.936528 containerd[1556]: time="2025-06-21T06:10:35.936308822Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 21 06:10:35.936618 containerd[1556]: time="2025-06-21T06:10:35.936600278Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 21 06:10:35.936783 containerd[1556]: time="2025-06-21T06:10:35.936747064Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 21 06:10:35.938328 containerd[1556]: time="2025-06-21T06:10:35.937003735Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 21 06:10:35.938328 containerd[1556]: time="2025-06-21T06:10:35.937049511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 21 06:10:35.938328 containerd[1556]: time="2025-06-21T06:10:35.937070771Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 21 06:10:35.938328 containerd[1556]: time="2025-06-21T06:10:35.937099745Z" level=info msg="runtime interface created"
Jun 21 06:10:35.938328 containerd[1556]: time="2025-06-21T06:10:35.937106518Z" level=info msg="created NRI interface"
Jun 21 06:10:35.938328 containerd[1556]: time="2025-06-21T06:10:35.937117458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jun 21 06:10:35.938328 containerd[1556]: time="2025-06-21T06:10:35.937140762Z" level=info msg="Connect containerd service"
Jun 21 06:10:35.938328 containerd[1556]: time="2025-06-21T06:10:35.937181208Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 21 06:10:35.939750 containerd[1556]: time="2025-06-21T06:10:35.939726502Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 21 06:10:36.100999 systemd-networkd[1432]: eth0: Gained IPv6LL
Jun 21 06:10:36.101926 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection.
Jun 21 06:10:36.108121 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 21 06:10:36.108908 systemd[1]: Reached target network-online.target - Network is Online.
Jun 21 06:10:36.116952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 06:10:36.126959 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 21 06:10:36.194355 sshd_keygen[1553]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 21 06:10:36.207323 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.257629348Z" level=info msg="Start subscribing containerd event"
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.257693859Z" level=info msg="Start recovering state"
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.257875460Z" level=info msg="Start event monitor"
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.257895027Z" level=info msg="Start cni network conf syncer for default"
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.258145086Z" level=info msg="Start streaming server"
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.258164372Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.258172758Z" level=info msg="runtime interface starting up..."
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.258179751Z" level=info msg="starting plugins..."
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.258195160Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.259913492Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.259985678Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 21 06:10:36.260279 containerd[1556]: time="2025-06-21T06:10:36.260078261Z" level=info msg="containerd successfully booted in 0.411048s"
Jun 21 06:10:36.260245 systemd[1]: Started containerd.service - containerd container runtime.
Jun 21 06:10:36.262235 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 21 06:10:36.269109 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 21 06:10:36.272509 systemd[1]: Started sshd@0-172.24.4.3:22-172.24.4.1:58906.service - OpenSSH per-connection server daemon (172.24.4.1:58906).
Jun 21 06:10:36.314217 systemd[1]: issuegen.service: Deactivated successfully.
Jun 21 06:10:36.314487 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 21 06:10:36.320982 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 21 06:10:36.388209 tar[1545]: linux-amd64/LICENSE
Jun 21 06:10:36.388604 tar[1545]: linux-amd64/README.md
Jun 21 06:10:36.391321 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 21 06:10:36.399277 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 21 06:10:36.402170 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 21 06:10:36.402529 systemd[1]: Reached target getty.target - Login Prompts.
Jun 21 06:10:36.408526 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 21 06:10:36.471835 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 21 06:10:36.588529 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 21 06:10:37.367216 sshd[1643]: Accepted publickey for core from 172.24.4.1 port 58906 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:10:37.370717 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:10:37.390698 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 21 06:10:37.393945 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 21 06:10:37.413989 systemd-logind[1528]: New session 1 of user core.
Jun 21 06:10:37.437616 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 21 06:10:37.442015 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 21 06:10:37.458631 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 21 06:10:37.466291 systemd-logind[1528]: New session c1 of user core.
Jun 21 06:10:37.708597 systemd[1659]: Queued start job for default target default.target.
Jun 21 06:10:37.714859 systemd[1659]: Created slice app.slice - User Application Slice.
Jun 21 06:10:37.714892 systemd[1659]: Reached target paths.target - Paths.
Jun 21 06:10:37.715025 systemd[1659]: Reached target timers.target - Timers.
Jun 21 06:10:37.727932 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 21 06:10:37.739036 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 21 06:10:37.740095 systemd[1659]: Reached target sockets.target - Sockets.
Jun 21 06:10:37.740238 systemd[1659]: Reached target basic.target - Basic System.
Jun 21 06:10:37.740321 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 21 06:10:37.741876 systemd[1659]: Reached target default.target - Main User Target.
Jun 21 06:10:37.741907 systemd[1659]: Startup finished in 262ms.
Jun 21 06:10:37.746561 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 21 06:10:38.229405 systemd[1]: Started sshd@1-172.24.4.3:22-172.24.4.1:40828.service - OpenSSH per-connection server daemon (172.24.4.1:40828).
Jun 21 06:10:38.314989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 06:10:38.333459 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 06:10:38.499827 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 21 06:10:38.609831 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 21 06:10:39.514242 kubelet[1676]: E0621 06:10:39.514091 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 06:10:39.518148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 06:10:39.518467 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 06:10:39.519847 systemd[1]: kubelet.service: Consumed 2.178s CPU time, 265.1M memory peak.
Jun 21 06:10:40.136638 sshd[1670]: Accepted publickey for core from 172.24.4.1 port 40828 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:10:40.139424 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:10:40.152674 systemd-logind[1528]: New session 2 of user core.
Jun 21 06:10:40.167219 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 21 06:10:40.893821 sshd[1687]: Connection closed by 172.24.4.1 port 40828
Jun 21 06:10:40.894183 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
Jun 21 06:10:40.910545 systemd[1]: sshd@1-172.24.4.3:22-172.24.4.1:40828.service: Deactivated successfully.
Jun 21 06:10:40.914926 systemd[1]: session-2.scope: Deactivated successfully.
Jun 21 06:10:40.918953 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit.
Jun 21 06:10:40.924565 systemd[1]: Started sshd@2-172.24.4.3:22-172.24.4.1:40838.service - OpenSSH per-connection server daemon (172.24.4.1:40838).
Jun 21 06:10:40.927294 systemd-logind[1528]: Removed session 2.
Jun 21 06:10:41.474399 login[1652]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 21 06:10:41.482337 login[1653]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 21 06:10:41.487190 systemd-logind[1528]: New session 3 of user core.
Jun 21 06:10:41.499286 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 21 06:10:41.508925 systemd-logind[1528]: New session 4 of user core.
Jun 21 06:10:41.520688 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 21 06:10:42.274442 sshd[1693]: Accepted publickey for core from 172.24.4.1 port 40838 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:10:42.277096 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:10:42.289891 systemd-logind[1528]: New session 5 of user core.
Jun 21 06:10:42.310228 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 21 06:10:42.530846 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 21 06:10:42.548826 coreos-metadata[1511]: Jun 21 06:10:42.548 WARN failed to locate config-drive, using the metadata service API instead
Jun 21 06:10:42.601415 coreos-metadata[1511]: Jun 21 06:10:42.601 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jun 21 06:10:42.633835 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 21 06:10:42.652995 coreos-metadata[1600]: Jun 21 06:10:42.652 WARN failed to locate config-drive, using the metadata service API instead
Jun 21 06:10:42.696432 coreos-metadata[1600]: Jun 21 06:10:42.696 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jun 21 06:10:42.777451 coreos-metadata[1511]: Jun 21 06:10:42.777 INFO Fetch successful
Jun 21 06:10:42.777451 coreos-metadata[1511]: Jun 21 06:10:42.777 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jun 21 06:10:42.792238 coreos-metadata[1511]: Jun 21 06:10:42.792 INFO Fetch successful
Jun 21 06:10:42.792847 coreos-metadata[1511]: Jun 21 06:10:42.792 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jun 21 06:10:42.808389 coreos-metadata[1511]: Jun 21 06:10:42.808 INFO Fetch successful
Jun 21 06:10:42.808389 coreos-metadata[1511]: Jun 21 06:10:42.808 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jun 21 06:10:42.821047 coreos-metadata[1511]: Jun 21 06:10:42.820 INFO Fetch successful
Jun 21 06:10:42.821047 coreos-metadata[1511]: Jun 21 06:10:42.821 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jun 21 06:10:42.837036 coreos-metadata[1511]: Jun 21 06:10:42.836 INFO Fetch successful
Jun 21 06:10:42.837036 coreos-metadata[1511]: Jun 21 06:10:42.836 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jun 21 06:10:42.850077 coreos-metadata[1511]: Jun 21 06:10:42.849 INFO Fetch successful
Jun 21 06:10:42.907435 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 21 06:10:42.910098 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 21 06:10:42.946052 sshd[1719]: Connection closed by 172.24.4.1 port 40838
Jun 21 06:10:42.945849 sshd-session[1693]: pam_unix(sshd:session): session closed for user core
Jun 21 06:10:42.952707 systemd[1]: sshd@2-172.24.4.3:22-172.24.4.1:40838.service: Deactivated successfully.
Jun 21 06:10:42.957424 systemd[1]: session-5.scope: Deactivated successfully.
Jun 21 06:10:42.961704 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit.
Jun 21 06:10:42.965962 systemd-logind[1528]: Removed session 5.
Jun 21 06:10:42.998678 coreos-metadata[1600]: Jun 21 06:10:42.998 INFO Fetch successful
Jun 21 06:10:42.998678 coreos-metadata[1600]: Jun 21 06:10:42.998 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jun 21 06:10:43.007726 coreos-metadata[1600]: Jun 21 06:10:43.007 INFO Fetch successful
Jun 21 06:10:43.014056 unknown[1600]: wrote ssh authorized keys file for user: core
Jun 21 06:10:43.066296 update-ssh-keys[1734]: Updated "/home/core/.ssh/authorized_keys"
Jun 21 06:10:43.067590 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jun 21 06:10:43.070549 systemd[1]: Finished sshkeys.service.
Jun 21 06:10:43.078400 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 21 06:10:43.079218 systemd[1]: Startup finished in 3.919s (kernel) + 16.643s (initrd) + 11.764s (userspace) = 32.327s.
Jun 21 06:10:49.558434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 21 06:10:49.562136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 06:10:49.991529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 06:10:50.005637 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 06:10:50.145806 kubelet[1745]: E0621 06:10:50.145665 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 06:10:50.153078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 06:10:50.153261 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 06:10:50.153552 systemd[1]: kubelet.service: Consumed 340ms CPU time, 109.2M memory peak.
Jun 21 06:10:52.970370 systemd[1]: Started sshd@3-172.24.4.3:22-172.24.4.1:47154.service - OpenSSH per-connection server daemon (172.24.4.1:47154).
Jun 21 06:10:54.231366 sshd[1754]: Accepted publickey for core from 172.24.4.1 port 47154 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:10:54.234211 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:10:54.245172 systemd-logind[1528]: New session 6 of user core.
Jun 21 06:10:54.259115 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 21 06:10:54.831453 sshd[1756]: Connection closed by 172.24.4.1 port 47154
Jun 21 06:10:54.832259 sshd-session[1754]: pam_unix(sshd:session): session closed for user core
Jun 21 06:10:54.845928 systemd[1]: sshd@3-172.24.4.3:22-172.24.4.1:47154.service: Deactivated successfully.
Jun 21 06:10:54.848620 systemd[1]: session-6.scope: Deactivated successfully.
Jun 21 06:10:54.851487 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit.
Jun 21 06:10:54.855239 systemd[1]: Started sshd@4-172.24.4.3:22-172.24.4.1:47166.service - OpenSSH per-connection server daemon (172.24.4.1:47166).
Jun 21 06:10:54.860566 systemd-logind[1528]: Removed session 6.
Jun 21 06:10:56.041935 sshd[1762]: Accepted publickey for core from 172.24.4.1 port 47166 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:10:56.044594 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:10:56.056365 systemd-logind[1528]: New session 7 of user core.
Jun 21 06:10:56.065098 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 21 06:10:56.671046 sshd[1764]: Connection closed by 172.24.4.1 port 47166
Jun 21 06:10:56.672306 sshd-session[1762]: pam_unix(sshd:session): session closed for user core
Jun 21 06:10:56.690157 systemd[1]: sshd@4-172.24.4.3:22-172.24.4.1:47166.service: Deactivated successfully.
Jun 21 06:10:56.694656 systemd[1]: session-7.scope: Deactivated successfully.
Jun 21 06:10:56.698470 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit.
Jun 21 06:10:56.702483 systemd[1]: Started sshd@5-172.24.4.3:22-172.24.4.1:47178.service - OpenSSH per-connection server daemon (172.24.4.1:47178).
Jun 21 06:10:56.705715 systemd-logind[1528]: Removed session 7.
Jun 21 06:10:57.915808 sshd[1770]: Accepted publickey for core from 172.24.4.1 port 47178 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:10:57.918518 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:10:57.930879 systemd-logind[1528]: New session 8 of user core.
Jun 21 06:10:57.943107 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 21 06:10:58.639106 sshd[1772]: Connection closed by 172.24.4.1 port 47178
Jun 21 06:10:58.640197 sshd-session[1770]: pam_unix(sshd:session): session closed for user core
Jun 21 06:10:58.661676 systemd[1]: sshd@5-172.24.4.3:22-172.24.4.1:47178.service: Deactivated successfully.
Jun 21 06:10:58.667176 systemd[1]: session-8.scope: Deactivated successfully.
Jun 21 06:10:58.669288 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit.
Jun 21 06:10:58.675194 systemd[1]: Started sshd@6-172.24.4.3:22-172.24.4.1:47188.service - OpenSSH per-connection server daemon (172.24.4.1:47188).
Jun 21 06:10:58.677731 systemd-logind[1528]: Removed session 8.
Jun 21 06:10:59.930419 sshd[1778]: Accepted publickey for core from 172.24.4.1 port 47188 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:10:59.933640 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:10:59.944815 systemd-logind[1528]: New session 9 of user core.
Jun 21 06:10:59.953065 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 21 06:11:00.308872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 21 06:11:00.314564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 06:11:00.532835 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 21 06:11:00.533589 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 06:11:00.571908 sudo[1784]: pam_unix(sudo:session): session closed for user root
Jun 21 06:11:00.626142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 06:11:00.637189 (kubelet)[1790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 06:11:00.784180 sshd[1780]: Connection closed by 172.24.4.1 port 47188
Jun 21 06:11:00.787052 sshd-session[1778]: pam_unix(sshd:session): session closed for user core
Jun 21 06:11:00.799122 systemd[1]: sshd@6-172.24.4.3:22-172.24.4.1:47188.service: Deactivated successfully.
Jun 21 06:11:00.801239 systemd[1]: session-9.scope: Deactivated successfully.
Jun 21 06:11:00.803160 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit.
Jun 21 06:11:00.809082 systemd[1]: Started sshd@7-172.24.4.3:22-172.24.4.1:47202.service - OpenSSH per-connection server daemon (172.24.4.1:47202).
Jun 21 06:11:00.812171 systemd-logind[1528]: Removed session 9.
Jun 21 06:11:00.815570 kubelet[1790]: E0621 06:11:00.815341 1790 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 06:11:00.820245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 06:11:00.820513 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 06:11:00.821709 systemd[1]: kubelet.service: Consumed 302ms CPU time, 108.3M memory peak.
Jun 21 06:11:02.306320 sshd[1802]: Accepted publickey for core from 172.24.4.1 port 47202 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:11:02.310745 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:11:02.329203 systemd-logind[1528]: New session 10 of user core.
Jun 21 06:11:02.345429 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 21 06:11:02.699995 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 21 06:11:02.700622 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 06:11:02.714106 sudo[1807]: pam_unix(sudo:session): session closed for user root
Jun 21 06:11:02.725264 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 21 06:11:02.726041 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 06:11:02.745140 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 21 06:11:02.827860 augenrules[1829]: No rules
Jun 21 06:11:02.831054 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 21 06:11:02.831700 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 21 06:11:02.834172 sudo[1806]: pam_unix(sudo:session): session closed for user root
Jun 21 06:11:03.111864 sshd[1805]: Connection closed by 172.24.4.1 port 47202
Jun 21 06:11:03.113418 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Jun 21 06:11:03.124037 systemd[1]: sshd@7-172.24.4.3:22-172.24.4.1:47202.service: Deactivated successfully.
Jun 21 06:11:03.126344 systemd[1]: session-10.scope: Deactivated successfully.
Jun 21 06:11:03.130087 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit.
Jun 21 06:11:03.136483 systemd[1]: Started sshd@8-172.24.4.3:22-172.24.4.1:48614.service - OpenSSH per-connection server daemon (172.24.4.1:48614).
Jun 21 06:11:03.139282 systemd-logind[1528]: Removed session 10.
Jun 21 06:11:04.568175 sshd[1838]: Accepted publickey for core from 172.24.4.1 port 48614 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:11:04.572505 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:11:04.586939 systemd-logind[1528]: New session 11 of user core.
Jun 21 06:11:04.598307 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 21 06:11:05.133674 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 21 06:11:05.134451 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 21 06:11:06.144075 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 21 06:11:06.167635 (dockerd)[1859]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 21 06:11:07.688078 systemd-resolved[1433]: Clock change detected. Flushing caches.
Jun 21 06:11:07.689017 systemd-timesyncd[1454]: Contacted time server 149.248.12.167:123 (2.flatcar.pool.ntp.org).
Jun 21 06:11:07.691201 systemd-timesyncd[1454]: Initial clock synchronization to Sat 2025-06-21 06:11:07.686437 UTC.
Jun 21 06:11:07.850706 dockerd[1859]: time="2025-06-21T06:11:07.850490648Z" level=info msg="Starting up"
Jun 21 06:11:07.854221 dockerd[1859]: time="2025-06-21T06:11:07.854083767Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 21 06:11:07.961508 systemd[1]: var-lib-docker-metacopy\x2dcheck2159617195-merged.mount: Deactivated successfully.
Jun 21 06:11:07.989460 dockerd[1859]: time="2025-06-21T06:11:07.989311837Z" level=info msg="Loading containers: start."
Jun 21 06:11:08.025183 kernel: Initializing XFRM netlink socket
Jun 21 06:11:08.586625 systemd-networkd[1432]: docker0: Link UP
Jun 21 06:11:08.594390 dockerd[1859]: time="2025-06-21T06:11:08.594307546Z" level=info msg="Loading containers: done."
Jun 21 06:11:08.625673 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck614756262-merged.mount: Deactivated successfully.
Jun 21 06:11:08.628569 dockerd[1859]: time="2025-06-21T06:11:08.628462454Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 21 06:11:08.628569 dockerd[1859]: time="2025-06-21T06:11:08.628605752Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 21 06:11:08.629193 dockerd[1859]: time="2025-06-21T06:11:08.628776032Z" level=info msg="Initializing buildkit"
Jun 21 06:11:08.683440 dockerd[1859]: time="2025-06-21T06:11:08.683345527Z" level=info msg="Completed buildkit initialization"
Jun 21 06:11:08.702803 dockerd[1859]: time="2025-06-21T06:11:08.701691184Z" level=info msg="Daemon has completed initialization"
Jun 21 06:11:08.702803 dockerd[1859]: time="2025-06-21T06:11:08.701868547Z" level=info msg="API listen on /run/docker.sock"
Jun 21 06:11:08.703944 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 21 06:11:10.307054 containerd[1556]: time="2025-06-21T06:11:10.304895073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jun 21 06:11:11.210940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976748683.mount: Deactivated successfully.
Jun 21 06:11:12.202013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 21 06:11:12.207658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 06:11:12.639022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 06:11:12.652545 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 21 06:11:12.734747 kubelet[2122]: E0621 06:11:12.734607 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 21 06:11:12.737901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 21 06:11:12.738092 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 21 06:11:12.739382 systemd[1]: kubelet.service: Consumed 438ms CPU time, 108.2M memory peak.
Jun 21 06:11:13.144871 containerd[1556]: time="2025-06-21T06:11:13.144777291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:13.146556 containerd[1556]: time="2025-06-21T06:11:13.146248049Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752"
Jun 21 06:11:13.147660 containerd[1556]: time="2025-06-21T06:11:13.147620183Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:13.151164 containerd[1556]: time="2025-06-21T06:11:13.151118965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:13.153800 containerd[1556]: time="2025-06-21T06:11:13.153744850Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.84644626s"
Jun 21 06:11:13.153864 containerd[1556]: time="2025-06-21T06:11:13.153826533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jun 21 06:11:13.165350 containerd[1556]: time="2025-06-21T06:11:13.165261659Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jun 21 06:11:15.175777 containerd[1556]: time="2025-06-21T06:11:15.175498067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:15.179879 containerd[1556]: time="2025-06-21T06:11:15.179851352Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302"
Jun 21 06:11:15.181123 containerd[1556]: time="2025-06-21T06:11:15.181080096Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:15.194373 containerd[1556]: time="2025-06-21T06:11:15.194183512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:15.197003 containerd[1556]: time="2025-06-21T06:11:15.196172091Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.030837055s"
Jun 21 06:11:15.197003 containerd[1556]: time="2025-06-21T06:11:15.196309880Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jun 21 06:11:15.198785 containerd[1556]: time="2025-06-21T06:11:15.198259296Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jun 21 06:11:17.286258 containerd[1556]: time="2025-06-21T06:11:17.286142420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:17.288452 containerd[1556]: time="2025-06-21T06:11:17.288413680Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679"
Jun 21 06:11:17.290233 containerd[1556]: time="2025-06-21T06:11:17.290187637Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:17.294140 containerd[1556]: time="2025-06-21T06:11:17.293403498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:17.294686 containerd[1556]: time="2025-06-21T06:11:17.294569295Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.095670019s"
Jun 21 06:11:17.294686 containerd[1556]: time="2025-06-21T06:11:17.294619409Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jun 21 06:11:17.295173 containerd[1556]: time="2025-06-21T06:11:17.295145415Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jun 21 06:11:18.674613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount753901363.mount: Deactivated successfully.
Jun 21 06:11:19.259448 containerd[1556]: time="2025-06-21T06:11:19.259208074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:19.260873 containerd[1556]: time="2025-06-21T06:11:19.260840686Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951"
Jun 21 06:11:19.262250 containerd[1556]: time="2025-06-21T06:11:19.262162455Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:19.266127 containerd[1556]: time="2025-06-21T06:11:19.265321210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:19.266127 containerd[1556]: time="2025-06-21T06:11:19.266057691Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest
\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.970878963s" Jun 21 06:11:19.266127 containerd[1556]: time="2025-06-21T06:11:19.266087437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 21 06:11:19.267027 containerd[1556]: time="2025-06-21T06:11:19.266982365Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 06:11:20.024632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492754213.mount: Deactivated successfully. Jun 21 06:11:21.456794 containerd[1556]: time="2025-06-21T06:11:21.456616213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:11:21.458870 containerd[1556]: time="2025-06-21T06:11:21.458845133Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jun 21 06:11:21.459128 containerd[1556]: time="2025-06-21T06:11:21.459087407Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:11:21.462448 containerd[1556]: time="2025-06-21T06:11:21.462394059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:11:21.463670 containerd[1556]: time="2025-06-21T06:11:21.463634766Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.196612917s" Jun 21 06:11:21.463743 containerd[1556]: time="2025-06-21T06:11:21.463670423Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 21 06:11:21.465546 containerd[1556]: time="2025-06-21T06:11:21.465456463Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 06:11:21.912017 update_engine[1531]: I20250621 06:11:21.911350 1531 update_attempter.cc:509] Updating boot flags... Jun 21 06:11:22.042087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045615711.mount: Deactivated successfully. Jun 21 06:11:22.068319 containerd[1556]: time="2025-06-21T06:11:22.068257255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 06:11:22.069228 containerd[1556]: time="2025-06-21T06:11:22.069153546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 21 06:11:22.073127 containerd[1556]: time="2025-06-21T06:11:22.072425623Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 06:11:22.079288 containerd[1556]: time="2025-06-21T06:11:22.079256535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 06:11:22.080181 containerd[1556]: time="2025-06-21T06:11:22.080140753Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 614.276936ms" Jun 21 06:11:22.080249 containerd[1556]: time="2025-06-21T06:11:22.080181920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 06:11:22.081526 containerd[1556]: time="2025-06-21T06:11:22.081499110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 21 06:11:22.795242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 21 06:11:22.804641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:11:22.845081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3948162431.mount: Deactivated successfully. Jun 21 06:11:23.377400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:11:23.395047 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 06:11:23.496295 kubelet[2232]: E0621 06:11:23.496194 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 06:11:23.499337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 06:11:23.499486 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 06:11:23.500556 systemd[1]: kubelet.service: Consumed 340ms CPU time, 110.2M memory peak. 
Jun 21 06:11:26.243783 containerd[1556]: time="2025-06-21T06:11:26.243544125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:26.247212 containerd[1556]: time="2025-06-21T06:11:26.247174754Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
Jun 21 06:11:26.248603 containerd[1556]: time="2025-06-21T06:11:26.248565082Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:26.253559 containerd[1556]: time="2025-06-21T06:11:26.253507020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 21 06:11:26.255771 containerd[1556]: time="2025-06-21T06:11:26.255709060Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.173992772s"
Jun 21 06:11:26.255926 containerd[1556]: time="2025-06-21T06:11:26.255900750Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jun 21 06:11:30.384706 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 06:11:30.385552 systemd[1]: kubelet.service: Consumed 340ms CPU time, 110.2M memory peak.
Jun 21 06:11:30.394621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 06:11:30.445964 systemd[1]: Reload requested from client PID 2315 ('systemctl') (unit session-11.scope)...
Jun 21 06:11:30.446311 systemd[1]: Reloading...
Jun 21 06:11:30.570174 zram_generator::config[2360]: No configuration found.
Jun 21 06:11:30.711945 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 21 06:11:30.885312 systemd[1]: Reloading finished in 438 ms.
Jun 21 06:11:31.297629 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jun 21 06:11:31.298284 systemd[1]: kubelet.service: Failed with result 'signal'.
Jun 21 06:11:31.300319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 06:11:31.300516 systemd[1]: kubelet.service: Consumed 326ms CPU time, 97.4M memory peak.
Jun 21 06:11:31.307421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 21 06:11:32.065549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 21 06:11:32.093878 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jun 21 06:11:32.192150 kubelet[2424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 21 06:11:32.192150 kubelet[2424]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jun 21 06:11:32.192150 kubelet[2424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jun 21 06:11:32.192150 kubelet[2424]: I0621 06:11:32.190919 2424 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jun 21 06:11:33.043225 kubelet[2424]: I0621 06:11:33.042517 2424 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jun 21 06:11:33.043225 kubelet[2424]: I0621 06:11:33.042701 2424 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jun 21 06:11:33.044974 kubelet[2424]: I0621 06:11:33.044930 2424 server.go:934] "Client rotation is on, will bootstrap in background"
Jun 21 06:11:33.092530 kubelet[2424]: I0621 06:11:33.092432 2424 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jun 21 06:11:33.099188 kubelet[2424]: E0621 06:11:33.099018 2424 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.3:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.3:6443: connect: connection refused" logger="UnhandledError"
Jun 21 06:11:33.109679 kubelet[2424]: I0621 06:11:33.109522 2424 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jun 21 06:11:33.126718 kubelet[2424]: I0621 06:11:33.126658 2424 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jun 21 06:11:33.128174 kubelet[2424]: I0621 06:11:33.128071 2424 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jun 21 06:11:33.128476 kubelet[2424]: I0621 06:11:33.128400 2424 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jun 21 06:11:33.128929 kubelet[2424]: I0621 06:11:33.128462 2424 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-0-3-5f235c9307.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jun 21 06:11:33.129662 kubelet[2424]: I0621 06:11:33.128973 2424 topology_manager.go:138] "Creating topology manager with none policy"
Jun 21 06:11:33.129662 kubelet[2424]: I0621 06:11:33.129001 2424 container_manager_linux.go:300] "Creating device plugin manager"
Jun 21 06:11:33.129662 kubelet[2424]: I0621 06:11:33.129385 2424 state_mem.go:36] "Initialized new in-memory state store"
Jun 21 06:11:33.133985 kubelet[2424]: I0621 06:11:33.133910 2424 kubelet.go:408] "Attempting to sync node with API server"
Jun 21 06:11:33.133985 kubelet[2424]: I0621 06:11:33.133983 2424 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jun 21 06:11:33.134435 kubelet[2424]: I0621 06:11:33.134146 2424 kubelet.go:314] "Adding apiserver pod source"
Jun 21 06:11:33.134435 kubelet[2424]: I0621 06:11:33.134268 2424 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jun 21 06:11:33.148511 kubelet[2424]: W0621 06:11:33.147020 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.3:6443: connect: connection refused
Jun 21 06:11:33.148511 kubelet[2424]: E0621 06:11:33.147153 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.3:6443: connect: connection refused" logger="UnhandledError"
Jun 21 06:11:33.148511 kubelet[2424]: W0621 06:11:33.148474 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-0-3-5f235c9307.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.3:6443: connect: connection refused
Jun 21 06:11:33.149053 kubelet[2424]: E0621 06:11:33.148545 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-0-3-5f235c9307.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.3:6443: connect: connection refused" logger="UnhandledError"
Jun 21 06:11:33.150182 kubelet[2424]: I0621 06:11:33.150141 2424 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jun 21 06:11:33.151209 kubelet[2424]: I0621 06:11:33.151163 2424 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jun 21 06:11:33.152790 kubelet[2424]: W0621 06:11:33.152746 2424 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jun 21 06:11:33.157187 kubelet[2424]: I0621 06:11:33.156188 2424 server.go:1274] "Started kubelet"
Jun 21 06:11:33.158332 kubelet[2424]: I0621 06:11:33.158280 2424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jun 21 06:11:33.169191 kubelet[2424]: I0621 06:11:33.168486 2424 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jun 21 06:11:33.170278 kubelet[2424]: I0621 06:11:33.170228 2424 server.go:449] "Adding debug handlers to kubelet server"
Jun 21 06:11:33.177753 kubelet[2424]: I0621 06:11:33.176361 2424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jun 21 06:11:33.177753 kubelet[2424]: I0621 06:11:33.176702 2424 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jun 21 06:11:33.177753 kubelet[2424]: I0621 06:11:33.177194 2424 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jun 21 06:11:33.180159 kubelet[2424]: I0621 06:11:33.179078 2424 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jun 21 06:11:33.180159 kubelet[2424]: E0621 06:11:33.179456 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-0-3-5f235c9307.novalocal\" not found"
Jun 21 06:11:33.185264 kubelet[2424]: I0621 06:11:33.185216 2424 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jun 21 06:11:33.185529 kubelet[2424]: I0621 06:11:33.185358 2424 reconciler.go:26] "Reconciler: start to sync state"
Jun 21 06:11:33.186310 kubelet[2424]: E0621 06:11:33.181296 2424 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.3:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.3:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-0-0-3-5f235c9307.novalocal.184afa0257c24561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-0-3-5f235c9307.novalocal,UID:ci-4372-0-0-3-5f235c9307.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-0-3-5f235c9307.novalocal,},FirstTimestamp:2025-06-21 06:11:33.156087137 +0000 UTC m=+1.051029687,LastTimestamp:2025-06-21 06:11:33.156087137 +0000 UTC m=+1.051029687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-0-3-5f235c9307.novalocal,}"
Jun 21 06:11:33.188188 kubelet[2424]: E0621 06:11:33.188076 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-0-3-5f235c9307.novalocal?timeout=10s\": dial tcp 172.24.4.3:6443: connect: connection refused" interval="200ms"
Jun 21 06:11:33.206009 kubelet[2424]: I0621 06:11:33.205918 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jun 21 06:11:33.208485 kubelet[2424]: I0621 06:11:33.207277 2424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jun 21 06:11:33.208485 kubelet[2424]: I0621 06:11:33.207451 2424 status_manager.go:217] "Starting to sync pod status with apiserver"
Jun 21 06:11:33.208485 kubelet[2424]: I0621 06:11:33.207603 2424 kubelet.go:2321] "Starting kubelet main sync loop"
Jun 21 06:11:33.208485 kubelet[2424]: E0621 06:11:33.207704 2424 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jun 21 06:11:33.211772 kubelet[2424]: W0621 06:11:33.211707 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.3:6443: connect: connection refused
Jun 21 06:11:33.211873 kubelet[2424]: E0621 06:11:33.211779 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.3:6443: connect: connection refused" logger="UnhandledError"
Jun 21 06:11:33.213068 kubelet[2424]: I0621 06:11:33.213042 2424 factory.go:221] Registration of the containerd container factory successfully
Jun 21 06:11:33.213068 kubelet[2424]: I0621 06:11:33.213060 2424 factory.go:221] Registration of the systemd container factory successfully
Jun 21 06:11:33.213233 kubelet[2424]: I0621 06:11:33.213206 2424 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jun 21 06:11:33.218403 kubelet[2424]: E0621 06:11:33.218378 2424 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jun 21 06:11:33.219311 kubelet[2424]: W0621 06:11:33.219272 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.3:6443: connect: connection refused
Jun 21 06:11:33.219469 kubelet[2424]: E0621 06:11:33.219447 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.3:6443: connect: connection refused" logger="UnhandledError"
Jun 21 06:11:33.249975 kubelet[2424]: I0621 06:11:33.249934 2424 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jun 21 06:11:33.249975 kubelet[2424]: I0621 06:11:33.249954 2424 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jun 21 06:11:33.250204 kubelet[2424]: I0621 06:11:33.249998 2424 state_mem.go:36] "Initialized new in-memory state store"
Jun 21 06:11:33.256968 kubelet[2424]: I0621 06:11:33.256925 2424 policy_none.go:49] "None policy: Start"
Jun 21 06:11:33.258258 kubelet[2424]: I0621 06:11:33.258215 2424 memory_manager.go:170] "Starting memorymanager" policy="None"
Jun 21 06:11:33.258386 kubelet[2424]: I0621 06:11:33.258280 2424 state_mem.go:35] "Initializing new in-memory state store"
Jun 21 06:11:33.272914 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jun 21 06:11:33.279722 kubelet[2424]: E0621 06:11:33.279693 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-0-3-5f235c9307.novalocal\" not found"
Jun 21 06:11:33.287016 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jun 21 06:11:33.292149 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jun 21 06:11:33.310010 kubelet[2424]: E0621 06:11:33.308534 2424 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jun 21 06:11:33.313388 kubelet[2424]: I0621 06:11:33.313276 2424 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jun 21 06:11:33.315677 kubelet[2424]: I0621 06:11:33.313943 2424 eviction_manager.go:189] "Eviction manager: starting control loop"
Jun 21 06:11:33.315677 kubelet[2424]: I0621 06:11:33.313999 2424 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jun 21 06:11:33.315677 kubelet[2424]: I0621 06:11:33.314497 2424 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jun 21 06:11:33.317914 kubelet[2424]: E0621 06:11:33.317870 2424 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-0-0-3-5f235c9307.novalocal\" not found"
Jun 21 06:11:33.389855 kubelet[2424]: E0621 06:11:33.389738 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-0-3-5f235c9307.novalocal?timeout=10s\": dial tcp 172.24.4.3:6443: connect: connection refused" interval="400ms"
Jun 21 06:11:33.418403 kubelet[2424]: I0621 06:11:33.418278 2424 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.419455 kubelet[2424]: E0621 06:11:33.419331 2424 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.3:6443/api/v1/nodes\": dial tcp 172.24.4.3:6443: connect: connection refused" node="ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.541634 systemd[1]: Created slice kubepods-burstable-podac8e16a8263e46ccfc41d5a143506bdf.slice - libcontainer container kubepods-burstable-podac8e16a8263e46ccfc41d5a143506bdf.slice.
Jun 21 06:11:33.570078 systemd[1]: Created slice kubepods-burstable-pod2b4a0852faae79772be9f97eaa16fb59.slice - libcontainer container kubepods-burstable-pod2b4a0852faae79772be9f97eaa16fb59.slice.
Jun 21 06:11:33.585265 systemd[1]: Created slice kubepods-burstable-pod9221c29d0ab3c0030dbfc79ceb66cc9c.slice - libcontainer container kubepods-burstable-pod9221c29d0ab3c0030dbfc79ceb66cc9c.slice.
Jun 21 06:11:33.624261 kubelet[2424]: I0621 06:11:33.624179 2424 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.625450 kubelet[2424]: E0621 06:11:33.625345 2424 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.3:6443/api/v1/nodes\": dial tcp 172.24.4.3:6443: connect: connection refused" node="ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.688498 kubelet[2424]: I0621 06:11:33.687917 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac8e16a8263e46ccfc41d5a143506bdf-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"ac8e16a8263e46ccfc41d5a143506bdf\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.688498 kubelet[2424]: I0621 06:11:33.688015 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac8e16a8263e46ccfc41d5a143506bdf-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"ac8e16a8263e46ccfc41d5a143506bdf\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.688498 kubelet[2424]: I0621 06:11:33.688071 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b4a0852faae79772be9f97eaa16fb59-kubeconfig\") pod \"kube-scheduler-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"2b4a0852faae79772be9f97eaa16fb59\") " pod="kube-system/kube-scheduler-ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.688498 kubelet[2424]: I0621 06:11:33.688177 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9221c29d0ab3c0030dbfc79ceb66cc9c-k8s-certs\") pod \"kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"9221c29d0ab3c0030dbfc79ceb66cc9c\") " pod="kube-system/kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.688498 kubelet[2424]: I0621 06:11:33.688228 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac8e16a8263e46ccfc41d5a143506bdf-ca-certs\") pod \"kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"ac8e16a8263e46ccfc41d5a143506bdf\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.689244 kubelet[2424]: I0621 06:11:33.688273 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac8e16a8263e46ccfc41d5a143506bdf-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"ac8e16a8263e46ccfc41d5a143506bdf\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.689244 kubelet[2424]: I0621 06:11:33.688321 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac8e16a8263e46ccfc41d5a143506bdf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"ac8e16a8263e46ccfc41d5a143506bdf\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.689244 kubelet[2424]: I0621 06:11:33.688371 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9221c29d0ab3c0030dbfc79ceb66cc9c-ca-certs\") pod \"kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"9221c29d0ab3c0030dbfc79ceb66cc9c\") " pod="kube-system/kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.689244 kubelet[2424]: I0621 06:11:33.688504 2424 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9221c29d0ab3c0030dbfc79ceb66cc9c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"9221c29d0ab3c0030dbfc79ceb66cc9c\") " pod="kube-system/kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal"
Jun 21 06:11:33.791088 kubelet[2424]: E0621 06:11:33.790982 2424 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.3:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-0-3-5f235c9307.novalocal?timeout=10s\": dial tcp 172.24.4.3:6443: connect: connection refused" interval="800ms"
Jun 21 06:11:33.867174 containerd[1556]: time="2025-06-21T06:11:33.865528905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal,Uid:ac8e16a8263e46ccfc41d5a143506bdf,Namespace:kube-system,Attempt:0,}"
Jun 21 06:11:33.882343 containerd[1556]: time="2025-06-21T06:11:33.882243403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-0-3-5f235c9307.novalocal,Uid:2b4a0852faae79772be9f97eaa16fb59,Namespace:kube-system,Attempt:0,}"
Jun 21 06:11:33.892363 containerd[1556]: time="2025-06-21T06:11:33.892215566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal,Uid:9221c29d0ab3c0030dbfc79ceb66cc9c,Namespace:kube-system,Attempt:0,}"
Jun 21 06:11:33.955415 kubelet[2424]: W0621 06:11:33.954578 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.3:6443: connect: connection refused
Jun 21 06:11:33.955415 kubelet[2424]: E0621 06:11:33.954736 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.3:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.3:6443: connect: connection refused" logger="UnhandledError"
Jun 21 06:11:33.985127 containerd[1556]: time="2025-06-21T06:11:33.985045048Z" level=info msg="connecting to shim 25b7a4ac2db9ff1f41322a5a7bdd7d231e2b0c0f36d7feb56401f8a87ad92b81" address="unix:///run/containerd/s/60e667e31e692417be7cb70e50bdbf210e3165bbc33b87b349f158687d8e1926" namespace=k8s.io protocol=ttrpc version=3
Jun 21 06:11:34.011789 containerd[1556]: time="2025-06-21T06:11:34.011654794Z" level=info msg="connecting to shim 39327ec776d184087eef7a232cbe88f895424013466a7d3c6e9418959706de72"
address="unix:///run/containerd/s/4dd7ec02bf3aba57efc5c3eb7d991cddf7dd18d0b0600f2ad327fb8cbb03e50d" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:11:34.015827 containerd[1556]: time="2025-06-21T06:11:34.015263773Z" level=info msg="connecting to shim 6884ec9eef238e93fd07fe95e026e1f0b63b1cdef2e4ccdb6270d4d3e68a151b" address="unix:///run/containerd/s/178f4eef201b1f78273f69ac4cb9e60b3e652585889faf39b103dc519b157b25" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:11:34.030461 kubelet[2424]: I0621 06:11:34.030429 2424 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:34.032520 kubelet[2424]: E0621 06:11:34.032434 2424 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.3:6443/api/v1/nodes\": dial tcp 172.24.4.3:6443: connect: connection refused" node="ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:34.042361 systemd[1]: Started cri-containerd-25b7a4ac2db9ff1f41322a5a7bdd7d231e2b0c0f36d7feb56401f8a87ad92b81.scope - libcontainer container 25b7a4ac2db9ff1f41322a5a7bdd7d231e2b0c0f36d7feb56401f8a87ad92b81. 
Jun 21 06:11:34.060081 kubelet[2424]: E0621 06:11:34.059972 2424 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.3:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.3:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-0-0-3-5f235c9307.novalocal.184afa0257c24561 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-0-3-5f235c9307.novalocal,UID:ci-4372-0-0-3-5f235c9307.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-0-3-5f235c9307.novalocal,},FirstTimestamp:2025-06-21 06:11:33.156087137 +0000 UTC m=+1.051029687,LastTimestamp:2025-06-21 06:11:33.156087137 +0000 UTC m=+1.051029687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-0-3-5f235c9307.novalocal,}" Jun 21 06:11:34.066395 systemd[1]: Started cri-containerd-39327ec776d184087eef7a232cbe88f895424013466a7d3c6e9418959706de72.scope - libcontainer container 39327ec776d184087eef7a232cbe88f895424013466a7d3c6e9418959706de72. Jun 21 06:11:34.073467 systemd[1]: Started cri-containerd-6884ec9eef238e93fd07fe95e026e1f0b63b1cdef2e4ccdb6270d4d3e68a151b.scope - libcontainer container 6884ec9eef238e93fd07fe95e026e1f0b63b1cdef2e4ccdb6270d4d3e68a151b. 
Jun 21 06:11:34.095818 kubelet[2424]: W0621 06:11:34.095756 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.3:6443: connect: connection refused Jun 21 06:11:34.096026 kubelet[2424]: E0621 06:11:34.095993 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.3:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.3:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:11:34.154440 containerd[1556]: time="2025-06-21T06:11:34.153765042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-0-3-5f235c9307.novalocal,Uid:2b4a0852faae79772be9f97eaa16fb59,Namespace:kube-system,Attempt:0,} returns sandbox id \"39327ec776d184087eef7a232cbe88f895424013466a7d3c6e9418959706de72\"" Jun 21 06:11:34.165378 containerd[1556]: time="2025-06-21T06:11:34.165261844Z" level=info msg="CreateContainer within sandbox \"39327ec776d184087eef7a232cbe88f895424013466a7d3c6e9418959706de72\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 06:11:34.169238 containerd[1556]: time="2025-06-21T06:11:34.169192416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal,Uid:9221c29d0ab3c0030dbfc79ceb66cc9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6884ec9eef238e93fd07fe95e026e1f0b63b1cdef2e4ccdb6270d4d3e68a151b\"" Jun 21 06:11:34.176092 containerd[1556]: time="2025-06-21T06:11:34.176038566Z" level=info msg="CreateContainer within sandbox \"6884ec9eef238e93fd07fe95e026e1f0b63b1cdef2e4ccdb6270d4d3e68a151b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 06:11:34.182845 containerd[1556]: time="2025-06-21T06:11:34.182807101Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal,Uid:ac8e16a8263e46ccfc41d5a143506bdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"25b7a4ac2db9ff1f41322a5a7bdd7d231e2b0c0f36d7feb56401f8a87ad92b81\"" Jun 21 06:11:34.189176 containerd[1556]: time="2025-06-21T06:11:34.189125160Z" level=info msg="CreateContainer within sandbox \"25b7a4ac2db9ff1f41322a5a7bdd7d231e2b0c0f36d7feb56401f8a87ad92b81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 06:11:34.204300 containerd[1556]: time="2025-06-21T06:11:34.204226873Z" level=info msg="Container 618b1eb34caa2742885e196095e5476161ff4bd878f295036664e207c566fe67: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:34.221142 containerd[1556]: time="2025-06-21T06:11:34.220574583Z" level=info msg="Container 0107b3e2ff96b204bca7f34726c999a7b2a602921a7451babee18cf2175ec0e7: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:34.227044 containerd[1556]: time="2025-06-21T06:11:34.226991298Z" level=info msg="Container 61a64ed272e1e0caac4d7eebe3e66a43bd47aa89b05f316af9f654fece1d52ab: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:34.242688 containerd[1556]: time="2025-06-21T06:11:34.242578913Z" level=info msg="CreateContainer within sandbox \"6884ec9eef238e93fd07fe95e026e1f0b63b1cdef2e4ccdb6270d4d3e68a151b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0107b3e2ff96b204bca7f34726c999a7b2a602921a7451babee18cf2175ec0e7\"" Jun 21 06:11:34.254753 containerd[1556]: time="2025-06-21T06:11:34.254125057Z" level=info msg="CreateContainer within sandbox \"39327ec776d184087eef7a232cbe88f895424013466a7d3c6e9418959706de72\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"618b1eb34caa2742885e196095e5476161ff4bd878f295036664e207c566fe67\"" Jun 21 06:11:34.255175 containerd[1556]: time="2025-06-21T06:11:34.255152564Z" level=info msg="StartContainer for 
\"0107b3e2ff96b204bca7f34726c999a7b2a602921a7451babee18cf2175ec0e7\"" Jun 21 06:11:34.258466 containerd[1556]: time="2025-06-21T06:11:34.257576280Z" level=info msg="CreateContainer within sandbox \"25b7a4ac2db9ff1f41322a5a7bdd7d231e2b0c0f36d7feb56401f8a87ad92b81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61a64ed272e1e0caac4d7eebe3e66a43bd47aa89b05f316af9f654fece1d52ab\"" Jun 21 06:11:34.258921 containerd[1556]: time="2025-06-21T06:11:34.258894633Z" level=info msg="connecting to shim 0107b3e2ff96b204bca7f34726c999a7b2a602921a7451babee18cf2175ec0e7" address="unix:///run/containerd/s/178f4eef201b1f78273f69ac4cb9e60b3e652585889faf39b103dc519b157b25" protocol=ttrpc version=3 Jun 21 06:11:34.260231 containerd[1556]: time="2025-06-21T06:11:34.260170045Z" level=info msg="StartContainer for \"61a64ed272e1e0caac4d7eebe3e66a43bd47aa89b05f316af9f654fece1d52ab\"" Jun 21 06:11:34.260673 containerd[1556]: time="2025-06-21T06:11:34.260651969Z" level=info msg="StartContainer for \"618b1eb34caa2742885e196095e5476161ff4bd878f295036664e207c566fe67\"" Jun 21 06:11:34.261840 containerd[1556]: time="2025-06-21T06:11:34.261816803Z" level=info msg="connecting to shim 618b1eb34caa2742885e196095e5476161ff4bd878f295036664e207c566fe67" address="unix:///run/containerd/s/4dd7ec02bf3aba57efc5c3eb7d991cddf7dd18d0b0600f2ad327fb8cbb03e50d" protocol=ttrpc version=3 Jun 21 06:11:34.264016 kubelet[2424]: W0621 06:11:34.263765 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-0-3-5f235c9307.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.3:6443: connect: connection refused Jun 21 06:11:34.264340 kubelet[2424]: E0621 06:11:34.264076 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.24.4.3:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-0-3-5f235c9307.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.3:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:11:34.269697 containerd[1556]: time="2025-06-21T06:11:34.269625118Z" level=info msg="connecting to shim 61a64ed272e1e0caac4d7eebe3e66a43bd47aa89b05f316af9f654fece1d52ab" address="unix:///run/containerd/s/60e667e31e692417be7cb70e50bdbf210e3165bbc33b87b349f158687d8e1926" protocol=ttrpc version=3 Jun 21 06:11:34.304528 systemd[1]: Started cri-containerd-0107b3e2ff96b204bca7f34726c999a7b2a602921a7451babee18cf2175ec0e7.scope - libcontainer container 0107b3e2ff96b204bca7f34726c999a7b2a602921a7451babee18cf2175ec0e7. Jun 21 06:11:34.314902 kubelet[2424]: W0621 06:11:34.314842 2424 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.3:6443: connect: connection refused Jun 21 06:11:34.315158 kubelet[2424]: E0621 06:11:34.315090 2424 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.3:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.3:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:11:34.317291 systemd[1]: Started cri-containerd-618b1eb34caa2742885e196095e5476161ff4bd878f295036664e207c566fe67.scope - libcontainer container 618b1eb34caa2742885e196095e5476161ff4bd878f295036664e207c566fe67. Jun 21 06:11:34.319275 systemd[1]: Started cri-containerd-61a64ed272e1e0caac4d7eebe3e66a43bd47aa89b05f316af9f654fece1d52ab.scope - libcontainer container 61a64ed272e1e0caac4d7eebe3e66a43bd47aa89b05f316af9f654fece1d52ab. 
Jun 21 06:11:34.407337 containerd[1556]: time="2025-06-21T06:11:34.406618028Z" level=info msg="StartContainer for \"0107b3e2ff96b204bca7f34726c999a7b2a602921a7451babee18cf2175ec0e7\" returns successfully" Jun 21 06:11:34.421267 containerd[1556]: time="2025-06-21T06:11:34.421185599Z" level=info msg="StartContainer for \"618b1eb34caa2742885e196095e5476161ff4bd878f295036664e207c566fe67\" returns successfully" Jun 21 06:11:34.441713 containerd[1556]: time="2025-06-21T06:11:34.441657254Z" level=info msg="StartContainer for \"61a64ed272e1e0caac4d7eebe3e66a43bd47aa89b05f316af9f654fece1d52ab\" returns successfully" Jun 21 06:11:34.836710 kubelet[2424]: I0621 06:11:34.836510 2424 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:36.265968 kubelet[2424]: E0621 06:11:36.265813 2424 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372-0-0-3-5f235c9307.novalocal\" not found" node="ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:36.402128 kubelet[2424]: I0621 06:11:36.401960 2424 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:36.402128 kubelet[2424]: E0621 06:11:36.402015 2424 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4372-0-0-3-5f235c9307.novalocal\": node \"ci-4372-0-0-3-5f235c9307.novalocal\" not found" Jun 21 06:11:36.444214 kubelet[2424]: E0621 06:11:36.444139 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-0-3-5f235c9307.novalocal\" not found" Jun 21 06:11:36.545497 kubelet[2424]: E0621 06:11:36.544780 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4372-0-0-3-5f235c9307.novalocal\" not found" Jun 21 06:11:36.645291 kubelet[2424]: E0621 06:11:36.645210 2424 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4372-0-0-3-5f235c9307.novalocal\" not found" Jun 21 06:11:37.149544 kubelet[2424]: I0621 06:11:37.149456 2424 apiserver.go:52] "Watching apiserver" Jun 21 06:11:37.186228 kubelet[2424]: I0621 06:11:37.185966 2424 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 06:11:37.279777 kubelet[2424]: W0621 06:11:37.278943 2424 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 06:11:37.283799 kubelet[2424]: W0621 06:11:37.283740 2424 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 06:11:39.139662 systemd[1]: Reload requested from client PID 2699 ('systemctl') (unit session-11.scope)... Jun 21 06:11:39.139768 systemd[1]: Reloading... Jun 21 06:11:39.283147 zram_generator::config[2756]: No configuration found. Jun 21 06:11:39.392803 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 06:11:39.556405 systemd[1]: Reloading finished in 415 ms. Jun 21 06:11:39.585122 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:11:39.600468 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 06:11:39.600718 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:11:39.600796 systemd[1]: kubelet.service: Consumed 1.824s CPU time, 130.6M memory peak. Jun 21 06:11:39.604514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:11:39.905418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 06:11:39.922987 (kubelet)[2807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 06:11:40.034131 kubelet[2807]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 06:11:40.034131 kubelet[2807]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 21 06:11:40.034131 kubelet[2807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 06:11:40.034131 kubelet[2807]: I0621 06:11:40.033191 2807 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 06:11:40.047175 kubelet[2807]: I0621 06:11:40.047065 2807 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 21 06:11:40.047175 kubelet[2807]: I0621 06:11:40.047177 2807 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 06:11:40.047864 kubelet[2807]: I0621 06:11:40.047819 2807 server.go:934] "Client rotation is on, will bootstrap in background" Jun 21 06:11:40.051990 kubelet[2807]: I0621 06:11:40.051940 2807 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 21 06:11:40.058882 kubelet[2807]: I0621 06:11:40.058597 2807 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 06:11:40.068389 kubelet[2807]: I0621 06:11:40.068365 2807 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 06:11:40.072723 kubelet[2807]: I0621 06:11:40.072601 2807 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 21 06:11:40.073031 kubelet[2807]: I0621 06:11:40.072985 2807 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 21 06:11:40.074125 kubelet[2807]: I0621 06:11:40.073462 2807 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 06:11:40.074125 kubelet[2807]: I0621 06:11:40.073495 2807 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-0-3-5f235c9307.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim
":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 06:11:40.074125 kubelet[2807]: I0621 06:11:40.073773 2807 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 06:11:40.074125 kubelet[2807]: I0621 06:11:40.073785 2807 container_manager_linux.go:300] "Creating device plugin manager" Jun 21 06:11:40.074453 kubelet[2807]: I0621 06:11:40.073877 2807 state_mem.go:36] "Initialized new in-memory state store" Jun 21 06:11:40.074453 kubelet[2807]: I0621 06:11:40.073977 2807 kubelet.go:408] "Attempting to sync node with API server" Jun 21 06:11:40.074453 kubelet[2807]: I0621 06:11:40.073995 2807 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 06:11:40.074453 kubelet[2807]: I0621 06:11:40.074080 2807 kubelet.go:314] "Adding apiserver pod source" Jun 21 06:11:40.074686 kubelet[2807]: I0621 06:11:40.074671 2807 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 06:11:40.078475 kubelet[2807]: I0621 06:11:40.078401 2807 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 06:11:40.081670 kubelet[2807]: I0621 06:11:40.081631 2807 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 06:11:40.083048 kubelet[2807]: I0621 06:11:40.083010 2807 server.go:1274] "Started kubelet" Jun 21 06:11:40.086959 kubelet[2807]: I0621 06:11:40.086925 2807 server.go:163] "Starting to listen" 
address="0.0.0.0" port=10250 Jun 21 06:11:40.090112 kubelet[2807]: I0621 06:11:40.088823 2807 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 06:11:40.100092 kubelet[2807]: I0621 06:11:40.095736 2807 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 06:11:40.100092 kubelet[2807]: I0621 06:11:40.091423 2807 server.go:449] "Adding debug handlers to kubelet server" Jun 21 06:11:40.117133 kubelet[2807]: I0621 06:11:40.116052 2807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 06:11:40.129987 kubelet[2807]: I0621 06:11:40.129955 2807 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 21 06:11:40.130909 kubelet[2807]: I0621 06:11:40.130213 2807 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 06:11:40.131906 kubelet[2807]: I0621 06:11:40.131880 2807 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 21 06:11:40.132453 kubelet[2807]: I0621 06:11:40.132017 2807 reconciler.go:26] "Reconciler: start to sync state" Jun 21 06:11:40.139334 kubelet[2807]: I0621 06:11:40.137381 2807 factory.go:221] Registration of the systemd container factory successfully Jun 21 06:11:40.139838 kubelet[2807]: I0621 06:11:40.139794 2807 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 06:11:40.142244 kubelet[2807]: I0621 06:11:40.142183 2807 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 21 06:11:40.142358 kubelet[2807]: I0621 06:11:40.142341 2807 factory.go:221] Registration of the containerd container factory successfully Jun 21 06:11:40.143568 kubelet[2807]: I0621 06:11:40.143521 2807 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 06:11:40.143688 kubelet[2807]: I0621 06:11:40.143575 2807 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 21 06:11:40.143688 kubelet[2807]: I0621 06:11:40.143610 2807 kubelet.go:2321] "Starting kubelet main sync loop" Jun 21 06:11:40.143688 kubelet[2807]: E0621 06:11:40.143652 2807 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 06:11:40.151116 kubelet[2807]: E0621 06:11:40.151048 2807 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 06:11:40.208368 sudo[2838]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 21 06:11:40.208804 sudo[2838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 21 06:11:40.233926 kubelet[2807]: I0621 06:11:40.233526 2807 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 21 06:11:40.233926 kubelet[2807]: I0621 06:11:40.233627 2807 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 21 06:11:40.233926 kubelet[2807]: I0621 06:11:40.233710 2807 state_mem.go:36] "Initialized new in-memory state store" Jun 21 06:11:40.234620 kubelet[2807]: I0621 06:11:40.234589 2807 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 06:11:40.234768 kubelet[2807]: I0621 06:11:40.234732 2807 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 06:11:40.234841 kubelet[2807]: I0621 06:11:40.234775 2807 policy_none.go:49] "None policy: Start" Jun 21 06:11:40.236456 kubelet[2807]: I0621 
06:11:40.236136 2807 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 21 06:11:40.236456 kubelet[2807]: I0621 06:11:40.236163 2807 state_mem.go:35] "Initializing new in-memory state store" Jun 21 06:11:40.236456 kubelet[2807]: I0621 06:11:40.236299 2807 state_mem.go:75] "Updated machine memory state" Jun 21 06:11:40.245428 kubelet[2807]: E0621 06:11:40.245396 2807 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 21 06:11:40.247520 kubelet[2807]: I0621 06:11:40.247259 2807 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 06:11:40.247520 kubelet[2807]: I0621 06:11:40.247484 2807 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 06:11:40.247671 kubelet[2807]: I0621 06:11:40.247513 2807 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 06:11:40.250474 kubelet[2807]: I0621 06:11:40.248997 2807 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 06:11:40.371355 kubelet[2807]: I0621 06:11:40.371147 2807 kubelet_node_status.go:72] "Attempting to register node" node="ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.388605 kubelet[2807]: I0621 06:11:40.388573 2807 kubelet_node_status.go:111] "Node was previously registered" node="ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.389304 kubelet[2807]: I0621 06:11:40.389277 2807 kubelet_node_status.go:75] "Successfully registered node" node="ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.456501 kubelet[2807]: W0621 06:11:40.456453 2807 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 06:11:40.462857 kubelet[2807]: W0621 06:11:40.462601 2807 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Jun 21 06:11:40.464137 kubelet[2807]: E0621 06:11:40.463901 2807 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.464137 kubelet[2807]: W0621 06:11:40.464050 2807 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 06:11:40.464137 kubelet[2807]: E0621 06:11:40.464083 2807 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4372-0-0-3-5f235c9307.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.633957 kubelet[2807]: I0621 06:11:40.633592 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9221c29d0ab3c0030dbfc79ceb66cc9c-ca-certs\") pod \"kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"9221c29d0ab3c0030dbfc79ceb66cc9c\") " pod="kube-system/kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.633957 kubelet[2807]: I0621 06:11:40.633635 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac8e16a8263e46ccfc41d5a143506bdf-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"ac8e16a8263e46ccfc41d5a143506bdf\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.633957 kubelet[2807]: I0621 06:11:40.633658 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b4a0852faae79772be9f97eaa16fb59-kubeconfig\") pod 
\"kube-scheduler-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"2b4a0852faae79772be9f97eaa16fb59\") " pod="kube-system/kube-scheduler-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.633957 kubelet[2807]: I0621 06:11:40.633676 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9221c29d0ab3c0030dbfc79ceb66cc9c-k8s-certs\") pod \"kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"9221c29d0ab3c0030dbfc79ceb66cc9c\") " pod="kube-system/kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.633957 kubelet[2807]: I0621 06:11:40.633696 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9221c29d0ab3c0030dbfc79ceb66cc9c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"9221c29d0ab3c0030dbfc79ceb66cc9c\") " pod="kube-system/kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.634358 kubelet[2807]: I0621 06:11:40.633724 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac8e16a8263e46ccfc41d5a143506bdf-ca-certs\") pod \"kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"ac8e16a8263e46ccfc41d5a143506bdf\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.634358 kubelet[2807]: I0621 06:11:40.633747 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac8e16a8263e46ccfc41d5a143506bdf-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"ac8e16a8263e46ccfc41d5a143506bdf\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.634358 
kubelet[2807]: I0621 06:11:40.633765 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac8e16a8263e46ccfc41d5a143506bdf-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"ac8e16a8263e46ccfc41d5a143506bdf\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.634358 kubelet[2807]: I0621 06:11:40.633785 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac8e16a8263e46ccfc41d5a143506bdf-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal\" (UID: \"ac8e16a8263e46ccfc41d5a143506bdf\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:40.839781 sudo[2838]: pam_unix(sudo:session): session closed for user root Jun 21 06:11:41.084478 kubelet[2807]: I0621 06:11:41.084416 2807 apiserver.go:52] "Watching apiserver" Jun 21 06:11:41.133060 kubelet[2807]: I0621 06:11:41.132947 2807 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 21 06:11:41.216502 kubelet[2807]: W0621 06:11:41.216451 2807 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 06:11:41.216679 kubelet[2807]: E0621 06:11:41.216579 2807 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal" Jun 21 06:11:41.257112 kubelet[2807]: I0621 06:11:41.256719 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-0-0-3-5f235c9307.novalocal" podStartSLOduration=4.256600693 
podStartE2EDuration="4.256600693s" podCreationTimestamp="2025-06-21 06:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:11:41.253964607 +0000 UTC m=+1.297451573" watchObservedRunningTime="2025-06-21 06:11:41.256600693 +0000 UTC m=+1.300087660" Jun 21 06:11:41.290586 kubelet[2807]: I0621 06:11:41.290465 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-0-0-3-5f235c9307.novalocal" podStartSLOduration=4.290443784 podStartE2EDuration="4.290443784s" podCreationTimestamp="2025-06-21 06:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:11:41.276955665 +0000 UTC m=+1.320442581" watchObservedRunningTime="2025-06-21 06:11:41.290443784 +0000 UTC m=+1.333930721" Jun 21 06:11:43.789145 sudo[1841]: pam_unix(sudo:session): session closed for user root Jun 21 06:11:44.059403 sshd[1840]: Connection closed by 172.24.4.1 port 48614 Jun 21 06:11:44.064483 sshd-session[1838]: pam_unix(sshd:session): session closed for user core Jun 21 06:11:44.089083 systemd[1]: sshd@8-172.24.4.3:22-172.24.4.1:48614.service: Deactivated successfully. Jun 21 06:11:44.096340 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 06:11:44.096909 systemd[1]: session-11.scope: Consumed 8.040s CPU time, 269.3M memory peak. Jun 21 06:11:44.102489 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit. Jun 21 06:11:44.112355 systemd-logind[1528]: Removed session 11. 
Jun 21 06:11:44.293397 kubelet[2807]: I0621 06:11:44.293242 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-0-0-3-5f235c9307.novalocal" podStartSLOduration=4.293194997 podStartE2EDuration="4.293194997s" podCreationTimestamp="2025-06-21 06:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:11:41.290967774 +0000 UTC m=+1.334454690" watchObservedRunningTime="2025-06-21 06:11:44.293194997 +0000 UTC m=+4.336681974" Jun 21 06:11:44.909463 kubelet[2807]: I0621 06:11:44.909173 2807 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 06:11:44.911076 containerd[1556]: time="2025-06-21T06:11:44.910768460Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 21 06:11:44.911898 kubelet[2807]: I0621 06:11:44.911670 2807 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 06:11:45.820406 systemd[1]: Created slice kubepods-besteffort-pod4445d814_15c8_4c4a_808f_ceb8fac9c5dd.slice - libcontainer container kubepods-besteffort-pod4445d814_15c8_4c4a_808f_ceb8fac9c5dd.slice. Jun 21 06:11:45.837404 systemd[1]: Created slice kubepods-burstable-poddf2a5456_69f1_438f_ad4e_506147b5233b.slice - libcontainer container kubepods-burstable-poddf2a5456_69f1_438f_ad4e_506147b5233b.slice. 
Jun 21 06:11:45.969877 kubelet[2807]: I0621 06:11:45.969825 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-bpf-maps\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973043 kubelet[2807]: I0621 06:11:45.970175 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4445d814-15c8-4c4a-808f-ceb8fac9c5dd-lib-modules\") pod \"kube-proxy-bdmfx\" (UID: \"4445d814-15c8-4c4a-808f-ceb8fac9c5dd\") " pod="kube-system/kube-proxy-bdmfx" Jun 21 06:11:45.973043 kubelet[2807]: I0621 06:11:45.970226 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-xtables-lock\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973043 kubelet[2807]: I0621 06:11:45.970248 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pqfk\" (UniqueName: \"kubernetes.io/projected/df2a5456-69f1-438f-ad4e-506147b5233b-kube-api-access-8pqfk\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973043 kubelet[2807]: I0621 06:11:45.970267 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df2a5456-69f1-438f-ad4e-506147b5233b-hubble-tls\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973043 kubelet[2807]: I0621 06:11:45.970304 2807 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cni-path\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973043 kubelet[2807]: I0621 06:11:45.970326 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df2a5456-69f1-438f-ad4e-506147b5233b-clustermesh-secrets\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973569 kubelet[2807]: I0621 06:11:45.970344 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4445d814-15c8-4c4a-808f-ceb8fac9c5dd-kube-proxy\") pod \"kube-proxy-bdmfx\" (UID: \"4445d814-15c8-4c4a-808f-ceb8fac9c5dd\") " pod="kube-system/kube-proxy-bdmfx" Jun 21 06:11:45.973569 kubelet[2807]: I0621 06:11:45.970469 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-hostproc\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973569 kubelet[2807]: I0621 06:11:45.970524 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-etc-cni-netd\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973569 kubelet[2807]: I0621 06:11:45.971180 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-host-proc-sys-kernel\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973569 kubelet[2807]: I0621 06:11:45.971207 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-run\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973569 kubelet[2807]: I0621 06:11:45.971280 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-cgroup\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973893 kubelet[2807]: I0621 06:11:45.971330 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-lib-modules\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.973893 kubelet[2807]: I0621 06:11:45.971350 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4445d814-15c8-4c4a-808f-ceb8fac9c5dd-xtables-lock\") pod \"kube-proxy-bdmfx\" (UID: \"4445d814-15c8-4c4a-808f-ceb8fac9c5dd\") " pod="kube-system/kube-proxy-bdmfx" Jun 21 06:11:45.973893 kubelet[2807]: I0621 06:11:45.971434 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p985j\" (UniqueName: \"kubernetes.io/projected/4445d814-15c8-4c4a-808f-ceb8fac9c5dd-kube-api-access-p985j\") pod \"kube-proxy-bdmfx\" (UID: 
\"4445d814-15c8-4c4a-808f-ceb8fac9c5dd\") " pod="kube-system/kube-proxy-bdmfx" Jun 21 06:11:45.976482 kubelet[2807]: I0621 06:11:45.974875 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-config-path\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.976482 kubelet[2807]: I0621 06:11:45.974918 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-host-proc-sys-net\") pod \"cilium-2v4vw\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") " pod="kube-system/cilium-2v4vw" Jun 21 06:11:45.980605 systemd[1]: Created slice kubepods-besteffort-podae40e130_c1fa_48c0_a26f_872a9f26ba99.slice - libcontainer container kubepods-besteffort-podae40e130_c1fa_48c0_a26f_872a9f26ba99.slice. 
Jun 21 06:11:46.076302 kubelet[2807]: I0621 06:11:46.076009 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nfmg\" (UniqueName: \"kubernetes.io/projected/ae40e130-c1fa-48c0-a26f-872a9f26ba99-kube-api-access-2nfmg\") pod \"cilium-operator-5d85765b45-bddn2\" (UID: \"ae40e130-c1fa-48c0-a26f-872a9f26ba99\") " pod="kube-system/cilium-operator-5d85765b45-bddn2" Jun 21 06:11:46.080068 kubelet[2807]: I0621 06:11:46.079957 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae40e130-c1fa-48c0-a26f-872a9f26ba99-cilium-config-path\") pod \"cilium-operator-5d85765b45-bddn2\" (UID: \"ae40e130-c1fa-48c0-a26f-872a9f26ba99\") " pod="kube-system/cilium-operator-5d85765b45-bddn2" Jun 21 06:11:46.146012 containerd[1556]: time="2025-06-21T06:11:46.145963105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdmfx,Uid:4445d814-15c8-4c4a-808f-ceb8fac9c5dd,Namespace:kube-system,Attempt:0,}" Jun 21 06:11:46.193191 containerd[1556]: time="2025-06-21T06:11:46.193046583Z" level=info msg="connecting to shim a8f3e3f7de8eb28159f2ce2dcf64d0e1aecf5d0958ea830599cefb9ef714815a" address="unix:///run/containerd/s/fb6284541ef3795aa3900a26208a7145f45951713caed7255ed3ad6715e3d85c" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:11:46.225290 systemd[1]: Started cri-containerd-a8f3e3f7de8eb28159f2ce2dcf64d0e1aecf5d0958ea830599cefb9ef714815a.scope - libcontainer container a8f3e3f7de8eb28159f2ce2dcf64d0e1aecf5d0958ea830599cefb9ef714815a. 
Jun 21 06:11:46.257063 containerd[1556]: time="2025-06-21T06:11:46.257007371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdmfx,Uid:4445d814-15c8-4c4a-808f-ceb8fac9c5dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8f3e3f7de8eb28159f2ce2dcf64d0e1aecf5d0958ea830599cefb9ef714815a\"" Jun 21 06:11:46.261984 containerd[1556]: time="2025-06-21T06:11:46.261927607Z" level=info msg="CreateContainer within sandbox \"a8f3e3f7de8eb28159f2ce2dcf64d0e1aecf5d0958ea830599cefb9ef714815a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 06:11:46.278493 containerd[1556]: time="2025-06-21T06:11:46.278431574Z" level=info msg="Container 56849f6e1c85041f6ebc5bd1d871d50a09003fc7772e1434123c05b579fc1cf3: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:46.286814 containerd[1556]: time="2025-06-21T06:11:46.286768972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bddn2,Uid:ae40e130-c1fa-48c0-a26f-872a9f26ba99,Namespace:kube-system,Attempt:0,}" Jun 21 06:11:46.298033 containerd[1556]: time="2025-06-21T06:11:46.297849914Z" level=info msg="CreateContainer within sandbox \"a8f3e3f7de8eb28159f2ce2dcf64d0e1aecf5d0958ea830599cefb9ef714815a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"56849f6e1c85041f6ebc5bd1d871d50a09003fc7772e1434123c05b579fc1cf3\"" Jun 21 06:11:46.300165 containerd[1556]: time="2025-06-21T06:11:46.300084108Z" level=info msg="StartContainer for \"56849f6e1c85041f6ebc5bd1d871d50a09003fc7772e1434123c05b579fc1cf3\"" Jun 21 06:11:46.303363 containerd[1556]: time="2025-06-21T06:11:46.303315233Z" level=info msg="connecting to shim 56849f6e1c85041f6ebc5bd1d871d50a09003fc7772e1434123c05b579fc1cf3" address="unix:///run/containerd/s/fb6284541ef3795aa3900a26208a7145f45951713caed7255ed3ad6715e3d85c" protocol=ttrpc version=3 Jun 21 06:11:46.333314 systemd[1]: Started cri-containerd-56849f6e1c85041f6ebc5bd1d871d50a09003fc7772e1434123c05b579fc1cf3.scope - 
libcontainer container 56849f6e1c85041f6ebc5bd1d871d50a09003fc7772e1434123c05b579fc1cf3. Jun 21 06:11:46.334928 containerd[1556]: time="2025-06-21T06:11:46.334609433Z" level=info msg="connecting to shim 31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3" address="unix:///run/containerd/s/34155ca9e11a7f35df3a0bd3c04fcaf8574a705cc590c4df17efca98e3f9b954" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:11:46.365256 systemd[1]: Started cri-containerd-31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3.scope - libcontainer container 31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3. Jun 21 06:11:46.405205 containerd[1556]: time="2025-06-21T06:11:46.405116638Z" level=info msg="StartContainer for \"56849f6e1c85041f6ebc5bd1d871d50a09003fc7772e1434123c05b579fc1cf3\" returns successfully" Jun 21 06:11:46.444012 containerd[1556]: time="2025-06-21T06:11:46.443398779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2v4vw,Uid:df2a5456-69f1-438f-ad4e-506147b5233b,Namespace:kube-system,Attempt:0,}" Jun 21 06:11:46.450742 containerd[1556]: time="2025-06-21T06:11:46.450698483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bddn2,Uid:ae40e130-c1fa-48c0-a26f-872a9f26ba99,Namespace:kube-system,Attempt:0,} returns sandbox id \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\"" Jun 21 06:11:46.454570 containerd[1556]: time="2025-06-21T06:11:46.454528665Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 21 06:11:46.488086 containerd[1556]: time="2025-06-21T06:11:46.488021336Z" level=info msg="connecting to shim 6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb" address="unix:///run/containerd/s/764a5da295820048734e7173a74b52d63c1a5b7aaf223f88a065325f58ed88a8" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:11:46.517358 systemd[1]: Started 
cri-containerd-6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb.scope - libcontainer container 6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb. Jun 21 06:11:46.563501 containerd[1556]: time="2025-06-21T06:11:46.563348144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2v4vw,Uid:df2a5456-69f1-438f-ad4e-506147b5233b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\"" Jun 21 06:11:47.250612 kubelet[2807]: I0621 06:11:47.249985 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bdmfx" podStartSLOduration=2.249945832 podStartE2EDuration="2.249945832s" podCreationTimestamp="2025-06-21 06:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:11:47.248715343 +0000 UTC m=+7.292202309" watchObservedRunningTime="2025-06-21 06:11:47.249945832 +0000 UTC m=+7.293432798" Jun 21 06:11:48.026594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310237100.mount: Deactivated successfully. 
Jun 21 06:11:48.815178 containerd[1556]: time="2025-06-21T06:11:48.814001559Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:11:48.815895 containerd[1556]: time="2025-06-21T06:11:48.815834861Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 21 06:11:48.817026 containerd[1556]: time="2025-06-21T06:11:48.816968369Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:11:48.818510 containerd[1556]: time="2025-06-21T06:11:48.818458867Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.36387772s" Jun 21 06:11:48.818510 containerd[1556]: time="2025-06-21T06:11:48.818495521Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 21 06:11:48.821389 containerd[1556]: time="2025-06-21T06:11:48.821341409Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 21 06:11:48.826979 containerd[1556]: time="2025-06-21T06:11:48.826754040Z" level=info msg="CreateContainer within sandbox 
\"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 21 06:11:48.844202 containerd[1556]: time="2025-06-21T06:11:48.843341370Z" level=info msg="Container 1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:48.852032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3957706560.mount: Deactivated successfully. Jun 21 06:11:48.858186 containerd[1556]: time="2025-06-21T06:11:48.858077597Z" level=info msg="CreateContainer within sandbox \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\"" Jun 21 06:11:48.859276 containerd[1556]: time="2025-06-21T06:11:48.859234246Z" level=info msg="StartContainer for \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\"" Jun 21 06:11:48.860757 containerd[1556]: time="2025-06-21T06:11:48.860710679Z" level=info msg="connecting to shim 1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1" address="unix:///run/containerd/s/34155ca9e11a7f35df3a0bd3c04fcaf8574a705cc590c4df17efca98e3f9b954" protocol=ttrpc version=3 Jun 21 06:11:48.900268 systemd[1]: Started cri-containerd-1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1.scope - libcontainer container 1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1. 
Jun 21 06:11:48.953145 containerd[1556]: time="2025-06-21T06:11:48.952383871Z" level=info msg="StartContainer for \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" returns successfully" Jun 21 06:11:49.256568 kubelet[2807]: I0621 06:11:49.256502 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-bddn2" podStartSLOduration=1.887844529 podStartE2EDuration="4.256481947s" podCreationTimestamp="2025-06-21 06:11:45 +0000 UTC" firstStartedPulling="2025-06-21 06:11:46.452536374 +0000 UTC m=+6.496023301" lastFinishedPulling="2025-06-21 06:11:48.821173753 +0000 UTC m=+8.864660719" observedRunningTime="2025-06-21 06:11:49.25530815 +0000 UTC m=+9.298795076" watchObservedRunningTime="2025-06-21 06:11:49.256481947 +0000 UTC m=+9.299968873" Jun 21 06:11:53.683805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131093569.mount: Deactivated successfully. Jun 21 06:11:56.508956 containerd[1556]: time="2025-06-21T06:11:56.508693009Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:11:56.511562 containerd[1556]: time="2025-06-21T06:11:56.511501468Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 21 06:11:56.513197 containerd[1556]: time="2025-06-21T06:11:56.512189501Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:11:56.514976 containerd[1556]: time="2025-06-21T06:11:56.513852318Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.692469407s" Jun 21 06:11:56.514976 containerd[1556]: time="2025-06-21T06:11:56.513904553Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 21 06:11:56.526142 containerd[1556]: time="2025-06-21T06:11:56.526040267Z" level=info msg="CreateContainer within sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 06:11:56.538645 containerd[1556]: time="2025-06-21T06:11:56.536362503Z" level=info msg="Container 395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:56.551351 containerd[1556]: time="2025-06-21T06:11:56.551284716Z" level=info msg="CreateContainer within sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\"" Jun 21 06:11:56.553263 containerd[1556]: time="2025-06-21T06:11:56.553214346Z" level=info msg="StartContainer for \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\"" Jun 21 06:11:56.555023 containerd[1556]: time="2025-06-21T06:11:56.554969589Z" level=info msg="connecting to shim 395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa" address="unix:///run/containerd/s/764a5da295820048734e7173a74b52d63c1a5b7aaf223f88a065325f58ed88a8" protocol=ttrpc version=3 Jun 21 06:11:56.602324 systemd[1]: Started cri-containerd-395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa.scope - libcontainer 
container 395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa. Jun 21 06:11:56.650213 containerd[1556]: time="2025-06-21T06:11:56.650153156Z" level=info msg="StartContainer for \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\" returns successfully" Jun 21 06:11:56.659736 systemd[1]: cri-containerd-395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa.scope: Deactivated successfully. Jun 21 06:11:56.662938 containerd[1556]: time="2025-06-21T06:11:56.662802329Z" level=info msg="received exit event container_id:\"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\" id:\"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\" pid:3270 exited_at:{seconds:1750486316 nanos:661926926}" Jun 21 06:11:56.663998 containerd[1556]: time="2025-06-21T06:11:56.663859290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\" id:\"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\" pid:3270 exited_at:{seconds:1750486316 nanos:661926926}" Jun 21 06:11:56.687871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa-rootfs.mount: Deactivated successfully. 
Jun 21 06:11:58.298434 containerd[1556]: time="2025-06-21T06:11:58.297474797Z" level=info msg="CreateContainer within sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 06:11:58.332166 containerd[1556]: time="2025-06-21T06:11:58.331566591Z" level=info msg="Container 75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:58.354385 containerd[1556]: time="2025-06-21T06:11:58.354314049Z" level=info msg="CreateContainer within sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\"" Jun 21 06:11:58.355852 containerd[1556]: time="2025-06-21T06:11:58.355801861Z" level=info msg="StartContainer for \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\"" Jun 21 06:11:58.359330 containerd[1556]: time="2025-06-21T06:11:58.359282133Z" level=info msg="connecting to shim 75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083" address="unix:///run/containerd/s/764a5da295820048734e7173a74b52d63c1a5b7aaf223f88a065325f58ed88a8" protocol=ttrpc version=3 Jun 21 06:11:58.408277 systemd[1]: Started cri-containerd-75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083.scope - libcontainer container 75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083. Jun 21 06:11:58.443886 containerd[1556]: time="2025-06-21T06:11:58.443742417Z" level=info msg="StartContainer for \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\" returns successfully" Jun 21 06:11:58.466663 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 06:11:58.466987 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jun 21 06:11:58.469279 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 21 06:11:58.471411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 06:11:58.476157 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 06:11:58.477597 systemd[1]: cri-containerd-75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083.scope: Deactivated successfully. Jun 21 06:11:58.481476 containerd[1556]: time="2025-06-21T06:11:58.481129126Z" level=info msg="received exit event container_id:\"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\" id:\"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\" pid:3316 exited_at:{seconds:1750486318 nanos:476173645}" Jun 21 06:11:58.483814 containerd[1556]: time="2025-06-21T06:11:58.483769372Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\" id:\"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\" pid:3316 exited_at:{seconds:1750486318 nanos:476173645}" Jun 21 06:11:58.506719 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:11:59.311494 containerd[1556]: time="2025-06-21T06:11:59.310086265Z" level=info msg="CreateContainer within sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 06:11:59.338283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083-rootfs.mount: Deactivated successfully. 
Jun 21 06:11:59.367534 containerd[1556]: time="2025-06-21T06:11:59.367461914Z" level=info msg="Container 107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:11:59.381928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2123789127.mount: Deactivated successfully. Jun 21 06:11:59.397375 containerd[1556]: time="2025-06-21T06:11:59.397335588Z" level=info msg="CreateContainer within sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\"" Jun 21 06:11:59.398784 containerd[1556]: time="2025-06-21T06:11:59.398741216Z" level=info msg="StartContainer for \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\"" Jun 21 06:11:59.401419 containerd[1556]: time="2025-06-21T06:11:59.401394976Z" level=info msg="connecting to shim 107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8" address="unix:///run/containerd/s/764a5da295820048734e7173a74b52d63c1a5b7aaf223f88a065325f58ed88a8" protocol=ttrpc version=3 Jun 21 06:11:59.429372 systemd[1]: Started cri-containerd-107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8.scope - libcontainer container 107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8. Jun 21 06:11:59.471789 systemd[1]: cri-containerd-107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8.scope: Deactivated successfully. 
Jun 21 06:11:59.473800 containerd[1556]: time="2025-06-21T06:11:59.473726918Z" level=info msg="TaskExit event in podsandbox handler container_id:\"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\" id:\"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\" pid:3362 exited_at:{seconds:1750486319 nanos:473146401}" Jun 21 06:11:59.474219 containerd[1556]: time="2025-06-21T06:11:59.474189209Z" level=info msg="received exit event container_id:\"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\" id:\"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\" pid:3362 exited_at:{seconds:1750486319 nanos:473146401}" Jun 21 06:11:59.493439 containerd[1556]: time="2025-06-21T06:11:59.493391298Z" level=info msg="StartContainer for \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\" returns successfully" Jun 21 06:11:59.510679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8-rootfs.mount: Deactivated successfully. 
Jun 21 06:12:00.324694 containerd[1556]: time="2025-06-21T06:12:00.324548102Z" level=info msg="CreateContainer within sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 06:12:00.357567 containerd[1556]: time="2025-06-21T06:12:00.356440861Z" level=info msg="Container 03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:12:00.378932 containerd[1556]: time="2025-06-21T06:12:00.378807818Z" level=info msg="CreateContainer within sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\"" Jun 21 06:12:00.381351 containerd[1556]: time="2025-06-21T06:12:00.380263933Z" level=info msg="StartContainer for \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\"" Jun 21 06:12:00.383177 containerd[1556]: time="2025-06-21T06:12:00.382504268Z" level=info msg="connecting to shim 03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f" address="unix:///run/containerd/s/764a5da295820048734e7173a74b52d63c1a5b7aaf223f88a065325f58ed88a8" protocol=ttrpc version=3 Jun 21 06:12:00.427298 systemd[1]: Started cri-containerd-03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f.scope - libcontainer container 03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f. Jun 21 06:12:00.467002 systemd[1]: cri-containerd-03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f.scope: Deactivated successfully. 
Jun 21 06:12:00.469370 containerd[1556]: time="2025-06-21T06:12:00.469325181Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\" id:\"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\" pid:3400 exited_at:{seconds:1750486320 nanos:468437212}" Jun 21 06:12:00.473588 containerd[1556]: time="2025-06-21T06:12:00.473534427Z" level=info msg="received exit event container_id:\"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\" id:\"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\" pid:3400 exited_at:{seconds:1750486320 nanos:468437212}" Jun 21 06:12:00.483951 containerd[1556]: time="2025-06-21T06:12:00.483821456Z" level=info msg="StartContainer for \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\" returns successfully" Jun 21 06:12:00.507777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f-rootfs.mount: Deactivated successfully. Jun 21 06:12:01.356367 containerd[1556]: time="2025-06-21T06:12:01.356248210Z" level=info msg="CreateContainer within sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 06:12:01.396193 containerd[1556]: time="2025-06-21T06:12:01.393498529Z" level=info msg="Container 2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:12:01.403348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2500580196.mount: Deactivated successfully. 
Jun 21 06:12:01.425518 containerd[1556]: time="2025-06-21T06:12:01.425472740Z" level=info msg="CreateContainer within sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\"" Jun 21 06:12:01.426644 containerd[1556]: time="2025-06-21T06:12:01.426607843Z" level=info msg="StartContainer for \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\"" Jun 21 06:12:01.429537 containerd[1556]: time="2025-06-21T06:12:01.429499305Z" level=info msg="connecting to shim 2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564" address="unix:///run/containerd/s/764a5da295820048734e7173a74b52d63c1a5b7aaf223f88a065325f58ed88a8" protocol=ttrpc version=3 Jun 21 06:12:01.451372 systemd[1]: Started cri-containerd-2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564.scope - libcontainer container 2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564. Jun 21 06:12:01.505511 containerd[1556]: time="2025-06-21T06:12:01.505447978Z" level=info msg="StartContainer for \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" returns successfully" Jun 21 06:12:01.591396 containerd[1556]: time="2025-06-21T06:12:01.591335749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" id:\"cc37977a2f2a28ac53c02389c54a6707a8c419b627192712a3216a739dfbba43\" pid:3467 exited_at:{seconds:1750486321 nanos:590819796}" Jun 21 06:12:01.654328 kubelet[2807]: I0621 06:12:01.652022 2807 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 21 06:12:01.726479 systemd[1]: Created slice kubepods-burstable-pod737c0edd_9a62_4c6c_a6ce_d372859cd740.slice - libcontainer container kubepods-burstable-pod737c0edd_9a62_4c6c_a6ce_d372859cd740.slice. 
Jun 21 06:12:01.735490 systemd[1]: Created slice kubepods-burstable-pod010448dd_f2fd_4ce5_8312_00893bce1f4b.slice - libcontainer container kubepods-burstable-pod010448dd_f2fd_4ce5_8312_00893bce1f4b.slice. Jun 21 06:12:01.892960 kubelet[2807]: I0621 06:12:01.892804 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lbdc\" (UniqueName: \"kubernetes.io/projected/737c0edd-9a62-4c6c-a6ce-d372859cd740-kube-api-access-4lbdc\") pod \"coredns-7c65d6cfc9-c45hj\" (UID: \"737c0edd-9a62-4c6c-a6ce-d372859cd740\") " pod="kube-system/coredns-7c65d6cfc9-c45hj" Jun 21 06:12:01.892960 kubelet[2807]: I0621 06:12:01.892867 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/737c0edd-9a62-4c6c-a6ce-d372859cd740-config-volume\") pod \"coredns-7c65d6cfc9-c45hj\" (UID: \"737c0edd-9a62-4c6c-a6ce-d372859cd740\") " pod="kube-system/coredns-7c65d6cfc9-c45hj" Jun 21 06:12:01.892960 kubelet[2807]: I0621 06:12:01.892895 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/010448dd-f2fd-4ce5-8312-00893bce1f4b-config-volume\") pod \"coredns-7c65d6cfc9-drsnx\" (UID: \"010448dd-f2fd-4ce5-8312-00893bce1f4b\") " pod="kube-system/coredns-7c65d6cfc9-drsnx" Jun 21 06:12:01.892960 kubelet[2807]: I0621 06:12:01.892914 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc9fw\" (UniqueName: \"kubernetes.io/projected/010448dd-f2fd-4ce5-8312-00893bce1f4b-kube-api-access-dc9fw\") pod \"coredns-7c65d6cfc9-drsnx\" (UID: \"010448dd-f2fd-4ce5-8312-00893bce1f4b\") " pod="kube-system/coredns-7c65d6cfc9-drsnx" Jun 21 06:12:02.034880 containerd[1556]: time="2025-06-21T06:12:02.034822625Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-c45hj,Uid:737c0edd-9a62-4c6c-a6ce-d372859cd740,Namespace:kube-system,Attempt:0,}" Jun 21 06:12:02.043934 containerd[1556]: time="2025-06-21T06:12:02.043516343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-drsnx,Uid:010448dd-f2fd-4ce5-8312-00893bce1f4b,Namespace:kube-system,Attempt:0,}" Jun 21 06:12:03.854530 systemd-networkd[1432]: cilium_host: Link UP Jun 21 06:12:03.858635 systemd-networkd[1432]: cilium_net: Link UP Jun 21 06:12:03.865049 systemd-networkd[1432]: cilium_host: Gained carrier Jun 21 06:12:03.870546 systemd-networkd[1432]: cilium_net: Gained carrier Jun 21 06:12:03.970334 systemd-networkd[1432]: cilium_vxlan: Link UP Jun 21 06:12:03.970352 systemd-networkd[1432]: cilium_vxlan: Gained carrier Jun 21 06:12:04.061308 systemd-networkd[1432]: cilium_host: Gained IPv6LL Jun 21 06:12:04.109260 systemd-networkd[1432]: cilium_net: Gained IPv6LL Jun 21 06:12:04.346249 kernel: NET: Registered PF_ALG protocol family Jun 21 06:12:05.357914 systemd-networkd[1432]: lxc_health: Link UP Jun 21 06:12:05.365504 systemd-networkd[1432]: lxc_health: Gained carrier Jun 21 06:12:05.614648 systemd-networkd[1432]: lxc33a63626af53: Link UP Jun 21 06:12:05.621173 kernel: eth0: renamed from tmpb82b8 Jun 21 06:12:05.628779 systemd-networkd[1432]: lxc26357d2aded8: Link UP Jun 21 06:12:05.641173 kernel: eth0: renamed from tmp1d85e Jun 21 06:12:05.641877 systemd-networkd[1432]: lxc33a63626af53: Gained carrier Jun 21 06:12:05.645335 systemd-networkd[1432]: lxc26357d2aded8: Gained carrier Jun 21 06:12:05.693315 systemd-networkd[1432]: cilium_vxlan: Gained IPv6LL Jun 21 06:12:06.474543 kubelet[2807]: I0621 06:12:06.474431 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2v4vw" podStartSLOduration=11.522358512 podStartE2EDuration="21.474368961s" podCreationTimestamp="2025-06-21 06:11:45 +0000 UTC" firstStartedPulling="2025-06-21 06:11:46.565496578 +0000 UTC 
m=+6.608983504" lastFinishedPulling="2025-06-21 06:11:56.517506987 +0000 UTC m=+16.560993953" observedRunningTime="2025-06-21 06:12:02.406452221 +0000 UTC m=+22.449939227" watchObservedRunningTime="2025-06-21 06:12:06.474368961 +0000 UTC m=+26.517855887" Jun 21 06:12:06.909272 systemd-networkd[1432]: lxc26357d2aded8: Gained IPv6LL Jun 21 06:12:06.973277 systemd-networkd[1432]: lxc_health: Gained IPv6LL Jun 21 06:12:07.421329 systemd-networkd[1432]: lxc33a63626af53: Gained IPv6LL Jun 21 06:12:10.446013 containerd[1556]: time="2025-06-21T06:12:10.442292616Z" level=info msg="connecting to shim 1d85e17f176aa9d046f24611c24c45d537382b9fdcaa2b2ea4caf28c377438f9" address="unix:///run/containerd/s/d11353b7b308278b1e7ecf3beac3929c2d332ab8966fb67d53e15b5e42ed530b" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:12:10.451638 containerd[1556]: time="2025-06-21T06:12:10.449792758Z" level=info msg="connecting to shim b82b8722bd38c7e02960e8259ad5f9994acc7a550861baa88cf55c8bfc63abed" address="unix:///run/containerd/s/4333ce8aabe47e74c536a2a35766723d322247e2b3be427b6467c5124fcd49b3" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:12:10.520495 systemd[1]: Started cri-containerd-1d85e17f176aa9d046f24611c24c45d537382b9fdcaa2b2ea4caf28c377438f9.scope - libcontainer container 1d85e17f176aa9d046f24611c24c45d537382b9fdcaa2b2ea4caf28c377438f9. Jun 21 06:12:10.523017 systemd[1]: Started cri-containerd-b82b8722bd38c7e02960e8259ad5f9994acc7a550861baa88cf55c8bfc63abed.scope - libcontainer container b82b8722bd38c7e02960e8259ad5f9994acc7a550861baa88cf55c8bfc63abed. 
Jun 21 06:12:10.615137 containerd[1556]: time="2025-06-21T06:12:10.614009993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-drsnx,Uid:010448dd-f2fd-4ce5-8312-00893bce1f4b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b82b8722bd38c7e02960e8259ad5f9994acc7a550861baa88cf55c8bfc63abed\"" Jun 21 06:12:10.630330 containerd[1556]: time="2025-06-21T06:12:10.630284703Z" level=info msg="CreateContainer within sandbox \"b82b8722bd38c7e02960e8259ad5f9994acc7a550861baa88cf55c8bfc63abed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 06:12:10.638379 containerd[1556]: time="2025-06-21T06:12:10.638304215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-c45hj,Uid:737c0edd-9a62-4c6c-a6ce-d372859cd740,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d85e17f176aa9d046f24611c24c45d537382b9fdcaa2b2ea4caf28c377438f9\"" Jun 21 06:12:10.643477 containerd[1556]: time="2025-06-21T06:12:10.643415273Z" level=info msg="CreateContainer within sandbox \"1d85e17f176aa9d046f24611c24c45d537382b9fdcaa2b2ea4caf28c377438f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 06:12:10.673486 containerd[1556]: time="2025-06-21T06:12:10.673260534Z" level=info msg="Container b76f4d8a5a6f8943b0b17efa180cdce999e131ba62db1fd6eaa0380d863cdef7: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:12:10.676188 containerd[1556]: time="2025-06-21T06:12:10.675856211Z" level=info msg="Container 50d858fcfe62c58f9888a3033ba21f195894673e8d9c29ba99487984fb2fac07: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:12:10.685564 containerd[1556]: time="2025-06-21T06:12:10.685500086Z" level=info msg="CreateContainer within sandbox \"1d85e17f176aa9d046f24611c24c45d537382b9fdcaa2b2ea4caf28c377438f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b76f4d8a5a6f8943b0b17efa180cdce999e131ba62db1fd6eaa0380d863cdef7\"" Jun 21 06:12:10.686384 containerd[1556]: 
time="2025-06-21T06:12:10.686338415Z" level=info msg="StartContainer for \"b76f4d8a5a6f8943b0b17efa180cdce999e131ba62db1fd6eaa0380d863cdef7\"" Jun 21 06:12:10.689141 containerd[1556]: time="2025-06-21T06:12:10.687937741Z" level=info msg="connecting to shim b76f4d8a5a6f8943b0b17efa180cdce999e131ba62db1fd6eaa0380d863cdef7" address="unix:///run/containerd/s/d11353b7b308278b1e7ecf3beac3929c2d332ab8966fb67d53e15b5e42ed530b" protocol=ttrpc version=3 Jun 21 06:12:10.697952 containerd[1556]: time="2025-06-21T06:12:10.697154416Z" level=info msg="CreateContainer within sandbox \"b82b8722bd38c7e02960e8259ad5f9994acc7a550861baa88cf55c8bfc63abed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50d858fcfe62c58f9888a3033ba21f195894673e8d9c29ba99487984fb2fac07\"" Jun 21 06:12:10.699792 containerd[1556]: time="2025-06-21T06:12:10.699752989Z" level=info msg="StartContainer for \"50d858fcfe62c58f9888a3033ba21f195894673e8d9c29ba99487984fb2fac07\"" Jun 21 06:12:10.703460 containerd[1556]: time="2025-06-21T06:12:10.703413444Z" level=info msg="connecting to shim 50d858fcfe62c58f9888a3033ba21f195894673e8d9c29ba99487984fb2fac07" address="unix:///run/containerd/s/4333ce8aabe47e74c536a2a35766723d322247e2b3be427b6467c5124fcd49b3" protocol=ttrpc version=3 Jun 21 06:12:10.720444 systemd[1]: Started cri-containerd-b76f4d8a5a6f8943b0b17efa180cdce999e131ba62db1fd6eaa0380d863cdef7.scope - libcontainer container b76f4d8a5a6f8943b0b17efa180cdce999e131ba62db1fd6eaa0380d863cdef7. Jun 21 06:12:10.738252 systemd[1]: Started cri-containerd-50d858fcfe62c58f9888a3033ba21f195894673e8d9c29ba99487984fb2fac07.scope - libcontainer container 50d858fcfe62c58f9888a3033ba21f195894673e8d9c29ba99487984fb2fac07. 
Jun 21 06:12:10.787792 containerd[1556]: time="2025-06-21T06:12:10.787732398Z" level=info msg="StartContainer for \"b76f4d8a5a6f8943b0b17efa180cdce999e131ba62db1fd6eaa0380d863cdef7\" returns successfully" Jun 21 06:12:10.789339 containerd[1556]: time="2025-06-21T06:12:10.789263087Z" level=info msg="StartContainer for \"50d858fcfe62c58f9888a3033ba21f195894673e8d9c29ba99487984fb2fac07\" returns successfully" Jun 21 06:12:11.494286 kubelet[2807]: I0621 06:12:11.494002 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-c45hj" podStartSLOduration=26.493851738 podStartE2EDuration="26.493851738s" podCreationTimestamp="2025-06-21 06:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:12:11.460828536 +0000 UTC m=+31.504315512" watchObservedRunningTime="2025-06-21 06:12:11.493851738 +0000 UTC m=+31.537338684" Jun 21 06:12:11.495120 kubelet[2807]: I0621 06:12:11.494377 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-drsnx" podStartSLOduration=26.494363834 podStartE2EDuration="26.494363834s" podCreationTimestamp="2025-06-21 06:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:12:11.491930053 +0000 UTC m=+31.535417029" watchObservedRunningTime="2025-06-21 06:12:11.494363834 +0000 UTC m=+31.537850810" Jun 21 06:13:16.358022 systemd[1]: Started sshd@9-172.24.4.3:22-172.24.4.1:59786.service - OpenSSH per-connection server daemon (172.24.4.1:59786). 
Jun 21 06:13:17.535771 sshd[4123]: Accepted publickey for core from 172.24.4.1 port 59786 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:13:17.542409 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:13:17.564281 systemd-logind[1528]: New session 12 of user core. Jun 21 06:13:17.572489 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 06:13:18.396154 sshd[4128]: Connection closed by 172.24.4.1 port 59786 Jun 21 06:13:18.396883 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Jun 21 06:13:18.418815 systemd[1]: sshd@9-172.24.4.3:22-172.24.4.1:59786.service: Deactivated successfully. Jun 21 06:13:18.428523 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 06:13:18.433879 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit. Jun 21 06:13:18.437256 systemd-logind[1528]: Removed session 12. Jun 21 06:13:23.418187 systemd[1]: Started sshd@10-172.24.4.3:22-172.24.4.1:59802.service - OpenSSH per-connection server daemon (172.24.4.1:59802). Jun 21 06:13:24.757379 sshd[4140]: Accepted publickey for core from 172.24.4.1 port 59802 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:13:24.765027 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:13:24.786687 systemd-logind[1528]: New session 13 of user core. Jun 21 06:13:24.796617 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 21 06:13:25.647727 sshd[4142]: Connection closed by 172.24.4.1 port 59802 Jun 21 06:13:25.650157 sshd-session[4140]: pam_unix(sshd:session): session closed for user core Jun 21 06:13:25.666650 systemd[1]: sshd@10-172.24.4.3:22-172.24.4.1:59802.service: Deactivated successfully. Jun 21 06:13:25.678810 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 06:13:25.681622 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit. 
Jun 21 06:13:25.687173 systemd-logind[1528]: Removed session 13. Jun 21 06:13:30.680396 systemd[1]: Started sshd@11-172.24.4.3:22-172.24.4.1:52336.service - OpenSSH per-connection server daemon (172.24.4.1:52336). Jun 21 06:13:31.934410 sshd[4154]: Accepted publickey for core from 172.24.4.1 port 52336 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:13:31.937297 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:13:31.952785 systemd-logind[1528]: New session 14 of user core. Jun 21 06:13:31.967441 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 21 06:13:32.959172 sshd[4156]: Connection closed by 172.24.4.1 port 52336 Jun 21 06:13:32.959943 sshd-session[4154]: pam_unix(sshd:session): session closed for user core Jun 21 06:13:32.972325 systemd[1]: sshd@11-172.24.4.3:22-172.24.4.1:52336.service: Deactivated successfully. Jun 21 06:13:32.977461 systemd[1]: session-14.scope: Deactivated successfully. Jun 21 06:13:32.979592 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit. Jun 21 06:13:32.984187 systemd[1]: Started sshd@12-172.24.4.3:22-172.24.4.1:52350.service - OpenSSH per-connection server daemon (172.24.4.1:52350). Jun 21 06:13:32.987811 systemd-logind[1528]: Removed session 14. Jun 21 06:13:34.511536 sshd[4168]: Accepted publickey for core from 172.24.4.1 port 52350 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:13:34.516184 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:13:34.532575 systemd-logind[1528]: New session 15 of user core. Jun 21 06:13:34.546515 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 21 06:13:35.324126 sshd[4170]: Connection closed by 172.24.4.1 port 52350 Jun 21 06:13:35.325624 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jun 21 06:13:35.345394 systemd[1]: sshd@12-172.24.4.3:22-172.24.4.1:52350.service: Deactivated successfully. Jun 21 06:13:35.349587 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 06:13:35.351350 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit. Jun 21 06:13:35.356287 systemd[1]: Started sshd@13-172.24.4.3:22-172.24.4.1:59382.service - OpenSSH per-connection server daemon (172.24.4.1:59382). Jun 21 06:13:35.359636 systemd-logind[1528]: Removed session 15. Jun 21 06:13:36.821358 sshd[4180]: Accepted publickey for core from 172.24.4.1 port 59382 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:13:36.825062 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:13:36.841541 systemd-logind[1528]: New session 16 of user core. Jun 21 06:13:36.851588 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 21 06:13:37.585906 sshd[4182]: Connection closed by 172.24.4.1 port 59382 Jun 21 06:13:37.586893 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Jun 21 06:13:37.592304 systemd[1]: sshd@13-172.24.4.3:22-172.24.4.1:59382.service: Deactivated successfully. Jun 21 06:13:37.595696 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 06:13:37.597557 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit. Jun 21 06:13:37.600768 systemd-logind[1528]: Removed session 16. Jun 21 06:13:42.625704 systemd[1]: Started sshd@14-172.24.4.3:22-172.24.4.1:59394.service - OpenSSH per-connection server daemon (172.24.4.1:59394). 
Jun 21 06:13:43.876221 sshd[4196]: Accepted publickey for core from 172.24.4.1 port 59394 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:13:43.880846 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:13:43.912271 systemd-logind[1528]: New session 17 of user core. Jun 21 06:13:43.921503 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 06:13:44.735425 sshd[4198]: Connection closed by 172.24.4.1 port 59394 Jun 21 06:13:44.737042 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jun 21 06:13:44.745659 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit. Jun 21 06:13:44.746031 systemd[1]: sshd@14-172.24.4.3:22-172.24.4.1:59394.service: Deactivated successfully. Jun 21 06:13:44.753641 systemd[1]: session-17.scope: Deactivated successfully. Jun 21 06:13:44.760438 systemd-logind[1528]: Removed session 17. Jun 21 06:13:49.790465 systemd[1]: Started sshd@15-172.24.4.3:22-172.24.4.1:54670.service - OpenSSH per-connection server daemon (172.24.4.1:54670). Jun 21 06:13:51.112789 sshd[4212]: Accepted publickey for core from 172.24.4.1 port 54670 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:13:51.117805 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:13:51.130182 systemd-logind[1528]: New session 18 of user core. Jun 21 06:13:51.143507 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 21 06:13:51.874250 sshd[4214]: Connection closed by 172.24.4.1 port 54670 Jun 21 06:13:51.876782 sshd-session[4212]: pam_unix(sshd:session): session closed for user core Jun 21 06:13:51.894485 systemd[1]: sshd@15-172.24.4.3:22-172.24.4.1:54670.service: Deactivated successfully. Jun 21 06:13:51.900913 systemd[1]: session-18.scope: Deactivated successfully. Jun 21 06:13:51.905595 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit. 
Jun 21 06:13:51.911957 systemd[1]: Started sshd@16-172.24.4.3:22-172.24.4.1:54686.service - OpenSSH per-connection server daemon (172.24.4.1:54686). Jun 21 06:13:51.914191 systemd-logind[1528]: Removed session 18. Jun 21 06:13:53.883063 sshd[4226]: Accepted publickey for core from 172.24.4.1 port 54686 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:13:53.888014 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:13:53.902236 systemd-logind[1528]: New session 19 of user core. Jun 21 06:13:53.914498 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 21 06:13:54.917228 sshd[4228]: Connection closed by 172.24.4.1 port 54686 Jun 21 06:13:54.919040 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Jun 21 06:13:54.942271 systemd[1]: sshd@16-172.24.4.3:22-172.24.4.1:54686.service: Deactivated successfully. Jun 21 06:13:54.948029 systemd[1]: session-19.scope: Deactivated successfully. Jun 21 06:13:54.952254 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit. Jun 21 06:13:54.960803 systemd[1]: Started sshd@17-172.24.4.3:22-172.24.4.1:40062.service - OpenSSH per-connection server daemon (172.24.4.1:40062). Jun 21 06:13:54.964326 systemd-logind[1528]: Removed session 19. Jun 21 06:13:56.341689 sshd[4237]: Accepted publickey for core from 172.24.4.1 port 40062 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:13:56.345328 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:13:56.359179 systemd-logind[1528]: New session 20 of user core. Jun 21 06:13:56.365488 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 21 06:13:59.334149 sshd[4239]: Connection closed by 172.24.4.1 port 40062 Jun 21 06:13:59.336561 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Jun 21 06:13:59.363015 systemd[1]: sshd@17-172.24.4.3:22-172.24.4.1:40062.service: Deactivated successfully. Jun 21 06:13:59.371021 systemd[1]: session-20.scope: Deactivated successfully. Jun 21 06:13:59.375289 systemd-logind[1528]: Session 20 logged out. Waiting for processes to exit. Jun 21 06:13:59.383840 systemd[1]: Started sshd@18-172.24.4.3:22-172.24.4.1:40072.service - OpenSSH per-connection server daemon (172.24.4.1:40072). Jun 21 06:13:59.391954 systemd-logind[1528]: Removed session 20. Jun 21 06:14:00.727598 sshd[4256]: Accepted publickey for core from 172.24.4.1 port 40072 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:00.733405 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:00.751356 systemd-logind[1528]: New session 21 of user core. Jun 21 06:14:00.759484 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 21 06:14:01.766250 sshd[4258]: Connection closed by 172.24.4.1 port 40072 Jun 21 06:14:01.768717 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:01.797933 systemd[1]: sshd@18-172.24.4.3:22-172.24.4.1:40072.service: Deactivated successfully. Jun 21 06:14:01.804236 systemd[1]: session-21.scope: Deactivated successfully. Jun 21 06:14:01.809495 systemd-logind[1528]: Session 21 logged out. Waiting for processes to exit. Jun 21 06:14:01.813885 systemd-logind[1528]: Removed session 21. Jun 21 06:14:01.817475 systemd[1]: Started sshd@19-172.24.4.3:22-172.24.4.1:40078.service - OpenSSH per-connection server daemon (172.24.4.1:40078). 
Jun 21 06:14:03.093192 sshd[4267]: Accepted publickey for core from 172.24.4.1 port 40078 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:03.097638 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:03.107973 systemd-logind[1528]: New session 22 of user core. Jun 21 06:14:03.122532 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 21 06:14:03.820183 sshd[4269]: Connection closed by 172.24.4.1 port 40078 Jun 21 06:14:03.820606 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:03.827250 systemd[1]: sshd@19-172.24.4.3:22-172.24.4.1:40078.service: Deactivated successfully. Jun 21 06:14:03.830865 systemd[1]: session-22.scope: Deactivated successfully. Jun 21 06:14:03.835323 systemd-logind[1528]: Session 22 logged out. Waiting for processes to exit. Jun 21 06:14:03.837526 systemd-logind[1528]: Removed session 22. Jun 21 06:14:08.841180 systemd[1]: Started sshd@20-172.24.4.3:22-172.24.4.1:49952.service - OpenSSH per-connection server daemon (172.24.4.1:49952). Jun 21 06:14:10.275973 sshd[4284]: Accepted publickey for core from 172.24.4.1 port 49952 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:10.280490 sshd-session[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:10.297981 systemd-logind[1528]: New session 23 of user core. Jun 21 06:14:10.311620 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 21 06:14:11.263199 sshd[4286]: Connection closed by 172.24.4.1 port 49952 Jun 21 06:14:11.262521 sshd-session[4284]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:11.274998 systemd[1]: sshd@20-172.24.4.3:22-172.24.4.1:49952.service: Deactivated successfully. Jun 21 06:14:11.286488 systemd[1]: session-23.scope: Deactivated successfully. Jun 21 06:14:11.292258 systemd-logind[1528]: Session 23 logged out. Waiting for processes to exit. 
Jun 21 06:14:11.299230 systemd-logind[1528]: Removed session 23. Jun 21 06:14:16.290213 systemd[1]: Started sshd@21-172.24.4.3:22-172.24.4.1:44298.service - OpenSSH per-connection server daemon (172.24.4.1:44298). Jun 21 06:14:17.343663 sshd[4298]: Accepted publickey for core from 172.24.4.1 port 44298 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:17.347433 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:17.361526 systemd-logind[1528]: New session 24 of user core. Jun 21 06:14:17.384676 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 21 06:14:18.189053 sshd[4302]: Connection closed by 172.24.4.1 port 44298 Jun 21 06:14:18.188402 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:18.197993 systemd[1]: sshd@21-172.24.4.3:22-172.24.4.1:44298.service: Deactivated successfully. Jun 21 06:14:18.202498 systemd[1]: session-24.scope: Deactivated successfully. Jun 21 06:14:18.205852 systemd-logind[1528]: Session 24 logged out. Waiting for processes to exit. Jun 21 06:14:18.210009 systemd-logind[1528]: Removed session 24. Jun 21 06:14:23.218073 systemd[1]: Started sshd@22-172.24.4.3:22-172.24.4.1:44314.service - OpenSSH per-connection server daemon (172.24.4.1:44314). Jun 21 06:14:24.270282 sshd[4314]: Accepted publickey for core from 172.24.4.1 port 44314 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:24.278564 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:24.301675 systemd-logind[1528]: New session 25 of user core. Jun 21 06:14:24.314635 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 21 06:14:24.937806 sshd[4316]: Connection closed by 172.24.4.1 port 44314
Jun 21 06:14:24.940351 sshd-session[4314]: pam_unix(sshd:session): session closed for user core
Jun 21 06:14:24.962301 systemd[1]: sshd@22-172.24.4.3:22-172.24.4.1:44314.service: Deactivated successfully.
Jun 21 06:14:24.970334 systemd[1]: session-25.scope: Deactivated successfully.
Jun 21 06:14:24.989749 systemd-logind[1528]: Session 25 logged out. Waiting for processes to exit.
Jun 21 06:14:24.995525 systemd[1]: Started sshd@23-172.24.4.3:22-172.24.4.1:42214.service - OpenSSH per-connection server daemon (172.24.4.1:42214).
Jun 21 06:14:25.002608 systemd-logind[1528]: Removed session 25.
Jun 21 06:14:25.980688 sshd[4328]: Accepted publickey for core from 172.24.4.1 port 42214 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:14:25.984202 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:14:25.998585 systemd-logind[1528]: New session 26 of user core.
Jun 21 06:14:26.011395 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 21 06:14:28.610693 containerd[1556]: time="2025-06-21T06:14:28.610241479Z" level=info msg="StopContainer for \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" with timeout 30 (s)"
Jun 21 06:14:28.613886 containerd[1556]: time="2025-06-21T06:14:28.613769814Z" level=info msg="Stop container \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" with signal terminated"
Jun 21 06:14:28.645296 systemd[1]: cri-containerd-1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1.scope: Deactivated successfully.
Jun 21 06:14:28.652148 containerd[1556]: time="2025-06-21T06:14:28.651932413Z" level=info msg="received exit event container_id:\"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" id:\"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" pid:3208 exited_at:{seconds:1750486468 nanos:650625232}"
Jun 21 06:14:28.652861 containerd[1556]: time="2025-06-21T06:14:28.652650299Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" id:\"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" pid:3208 exited_at:{seconds:1750486468 nanos:650625232}"
Jun 21 06:14:28.667719 containerd[1556]: time="2025-06-21T06:14:28.667596110Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 21 06:14:28.674385 containerd[1556]: time="2025-06-21T06:14:28.674336999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" id:\"0e1dee8a8b6a44f6973dd0cc79c9e0aec5d65e77349dd345bd8c9d685174b8f5\" pid:4357 exited_at:{seconds:1750486468 nanos:673427924}"
Jun 21 06:14:28.679304 containerd[1556]: time="2025-06-21T06:14:28.679218410Z" level=info msg="StopContainer for \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" with timeout 2 (s)"
Jun 21 06:14:28.679912 containerd[1556]: time="2025-06-21T06:14:28.679890070Z" level=info msg="Stop container \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" with signal terminated"
Jun 21 06:14:28.699012 systemd-networkd[1432]: lxc_health: Link DOWN
Jun 21 06:14:28.699019 systemd-networkd[1432]: lxc_health: Lost carrier
Jun 21 06:14:28.709425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1-rootfs.mount: Deactivated successfully.
Jun 21 06:14:28.724256 systemd[1]: cri-containerd-2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564.scope: Deactivated successfully.
Jun 21 06:14:28.725593 systemd[1]: cri-containerd-2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564.scope: Consumed 9.493s CPU time, 125.4M memory peak, 128K read from disk, 13.3M written to disk.
Jun 21 06:14:28.735623 containerd[1556]: time="2025-06-21T06:14:28.731756793Z" level=info msg="received exit event container_id:\"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" id:\"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" pid:3440 exited_at:{seconds:1750486468 nanos:731400444}"
Jun 21 06:14:28.735623 containerd[1556]: time="2025-06-21T06:14:28.732140531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" id:\"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" pid:3440 exited_at:{seconds:1750486468 nanos:731400444}"
Jun 21 06:14:28.752612 containerd[1556]: time="2025-06-21T06:14:28.752491377Z" level=info msg="StopContainer for \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" returns successfully"
Jun 21 06:14:28.754823 containerd[1556]: time="2025-06-21T06:14:28.754760251Z" level=info msg="StopPodSandbox for \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\""
Jun 21 06:14:28.756081 containerd[1556]: time="2025-06-21T06:14:28.755195155Z" level=info msg="Container to stop \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 06:14:28.766905 systemd[1]: cri-containerd-31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3.scope: Deactivated successfully.
Jun 21 06:14:28.773912 containerd[1556]: time="2025-06-21T06:14:28.773858366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" id:\"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" pid:2976 exit_status:137 exited_at:{seconds:1750486468 nanos:773494425}"
Jun 21 06:14:28.783739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564-rootfs.mount: Deactivated successfully.
Jun 21 06:14:28.831890 containerd[1556]: time="2025-06-21T06:14:28.831843721Z" level=info msg="StopContainer for \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" returns successfully"
Jun 21 06:14:28.833183 containerd[1556]: time="2025-06-21T06:14:28.833068898Z" level=info msg="StopPodSandbox for \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\""
Jun 21 06:14:28.833456 containerd[1556]: time="2025-06-21T06:14:28.833352249Z" level=info msg="Container to stop \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 06:14:28.833456 containerd[1556]: time="2025-06-21T06:14:28.833403896Z" level=info msg="Container to stop \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 06:14:28.833456 containerd[1556]: time="2025-06-21T06:14:28.833417341Z" level=info msg="Container to stop \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 06:14:28.834227 containerd[1556]: time="2025-06-21T06:14:28.833669154Z" level=info msg="Container to stop \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 06:14:28.834227 containerd[1556]: time="2025-06-21T06:14:28.833703027Z" level=info msg="Container to stop \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 21 06:14:28.839984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3-rootfs.mount: Deactivated successfully.
Jun 21 06:14:28.848009 systemd[1]: cri-containerd-6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb.scope: Deactivated successfully.
Jun 21 06:14:28.856763 containerd[1556]: time="2025-06-21T06:14:28.856710753Z" level=info msg="shim disconnected" id=31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3 namespace=k8s.io
Jun 21 06:14:28.856763 containerd[1556]: time="2025-06-21T06:14:28.856757942Z" level=warning msg="cleaning up after shim disconnected" id=31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3 namespace=k8s.io
Jun 21 06:14:28.857053 containerd[1556]: time="2025-06-21T06:14:28.856773421Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 21 06:14:28.893126 containerd[1556]: time="2025-06-21T06:14:28.891199444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" id:\"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" pid:3045 exit_status:137 exited_at:{seconds:1750486468 nanos:851404374}"
Jun 21 06:14:28.892690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3-shm.mount: Deactivated successfully.
Jun 21 06:14:28.896173 containerd[1556]: time="2025-06-21T06:14:28.891395111Z" level=info msg="TearDown network for sandbox \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" successfully"
Jun 21 06:14:28.896173 containerd[1556]: time="2025-06-21T06:14:28.895590657Z" level=info msg="StopPodSandbox for \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" returns successfully"
Jun 21 06:14:28.896173 containerd[1556]: time="2025-06-21T06:14:28.891595326Z" level=info msg="received exit event sandbox_id:\"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" exit_status:137 exited_at:{seconds:1750486468 nanos:773494425}"
Jun 21 06:14:28.905728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb-rootfs.mount: Deactivated successfully.
Jun 21 06:14:28.938364 containerd[1556]: time="2025-06-21T06:14:28.937433026Z" level=info msg="received exit event sandbox_id:\"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" exit_status:137 exited_at:{seconds:1750486468 nanos:851404374}"
Jun 21 06:14:28.940725 containerd[1556]: time="2025-06-21T06:14:28.937711388Z" level=info msg="TearDown network for sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" successfully"
Jun 21 06:14:28.940887 containerd[1556]: time="2025-06-21T06:14:28.940826527Z" level=info msg="StopPodSandbox for \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" returns successfully"
Jun 21 06:14:28.942075 containerd[1556]: time="2025-06-21T06:14:28.941958180Z" level=info msg="shim disconnected" id=6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb namespace=k8s.io
Jun 21 06:14:28.945878 containerd[1556]: time="2025-06-21T06:14:28.942005979Z" level=warning msg="cleaning up after shim disconnected" id=6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb namespace=k8s.io
Jun 21 06:14:28.945999 containerd[1556]: time="2025-06-21T06:14:28.945864533Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 21 06:14:29.069650 kubelet[2807]: I0621 06:14:29.069449 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-hostproc\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.069650 kubelet[2807]: I0621 06:14:29.069631 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-etc-cni-netd\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073213 kubelet[2807]: I0621 06:14:29.069739 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-config-path\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073213 kubelet[2807]: I0621 06:14:29.069918 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df2a5456-69f1-438f-ad4e-506147b5233b-hubble-tls\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073213 kubelet[2807]: I0621 06:14:29.069994 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-run\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073213 kubelet[2807]: I0621 06:14:29.070036 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cni-path\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073213 kubelet[2807]: I0621 06:14:29.070088 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-cgroup\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073213 kubelet[2807]: I0621 06:14:29.070165 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-host-proc-sys-net\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073970 kubelet[2807]: I0621 06:14:29.070242 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-xtables-lock\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073970 kubelet[2807]: I0621 06:14:29.070306 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pqfk\" (UniqueName: \"kubernetes.io/projected/df2a5456-69f1-438f-ad4e-506147b5233b-kube-api-access-8pqfk\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073970 kubelet[2807]: I0621 06:14:29.070356 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df2a5456-69f1-438f-ad4e-506147b5233b-clustermesh-secrets\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073970 kubelet[2807]: I0621 06:14:29.070397 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-host-proc-sys-kernel\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.073970 kubelet[2807]: I0621 06:14:29.070446 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae40e130-c1fa-48c0-a26f-872a9f26ba99-cilium-config-path\") pod \"ae40e130-c1fa-48c0-a26f-872a9f26ba99\" (UID: \"ae40e130-c1fa-48c0-a26f-872a9f26ba99\") "
Jun 21 06:14:29.073970 kubelet[2807]: I0621 06:14:29.070492 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-bpf-maps\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.074583 kubelet[2807]: I0621 06:14:29.070534 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nfmg\" (UniqueName: \"kubernetes.io/projected/ae40e130-c1fa-48c0-a26f-872a9f26ba99-kube-api-access-2nfmg\") pod \"ae40e130-c1fa-48c0-a26f-872a9f26ba99\" (UID: \"ae40e130-c1fa-48c0-a26f-872a9f26ba99\") "
Jun 21 06:14:29.074583 kubelet[2807]: I0621 06:14:29.070673 2807 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-lib-modules\") pod \"df2a5456-69f1-438f-ad4e-506147b5233b\" (UID: \"df2a5456-69f1-438f-ad4e-506147b5233b\") "
Jun 21 06:14:29.074583 kubelet[2807]: I0621 06:14:29.071494 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 06:14:29.074583 kubelet[2807]: I0621 06:14:29.071678 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 06:14:29.077713 kubelet[2807]: I0621 06:14:29.077243 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 06:14:29.077713 kubelet[2807]: I0621 06:14:29.077421 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cni-path" (OuterVolumeSpecName: "cni-path") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 06:14:29.077713 kubelet[2807]: I0621 06:14:29.077429 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 06:14:29.077713 kubelet[2807]: I0621 06:14:29.077473 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 06:14:29.077713 kubelet[2807]: I0621 06:14:29.077518 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 06:14:29.078379 kubelet[2807]: I0621 06:14:29.077551 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-hostproc" (OuterVolumeSpecName: "hostproc") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 06:14:29.080068 kubelet[2807]: I0621 06:14:29.079969 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 06:14:29.083551 kubelet[2807]: I0621 06:14:29.083472 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 21 06:14:29.105051 kubelet[2807]: I0621 06:14:29.104919 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jun 21 06:14:29.109533 kubelet[2807]: I0621 06:14:29.109453 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2a5456-69f1-438f-ad4e-506147b5233b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 21 06:14:29.109815 kubelet[2807]: I0621 06:14:29.109729 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2a5456-69f1-438f-ad4e-506147b5233b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 21 06:14:29.111340 kubelet[2807]: I0621 06:14:29.111253 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2a5456-69f1-438f-ad4e-506147b5233b-kube-api-access-8pqfk" (OuterVolumeSpecName: "kube-api-access-8pqfk") pod "df2a5456-69f1-438f-ad4e-506147b5233b" (UID: "df2a5456-69f1-438f-ad4e-506147b5233b"). InnerVolumeSpecName "kube-api-access-8pqfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 21 06:14:29.111663 kubelet[2807]: I0621 06:14:29.111579 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae40e130-c1fa-48c0-a26f-872a9f26ba99-kube-api-access-2nfmg" (OuterVolumeSpecName: "kube-api-access-2nfmg") pod "ae40e130-c1fa-48c0-a26f-872a9f26ba99" (UID: "ae40e130-c1fa-48c0-a26f-872a9f26ba99"). InnerVolumeSpecName "kube-api-access-2nfmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jun 21 06:14:29.115008 kubelet[2807]: I0621 06:14:29.114949 2807 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae40e130-c1fa-48c0-a26f-872a9f26ba99-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ae40e130-c1fa-48c0-a26f-872a9f26ba99" (UID: "ae40e130-c1fa-48c0-a26f-872a9f26ba99"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jun 21 06:14:29.172026 kubelet[2807]: I0621 06:14:29.171510 2807 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-host-proc-sys-kernel\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.172026 kubelet[2807]: I0621 06:14:29.171608 2807 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae40e130-c1fa-48c0-a26f-872a9f26ba99-cilium-config-path\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.172026 kubelet[2807]: I0621 06:14:29.171640 2807 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pqfk\" (UniqueName: \"kubernetes.io/projected/df2a5456-69f1-438f-ad4e-506147b5233b-kube-api-access-8pqfk\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.172026 kubelet[2807]: I0621 06:14:29.171667 2807 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/df2a5456-69f1-438f-ad4e-506147b5233b-clustermesh-secrets\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.172026 kubelet[2807]: I0621 06:14:29.171709 2807 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-bpf-maps\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.172026 kubelet[2807]: I0621 06:14:29.171735 2807 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nfmg\" (UniqueName: \"kubernetes.io/projected/ae40e130-c1fa-48c0-a26f-872a9f26ba99-kube-api-access-2nfmg\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.172026 kubelet[2807]: I0621 06:14:29.171763 2807 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-lib-modules\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.172960 kubelet[2807]: I0621 06:14:29.171788 2807 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-hostproc\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.173479 kubelet[2807]: I0621 06:14:29.173165 2807 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-etc-cni-netd\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.173479 kubelet[2807]: I0621 06:14:29.173213 2807 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-config-path\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.173479 kubelet[2807]: I0621 06:14:29.173263 2807 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/df2a5456-69f1-438f-ad4e-506147b5233b-hubble-tls\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.173479 kubelet[2807]: I0621 06:14:29.173290 2807 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-run\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.173479 kubelet[2807]: I0621 06:14:29.173312 2807 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-xtables-lock\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.173479 kubelet[2807]: I0621 06:14:29.173349 2807 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cni-path\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.173479 kubelet[2807]: I0621 06:14:29.173373 2807 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-cilium-cgroup\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.175878 kubelet[2807]: I0621 06:14:29.173396 2807 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/df2a5456-69f1-438f-ad4e-506147b5233b-host-proc-sys-net\") on node \"ci-4372-0-0-3-5f235c9307.novalocal\" DevicePath \"\""
Jun 21 06:14:29.191477 kubelet[2807]: I0621 06:14:29.191183 2807 scope.go:117] "RemoveContainer" containerID="2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564"
Jun 21 06:14:29.200420 containerd[1556]: time="2025-06-21T06:14:29.200323198Z" level=info msg="RemoveContainer for \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\""
Jun 21 06:14:29.218689 systemd[1]: Removed slice kubepods-burstable-poddf2a5456_69f1_438f_ad4e_506147b5233b.slice - libcontainer container kubepods-burstable-poddf2a5456_69f1_438f_ad4e_506147b5233b.slice.
Jun 21 06:14:29.218991 systemd[1]: kubepods-burstable-poddf2a5456_69f1_438f_ad4e_506147b5233b.slice: Consumed 9.590s CPU time, 125.9M memory peak, 128K read from disk, 13.3M written to disk.
Jun 21 06:14:29.231891 systemd[1]: Removed slice kubepods-besteffort-podae40e130_c1fa_48c0_a26f_872a9f26ba99.slice - libcontainer container kubepods-besteffort-podae40e130_c1fa_48c0_a26f_872a9f26ba99.slice.
Jun 21 06:14:29.242264 containerd[1556]: time="2025-06-21T06:14:29.242207605Z" level=info msg="RemoveContainer for \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" returns successfully"
Jun 21 06:14:29.243211 kubelet[2807]: I0621 06:14:29.243088 2807 scope.go:117] "RemoveContainer" containerID="03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f"
Jun 21 06:14:29.249226 containerd[1556]: time="2025-06-21T06:14:29.249180517Z" level=info msg="RemoveContainer for \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\""
Jun 21 06:14:29.276165 containerd[1556]: time="2025-06-21T06:14:29.276076374Z" level=info msg="RemoveContainer for \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\" returns successfully"
Jun 21 06:14:29.277093 kubelet[2807]: I0621 06:14:29.277007 2807 scope.go:117] "RemoveContainer" containerID="107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8"
Jun 21 06:14:29.281695 containerd[1556]: time="2025-06-21T06:14:29.281659301Z" level=info msg="RemoveContainer for \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\""
Jun 21 06:14:29.470721 containerd[1556]: time="2025-06-21T06:14:29.470610218Z" level=info msg="RemoveContainer for \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\" returns successfully"
Jun 21 06:14:29.476969 kubelet[2807]: I0621 06:14:29.476895 2807 scope.go:117] "RemoveContainer" containerID="75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083"
Jun 21 06:14:29.497013 containerd[1556]: time="2025-06-21T06:14:29.496905758Z" level=info msg="RemoveContainer for \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\""
Jun 21 06:14:29.534734 containerd[1556]: time="2025-06-21T06:14:29.534678597Z" level=info msg="RemoveContainer for \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\" returns successfully"
Jun 21 06:14:29.535514 kubelet[2807]: I0621 06:14:29.535475 2807 scope.go:117] "RemoveContainer" containerID="395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa"
Jun 21 06:14:29.539262 containerd[1556]: time="2025-06-21T06:14:29.539095208Z" level=info msg="RemoveContainer for \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\""
Jun 21 06:14:29.544461 containerd[1556]: time="2025-06-21T06:14:29.544411865Z" level=info msg="RemoveContainer for \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\" returns successfully"
Jun 21 06:14:29.546419 kubelet[2807]: I0621 06:14:29.546364 2807 scope.go:117] "RemoveContainer" containerID="2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564"
Jun 21 06:14:29.547266 containerd[1556]: time="2025-06-21T06:14:29.546981874Z" level=error msg="ContainerStatus for \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\": not found"
Jun 21 06:14:29.547666 kubelet[2807]: E0621 06:14:29.547602 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\": not found" containerID="2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564"
Jun 21 06:14:29.548038 kubelet[2807]: I0621 06:14:29.547791 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564"} err="failed to get container status \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\": rpc error: code = NotFound desc = an error occurred when try to find container \"2799e5a2dcfe8b5561069da9ef188711e4dd23cb1cb03852a40e79c033a6d564\": not found"
Jun 21 06:14:29.548159 kubelet[2807]: I0621 06:14:29.548129 2807 scope.go:117] "RemoveContainer" containerID="03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f"
Jun 21 06:14:29.548455 containerd[1556]: time="2025-06-21T06:14:29.548407888Z" level=error msg="ContainerStatus for \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\": not found"
Jun 21 06:14:29.548998 kubelet[2807]: E0621 06:14:29.548825 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\": not found" containerID="03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f"
Jun 21 06:14:29.548998 kubelet[2807]: I0621 06:14:29.548876 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f"} err="failed to get container status \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"03a73fd7f866c69abc7732907a5f77719256c4851ec2fcf2fe978f561ec14e2f\": not found"
Jun 21 06:14:29.548998 kubelet[2807]: I0621 06:14:29.548923 2807 scope.go:117] "RemoveContainer" containerID="107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8"
Jun 21 06:14:29.549259 containerd[1556]: time="2025-06-21T06:14:29.549233876Z" level=error msg="ContainerStatus for \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\": not found"
Jun 21 06:14:29.549512 kubelet[2807]: E0621 06:14:29.549493 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\": not found" containerID="107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8" Jun 21 06:14:29.549782 kubelet[2807]: I0621 06:14:29.549644 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8"} err="failed to get container status \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"107b15393bd45d8f16375db22379d62aefdea5536035ac5b8b7f8115099005d8\": not found" Jun 21 06:14:29.549782 kubelet[2807]: I0621 06:14:29.549683 2807 scope.go:117] "RemoveContainer" containerID="75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083" Jun 21 06:14:29.549965 containerd[1556]: time="2025-06-21T06:14:29.549940401Z" level=error msg="ContainerStatus for \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\": not found" Jun 21 06:14:29.550284 kubelet[2807]: E0621 06:14:29.550255 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\": not found" containerID="75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083" Jun 21 06:14:29.550425 kubelet[2807]: I0621 06:14:29.550397 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083"} err="failed to get container status \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"75dc193c140b8d770735bfdd8a71721f80054b5de477043a624ed552b1b8a083\": not found" Jun 21 06:14:29.550660 kubelet[2807]: I0621 06:14:29.550570 2807 scope.go:117] "RemoveContainer" containerID="395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa" Jun 21 06:14:29.550873 containerd[1556]: time="2025-06-21T06:14:29.550839236Z" level=error msg="ContainerStatus for \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\": not found" Jun 21 06:14:29.551165 kubelet[2807]: E0621 06:14:29.551146 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\": not found" containerID="395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa" Jun 21 06:14:29.551335 kubelet[2807]: I0621 06:14:29.551230 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa"} err="failed to get container status \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"395869878fbc0b7c654e58c32374d81620e5e7ea8e40261e6e27c4f5bcd7d7aa\": not found" Jun 21 06:14:29.551335 kubelet[2807]: I0621 06:14:29.551263 2807 scope.go:117] "RemoveContainer" containerID="1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1" Jun 21 06:14:29.553372 containerd[1556]: time="2025-06-21T06:14:29.553348351Z" level=info msg="RemoveContainer for \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\"" Jun 21 06:14:29.558012 containerd[1556]: time="2025-06-21T06:14:29.557910273Z" level=info msg="RemoveContainer for 
\"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" returns successfully" Jun 21 06:14:29.558165 kubelet[2807]: I0621 06:14:29.558141 2807 scope.go:117] "RemoveContainer" containerID="1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1" Jun 21 06:14:29.558391 containerd[1556]: time="2025-06-21T06:14:29.558345829Z" level=error msg="ContainerStatus for \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\": not found" Jun 21 06:14:29.558698 kubelet[2807]: E0621 06:14:29.558624 2807 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\": not found" containerID="1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1" Jun 21 06:14:29.558761 kubelet[2807]: I0621 06:14:29.558699 2807 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1"} err="failed to get container status \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"1aabcbb62a84002debe245b810954d293ead1b4bfd317c8b17daefeb682493e1\": not found" Jun 21 06:14:29.711789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb-shm.mount: Deactivated successfully. Jun 21 06:14:29.712235 systemd[1]: var-lib-kubelet-pods-ae40e130\x2dc1fa\x2d48c0\x2da26f\x2d872a9f26ba99-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2nfmg.mount: Deactivated successfully. 
Jun 21 06:14:29.712654 systemd[1]: var-lib-kubelet-pods-df2a5456\x2d69f1\x2d438f\x2dad4e\x2d506147b5233b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8pqfk.mount: Deactivated successfully. Jun 21 06:14:29.712913 systemd[1]: var-lib-kubelet-pods-df2a5456\x2d69f1\x2d438f\x2dad4e\x2d506147b5233b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 21 06:14:29.713173 systemd[1]: var-lib-kubelet-pods-df2a5456\x2d69f1\x2d438f\x2dad4e\x2d506147b5233b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 21 06:14:30.152662 kubelet[2807]: I0621 06:14:30.152547 2807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae40e130-c1fa-48c0-a26f-872a9f26ba99" path="/var/lib/kubelet/pods/ae40e130-c1fa-48c0-a26f-872a9f26ba99/volumes" Jun 21 06:14:30.154866 kubelet[2807]: I0621 06:14:30.154816 2807 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2a5456-69f1-438f-ad4e-506147b5233b" path="/var/lib/kubelet/pods/df2a5456-69f1-438f-ad4e-506147b5233b/volumes" Jun 21 06:14:30.386729 kubelet[2807]: E0621 06:14:30.386543 2807 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 21 06:14:30.591348 sshd[4330]: Connection closed by 172.24.4.1 port 42214 Jun 21 06:14:30.592748 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:30.612794 systemd[1]: sshd@23-172.24.4.3:22-172.24.4.1:42214.service: Deactivated successfully. Jun 21 06:14:30.619053 systemd[1]: session-26.scope: Deactivated successfully. Jun 21 06:14:30.620048 systemd[1]: session-26.scope: Consumed 1.591s CPU time, 26M memory peak. Jun 21 06:14:30.622031 systemd-logind[1528]: Session 26 logged out. Waiting for processes to exit. 
Jun 21 06:14:30.631300 systemd[1]: Started sshd@24-172.24.4.3:22-172.24.4.1:42216.service - OpenSSH per-connection server daemon (172.24.4.1:42216). Jun 21 06:14:30.641080 systemd-logind[1528]: Removed session 26. Jun 21 06:14:31.877604 sshd[4486]: Accepted publickey for core from 172.24.4.1 port 42216 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:31.881065 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:31.896213 systemd-logind[1528]: New session 27 of user core. Jun 21 06:14:31.908473 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 21 06:14:33.696511 kubelet[2807]: E0621 06:14:33.696245 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df2a5456-69f1-438f-ad4e-506147b5233b" containerName="mount-cgroup" Jun 21 06:14:33.702595 kubelet[2807]: E0621 06:14:33.697153 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df2a5456-69f1-438f-ad4e-506147b5233b" containerName="mount-bpf-fs" Jun 21 06:14:33.702595 kubelet[2807]: E0621 06:14:33.697177 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df2a5456-69f1-438f-ad4e-506147b5233b" containerName="clean-cilium-state" Jun 21 06:14:33.702595 kubelet[2807]: E0621 06:14:33.697214 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df2a5456-69f1-438f-ad4e-506147b5233b" containerName="cilium-agent" Jun 21 06:14:33.702595 kubelet[2807]: E0621 06:14:33.697223 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ae40e130-c1fa-48c0-a26f-872a9f26ba99" containerName="cilium-operator" Jun 21 06:14:33.702595 kubelet[2807]: E0621 06:14:33.697236 2807 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="df2a5456-69f1-438f-ad4e-506147b5233b" containerName="apply-sysctl-overwrites" Jun 21 06:14:33.702595 kubelet[2807]: I0621 06:14:33.697395 2807 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ae40e130-c1fa-48c0-a26f-872a9f26ba99" containerName="cilium-operator" Jun 21 06:14:33.702595 kubelet[2807]: I0621 06:14:33.697412 2807 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2a5456-69f1-438f-ad4e-506147b5233b" containerName="cilium-agent" Jun 21 06:14:33.707500 kubelet[2807]: I0621 06:14:33.707444 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/646008df-0d39-45f6-a4d1-7ac2caf09624-host-proc-sys-net\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.707809 kubelet[2807]: I0621 06:14:33.707667 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/646008df-0d39-45f6-a4d1-7ac2caf09624-etc-cni-netd\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.708222 kubelet[2807]: I0621 06:14:33.708153 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/646008df-0d39-45f6-a4d1-7ac2caf09624-hostproc\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.708446 kubelet[2807]: I0621 06:14:33.708332 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/646008df-0d39-45f6-a4d1-7ac2caf09624-cilium-cgroup\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.708632 kubelet[2807]: I0621 06:14:33.708364 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/646008df-0d39-45f6-a4d1-7ac2caf09624-cni-path\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.708632 kubelet[2807]: I0621 06:14:33.708574 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/646008df-0d39-45f6-a4d1-7ac2caf09624-cilium-ipsec-secrets\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.710064 kubelet[2807]: I0621 06:14:33.709995 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs8vl\" (UniqueName: \"kubernetes.io/projected/646008df-0d39-45f6-a4d1-7ac2caf09624-kube-api-access-hs8vl\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.710243 kubelet[2807]: I0621 06:14:33.710212 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/646008df-0d39-45f6-a4d1-7ac2caf09624-bpf-maps\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.710415 kubelet[2807]: I0621 06:14:33.710397 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/646008df-0d39-45f6-a4d1-7ac2caf09624-lib-modules\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.710578 kubelet[2807]: I0621 06:14:33.710561 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/646008df-0d39-45f6-a4d1-7ac2caf09624-xtables-lock\") pod \"cilium-bm2wz\" (UID: 
\"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.710790 kubelet[2807]: I0621 06:14:33.710725 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/646008df-0d39-45f6-a4d1-7ac2caf09624-host-proc-sys-kernel\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.710947 kubelet[2807]: I0621 06:14:33.710912 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/646008df-0d39-45f6-a4d1-7ac2caf09624-clustermesh-secrets\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.711092 kubelet[2807]: I0621 06:14:33.711075 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/646008df-0d39-45f6-a4d1-7ac2caf09624-cilium-config-path\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.711262 kubelet[2807]: I0621 06:14:33.711245 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/646008df-0d39-45f6-a4d1-7ac2caf09624-hubble-tls\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.711490 kubelet[2807]: I0621 06:14:33.711451 2807 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/646008df-0d39-45f6-a4d1-7ac2caf09624-cilium-run\") pod \"cilium-bm2wz\" (UID: \"646008df-0d39-45f6-a4d1-7ac2caf09624\") " pod="kube-system/cilium-bm2wz" Jun 21 06:14:33.717352 kubelet[2807]: 
W0621 06:14:33.717215 2807 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4372-0-0-3-5f235c9307.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4372-0-0-3-5f235c9307.novalocal' and this object Jun 21 06:14:33.717674 kubelet[2807]: E0621 06:14:33.717626 2807 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4372-0-0-3-5f235c9307.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372-0-0-3-5f235c9307.novalocal' and this object" logger="UnhandledError" Jun 21 06:14:33.718027 kubelet[2807]: W0621 06:14:33.717971 2807 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4372-0-0-3-5f235c9307.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4372-0-0-3-5f235c9307.novalocal' and this object Jun 21 06:14:33.718027 kubelet[2807]: E0621 06:14:33.717997 2807 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4372-0-0-3-5f235c9307.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372-0-0-3-5f235c9307.novalocal' and this object" logger="UnhandledError" Jun 21 06:14:33.718444 kubelet[2807]: W0621 06:14:33.718388 2807 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is 
forbidden: User "system:node:ci-4372-0-0-3-5f235c9307.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4372-0-0-3-5f235c9307.novalocal' and this object Jun 21 06:14:33.718605 kubelet[2807]: E0621 06:14:33.718572 2807 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4372-0-0-3-5f235c9307.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372-0-0-3-5f235c9307.novalocal' and this object" logger="UnhandledError" Jun 21 06:14:33.719992 kubelet[2807]: W0621 06:14:33.719897 2807 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4372-0-0-3-5f235c9307.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4372-0-0-3-5f235c9307.novalocal' and this object Jun 21 06:14:33.719992 kubelet[2807]: E0621 06:14:33.719958 2807 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4372-0-0-3-5f235c9307.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4372-0-0-3-5f235c9307.novalocal' and this object" logger="UnhandledError" Jun 21 06:14:33.722222 systemd[1]: Created slice kubepods-burstable-pod646008df_0d39_45f6_a4d1_7ac2caf09624.slice - libcontainer container kubepods-burstable-pod646008df_0d39_45f6_a4d1_7ac2caf09624.slice. 
Jun 21 06:14:33.778145 kubelet[2807]: I0621 06:14:33.777355 2807 setters.go:600] "Node became not ready" node="ci-4372-0-0-3-5f235c9307.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-21T06:14:33Z","lastTransitionTime":"2025-06-21T06:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 21 06:14:33.892146 sshd[4488]: Connection closed by 172.24.4.1 port 42216 Jun 21 06:14:33.894812 sshd-session[4486]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:33.926067 systemd[1]: sshd@24-172.24.4.3:22-172.24.4.1:42216.service: Deactivated successfully. Jun 21 06:14:33.938197 systemd[1]: session-27.scope: Deactivated successfully. Jun 21 06:14:33.938987 systemd[1]: session-27.scope: Consumed 1.229s CPU time, 23.5M memory peak. Jun 21 06:14:33.942063 systemd-logind[1528]: Session 27 logged out. Waiting for processes to exit. Jun 21 06:14:33.951794 systemd[1]: Started sshd@25-172.24.4.3:22-172.24.4.1:42142.service - OpenSSH per-connection server daemon (172.24.4.1:42142). Jun 21 06:14:33.965717 systemd-logind[1528]: Removed session 27. 
Jun 21 06:14:34.816189 kubelet[2807]: E0621 06:14:34.814551 2807 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jun 21 06:14:34.816189 kubelet[2807]: E0621 06:14:34.814773 2807 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-bm2wz: failed to sync secret cache: timed out waiting for the condition Jun 21 06:14:34.830140 kubelet[2807]: E0621 06:14:34.817556 2807 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jun 21 06:14:34.832046 kubelet[2807]: E0621 06:14:34.831872 2807 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/646008df-0d39-45f6-a4d1-7ac2caf09624-cilium-config-path podName:646008df-0d39-45f6-a4d1-7ac2caf09624 nodeName:}" failed. No retries permitted until 2025-06-21 06:14:35.331656647 +0000 UTC m=+175.375143613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/646008df-0d39-45f6-a4d1-7ac2caf09624-cilium-config-path") pod "cilium-bm2wz" (UID: "646008df-0d39-45f6-a4d1-7ac2caf09624") : failed to sync configmap cache: timed out waiting for the condition Jun 21 06:14:34.837042 kubelet[2807]: E0621 06:14:34.830595 2807 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jun 21 06:14:34.839698 kubelet[2807]: E0621 06:14:34.839636 2807 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/646008df-0d39-45f6-a4d1-7ac2caf09624-cilium-ipsec-secrets podName:646008df-0d39-45f6-a4d1-7ac2caf09624 nodeName:}" failed. No retries permitted until 2025-06-21 06:14:35.339329533 +0000 UTC m=+175.382816499 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/646008df-0d39-45f6-a4d1-7ac2caf09624-cilium-ipsec-secrets") pod "cilium-bm2wz" (UID: "646008df-0d39-45f6-a4d1-7ac2caf09624") : failed to sync secret cache: timed out waiting for the condition Jun 21 06:14:34.846558 kubelet[2807]: E0621 06:14:34.846488 2807 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/646008df-0d39-45f6-a4d1-7ac2caf09624-hubble-tls podName:646008df-0d39-45f6-a4d1-7ac2caf09624 nodeName:}" failed. No retries permitted until 2025-06-21 06:14:35.346443992 +0000 UTC m=+175.389930958 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/646008df-0d39-45f6-a4d1-7ac2caf09624-hubble-tls") pod "cilium-bm2wz" (UID: "646008df-0d39-45f6-a4d1-7ac2caf09624") : failed to sync secret cache: timed out waiting for the condition Jun 21 06:14:35.348211 sshd[4499]: Accepted publickey for core from 172.24.4.1 port 42142 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:35.354906 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:35.395276 kubelet[2807]: E0621 06:14:35.393841 2807 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 21 06:14:35.395639 systemd-logind[1528]: New session 28 of user core. Jun 21 06:14:35.411554 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jun 21 06:14:35.536639 containerd[1556]: time="2025-06-21T06:14:35.536224510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bm2wz,Uid:646008df-0d39-45f6-a4d1-7ac2caf09624,Namespace:kube-system,Attempt:0,}" Jun 21 06:14:35.615212 containerd[1556]: time="2025-06-21T06:14:35.614378941Z" level=info msg="connecting to shim 6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb" address="unix:///run/containerd/s/b15c1852a7911166dbad068e010d0c51e0e506ad6467b2e35838be444527e14c" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:14:35.664398 systemd[1]: Started cri-containerd-6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb.scope - libcontainer container 6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb. Jun 21 06:14:35.735720 containerd[1556]: time="2025-06-21T06:14:35.735654508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bm2wz,Uid:646008df-0d39-45f6-a4d1-7ac2caf09624,Namespace:kube-system,Attempt:0,} returns sandbox id \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\"" Jun 21 06:14:35.742047 containerd[1556]: time="2025-06-21T06:14:35.741970300Z" level=info msg="CreateContainer within sandbox \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 06:14:35.756628 containerd[1556]: time="2025-06-21T06:14:35.755481282Z" level=info msg="Container 1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:14:35.776491 containerd[1556]: time="2025-06-21T06:14:35.776443143Z" level=info msg="CreateContainer within sandbox \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5\"" Jun 21 06:14:35.778211 containerd[1556]: time="2025-06-21T06:14:35.777951971Z" level=info 
msg="StartContainer for \"1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5\"" Jun 21 06:14:35.781296 containerd[1556]: time="2025-06-21T06:14:35.781210010Z" level=info msg="connecting to shim 1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5" address="unix:///run/containerd/s/b15c1852a7911166dbad068e010d0c51e0e506ad6467b2e35838be444527e14c" protocol=ttrpc version=3 Jun 21 06:14:35.810580 systemd[1]: Started cri-containerd-1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5.scope - libcontainer container 1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5. Jun 21 06:14:35.872286 containerd[1556]: time="2025-06-21T06:14:35.872092394Z" level=info msg="StartContainer for \"1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5\" returns successfully" Jun 21 06:14:35.886437 systemd[1]: cri-containerd-1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5.scope: Deactivated successfully. Jun 21 06:14:35.889822 containerd[1556]: time="2025-06-21T06:14:35.889784645Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5\" id:\"1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5\" pid:4564 exited_at:{seconds:1750486475 nanos:888832510}" Jun 21 06:14:35.889961 containerd[1556]: time="2025-06-21T06:14:35.889864815Z" level=info msg="received exit event container_id:\"1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5\" id:\"1dfcda1f31a2d127ad3898a6b3ebf85f6441a96cd56ccec1dd0559e7e8ae85d5\" pid:4564 exited_at:{seconds:1750486475 nanos:888832510}" Jun 21 06:14:35.940859 sshd[4503]: Connection closed by 172.24.4.1 port 42142 Jun 21 06:14:35.942134 sshd-session[4499]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:35.952089 systemd[1]: sshd@25-172.24.4.3:22-172.24.4.1:42142.service: Deactivated successfully. 
Jun 21 06:14:35.954702 systemd[1]: session-28.scope: Deactivated successfully. Jun 21 06:14:35.955752 systemd-logind[1528]: Session 28 logged out. Waiting for processes to exit. Jun 21 06:14:35.959205 systemd[1]: Started sshd@26-172.24.4.3:22-172.24.4.1:42158.service - OpenSSH per-connection server daemon (172.24.4.1:42158). Jun 21 06:14:35.961646 systemd-logind[1528]: Removed session 28. Jun 21 06:14:36.267060 containerd[1556]: time="2025-06-21T06:14:36.266905694Z" level=info msg="CreateContainer within sandbox \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 06:14:36.410984 containerd[1556]: time="2025-06-21T06:14:36.410862507Z" level=info msg="Container 4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:14:36.419149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014746724.mount: Deactivated successfully. 
Jun 21 06:14:36.451627 containerd[1556]: time="2025-06-21T06:14:36.451500258Z" level=info msg="CreateContainer within sandbox \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4\"" Jun 21 06:14:36.456279 containerd[1556]: time="2025-06-21T06:14:36.455273723Z" level=info msg="StartContainer for \"4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4\"" Jun 21 06:14:36.459748 containerd[1556]: time="2025-06-21T06:14:36.459677670Z" level=info msg="connecting to shim 4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4" address="unix:///run/containerd/s/b15c1852a7911166dbad068e010d0c51e0e506ad6467b2e35838be444527e14c" protocol=ttrpc version=3 Jun 21 06:14:36.499331 systemd[1]: Started cri-containerd-4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4.scope - libcontainer container 4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4. Jun 21 06:14:36.546039 containerd[1556]: time="2025-06-21T06:14:36.545908143Z" level=info msg="StartContainer for \"4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4\" returns successfully" Jun 21 06:14:36.554391 systemd[1]: cri-containerd-4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4.scope: Deactivated successfully. 
Jun 21 06:14:36.555864 containerd[1556]: time="2025-06-21T06:14:36.555741139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4\" id:\"4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4\" pid:4615 exited_at:{seconds:1750486476 nanos:554861649}"
Jun 21 06:14:36.555864 containerd[1556]: time="2025-06-21T06:14:36.555747811Z" level=info msg="received exit event container_id:\"4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4\" id:\"4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4\" pid:4615 exited_at:{seconds:1750486476 nanos:554861649}"
Jun 21 06:14:36.583849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4dada617d3a37d41d8e87a515f369209c93985a66e9af503c0656801a7977fe4-rootfs.mount: Deactivated successfully.
Jun 21 06:14:37.272506 containerd[1556]: time="2025-06-21T06:14:37.272427517Z" level=info msg="CreateContainer within sandbox \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 21 06:14:37.295612 containerd[1556]: time="2025-06-21T06:14:37.294618321Z" level=info msg="Container e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b: CDI devices from CRI Config.CDIDevices: []"
Jun 21 06:14:37.301864 sshd[4601]: Accepted publickey for core from 172.24.4.1 port 42158 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y
Jun 21 06:14:37.305472 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 21 06:14:37.315450 systemd-logind[1528]: New session 29 of user core.
Jun 21 06:14:37.320510 systemd[1]: Started session-29.scope - Session 29 of User core.
Jun 21 06:14:37.324928 containerd[1556]: time="2025-06-21T06:14:37.323964283Z" level=info msg="CreateContainer within sandbox \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b\""
Jun 21 06:14:37.327672 containerd[1556]: time="2025-06-21T06:14:37.327579731Z" level=info msg="StartContainer for \"e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b\""
Jun 21 06:14:37.336667 containerd[1556]: time="2025-06-21T06:14:37.336178002Z" level=info msg="connecting to shim e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b" address="unix:///run/containerd/s/b15c1852a7911166dbad068e010d0c51e0e506ad6467b2e35838be444527e14c" protocol=ttrpc version=3
Jun 21 06:14:37.380325 systemd[1]: Started cri-containerd-e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b.scope - libcontainer container e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b.
Jun 21 06:14:37.447952 systemd[1]: cri-containerd-e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b.scope: Deactivated successfully.
Jun 21 06:14:37.453000 containerd[1556]: time="2025-06-21T06:14:37.452580804Z" level=info msg="StartContainer for \"e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b\" returns successfully"
Jun 21 06:14:37.453564 containerd[1556]: time="2025-06-21T06:14:37.453257363Z" level=info msg="received exit event container_id:\"e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b\" id:\"e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b\" pid:4660 exited_at:{seconds:1750486477 nanos:452920171}"
Jun 21 06:14:37.453645 containerd[1556]: time="2025-06-21T06:14:37.453615214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b\" id:\"e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b\" pid:4660 exited_at:{seconds:1750486477 nanos:452920171}"
Jun 21 06:14:37.492027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e851de990ad8156fa668c5064ca4925573b33aa8e5ad89f15c6a74792482889b-rootfs.mount: Deactivated successfully.
Jun 21 06:14:38.298176 containerd[1556]: time="2025-06-21T06:14:38.297045435Z" level=info msg="CreateContainer within sandbox \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 21 06:14:38.348557 containerd[1556]: time="2025-06-21T06:14:38.345743410Z" level=info msg="Container e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177: CDI devices from CRI Config.CDIDevices: []"
Jun 21 06:14:38.379551 containerd[1556]: time="2025-06-21T06:14:38.379398980Z" level=info msg="CreateContainer within sandbox \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177\""
Jun 21 06:14:38.382547 containerd[1556]: time="2025-06-21T06:14:38.382509161Z" level=info msg="StartContainer for \"e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177\""
Jun 21 06:14:38.384503 containerd[1556]: time="2025-06-21T06:14:38.384468995Z" level=info msg="connecting to shim e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177" address="unix:///run/containerd/s/b15c1852a7911166dbad068e010d0c51e0e506ad6467b2e35838be444527e14c" protocol=ttrpc version=3
Jun 21 06:14:38.418312 systemd[1]: Started cri-containerd-e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177.scope - libcontainer container e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177.
Jun 21 06:14:38.454770 systemd[1]: cri-containerd-e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177.scope: Deactivated successfully.
Jun 21 06:14:38.460517 containerd[1556]: time="2025-06-21T06:14:38.460477033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177\" id:\"e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177\" pid:4704 exited_at:{seconds:1750486478 nanos:458156312}"
Jun 21 06:14:38.460869 containerd[1556]: time="2025-06-21T06:14:38.460277719Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod646008df_0d39_45f6_a4d1_7ac2caf09624.slice/cri-containerd-e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177.scope/memory.events\": no such file or directory"
Jun 21 06:14:38.469118 containerd[1556]: time="2025-06-21T06:14:38.469046089Z" level=info msg="received exit event container_id:\"e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177\" id:\"e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177\" pid:4704 exited_at:{seconds:1750486478 nanos:458156312}"
Jun 21 06:14:38.480536 containerd[1556]: time="2025-06-21T06:14:38.480463014Z" level=info msg="StartContainer for \"e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177\" returns successfully"
Jun 21 06:14:38.503130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8148d4ff3aa344d9b35ac6468b3b3417c9d99d4209691fca95130559a0a9177-rootfs.mount: Deactivated successfully.
Jun 21 06:14:39.321095 containerd[1556]: time="2025-06-21T06:14:39.320922650Z" level=info msg="CreateContainer within sandbox \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 21 06:14:39.344479 containerd[1556]: time="2025-06-21T06:14:39.344390830Z" level=info msg="Container 720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22: CDI devices from CRI Config.CDIDevices: []"
Jun 21 06:14:39.375012 containerd[1556]: time="2025-06-21T06:14:39.374043636Z" level=info msg="CreateContainer within sandbox \"6500ed36ab29732ddcbbb49dadecc3330c5cec25e99b62914a717dbeb9eadbeb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22\""
Jun 21 06:14:39.376626 containerd[1556]: time="2025-06-21T06:14:39.376566957Z" level=info msg="StartContainer for \"720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22\""
Jun 21 06:14:39.381245 containerd[1556]: time="2025-06-21T06:14:39.381052907Z" level=info msg="connecting to shim 720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22" address="unix:///run/containerd/s/b15c1852a7911166dbad068e010d0c51e0e506ad6467b2e35838be444527e14c" protocol=ttrpc version=3
Jun 21 06:14:39.415289 systemd[1]: Started cri-containerd-720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22.scope - libcontainer container 720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22.
Jun 21 06:14:39.474957 containerd[1556]: time="2025-06-21T06:14:39.474892187Z" level=info msg="StartContainer for \"720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22\" returns successfully"
Jun 21 06:14:39.601304 containerd[1556]: time="2025-06-21T06:14:39.601070629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22\" id:\"cb11a5925d6d9fc54363fe1809a74d9618b3eb48b1deed3e012d76731deec8bd\" pid:4773 exited_at:{seconds:1750486479 nanos:600649800}"
Jun 21 06:14:39.992289 kernel: cryptd: max_cpu_qlen set to 1000
Jun 21 06:14:40.065301 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jun 21 06:14:40.107633 containerd[1556]: time="2025-06-21T06:14:40.107567260Z" level=info msg="StopPodSandbox for \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\""
Jun 21 06:14:40.109185 containerd[1556]: time="2025-06-21T06:14:40.108826861Z" level=info msg="TearDown network for sandbox \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" successfully"
Jun 21 06:14:40.109185 containerd[1556]: time="2025-06-21T06:14:40.108871425Z" level=info msg="StopPodSandbox for \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" returns successfully"
Jun 21 06:14:40.110576 containerd[1556]: time="2025-06-21T06:14:40.110517020Z" level=info msg="RemovePodSandbox for \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\""
Jun 21 06:14:40.110920 containerd[1556]: time="2025-06-21T06:14:40.110732334Z" level=info msg="Forcibly stopping sandbox \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\""
Jun 21 06:14:40.111765 containerd[1556]: time="2025-06-21T06:14:40.111185123Z" level=info msg="TearDown network for sandbox \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" successfully"
Jun 21 06:14:40.114708 containerd[1556]: time="2025-06-21T06:14:40.114673362Z" level=info msg="Ensure that sandbox 31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3 in task-service has been cleanup successfully"
Jun 21 06:14:40.120133 containerd[1556]: time="2025-06-21T06:14:40.119958201Z" level=info msg="RemovePodSandbox \"31c7708d9d8778c5f8611a59702adcde82ffe9f236566fd74e95e8053138cba3\" returns successfully"
Jun 21 06:14:40.123328 containerd[1556]: time="2025-06-21T06:14:40.123277294Z" level=info msg="StopPodSandbox for \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\""
Jun 21 06:14:40.123714 containerd[1556]: time="2025-06-21T06:14:40.123682384Z" level=info msg="TearDown network for sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" successfully"
Jun 21 06:14:40.123848 containerd[1556]: time="2025-06-21T06:14:40.123827806Z" level=info msg="StopPodSandbox for \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" returns successfully"
Jun 21 06:14:40.124401 containerd[1556]: time="2025-06-21T06:14:40.124349926Z" level=info msg="RemovePodSandbox for \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\""
Jun 21 06:14:40.124482 containerd[1556]: time="2025-06-21T06:14:40.124411030Z" level=info msg="Forcibly stopping sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\""
Jun 21 06:14:40.125065 containerd[1556]: time="2025-06-21T06:14:40.124906278Z" level=info msg="TearDown network for sandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" successfully"
Jun 21 06:14:40.139309 containerd[1556]: time="2025-06-21T06:14:40.138221994Z" level=info msg="Ensure that sandbox 6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb in task-service has been cleanup successfully"
Jun 21 06:14:40.145168 containerd[1556]: time="2025-06-21T06:14:40.145120267Z" level=info msg="RemovePodSandbox \"6dfc661a58ecbb3ad9b4aa662b5964f260d2e4f791e98ee933abdc81864b67bb\" returns successfully"
Jun 21 06:14:40.370747 kubelet[2807]: I0621 06:14:40.370490 2807 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bm2wz" podStartSLOduration=7.369912541 podStartE2EDuration="7.369912541s" podCreationTimestamp="2025-06-21 06:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:14:40.368092098 +0000 UTC m=+180.411579024" watchObservedRunningTime="2025-06-21 06:14:40.369912541 +0000 UTC m=+180.413399458"
Jun 21 06:14:42.524650 containerd[1556]: time="2025-06-21T06:14:42.524586051Z" level=info msg="TaskExit event in podsandbox handler container_id:\"720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22\" id:\"04aa2a4a81f75a7be561061ff8d4dd28f227177bc6ca57695648b8846abd0c79\" pid:4971 exit_status:1 exited_at:{seconds:1750486482 nanos:522927150}"
Jun 21 06:14:43.638974 systemd-networkd[1432]: lxc_health: Link UP
Jun 21 06:14:43.646160 systemd-networkd[1432]: lxc_health: Gained carrier
Jun 21 06:14:44.736322 containerd[1556]: time="2025-06-21T06:14:44.736260987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22\" id:\"210214b4a8753615871df57d62a12230ce882a3b2c93315d2685c47615c13608\" pid:5314 exited_at:{seconds:1750486484 nanos:733386498}"
Jun 21 06:14:45.373536 systemd-networkd[1432]: lxc_health: Gained IPv6LL
Jun 21 06:14:47.013245 containerd[1556]: time="2025-06-21T06:14:47.012255401Z" level=info msg="TaskExit event in podsandbox handler container_id:\"720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22\" id:\"fb3a6cfa39b011cc4b4857216f6bf08803cdad4dba655bec274050792a00cb46\" pid:5344 exited_at:{seconds:1750486487 nanos:10065806}"
Jun 21 06:14:49.243835 containerd[1556]: time="2025-06-21T06:14:49.243772267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"720fdd23e9d35a22d78329753e2da31986c9964106a20be45dd7cf6f9613ea22\" id:\"f23473f4f050e8b85d57cd992f7994c96a9d7204475d23cbbf9a66fede98e33e\" pid:5381 exited_at:{seconds:1750486489 nanos:243138608}"
Jun 21 06:14:49.248459 kubelet[2807]: E0621 06:14:49.248015 2807 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44560->127.0.0.1:39925: write tcp 127.0.0.1:44560->127.0.0.1:39925: write: broken pipe
Jun 21 06:14:49.470461 sshd[4646]: Connection closed by 172.24.4.1 port 42158
Jun 21 06:14:49.474696 sshd-session[4601]: pam_unix(sshd:session): session closed for user core
Jun 21 06:14:49.491148 systemd[1]: sshd@26-172.24.4.3:22-172.24.4.1:42158.service: Deactivated successfully.
Jun 21 06:14:49.497895 systemd[1]: session-29.scope: Deactivated successfully.
Jun 21 06:14:49.502505 systemd-logind[1528]: Session 29 logged out. Waiting for processes to exit.
Jun 21 06:14:49.506295 systemd-logind[1528]: Removed session 29.