May 16 06:02:01.043340 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu May 15 22:16:42 -00 2025
May 16 06:02:01.043370 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ffa0077ec5e89092631d817251b58c64c9261c447bd6e8bcef43c52d5e74873e
May 16 06:02:01.043381 kernel: BIOS-provided physical RAM map:
May 16 06:02:01.043390 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 16 06:02:01.043397 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 16 06:02:01.043409 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 16 06:02:01.043419 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
May 16 06:02:01.043427 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
May 16 06:02:01.043435 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 16 06:02:01.043443 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 16 06:02:01.043451 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
May 16 06:02:01.043460 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 16 06:02:01.043468 kernel: NX (Execute Disable) protection: active
May 16 06:02:01.043476 kernel: APIC: Static calls initialized
May 16 06:02:01.043488 kernel: SMBIOS 3.0.0 present.
May 16 06:02:01.043497 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
May 16 06:02:01.043505 kernel: Hypervisor detected: KVM
May 16 06:02:01.043514 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 16 06:02:01.043522 kernel: kvm-clock: using sched offset of 3508322545 cycles
May 16 06:02:01.043533 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 16 06:02:01.043542 kernel: tsc: Detected 1996.249 MHz processor
May 16 06:02:01.043551 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 16 06:02:01.043561 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 16 06:02:01.043570 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
May 16 06:02:01.043579 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 16 06:02:01.043589 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 16 06:02:01.043597 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
May 16 06:02:01.043606 kernel: ACPI: Early table checksum verification disabled
May 16 06:02:01.043617 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
May 16 06:02:01.043626 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 06:02:01.043635 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 06:02:01.043644 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 06:02:01.043654 kernel: ACPI: FACS 0x00000000BFFE0000 000040
May 16 06:02:01.043663 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 06:02:01.043672 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 06:02:01.043680 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
May 16 06:02:01.043689 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
May 16 06:02:01.043700 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
May 16 06:02:01.043709 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
May 16 06:02:01.043718 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
May 16 06:02:01.043730 kernel: No NUMA configuration found
May 16 06:02:01.043739 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
May 16 06:02:01.043748 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
May 16 06:02:01.043760 kernel: Zone ranges:
May 16 06:02:01.043769 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 16 06:02:01.043778 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 16 06:02:01.043788 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
May 16 06:02:01.043797 kernel: Movable zone start for each node
May 16 06:02:01.043806 kernel: Early memory node ranges
May 16 06:02:01.043815 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 16 06:02:01.043824 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
May 16 06:02:01.043835 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
May 16 06:02:01.043845 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
May 16 06:02:01.043855 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 16 06:02:01.043864 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 16 06:02:01.043873 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
May 16 06:02:01.043883 kernel: ACPI: PM-Timer IO Port: 0x608
May 16 06:02:01.043892 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 16 06:02:01.043902 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 16 06:02:01.043911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 16 06:02:01.043922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 16 06:02:01.043931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 16 06:02:01.043941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 16 06:02:01.043950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 16 06:02:01.043959 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 16 06:02:01.043968 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 16 06:02:01.043978 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 16 06:02:01.043987 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
May 16 06:02:01.043996 kernel: Booting paravirtualized kernel on KVM
May 16 06:02:01.044007 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 16 06:02:01.044018 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 16 06:02:01.044028 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 16 06:02:01.044036 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 16 06:02:01.044044 kernel: pcpu-alloc: [0] 0 1
May 16 06:02:01.044053 kernel: kvm-guest: PV spinlocks disabled, no host support
May 16 06:02:01.044063 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ffa0077ec5e89092631d817251b58c64c9261c447bd6e8bcef43c52d5e74873e
May 16 06:02:01.044072 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 06:02:01.044082 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 06:02:01.044091 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 06:02:01.044100 kernel: Fallback order for Node 0: 0
May 16 06:02:01.044108 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
May 16 06:02:01.044117 kernel: Policy zone: Normal
May 16 06:02:01.044126 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 06:02:01.044134 kernel: software IO TLB: area num 2.
May 16 06:02:01.044143 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 229356K reserved, 0K cma-reserved)
May 16 06:02:01.044152 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 16 06:02:01.044162 kernel: ftrace: allocating 37922 entries in 149 pages
May 16 06:02:01.044171 kernel: ftrace: allocated 149 pages with 4 groups
May 16 06:02:01.044179 kernel: Dynamic Preempt: voluntary
May 16 06:02:01.044188 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 06:02:01.044198 kernel: rcu: RCU event tracing is enabled.
May 16 06:02:01.044206 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 16 06:02:01.044215 kernel: Trampoline variant of Tasks RCU enabled.
May 16 06:02:01.044224 kernel: Rude variant of Tasks RCU enabled.
May 16 06:02:01.044247 kernel: Tracing variant of Tasks RCU enabled.
May 16 06:02:01.044257 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 06:02:01.044268 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 16 06:02:01.044276 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 16 06:02:01.044285 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 06:02:01.044294 kernel: Console: colour VGA+ 80x25
May 16 06:02:01.044302 kernel: printk: console [tty0] enabled
May 16 06:02:01.044311 kernel: printk: console [ttyS0] enabled
May 16 06:02:01.044319 kernel: ACPI: Core revision 20230628
May 16 06:02:01.044328 kernel: APIC: Switch to symmetric I/O mode setup
May 16 06:02:01.044337 kernel: x2apic enabled
May 16 06:02:01.044347 kernel: APIC: Switched APIC routing to: physical x2apic
May 16 06:02:01.044356 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 16 06:02:01.044365 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 16 06:02:01.044373 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
May 16 06:02:01.044382 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 16 06:02:01.044391 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 16 06:02:01.044400 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 16 06:02:01.044408 kernel: Spectre V2 : Mitigation: Retpolines
May 16 06:02:01.044417 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 16 06:02:01.044427 kernel: Speculative Store Bypass: Vulnerable
May 16 06:02:01.044436 kernel: x86/fpu: x87 FPU will use FXSAVE
May 16 06:02:01.044445 kernel: Freeing SMP alternatives memory: 32K
May 16 06:02:01.044453 kernel: pid_max: default: 32768 minimum: 301
May 16 06:02:01.044468 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 16 06:02:01.044478 kernel: landlock: Up and running.
May 16 06:02:01.044488 kernel: SELinux: Initializing.
May 16 06:02:01.044497 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 06:02:01.044506 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 06:02:01.044515 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
May 16 06:02:01.044524 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 16 06:02:01.044536 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 16 06:02:01.044545 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 16 06:02:01.044554 kernel: Performance Events: AMD PMU driver.
May 16 06:02:01.044563 kernel: ... version: 0
May 16 06:02:01.044572 kernel: ... bit width: 48
May 16 06:02:01.044583 kernel: ... generic registers: 4
May 16 06:02:01.044592 kernel: ... value mask: 0000ffffffffffff
May 16 06:02:01.044601 kernel: ... max period: 00007fffffffffff
May 16 06:02:01.044610 kernel: ... fixed-purpose events: 0
May 16 06:02:01.044619 kernel: ... event mask: 000000000000000f
May 16 06:02:01.044628 kernel: signal: max sigframe size: 1440
May 16 06:02:01.044637 kernel: rcu: Hierarchical SRCU implementation.
May 16 06:02:01.044646 kernel: rcu: Max phase no-delay instances is 400.
May 16 06:02:01.044655 kernel: smp: Bringing up secondary CPUs ...
May 16 06:02:01.044666 kernel: smpboot: x86: Booting SMP configuration:
May 16 06:02:01.044675 kernel: .... node #0, CPUs: #1
May 16 06:02:01.044684 kernel: smp: Brought up 1 node, 2 CPUs
May 16 06:02:01.044694 kernel: smpboot: Max logical packages: 2
May 16 06:02:01.044703 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
May 16 06:02:01.044711 kernel: devtmpfs: initialized
May 16 06:02:01.044720 kernel: x86/mm: Memory block size: 128MB
May 16 06:02:01.044730 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 06:02:01.044739 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 16 06:02:01.044748 kernel: pinctrl core: initialized pinctrl subsystem
May 16 06:02:01.044759 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 06:02:01.044768 kernel: audit: initializing netlink subsys (disabled)
May 16 06:02:01.044777 kernel: audit: type=2000 audit(1747375320.583:1): state=initialized audit_enabled=0 res=1
May 16 06:02:01.044786 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 06:02:01.044796 kernel: thermal_sys: Registered thermal governor 'user_space'
May 16 06:02:01.044805 kernel: cpuidle: using governor menu
May 16 06:02:01.044814 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 06:02:01.044823 kernel: dca service started, version 1.12.1
May 16 06:02:01.044832 kernel: PCI: Using configuration type 1 for base access
May 16 06:02:01.044844 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 16 06:02:01.044853 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 06:02:01.044862 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 16 06:02:01.044871 kernel: ACPI: Added _OSI(Module Device)
May 16 06:02:01.044880 kernel: ACPI: Added _OSI(Processor Device)
May 16 06:02:01.044889 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 06:02:01.044898 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 06:02:01.044907 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 06:02:01.044916 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 16 06:02:01.044928 kernel: ACPI: Interpreter enabled
May 16 06:02:01.044937 kernel: ACPI: PM: (supports S0 S3 S5)
May 16 06:02:01.044946 kernel: ACPI: Using IOAPIC for interrupt routing
May 16 06:02:01.044955 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 16 06:02:01.044964 kernel: PCI: Using E820 reservations for host bridge windows
May 16 06:02:01.044973 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 16 06:02:01.044982 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 06:02:01.045143 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 16 06:02:01.049224 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 16 06:02:01.049351 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 16 06:02:01.049366 kernel: acpiphp: Slot [3] registered
May 16 06:02:01.049375 kernel: acpiphp: Slot [4] registered
May 16 06:02:01.049384 kernel: acpiphp: Slot [5] registered
May 16 06:02:01.049393 kernel: acpiphp: Slot [6] registered
May 16 06:02:01.049402 kernel: acpiphp: Slot [7] registered
May 16 06:02:01.049411 kernel: acpiphp: Slot [8] registered
May 16 06:02:01.049424 kernel: acpiphp: Slot [9] registered
May 16 06:02:01.049433 kernel: acpiphp: Slot [10] registered
May 16 06:02:01.049442 kernel: acpiphp: Slot [11] registered
May 16 06:02:01.049452 kernel: acpiphp: Slot [12] registered
May 16 06:02:01.049461 kernel: acpiphp: Slot [13] registered
May 16 06:02:01.049469 kernel: acpiphp: Slot [14] registered
May 16 06:02:01.049478 kernel: acpiphp: Slot [15] registered
May 16 06:02:01.049487 kernel: acpiphp: Slot [16] registered
May 16 06:02:01.049496 kernel: acpiphp: Slot [17] registered
May 16 06:02:01.049508 kernel: acpiphp: Slot [18] registered
May 16 06:02:01.049517 kernel: acpiphp: Slot [19] registered
May 16 06:02:01.049526 kernel: acpiphp: Slot [20] registered
May 16 06:02:01.049535 kernel: acpiphp: Slot [21] registered
May 16 06:02:01.049544 kernel: acpiphp: Slot [22] registered
May 16 06:02:01.049553 kernel: acpiphp: Slot [23] registered
May 16 06:02:01.049562 kernel: acpiphp: Slot [24] registered
May 16 06:02:01.049571 kernel: acpiphp: Slot [25] registered
May 16 06:02:01.049580 kernel: acpiphp: Slot [26] registered
May 16 06:02:01.049588 kernel: acpiphp: Slot [27] registered
May 16 06:02:01.049599 kernel: acpiphp: Slot [28] registered
May 16 06:02:01.049608 kernel: acpiphp: Slot [29] registered
May 16 06:02:01.049617 kernel: acpiphp: Slot [30] registered
May 16 06:02:01.049626 kernel: acpiphp: Slot [31] registered
May 16 06:02:01.049635 kernel: PCI host bridge to bus 0000:00
May 16 06:02:01.049734 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 16 06:02:01.049819 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 16 06:02:01.049901 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 16 06:02:01.049991 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 16 06:02:01.050075 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
May 16 06:02:01.050158 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 06:02:01.050299 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 16 06:02:01.050417 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 16 06:02:01.050521 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 16 06:02:01.050638 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
May 16 06:02:01.050740 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 16 06:02:01.050841 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 16 06:02:01.050941 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 16 06:02:01.051041 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 16 06:02:01.051151 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 16 06:02:01.051277 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 16 06:02:01.051387 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 16 06:02:01.051502 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 16 06:02:01.051603 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 16 06:02:01.051705 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
May 16 06:02:01.051807 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
May 16 06:02:01.051910 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
May 16 06:02:01.052013 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 16 06:02:01.052124 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 16 06:02:01.052221 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
May 16 06:02:01.056134 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
May 16 06:02:01.056249 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
May 16 06:02:01.056349 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
May 16 06:02:01.056453 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
May 16 06:02:01.056554 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
May 16 06:02:01.056648 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
May 16 06:02:01.056743 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
May 16 06:02:01.056846 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
May 16 06:02:01.056940 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
May 16 06:02:01.057033 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
May 16 06:02:01.057135 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
May 16 06:02:01.057252 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
May 16 06:02:01.057354 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
May 16 06:02:01.057449 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
May 16 06:02:01.057463 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 16 06:02:01.057473 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 16 06:02:01.057482 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 16 06:02:01.057491 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 16 06:02:01.057500 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 16 06:02:01.057509 kernel: iommu: Default domain type: Translated
May 16 06:02:01.057523 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 16 06:02:01.057532 kernel: PCI: Using ACPI for IRQ routing
May 16 06:02:01.057541 kernel: PCI: pci_cache_line_size set to 64 bytes
May 16 06:02:01.057550 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 16 06:02:01.057559 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
May 16 06:02:01.057657 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 16 06:02:01.057752 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 16 06:02:01.057846 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 16 06:02:01.057862 kernel: vgaarb: loaded
May 16 06:02:01.057872 kernel: clocksource: Switched to clocksource kvm-clock
May 16 06:02:01.057881 kernel: VFS: Disk quotas dquot_6.6.0
May 16 06:02:01.057890 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 06:02:01.057900 kernel: pnp: PnP ACPI init
May 16 06:02:01.058008 kernel: pnp 00:03: [dma 2]
May 16 06:02:01.058023 kernel: pnp: PnP ACPI: found 5 devices
May 16 06:02:01.058032 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 16 06:02:01.058041 kernel: NET: Registered PF_INET protocol family
May 16 06:02:01.058054 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 06:02:01.058063 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 06:02:01.058072 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 06:02:01.058082 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 06:02:01.058091 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 06:02:01.058100 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 06:02:01.058110 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 06:02:01.058119 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 06:02:01.058130 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 06:02:01.058139 kernel: NET: Registered PF_XDP protocol family
May 16 06:02:01.058223 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 16 06:02:01.060133 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 16 06:02:01.060221 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 16 06:02:01.060332 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
May 16 06:02:01.060416 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
May 16 06:02:01.060513 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 16 06:02:01.060609 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 16 06:02:01.060627 kernel: PCI: CLS 0 bytes, default 64
May 16 06:02:01.060636 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 16 06:02:01.060646 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
May 16 06:02:01.060655 kernel: Initialise system trusted keyrings
May 16 06:02:01.060664 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 06:02:01.060673 kernel: Key type asymmetric registered
May 16 06:02:01.060682 kernel: Asymmetric key parser 'x509' registered
May 16 06:02:01.060691 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 16 06:02:01.060703 kernel: io scheduler mq-deadline registered
May 16 06:02:01.060712 kernel: io scheduler kyber registered
May 16 06:02:01.060721 kernel: io scheduler bfq registered
May 16 06:02:01.060730 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 16 06:02:01.060740 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 16 06:02:01.060749 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 16 06:02:01.060759 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 16 06:02:01.060768 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 16 06:02:01.060777 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 06:02:01.060788 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 16 06:02:01.060797 kernel: random: crng init done
May 16 06:02:01.060807 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 16 06:02:01.060816 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 16 06:02:01.060825 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 16 06:02:01.060921 kernel: rtc_cmos 00:04: RTC can wake from S4
May 16 06:02:01.061008 kernel: rtc_cmos 00:04: registered as rtc0
May 16 06:02:01.061022 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 16 06:02:01.061108 kernel: rtc_cmos 00:04: setting system clock to 2025-05-16T06:02:00 UTC (1747375320)
May 16 06:02:01.061193 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
May 16 06:02:01.061206 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
May 16 06:02:01.061215 kernel: NET: Registered PF_INET6 protocol family
May 16 06:02:01.061225 kernel: Segment Routing with IPv6
May 16 06:02:01.063255 kernel: In-situ OAM (IOAM) with IPv6
May 16 06:02:01.063273 kernel: NET: Registered PF_PACKET protocol family
May 16 06:02:01.063283 kernel: Key type dns_resolver registered
May 16 06:02:01.063293 kernel: IPI shorthand broadcast: enabled
May 16 06:02:01.063307 kernel: sched_clock: Marking stable (967007693, 168192392)->(1167439913, -32239828)
May 16 06:02:01.063317 kernel: registered taskstats version 1
May 16 06:02:01.063326 kernel: Loading compiled-in X.509 certificates
May 16 06:02:01.063336 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 22e80ca6ad28c00533ea5eb0843f23994a6e2a11'
May 16 06:02:01.063346 kernel: Key type .fscrypt registered
May 16 06:02:01.063356 kernel: Key type fscrypt-provisioning registered
May 16 06:02:01.063366 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 06:02:01.063375 kernel: ima: Allocated hash algorithm: sha1
May 16 06:02:01.063385 kernel: ima: No architecture policies found
May 16 06:02:01.063397 kernel: clk: Disabling unused clocks
May 16 06:02:01.063407 kernel: Freeing unused kernel image (initmem) memory: 43484K
May 16 06:02:01.063417 kernel: Write protecting the kernel read-only data: 38912k
May 16 06:02:01.063427 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K
May 16 06:02:01.063436 kernel: Run /init as init process
May 16 06:02:01.063446 kernel: with arguments:
May 16 06:02:01.063455 kernel: /init
May 16 06:02:01.063465 kernel: with environment:
May 16 06:02:01.063474 kernel: HOME=/
May 16 06:02:01.063486 kernel: TERM=linux
May 16 06:02:01.063495 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 06:02:01.063507 systemd[1]: Successfully made /usr/ read-only.
May 16 06:02:01.063521 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 06:02:01.063532 systemd[1]: Detected virtualization kvm.
May 16 06:02:01.063542 systemd[1]: Detected architecture x86-64.
May 16 06:02:01.063552 systemd[1]: Running in initrd.
May 16 06:02:01.063565 systemd[1]: No hostname configured, using default hostname.
May 16 06:02:01.063575 systemd[1]: Hostname set to .
May 16 06:02:01.063586 systemd[1]: Initializing machine ID from VM UUID.
May 16 06:02:01.063596 systemd[1]: Queued start job for default target initrd.target.
May 16 06:02:01.063606 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 06:02:01.063617 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 06:02:01.063629 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 06:02:01.063648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 06:02:01.063661 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 06:02:01.063673 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 06:02:01.063685 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 06:02:01.063696 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 06:02:01.063709 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 06:02:01.063719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 06:02:01.063730 systemd[1]: Reached target paths.target - Path Units.
May 16 06:02:01.063740 systemd[1]: Reached target slices.target - Slice Units.
May 16 06:02:01.063751 systemd[1]: Reached target swap.target - Swaps.
May 16 06:02:01.063761 systemd[1]: Reached target timers.target - Timer Units.
May 16 06:02:01.063772 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 06:02:01.063783 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 06:02:01.063794 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 06:02:01.063807 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 06:02:01.063818 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 06:02:01.063829 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 06:02:01.063839 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 06:02:01.063850 systemd[1]: Reached target sockets.target - Socket Units.
May 16 06:02:01.063861 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 06:02:01.063871 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 06:02:01.063882 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 06:02:01.063893 systemd[1]: Starting systemd-fsck-usr.service...
May 16 06:02:01.063905 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 06:02:01.063916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 06:02:01.063927 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 06:02:01.063938 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 06:02:01.063948 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 06:02:01.063985 systemd-journald[185]: Collecting audit messages is disabled.
May 16 06:02:01.064011 systemd[1]: Finished systemd-fsck-usr.service.
May 16 06:02:01.064023 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 06:02:01.064037 systemd-journald[185]: Journal started
May 16 06:02:01.064062 systemd-journald[185]: Runtime Journal (/run/log/journal/3ce72df2ba0b4fa8b48a2c0ca1355291) is 8M, max 78.3M, 70.3M free.
May 16 06:02:01.058722 systemd-modules-load[186]: Inserted module 'overlay'
May 16 06:02:01.104277 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 06:02:01.104301 kernel: Bridge firewalling registered
May 16 06:02:01.088873 systemd-modules-load[186]: Inserted module 'br_netfilter'
May 16 06:02:01.106957 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 06:02:01.107843 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 06:02:01.108623 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 06:02:01.109742 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 06:02:01.119445 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 06:02:01.122399 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 06:02:01.123834 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 06:02:01.125930 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 06:02:01.146329 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 06:02:01.155417 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 06:02:01.156844 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 06:02:01.159619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 06:02:01.160988 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 06:02:01.169058 dracut-cmdline[218]: dracut-dracut-053
May 16 06:02:01.172430 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ffa0077ec5e89092631d817251b58c64c9261c447bd6e8bcef43c52d5e74873e
May 16 06:02:01.174538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 06:02:01.217074 systemd-resolved[231]: Positive Trust Anchors:
May 16 06:02:01.217808 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 06:02:01.217851 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 06:02:01.221292 systemd-resolved[231]: Defaulting to hostname 'linux'.
May 16 06:02:01.222874 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 06:02:01.224738 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 06:02:01.264277 kernel: SCSI subsystem initialized
May 16 06:02:01.274318 kernel: Loading iSCSI transport class v2.0-870.
May 16 06:02:01.287305 kernel: iscsi: registered transport (tcp)
May 16 06:02:01.311396 kernel: iscsi: registered transport (qla4xxx)
May 16 06:02:01.311505 kernel: QLogic iSCSI HBA Driver
May 16 06:02:01.365597 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 06:02:01.375518 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 06:02:01.428866 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 06:02:01.428966 kernel: device-mapper: uevent: version 1.0.3
May 16 06:02:01.428998 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 16 06:02:01.491379 kernel: raid6: sse2x4 gen() 5204 MB/s
May 16 06:02:01.509340 kernel: raid6: sse2x2 gen() 5998 MB/s
May 16 06:02:01.527722 kernel: raid6: sse2x1 gen() 10180 MB/s
May 16 06:02:01.527796 kernel: raid6: using algorithm sse2x1 gen() 10180 MB/s
May 16 06:02:01.546662 kernel: raid6: .... xor() 7404 MB/s, rmw enabled
May 16 06:02:01.546724 kernel: raid6: using ssse3x2 recovery algorithm
May 16 06:02:01.569663 kernel: xor: measuring software checksum speed
May 16 06:02:01.569725 kernel: prefetch64-sse : 17200 MB/sec
May 16 06:02:01.570897 kernel: generic_sse : 16751 MB/sec
May 16 06:02:01.570941 kernel: xor: using function: prefetch64-sse (17200 MB/sec)
May 16 06:02:01.755316 kernel: Btrfs loaded, zoned=no, fsverity=no
May 16 06:02:01.773286 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 16 06:02:01.782551 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 06:02:01.797451 systemd-udevd[407]: Using default interface naming scheme 'v255'.
May 16 06:02:01.802366 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 06:02:01.812539 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 16 06:02:01.835667 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
May 16 06:02:01.881075 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 06:02:01.889533 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 06:02:01.939753 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 06:02:01.951650 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 16 06:02:01.981679 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 16 06:02:02.001801 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 06:02:02.003143 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 06:02:02.003643 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 06:02:02.011498 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 16 06:02:02.026764 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 16 06:02:02.067767 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
May 16 06:02:02.068319 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 06:02:02.069043 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 06:02:02.070462 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 06:02:02.083463 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
May 16 06:02:02.083616 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 06:02:02.083630 kernel: GPT:17805311 != 20971519
May 16 06:02:02.083642 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 06:02:02.083653 kernel: GPT:17805311 != 20971519
May 16 06:02:02.083664 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 06:02:02.083679 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 06:02:02.074604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 06:02:02.090224 kernel: libata version 3.00 loaded.
May 16 06:02:02.074740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 06:02:02.079698 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 16 06:02:02.094296 kernel: ata_piix 0000:00:01.1: version 2.13
May 16 06:02:02.094467 kernel: scsi host0: ata_piix
May 16 06:02:02.092318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 06:02:02.108044 kernel: scsi host1: ata_piix
May 16 06:02:02.108232 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
May 16 06:02:02.108287 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
May 16 06:02:02.154345 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 06:02:02.160395 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 06:02:02.173426 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 06:02:02.290019 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (460)
May 16 06:02:02.301344 kernel: BTRFS: device fsid 7e35ecc6-4b22-44da-ae37-cf2eabf14492 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (474)
May 16 06:02:02.335037 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 16 06:02:02.346189 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 16 06:02:02.369309 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 16 06:02:02.369925 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 16 06:02:02.383320 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 06:02:02.394384 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 16 06:02:02.407479 disk-uuid[516]: Primary Header is updated.
May 16 06:02:02.407479 disk-uuid[516]: Secondary Entries is updated.
May 16 06:02:02.407479 disk-uuid[516]: Secondary Header is updated.
May 16 06:02:02.417337 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 06:02:03.439291 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 06:02:03.440918 disk-uuid[517]: The operation has completed successfully.
May 16 06:02:03.526641 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 06:02:03.526759 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 16 06:02:03.581372 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 16 06:02:03.598801 sh[528]: Success
May 16 06:02:03.627387 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
May 16 06:02:03.712862 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 16 06:02:03.714558 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 16 06:02:03.720332 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 16 06:02:03.736775 kernel: BTRFS info (device dm-0): first mount of filesystem 7e35ecc6-4b22-44da-ae37-cf2eabf14492
May 16 06:02:03.736843 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 16 06:02:03.740620 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 16 06:02:03.740684 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 16 06:02:03.742285 kernel: BTRFS info (device dm-0): using free space tree
May 16 06:02:03.759160 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 16 06:02:03.761725 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 16 06:02:03.767571 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 16 06:02:03.772494 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 16 06:02:03.797793 kernel: BTRFS info (device vda6): first mount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1
May 16 06:02:03.797904 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 06:02:03.797924 kernel: BTRFS info (device vda6): using free space tree
May 16 06:02:03.805272 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 06:02:03.813310 kernel: BTRFS info (device vda6): last unmount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1
May 16 06:02:03.829095 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 16 06:02:03.836572 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 16 06:02:03.920896 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 06:02:03.932184 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 06:02:03.965703 systemd-networkd[709]: lo: Link UP
May 16 06:02:03.965712 systemd-networkd[709]: lo: Gained carrier
May 16 06:02:03.967907 systemd-networkd[709]: Enumeration completed
May 16 06:02:03.967998 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 06:02:03.968873 systemd[1]: Reached target network.target - Network.
May 16 06:02:03.969339 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 06:02:03.969344 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 06:02:03.970368 systemd-networkd[709]: eth0: Link UP
May 16 06:02:03.970371 systemd-networkd[709]: eth0: Gained carrier
May 16 06:02:03.970380 systemd-networkd[709]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 06:02:03.982296 systemd-networkd[709]: eth0: DHCPv4 address 172.24.4.222/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 16 06:02:04.003982 ignition[618]: Ignition 2.20.0
May 16 06:02:04.003995 ignition[618]: Stage: fetch-offline
May 16 06:02:04.004037 ignition[618]: no configs at "/usr/lib/ignition/base.d"
May 16 06:02:04.004046 ignition[618]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 16 06:02:04.006657 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 06:02:04.004149 ignition[618]: parsed url from cmdline: ""
May 16 06:02:04.004153 ignition[618]: no config URL provided
May 16 06:02:04.004159 ignition[618]: reading system config file "/usr/lib/ignition/user.ign"
May 16 06:02:04.004167 ignition[618]: no config at "/usr/lib/ignition/user.ign"
May 16 06:02:04.004174 ignition[618]: failed to fetch config: resource requires networking
May 16 06:02:04.004392 ignition[618]: Ignition finished successfully
May 16 06:02:04.011432 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 16 06:02:04.026098 ignition[719]: Ignition 2.20.0
May 16 06:02:04.026111 ignition[719]: Stage: fetch
May 16 06:02:04.026323 ignition[719]: no configs at "/usr/lib/ignition/base.d"
May 16 06:02:04.026335 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 16 06:02:04.026430 ignition[719]: parsed url from cmdline: ""
May 16 06:02:04.026434 ignition[719]: no config URL provided
May 16 06:02:04.026439 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
May 16 06:02:04.026447 ignition[719]: no config at "/usr/lib/ignition/user.ign"
May 16 06:02:04.026591 ignition[719]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
May 16 06:02:04.026639 ignition[719]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
May 16 06:02:04.026682 ignition[719]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
May 16 06:02:04.564458 ignition[719]: GET result: OK
May 16 06:02:04.564695 ignition[719]: parsing config with SHA512: eb255650bf42d7391759c7823d415c574a26ddb55df7cc3447edda8663470c08115a7ae5f8e79ac3e62e72fb74b9cb81bd78e33d9cf0f576015748ddd87e2f65
May 16 06:02:04.578860 unknown[719]: fetched base config from "system"
May 16 06:02:04.578884 unknown[719]: fetched base config from "system"
May 16 06:02:04.578898 unknown[719]: fetched user config from "openstack"
May 16 06:02:04.581087 ignition[719]: fetch: fetch complete
May 16 06:02:04.581101 ignition[719]: fetch: fetch passed
May 16 06:02:04.584321 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 16 06:02:04.581222 ignition[719]: Ignition finished successfully
May 16 06:02:04.600143 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 16 06:02:04.634114 ignition[726]: Ignition 2.20.0
May 16 06:02:04.634142 ignition[726]: Stage: kargs
May 16 06:02:04.634655 ignition[726]: no configs at "/usr/lib/ignition/base.d"
May 16 06:02:04.634684 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 16 06:02:04.639420 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 16 06:02:04.637081 ignition[726]: kargs: kargs passed
May 16 06:02:04.637181 ignition[726]: Ignition finished successfully
May 16 06:02:04.652589 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 16 06:02:04.679903 ignition[732]: Ignition 2.20.0
May 16 06:02:04.679930 ignition[732]: Stage: disks
May 16 06:02:04.680403 ignition[732]: no configs at "/usr/lib/ignition/base.d"
May 16 06:02:04.680430 ignition[732]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 16 06:02:04.682891 ignition[732]: disks: disks passed
May 16 06:02:04.685057 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 16 06:02:04.682990 ignition[732]: Ignition finished successfully
May 16 06:02:04.688063 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 16 06:02:04.689901 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 16 06:02:04.692067 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 06:02:04.693758 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 06:02:04.695948 systemd[1]: Reached target basic.target - Basic System.
May 16 06:02:04.707375 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 16 06:02:04.726771 systemd-fsck[740]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 16 06:02:04.740792 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 16 06:02:04.766452 systemd[1]: Mounting sysroot.mount - /sysroot...
May 16 06:02:04.914298 kernel: EXT4-fs (vda9): mounted filesystem 14ea3086-9247-48be-9c0b-44ef9d324f10 r/w with ordered data mode. Quota mode: none.
May 16 06:02:04.915109 systemd[1]: Mounted sysroot.mount - /sysroot.
May 16 06:02:04.916172 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 16 06:02:04.924366 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 06:02:04.928534 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 16 06:02:04.931970 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 16 06:02:04.939628 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
May 16 06:02:04.966381 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (748)
May 16 06:02:04.966432 kernel: BTRFS info (device vda6): first mount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1
May 16 06:02:04.966463 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 06:02:04.966493 kernel: BTRFS info (device vda6): using free space tree
May 16 06:02:04.961368 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 06:02:04.961443 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 06:02:04.965507 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 16 06:02:04.971412 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 16 06:02:04.991056 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 06:02:04.990126 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 06:02:05.082954 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory
May 16 06:02:05.088910 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory
May 16 06:02:05.096578 initrd-setup-root[790]: cut: /sysroot/etc/shadow: No such file or directory
May 16 06:02:05.104214 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 06:02:05.193220 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 16 06:02:05.198318 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 16 06:02:05.200395 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 16 06:02:05.208276 kernel: BTRFS info (device vda6): last unmount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1
May 16 06:02:05.206711 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 16 06:02:05.245066 ignition[864]: INFO : Ignition 2.20.0
May 16 06:02:05.245066 ignition[864]: INFO : Stage: mount
May 16 06:02:05.245066 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 06:02:05.245066 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 16 06:02:05.245066 ignition[864]: INFO : mount: mount passed
May 16 06:02:05.245066 ignition[864]: INFO : Ignition finished successfully
May 16 06:02:05.250806 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 16 06:02:05.256605 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 16 06:02:05.987620 systemd-networkd[709]: eth0: Gained IPv6LL
May 16 06:02:12.119879 coreos-metadata[750]: May 16 06:02:12.119 WARN failed to locate config-drive, using the metadata service API instead
May 16 06:02:12.160461 coreos-metadata[750]: May 16 06:02:12.160 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 16 06:02:12.177476 coreos-metadata[750]: May 16 06:02:12.177 INFO Fetch successful
May 16 06:02:12.180225 coreos-metadata[750]: May 16 06:02:12.178 INFO wrote hostname ci-4230-1-1-n-15f3e1d893.novalocal to /sysroot/etc/hostname
May 16 06:02:12.182669 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
May 16 06:02:12.182913 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
May 16 06:02:12.193470 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 16 06:02:12.224576 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 06:02:12.244304 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (882)
May 16 06:02:12.251214 kernel: BTRFS info (device vda6): first mount of filesystem 82f90484-7c6e-4c5a-90fb-411944eb49d1
May 16 06:02:12.251346 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 16 06:02:12.255545 kernel: BTRFS info (device vda6): using free space tree
May 16 06:02:12.267301 kernel: BTRFS info (device vda6): auto enabling async discard
May 16 06:02:12.272638 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 06:02:12.320813 ignition[900]: INFO : Ignition 2.20.0
May 16 06:02:12.320813 ignition[900]: INFO : Stage: files
May 16 06:02:12.323982 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 06:02:12.323982 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 16 06:02:12.323982 ignition[900]: DEBUG : files: compiled without relabeling support, skipping
May 16 06:02:12.329480 ignition[900]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 06:02:12.329480 ignition[900]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 06:02:12.333720 ignition[900]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 06:02:12.335982 ignition[900]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 06:02:12.335982 ignition[900]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 06:02:12.334799 unknown[900]: wrote ssh authorized keys file for user: core
May 16 06:02:12.341434 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 16 06:02:12.341434 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 16 06:02:12.426524 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 16 06:02:12.747322 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 16 06:02:12.747322 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 06:02:12.747322 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 16 06:02:13.479860 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 16 06:02:13.904015 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 16 06:02:13.904015 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 06:02:13.908650 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 16 06:02:14.554497 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 16 06:02:17.161862 ignition[900]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 16 06:02:17.163509 ignition[900]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 16 06:02:17.165140 ignition[900]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 06:02:17.165140 ignition[900]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 06:02:17.165140 ignition[900]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 16 06:02:17.165140 ignition[900]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
May 16 06:02:17.165140 ignition[900]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
May 16 06:02:17.165140 ignition[900]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 06:02:17.179355 ignition[900]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 06:02:17.179355 ignition[900]: INFO : files: files passed
May 16 06:02:17.179355 ignition[900]: INFO : Ignition finished successfully
May 16 06:02:17.167415 systemd[1]: Finished ignition-files.service - Ignition (files).
May 16 06:02:17.177451 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 06:02:17.182371 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 06:02:17.188877 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 06:02:17.188965 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 06:02:17.205328 initrd-setup-root-after-ignition[928]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 06:02:17.205328 initrd-setup-root-after-ignition[928]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 06:02:17.210569 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 06:02:17.211124 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 06:02:17.215619 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 06:02:17.223450 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 06:02:17.281102 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 06:02:17.281389 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 06:02:17.285065 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 06:02:17.287508 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 06:02:17.291826 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 06:02:17.301541 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 06:02:17.338978 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 06:02:17.350509 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 06:02:17.379029 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 06:02:17.380807 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 06:02:17.384056 systemd[1]: Stopped target timers.target - Timer Units.
May 16 06:02:17.386910 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 06:02:17.387218 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 06:02:17.390225 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 06:02:17.392053 systemd[1]: Stopped target basic.target - Basic System.
May 16 06:02:17.394963 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 06:02:17.397556 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 06:02:17.401807 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 06:02:17.404872 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 06:02:17.407780 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 06:02:17.410762 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 06:02:17.413701 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 06:02:17.416542 systemd[1]: Stopped target swap.target - Swaps.
May 16 06:02:17.418907 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 06:02:17.419191 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 06:02:17.422317 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 06:02:17.424373 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 06:02:17.427083 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 06:02:17.427859 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 06:02:17.430139 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 06:02:17.430631 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 06:02:17.433945 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 06:02:17.434385 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 06:02:17.438013 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 06:02:17.438452 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 06:02:17.448353 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 06:02:17.449694 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 06:02:17.451544 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 06:02:17.465475 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 06:02:17.466001 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 06:02:17.466177 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 06:02:17.467436 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 06:02:17.467559 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 06:02:17.480865 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 06:02:17.480963 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 06:02:17.492014 ignition[952]: INFO : Ignition 2.20.0
May 16 06:02:17.492014 ignition[952]: INFO : Stage: umount
May 16 06:02:17.492014 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 06:02:17.492014 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
May 16 06:02:17.492014 ignition[952]: INFO : umount: umount passed
May 16 06:02:17.492014 ignition[952]: INFO : Ignition finished successfully
May 16 06:02:17.494597 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 06:02:17.494704 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 06:02:17.498641 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 06:02:17.498738 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 06:02:17.501358 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 06:02:17.501410 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 06:02:17.502829 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 16 06:02:17.502873 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 16 06:02:17.503940 systemd[1]: Stopped target network.target - Network.
May 16 06:02:17.504923 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 06:02:17.504971 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 06:02:17.506126 systemd[1]: Stopped target paths.target - Path Units.
May 16 06:02:17.507334 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 06:02:17.512304 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 06:02:17.513201 systemd[1]: Stopped target slices.target - Slice Units.
May 16 06:02:17.514344 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 06:02:17.515611 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 06:02:17.515652 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 06:02:17.516558 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 06:02:17.516591 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 06:02:17.517572 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 06:02:17.517619 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 06:02:17.518566 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 06:02:17.518609 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 06:02:17.519677 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 06:02:17.521017 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 06:02:17.523972 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 06:02:17.525872 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 06:02:17.525967 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 06:02:17.529692 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 16 06:02:17.529921 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 06:02:17.530003 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 06:02:17.532390 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 06:02:17.532509 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 06:02:17.534835 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 16 06:02:17.535477 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 06:02:17.535522 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 06:02:17.539931 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 06:02:17.539978 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 06:02:17.552354 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 06:02:17.553583 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 06:02:17.553664 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 06:02:17.554228 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 06:02:17.554291 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 06:02:17.555722 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 06:02:17.555765 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 06:02:17.556473 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 06:02:17.556515 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 06:02:17.557986 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 06:02:17.559791 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 06:02:17.559854 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 16 06:02:17.567756 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 06:02:17.567891 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 16 06:02:17.570610 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 06:02:17.570744 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 06:02:17.572131 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 06:02:17.572192 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 16 06:02:17.573013 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 06:02:17.573048 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 06:02:17.574183 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 06:02:17.574231 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 16 06:02:17.575870 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 06:02:17.575914 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 16 06:02:17.577102 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 06:02:17.577150 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 06:02:17.584473 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 16 06:02:17.585712 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 16 06:02:17.585780 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 06:02:17.587164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 06:02:17.587213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 06:02:17.589662 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 16 06:02:17.589732 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 16 06:02:17.590078 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 06:02:17.590166 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 16 06:02:17.591885 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 16 06:02:17.598399 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 16 06:02:17.605500 systemd[1]: Switching root.
May 16 06:02:17.641052 systemd-journald[185]: Journal stopped
May 16 06:02:19.374937 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
May 16 06:02:19.375006 kernel: SELinux: policy capability network_peer_controls=1
May 16 06:02:19.375031 kernel: SELinux: policy capability open_perms=1
May 16 06:02:19.375043 kernel: SELinux: policy capability extended_socket_class=1
May 16 06:02:19.375055 kernel: SELinux: policy capability always_check_network=0
May 16 06:02:19.375066 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 06:02:19.375078 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 06:02:19.375093 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 06:02:19.375104 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 06:02:19.375124 systemd[1]: Successfully loaded SELinux policy in 78.133ms.
May 16 06:02:19.375144 kernel: audit: type=1403 audit(1747375338.262:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 06:02:19.375156 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.669ms.
May 16 06:02:19.375170 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 06:02:19.375182 systemd[1]: Detected virtualization kvm.
May 16 06:02:19.375195 systemd[1]: Detected architecture x86-64.
May 16 06:02:19.375210 systemd[1]: Detected first boot.
May 16 06:02:19.375222 systemd[1]: Hostname set to .
May 16 06:02:19.377832 systemd[1]: Initializing machine ID from VM UUID.
May 16 06:02:19.377857 zram_generator::config[996]: No configuration found.
May 16 06:02:19.377871 kernel: Guest personality initialized and is inactive
May 16 06:02:19.377883 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
May 16 06:02:19.377895 kernel: Initialized host personality
May 16 06:02:19.377906 kernel: NET: Registered PF_VSOCK protocol family
May 16 06:02:19.377917 systemd[1]: Populated /etc with preset unit settings.
May 16 06:02:19.377935 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 16 06:02:19.377948 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 06:02:19.377960 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 16 06:02:19.377973 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 06:02:19.377985 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 16 06:02:19.378000 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 16 06:02:19.378013 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 16 06:02:19.378025 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 16 06:02:19.378040 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 16 06:02:19.378053 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 16 06:02:19.378065 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 16 06:02:19.378078 systemd[1]: Created slice user.slice - User and Session Slice.
May 16 06:02:19.378090 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 06:02:19.378102 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 06:02:19.378115 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 16 06:02:19.378127 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 16 06:02:19.378142 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 16 06:02:19.378155 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 06:02:19.378168 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 16 06:02:19.378180 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 06:02:19.378192 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 16 06:02:19.378205 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 16 06:02:19.378217 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 16 06:02:19.378231 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 16 06:02:19.378275 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 06:02:19.378289 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 06:02:19.378301 systemd[1]: Reached target slices.target - Slice Units.
May 16 06:02:19.378313 systemd[1]: Reached target swap.target - Swaps.
May 16 06:02:19.378326 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 16 06:02:19.378338 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 16 06:02:19.378350 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 16 06:02:19.378362 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 06:02:19.378375 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 06:02:19.378389 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 06:02:19.378402 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 16 06:02:19.378414 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 16 06:02:19.378428 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 16 06:02:19.378440 systemd[1]: Mounting media.mount - External Media Directory...
May 16 06:02:19.378456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 06:02:19.378468 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 16 06:02:19.378480 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 16 06:02:19.378494 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 16 06:02:19.378507 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 06:02:19.378529 systemd[1]: Reached target machines.target - Containers.
May 16 06:02:19.378542 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 16 06:02:19.378554 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 06:02:19.378566 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 06:02:19.378578 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 16 06:02:19.378591 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 06:02:19.378603 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 06:02:19.378618 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 06:02:19.378631 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 16 06:02:19.378643 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 06:02:19.378656 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 06:02:19.378668 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 06:02:19.378680 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 16 06:02:19.378692 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 06:02:19.378705 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 06:02:19.378720 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 06:02:19.378732 kernel: loop: module loaded
May 16 06:02:19.378744 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 06:02:19.378756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 06:02:19.378769 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 06:02:19.378781 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 16 06:02:19.378793 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 16 06:02:19.378806 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 06:02:19.378818 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 06:02:19.378833 systemd[1]: Stopped verity-setup.service.
May 16 06:02:19.378846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 06:02:19.378857 kernel: ACPI: bus type drm_connector registered
May 16 06:02:19.378869 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 16 06:02:19.378903 systemd-journald[1093]: Collecting audit messages is disabled.
May 16 06:02:19.378933 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 16 06:02:19.378945 kernel: fuse: init (API version 7.39)
May 16 06:02:19.378960 systemd[1]: Mounted media.mount - External Media Directory.
May 16 06:02:19.378972 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 16 06:02:19.378987 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 16 06:02:19.379000 systemd-journald[1093]: Journal started
May 16 06:02:19.379027 systemd-journald[1093]: Runtime Journal (/run/log/journal/3ce72df2ba0b4fa8b48a2c0ca1355291) is 8M, max 78.3M, 70.3M free.
May 16 06:02:19.007856 systemd[1]: Queued start job for default target multi-user.target.
May 16 06:02:19.382455 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 06:02:19.018573 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 16 06:02:19.019055 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 06:02:19.381885 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 16 06:02:19.383794 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 16 06:02:19.384625 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 06:02:19.386616 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 06:02:19.386778 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 16 06:02:19.387675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 06:02:19.387845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 06:02:19.388859 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 06:02:19.389034 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 06:02:19.389979 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 06:02:19.390155 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 06:02:19.391336 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 06:02:19.391585 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 16 06:02:19.392633 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 06:02:19.392882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 06:02:19.393794 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 06:02:19.394927 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 06:02:19.395876 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 16 06:02:19.396978 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 16 06:02:19.409102 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 06:02:19.415898 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 16 06:02:19.420390 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 16 06:02:19.422718 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 06:02:19.422838 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 06:02:19.424813 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 16 06:02:19.429372 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 06:02:19.436692 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 16 06:02:19.437798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 06:02:19.448384 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 16 06:02:19.456397 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 16 06:02:19.457035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 06:02:19.458127 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 16 06:02:19.458834 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 06:02:19.460012 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 06:02:19.462405 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 16 06:02:19.466412 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 16 06:02:19.469981 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 16 06:02:19.482104 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 16 06:02:19.483592 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 06:02:19.500033 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 06:02:19.503443 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 16 06:02:19.512271 systemd-journald[1093]: Time spent on flushing to /var/log/journal/3ce72df2ba0b4fa8b48a2c0ca1355291 is 42.645ms for 962 entries.
May 16 06:02:19.512271 systemd-journald[1093]: System Journal (/var/log/journal/3ce72df2ba0b4fa8b48a2c0ca1355291) is 8M, max 584.8M, 576.8M free.
May 16 06:02:19.601114 systemd-journald[1093]: Received client request to flush runtime journal.
May 16 06:02:19.601153 kernel: loop0: detected capacity change from 0 to 8
May 16 06:02:19.601168 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 06:02:19.601181 kernel: loop1: detected capacity change from 0 to 147912
May 16 06:02:19.528724 udevadm[1141]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 16 06:02:19.555189 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 16 06:02:19.556488 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 16 06:02:19.568481 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 16 06:02:19.571036 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 06:02:19.602889 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 16 06:02:19.667499 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 16 06:02:19.673808 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 16 06:02:19.696536 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 06:02:19.721285 kernel: loop2: detected capacity change from 0 to 138176
May 16 06:02:19.735430 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
May 16 06:02:19.735449 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
May 16 06:02:19.744611 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 06:02:19.796308 kernel: loop3: detected capacity change from 0 to 224512
May 16 06:02:19.892295 kernel: loop4: detected capacity change from 0 to 8
May 16 06:02:19.897293 kernel: loop5: detected capacity change from 0 to 147912
May 16 06:02:19.975289 kernel: loop6: detected capacity change from 0 to 138176
May 16 06:02:20.020794 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 06:02:20.046310 kernel: loop7: detected capacity change from 0 to 224512
May 16 06:02:20.114127 (sd-merge)[1160]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
May 16 06:02:20.118340 (sd-merge)[1160]: Merged extensions into '/usr'.
May 16 06:02:20.132499 systemd[1]: Reload requested from client PID 1135 ('systemd-sysext') (unit systemd-sysext.service)...
May 16 06:02:20.132515 systemd[1]: Reloading...
May 16 06:02:20.217265 zram_generator::config[1187]: No configuration found.
May 16 06:02:20.426429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 06:02:20.516736 systemd[1]: Reloading finished in 383 ms.
May 16 06:02:20.548636 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 16 06:02:20.551953 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 16 06:02:20.560620 systemd[1]: Starting ensure-sysext.service...
May 16 06:02:20.563435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 06:02:20.570534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 06:02:20.586414 systemd[1]: Reload requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
May 16 06:02:20.586439 systemd[1]: Reloading...
May 16 06:02:20.615715 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 06:02:20.616099 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 16 06:02:20.617082 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 06:02:20.620463 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
May 16 06:02:20.620540 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
May 16 06:02:20.630130 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
May 16 06:02:20.630139 systemd-tmpfiles[1245]: Skipping /boot
May 16 06:02:20.643556 systemd-udevd[1246]: Using default interface naming scheme 'v255'.
May 16 06:02:20.655150 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
May 16 06:02:20.656341 systemd-tmpfiles[1245]: Skipping /boot
May 16 06:02:20.706296 zram_generator::config[1274]: No configuration found.
May 16 06:02:20.716936 ldconfig[1130]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 06:02:20.879341 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1321)
May 16 06:02:20.946263 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 16 06:02:20.946348 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 16 06:02:20.974094 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 06:02:20.974295 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 16 06:02:20.990479 kernel: ACPI: button: Power Button [PWRF]
May 16 06:02:21.038272 kernel: mousedev: PS/2 mouse device common for all mice
May 16 06:02:21.088263 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 16 06:02:21.088366 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 16 06:02:21.095183 kernel: Console: switching to colour dummy device 80x25
May 16 06:02:21.095301 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 16 06:02:21.095325 kernel: [drm] features: -context_init
May 16 06:02:21.095343 kernel: [drm] number of scanouts: 1
May 16 06:02:21.095359 kernel: [drm] number of cap sets: 0
May 16 06:02:21.101279 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 16 06:02:21.110184 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 16 06:02:21.110287 kernel: Console: switching to colour frame buffer device 160x50
May 16 06:02:21.117029 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 16 06:02:21.119430 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 06:02:21.122889 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 16 06:02:21.123708 systemd[1]: Reloading finished in 536 ms.
May 16 06:02:21.138802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 06:02:21.141643 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 16 06:02:21.155094 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 06:02:21.185707 systemd[1]: Finished ensure-sysext.service.
May 16 06:02:21.198528 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 16 06:02:21.217125 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 06:02:21.222393 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 06:02:21.234677 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 16 06:02:21.234946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 16 06:02:21.237437 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 16 06:02:21.244477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 16 06:02:21.249091 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 16 06:02:21.266748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 16 06:02:21.273468 lvm[1368]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 06:02:21.273506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 16 06:02:21.275617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 16 06:02:21.279302 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 16 06:02:21.280310 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 16 06:02:21.287835 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 16 06:02:21.298486 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 06:02:21.306535 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 06:02:21.317592 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 16 06:02:21.329438 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 16 06:02:21.336489 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 06:02:21.338716 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 16 06:02:21.341085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 06:02:21.342308 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 16 06:02:21.343038 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 06:02:21.343380 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 16 06:02:21.345140 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 06:02:21.347124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 16 06:02:21.349404 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 16 06:02:21.350218 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 06:02:21.354732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 16 06:02:21.355895 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 16 06:02:21.374283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 06:02:21.389394 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 16 06:02:21.391153 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 06:02:21.391230 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 16 06:02:21.395974 augenrules[1409]: No rules
May 16 06:02:21.396295 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 06:02:21.402451 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 16 06:02:21.405764 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 06:02:21.406408 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 06:02:21.408332 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 16 06:02:21.426213 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 16 06:02:21.438450 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 16 06:02:21.439460 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 16 06:02:21.459519 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 16 06:02:21.473032 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 16 06:02:21.476440 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 06:02:21.482978 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 16 06:02:21.519478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 06:02:21.581117 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 06:02:21.582679 systemd[1]: Reached target time-set.target - System Time Set.
May 16 06:02:21.602873 systemd-networkd[1382]: lo: Link UP
May 16 06:02:21.602889 systemd-networkd[1382]: lo: Gained carrier
May 16 06:02:21.604407 systemd-networkd[1382]: Enumeration completed
May 16 06:02:21.604513 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 06:02:21.609841 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 06:02:21.609855 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 06:02:21.610447 systemd-networkd[1382]: eth0: Link UP
May 16 06:02:21.610455 systemd-networkd[1382]: eth0: Gained carrier
May 16 06:02:21.610471 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 06:02:21.611219 systemd-resolved[1384]: Positive Trust Anchors:
May 16 06:02:21.611552 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 06:02:21.611605 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 06:02:21.617463 systemd-resolved[1384]: Using system hostname 'ci-4230-1-1-n-15f3e1d893.novalocal'.
May 16 06:02:21.617663 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 16 06:02:21.622503 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 16 06:02:21.624371 systemd-networkd[1382]: eth0: DHCPv4 address 172.24.4.222/24, gateway 172.24.4.1 acquired from 172.24.4.1
May 16 06:02:21.625281 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
May 16 06:02:21.625907 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 06:02:21.626651 systemd[1]: Reached target network.target - Network.
May 16 06:02:21.628988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 06:02:21.633213 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 06:02:21.638124 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 06:02:21.643329 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 06:02:21.645736 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 06:02:21.646548 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 06:02:21.647106 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 06:02:21.650850 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 06:02:21.650902 systemd[1]: Reached target paths.target - Path Units.
May 16 06:02:21.651512 systemd[1]: Reached target timers.target - Timer Units.
May 16 06:02:21.660305 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 06:02:21.664664 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 06:02:21.668735 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 06:02:21.670707 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 06:02:21.671280 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 06:02:21.682138 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 06:02:21.683646 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 06:02:21.686469 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 06:02:21.687996 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 06:02:21.691031 systemd[1]: Reached target sockets.target - Socket Units.
May 16 06:02:21.693369 systemd[1]: Reached target basic.target - Basic System.
May 16 06:02:21.694790 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 06:02:21.694825 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 06:02:21.703729 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 06:02:21.708715 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 16 06:02:21.716424 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 06:02:21.722495 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 06:02:21.734811 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 06:02:21.737383 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 06:02:21.741430 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 06:02:21.749285 jq[1445]: false
May 16 06:02:21.748950 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 06:02:21.754427 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 06:02:21.765725 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 06:02:21.769137 extend-filesystems[1446]: Found loop4
May 16 06:02:21.775370 extend-filesystems[1446]: Found loop5
May 16 06:02:21.775370 extend-filesystems[1446]: Found loop6
May 16 06:02:21.775370 extend-filesystems[1446]: Found loop7
May 16 06:02:21.775370 extend-filesystems[1446]: Found vda
May 16 06:02:21.775370 extend-filesystems[1446]: Found vda1
May 16 06:02:21.775370 extend-filesystems[1446]: Found vda2
May 16 06:02:21.775370 extend-filesystems[1446]: Found vda3
May 16 06:02:21.775370 extend-filesystems[1446]: Found usr
May 16 06:02:21.775370 extend-filesystems[1446]: Found vda4
May 16 06:02:21.775370 extend-filesystems[1446]: Found vda6
May 16 06:02:21.775370 extend-filesystems[1446]: Found vda7
May 16 06:02:21.775370 extend-filesystems[1446]: Found vda9
May 16 06:02:21.775370 extend-filesystems[1446]: Checking size of /dev/vda9
May 16 06:02:21.920978 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
May 16 06:02:21.921016 kernel: EXT4-fs (vda9): resized filesystem to 2014203
May 16 06:02:21.921056 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1323)
May 16 06:02:21.779471 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 06:02:21.823467 dbus-daemon[1442]: [system] SELinux support is enabled
May 16 06:02:21.921590 extend-filesystems[1446]: Resized partition /dev/vda9
May 16 06:02:21.789563 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 06:02:21.942842 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024)
May 16 06:02:21.942842 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 06:02:21.942842 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 06:02:21.942842 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
May 16 06:02:21.790159 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 06:02:21.958659 extend-filesystems[1446]: Resized filesystem in /dev/vda9
May 16 06:02:21.797388 systemd[1]: Starting update-engine.service - Update Engine...
May 16 06:02:21.959307 update_engine[1460]: I20250516 06:02:21.865690 1460 main.cc:92] Flatcar Update Engine starting
May 16 06:02:21.959307 update_engine[1460]: I20250516 06:02:21.874506 1460 update_check_scheduler.cc:74] Next update check in 8m27s
May 16 06:02:21.828038 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 06:02:21.836843 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 06:02:21.959847 jq[1464]: true
May 16 06:02:21.849979 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 06:02:21.850608 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 06:02:21.850888 systemd[1]: motdgen.service: Deactivated successfully.
May 16 06:02:21.852293 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 06:02:21.878687 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 06:02:21.878917 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 06:02:21.901604 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 06:02:21.901843 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 06:02:21.926645 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 06:02:21.926680 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 06:02:21.938588 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 06:02:21.938612 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 06:02:21.956554 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 06:02:21.965395 jq[1471]: true
May 16 06:02:21.972823 tar[1470]: linux-amd64/LICENSE
May 16 06:02:21.972823 tar[1470]: linux-amd64/helm
May 16 06:02:21.978778 systemd[1]: Started update-engine.service - Update Engine.
May 16 06:02:21.999467 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 06:02:22.027807 bash[1499]: Updated "/home/core/.ssh/authorized_keys"
May 16 06:02:22.030306 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 06:02:22.043514 systemd[1]: Starting sshkeys.service...
May 16 06:02:22.044685 systemd-logind[1455]: New seat seat0.
May 16 06:02:22.061967 systemd-logind[1455]: Watching system buttons on /dev/input/event2 (Power Button)
May 16 06:02:22.061987 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 16 06:02:22.062185 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 06:02:22.085229 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 16 06:02:22.105172 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 16 06:02:22.270726 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 06:02:22.295720 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 06:02:22.333823 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 06:02:22.349157 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 06:02:22.366065 systemd[1]: issuegen.service: Deactivated successfully.
May 16 06:02:22.366322 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 06:02:22.382687 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 06:02:22.411746 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 06:02:22.423742 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 06:02:22.444014 containerd[1477]: time="2025-05-16T06:02:22.436595529Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
May 16 06:02:22.438674 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 16 06:02:22.441127 systemd[1]: Reached target getty.target - Login Prompts.
May 16 06:02:22.480640 containerd[1477]: time="2025-05-16T06:02:22.480569121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 16 06:02:22.482447 containerd[1477]: time="2025-05-16T06:02:22.482418961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 16 06:02:22.482529 containerd[1477]: time="2025-05-16T06:02:22.482500554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 16 06:02:22.483259 containerd[1477]: time="2025-05-16T06:02:22.482598287Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 16 06:02:22.483259 containerd[1477]: time="2025-05-16T06:02:22.482775560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 16 06:02:22.483259 containerd[1477]: time="2025-05-16T06:02:22.482794104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 16 06:02:22.483259 containerd[1477]: time="2025-05-16T06:02:22.482858185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 16 06:02:22.483259 containerd[1477]: time="2025-05-16T06:02:22.482873523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 16 06:02:22.483259 containerd[1477]: time="2025-05-16T06:02:22.483066746Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 06:02:22.483259 containerd[1477]: time="2025-05-16T06:02:22.483083698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 16 06:02:22.483259 containerd[1477]: time="2025-05-16T06:02:22.483098485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 16 06:02:22.483259 containerd[1477]: time="2025-05-16T06:02:22.483109376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 16 06:02:22.483259 containerd[1477]: time="2025-05-16T06:02:22.483194776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 16 06:02:22.483698 containerd[1477]: time="2025-05-16T06:02:22.483679976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 16 06:02:22.483873 containerd[1477]: time="2025-05-16T06:02:22.483853351Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 06:02:22.483938 containerd[1477]: time="2025-05-16T06:02:22.483924755Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 16 06:02:22.484069 containerd[1477]: time="2025-05-16T06:02:22.484052084Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 16 06:02:22.484182 containerd[1477]: time="2025-05-16T06:02:22.484165496Z" level=info msg="metadata content store policy set" policy=shared
May 16 06:02:22.494085 containerd[1477]: time="2025-05-16T06:02:22.494061476Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 16 06:02:22.494287 containerd[1477]: time="2025-05-16T06:02:22.494230804Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 16 06:02:22.494375 containerd[1477]: time="2025-05-16T06:02:22.494345960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 16 06:02:22.494665 containerd[1477]: time="2025-05-16T06:02:22.494450245Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 16 06:02:22.494665 containerd[1477]: time="2025-05-16T06:02:22.494470874Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 16 06:02:22.494665 containerd[1477]: time="2025-05-16T06:02:22.494614083Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 16 06:02:22.495280 containerd[1477]: time="2025-05-16T06:02:22.495260756Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 16 06:02:22.495455 containerd[1477]: time="2025-05-16T06:02:22.495437818Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495586697Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495611223Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495626802Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495640738Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495655286Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495670855Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495686294Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495700951Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495714416Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495727902Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495756285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495771714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495793254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496051 containerd[1477]: time="2025-05-16T06:02:22.495808713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495824052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495839391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495853327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495868656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495883974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495901187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495916195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495929520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495942785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495957963Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495979974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.495995534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 16 06:02:22.496676 containerd[1477]: time="2025-05-16T06:02:22.496008548Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 16 06:02:22.497991 containerd[1477]: time="2025-05-16T06:02:22.496944844Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 16 06:02:22.497991 containerd[1477]: time="2025-05-16T06:02:22.496971184Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 16 06:02:22.497991 containerd[1477]: time="2025-05-16T06:02:22.497046314Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 16 06:02:22.497991 containerd[1477]: time="2025-05-16T06:02:22.497064058Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 16 06:02:22.497991 containerd[1477]: time="2025-05-16T06:02:22.497075128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 16 06:02:22.497991 containerd[1477]: time="2025-05-16T06:02:22.497095186Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 16 06:02:22.497991 containerd[1477]: time="2025-05-16T06:02:22.497107209Z" level=info msg="NRI interface is disabled by configuration."
May 16 06:02:22.497991 containerd[1477]: time="2025-05-16T06:02:22.497119802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 16 06:02:22.498163 containerd[1477]: time="2025-05-16T06:02:22.497444231Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 16 06:02:22.498163 containerd[1477]: time="2025-05-16T06:02:22.497500757Z" level=info msg="Connect containerd service"
May 16 06:02:22.498163 containerd[1477]: time="2025-05-16T06:02:22.497526395Z" level=info msg="using legacy CRI server"
May 16 06:02:22.498163 containerd[1477]: time="2025-05-16T06:02:22.497533238Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 16 06:02:22.498163 containerd[1477]: time="2025-05-16T06:02:22.497648374Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 16 06:02:22.498829 containerd[1477]: time="2025-05-16T06:02:22.498808429Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 06:02:22.499017 containerd[1477]: time="2025-05-16T06:02:22.498988527Z" level=info msg="Start subscribing containerd event"
May 16 06:02:22.499092 containerd[1477]: time="2025-05-16T06:02:22.499078997Z" level=info msg="Start recovering state"
May 16 06:02:22.499472 containerd[1477]: time="2025-05-16T06:02:22.499179084Z" level=info msg="Start event monitor"
May 16 06:02:22.499472 containerd[1477]: time="2025-05-16T06:02:22.499194964Z" level=info msg="Start
snapshots syncer" May 16 06:02:22.499472 containerd[1477]: time="2025-05-16T06:02:22.499203290Z" level=info msg="Start cni network conf syncer for default" May 16 06:02:22.499472 containerd[1477]: time="2025-05-16T06:02:22.499211716Z" level=info msg="Start streaming server" May 16 06:02:22.499771 containerd[1477]: time="2025-05-16T06:02:22.499754464Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 06:02:22.499873 containerd[1477]: time="2025-05-16T06:02:22.499858108Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 06:02:22.500044 systemd[1]: Started containerd.service - containerd container runtime. May 16 06:02:22.505836 containerd[1477]: time="2025-05-16T06:02:22.505143972Z" level=info msg="containerd successfully booted in 0.070489s" May 16 06:02:22.690210 tar[1470]: linux-amd64/README.md May 16 06:02:22.700822 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 06:02:23.075406 systemd-networkd[1382]: eth0: Gained IPv6LL May 16 06:02:23.076044 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. May 16 06:02:23.078144 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 06:02:23.084196 systemd[1]: Reached target network-online.target - Network is Online. May 16 06:02:23.099406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 06:02:23.106010 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 06:02:23.146838 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 06:02:25.537524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 06:02:25.549141 (kubelet)[1558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 06:02:26.375527 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
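The `failed to load cni during init` error above is containerd reporting that `/etc/cni/net.d` (the `NetworkPluginConfDir` from its config dump) is empty; a network add-on normally writes a conflist there later. A minimal illustrative conflist of the kind the loader expects (names and subnet are made-up examples, not from this host) looks like:

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
    }
  ]
}
```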
May 16 06:02:26.390525 systemd[1]: Started sshd@0-172.24.4.222:22-172.24.4.1:33816.service - OpenSSH per-connection server daemon (172.24.4.1:33816).
May 16 06:02:26.921015 kubelet[1558]: E0516 06:02:26.920826 1558 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 06:02:26.924202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 06:02:26.924587 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 06:02:26.925307 systemd[1]: kubelet.service: Consumed 2.425s CPU time, 269.1M memory peak.
May 16 06:02:27.534228 login[1530]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying
May 16 06:02:27.535016 login[1529]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 16 06:02:27.569407 systemd-logind[1455]: New session 1 of user core.
May 16 06:02:27.573954 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 06:02:27.584933 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 06:02:27.614445 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 06:02:27.624333 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 06:02:27.644431 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 06:02:27.650442 systemd-logind[1455]: New session c1 of user core.
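The kubelet failure above is expected on a node where `kubeadm init`/`kubeadm join` has not yet run: that is what generates `/var/lib/kubelet/config.yaml`, and kubelet exits 1 without it (the `status=1/FAILURE` systemd reports). A minimal shell sketch of the same existence check — not kubelet's actual code, with the path taken verbatim from the log line:

```shell
# Report whether a kubelet config file exists at the given path.
check_kubelet_config() {
    if [ -f "$1" ]; then
        echo "present"
    else
        echo "missing"
    fi
}

# Path from the log; on this host it is absent until kubeadm runs.
check_kubelet_config /var/lib/kubelet/config.yaml
```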
May 16 06:02:27.760302 sshd[1565]: Accepted publickey for core from 172.24.4.1 port 33816 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:02:27.760866 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:02:27.772217 systemd-logind[1455]: New session 3 of user core.
May 16 06:02:27.863294 systemd[1574]: Queued start job for default target default.target.
May 16 06:02:27.873283 systemd[1574]: Created slice app.slice - User Application Slice.
May 16 06:02:27.873313 systemd[1574]: Reached target paths.target - Paths.
May 16 06:02:27.873356 systemd[1574]: Reached target timers.target - Timers.
May 16 06:02:27.874814 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 06:02:27.903409 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 06:02:27.903533 systemd[1574]: Reached target sockets.target - Sockets.
May 16 06:02:27.903579 systemd[1574]: Reached target basic.target - Basic System.
May 16 06:02:27.903617 systemd[1574]: Reached target default.target - Main User Target.
May 16 06:02:27.903644 systemd[1574]: Startup finished in 238ms.
May 16 06:02:27.903795 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 06:02:27.913439 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 06:02:27.914229 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 06:02:28.480270 systemd[1]: Started sshd@1-172.24.4.222:22-172.24.4.1:33824.service - OpenSSH per-connection server daemon (172.24.4.1:33824).
May 16 06:02:28.542736 login[1530]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 16 06:02:28.554465 systemd-logind[1455]: New session 2 of user core.
May 16 06:02:28.562734 systemd[1]: Started session-2.scope - Session 2 of User core.
May 16 06:02:28.771339 coreos-metadata[1441]: May 16 06:02:28.770 WARN failed to locate config-drive, using the metadata service API instead
May 16 06:02:28.814871 coreos-metadata[1441]: May 16 06:02:28.814 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
May 16 06:02:29.002961 coreos-metadata[1441]: May 16 06:02:29.002 INFO Fetch successful
May 16 06:02:29.002961 coreos-metadata[1441]: May 16 06:02:29.002 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
May 16 06:02:29.017706 coreos-metadata[1441]: May 16 06:02:29.017 INFO Fetch successful
May 16 06:02:29.017706 coreos-metadata[1441]: May 16 06:02:29.017 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
May 16 06:02:29.032281 coreos-metadata[1441]: May 16 06:02:29.032 INFO Fetch successful
May 16 06:02:29.032281 coreos-metadata[1441]: May 16 06:02:29.032 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
May 16 06:02:29.046939 coreos-metadata[1441]: May 16 06:02:29.046 INFO Fetch successful
May 16 06:02:29.046939 coreos-metadata[1441]: May 16 06:02:29.046 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
May 16 06:02:29.059757 coreos-metadata[1441]: May 16 06:02:29.059 INFO Fetch successful
May 16 06:02:29.059757 coreos-metadata[1441]: May 16 06:02:29.059 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
May 16 06:02:29.073434 coreos-metadata[1441]: May 16 06:02:29.073 INFO Fetch successful
May 16 06:02:29.134809 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 16 06:02:29.136980 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
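The agent's fetch sequence above falls back from a config drive to the EC2-compatible metadata service and walks one key per request. A sketch of the equivalent manual queries — endpoint and key names copied from the log, the `curl` flags an assumption about how one would reproduce them by hand, not the agent's own code:

```shell
# Base URL of the EC2-compatible metadata service, as logged above.
MD_BASE="http://169.254.169.254/latest/meta-data"

# Fetch one metadata key; -s suppresses progress, -f makes HTTP errors
# return a non-zero exit status so a caller could retry ("Attempt #1").
fetch_md() {
    curl -sf "$MD_BASE/$1"
}

# The same keys the agent requested, in order:
for key in hostname instance-id instance-type local-ipv4 public-ipv4; do
    echo "would fetch: $MD_BASE/$key"
done
```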
May 16 06:02:29.176757 coreos-metadata[1503]: May 16 06:02:29.176 WARN failed to locate config-drive, using the metadata service API instead
May 16 06:02:29.219553 coreos-metadata[1503]: May 16 06:02:29.219 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
May 16 06:02:29.233774 coreos-metadata[1503]: May 16 06:02:29.233 INFO Fetch successful
May 16 06:02:29.233774 coreos-metadata[1503]: May 16 06:02:29.233 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
May 16 06:02:29.246601 coreos-metadata[1503]: May 16 06:02:29.246 INFO Fetch successful
May 16 06:02:29.252508 unknown[1503]: wrote ssh authorized keys file for user: core
May 16 06:02:29.289040 update-ssh-keys[1617]: Updated "/home/core/.ssh/authorized_keys"
May 16 06:02:29.290843 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 16 06:02:29.295141 systemd[1]: Finished sshkeys.service.
May 16 06:02:29.300275 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 06:02:29.301191 systemd[1]: Startup finished in 1.191s (kernel) + 17.436s (initrd) + 11.116s (userspace) = 29.743s.
May 16 06:02:29.903753 sshd[1598]: Accepted publickey for core from 172.24.4.1 port 33824 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:02:29.906359 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:02:29.918999 systemd-logind[1455]: New session 4 of user core.
May 16 06:02:29.925594 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 06:02:30.545296 sshd[1621]: Connection closed by 172.24.4.1 port 33824
May 16 06:02:30.544414 sshd-session[1598]: pam_unix(sshd:session): session closed for user core
May 16 06:02:30.563427 systemd[1]: sshd@1-172.24.4.222:22-172.24.4.1:33824.service: Deactivated successfully.
May 16 06:02:30.566813 systemd[1]: session-4.scope: Deactivated successfully.
May 16 06:02:30.568683 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit.
May 16 06:02:30.578596 systemd[1]: Started sshd@2-172.24.4.222:22-172.24.4.1:33832.service - OpenSSH per-connection server daemon (172.24.4.1:33832).
May 16 06:02:30.582021 systemd-logind[1455]: Removed session 4.
May 16 06:02:31.813109 sshd[1626]: Accepted publickey for core from 172.24.4.1 port 33832 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:02:31.815942 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:02:31.829113 systemd-logind[1455]: New session 5 of user core.
May 16 06:02:31.836601 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 06:02:32.399724 sshd[1629]: Connection closed by 172.24.4.1 port 33832
May 16 06:02:32.401125 sshd-session[1626]: pam_unix(sshd:session): session closed for user core
May 16 06:02:32.421356 systemd[1]: sshd@2-172.24.4.222:22-172.24.4.1:33832.service: Deactivated successfully.
May 16 06:02:32.425449 systemd[1]: session-5.scope: Deactivated successfully.
May 16 06:02:32.428041 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit.
May 16 06:02:32.437897 systemd[1]: Started sshd@3-172.24.4.222:22-172.24.4.1:33838.service - OpenSSH per-connection server daemon (172.24.4.1:33838).
May 16 06:02:32.440384 systemd-logind[1455]: Removed session 5.
May 16 06:02:34.062618 sshd[1634]: Accepted publickey for core from 172.24.4.1 port 33838 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:02:34.065394 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:02:34.078079 systemd-logind[1455]: New session 6 of user core.
May 16 06:02:34.092604 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 06:02:34.702902 sshd[1637]: Connection closed by 172.24.4.1 port 33838
May 16 06:02:34.704055 sshd-session[1634]: pam_unix(sshd:session): session closed for user core
May 16 06:02:34.721840 systemd[1]: sshd@3-172.24.4.222:22-172.24.4.1:33838.service: Deactivated successfully.
May 16 06:02:34.725648 systemd[1]: session-6.scope: Deactivated successfully.
May 16 06:02:34.729673 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit.
May 16 06:02:34.740842 systemd[1]: Started sshd@4-172.24.4.222:22-172.24.4.1:35558.service - OpenSSH per-connection server daemon (172.24.4.1:35558).
May 16 06:02:34.745108 systemd-logind[1455]: Removed session 6.
May 16 06:02:35.832182 sshd[1642]: Accepted publickey for core from 172.24.4.1 port 35558 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:02:35.835389 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:02:35.850513 systemd-logind[1455]: New session 7 of user core.
May 16 06:02:35.866612 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 06:02:36.328301 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 06:02:36.329007 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 06:02:36.351806 sudo[1646]: pam_unix(sudo:session): session closed for user root
May 16 06:02:36.568414 sshd[1645]: Connection closed by 172.24.4.1 port 35558
May 16 06:02:36.566897 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
May 16 06:02:36.592654 systemd[1]: sshd@4-172.24.4.222:22-172.24.4.1:35558.service: Deactivated successfully.
May 16 06:02:36.596796 systemd[1]: session-7.scope: Deactivated successfully.
May 16 06:02:36.600598 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit.
May 16 06:02:36.612893 systemd[1]: Started sshd@5-172.24.4.222:22-172.24.4.1:35574.service - OpenSSH per-connection server daemon (172.24.4.1:35574).
May 16 06:02:36.617098 systemd-logind[1455]: Removed session 7.
May 16 06:02:37.176092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 06:02:37.191714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 06:02:37.576522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 06:02:37.576579 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 06:02:37.631496 kubelet[1662]: E0516 06:02:37.631448 1662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 06:02:37.637723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 06:02:37.638048 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 06:02:37.638742 systemd[1]: kubelet.service: Consumed 259ms CPU time, 110.3M memory peak.
May 16 06:02:37.903392 sshd[1651]: Accepted publickey for core from 172.24.4.1 port 35574 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:02:37.906422 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:02:37.919094 systemd-logind[1455]: New session 8 of user core.
May 16 06:02:37.926626 systemd[1]: Started session-8.scope - Session 8 of User core.
May 16 06:02:38.386682 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 06:02:38.388184 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 06:02:38.396166 sudo[1671]: pam_unix(sudo:session): session closed for user root
May 16 06:02:38.408342 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 06:02:38.408988 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 06:02:38.437886 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 06:02:38.498415 augenrules[1693]: No rules
May 16 06:02:38.501581 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 06:02:38.502062 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 06:02:38.504454 sudo[1670]: pam_unix(sudo:session): session closed for user root
May 16 06:02:38.649075 sshd[1669]: Connection closed by 172.24.4.1 port 35574
May 16 06:02:38.650987 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
May 16 06:02:38.669637 systemd[1]: sshd@5-172.24.4.222:22-172.24.4.1:35574.service: Deactivated successfully.
May 16 06:02:38.672942 systemd[1]: session-8.scope: Deactivated successfully.
May 16 06:02:38.676204 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit.
May 16 06:02:38.685814 systemd[1]: Started sshd@6-172.24.4.222:22-172.24.4.1:35578.service - OpenSSH per-connection server daemon (172.24.4.1:35578).
May 16 06:02:38.688330 systemd-logind[1455]: Removed session 8.
May 16 06:02:39.708178 sshd[1701]: Accepted publickey for core from 172.24.4.1 port 35578 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:02:39.712009 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:02:39.724377 systemd-logind[1455]: New session 9 of user core.
May 16 06:02:39.732566 systemd[1]: Started session-9.scope - Session 9 of User core.
May 16 06:02:40.196327 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 06:02:40.197013 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 06:02:40.885559 (dockerd)[1723]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 06:02:40.885843 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 06:02:41.507192 dockerd[1723]: time="2025-05-16T06:02:41.507081420Z" level=info msg="Starting up"
May 16 06:02:41.693130 dockerd[1723]: time="2025-05-16T06:02:41.692914600Z" level=info msg="Loading containers: start."
May 16 06:02:41.931337 kernel: Initializing XFRM netlink socket
May 16 06:02:41.985059 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
May 16 06:02:42.068369 systemd-networkd[1382]: docker0: Link UP
May 16 06:02:42.109975 dockerd[1723]: time="2025-05-16T06:02:42.109879488Z" level=info msg="Loading containers: done."
May 16 06:02:42.138125 dockerd[1723]: time="2025-05-16T06:02:42.137680318Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 06:02:42.138125 dockerd[1723]: time="2025-05-16T06:02:42.137787199Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
May 16 06:02:42.138125 dockerd[1723]: time="2025-05-16T06:02:42.137913586Z" level=info msg="Daemon has completed initialization"
May 16 06:02:42.183947 dockerd[1723]: time="2025-05-16T06:02:42.183800036Z" level=info msg="API listen on /run/docker.sock"
May 16 06:02:42.184793 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 06:02:43.210272 systemd-timesyncd[1387]: Contacted time server 23.142.248.8:123 (2.flatcar.pool.ntp.org).
May 16 06:02:43.210418 systemd-timesyncd[1387]: Initial clock synchronization to Fri 2025-05-16 06:02:43.209874 UTC.
May 16 06:02:43.211041 systemd-resolved[1384]: Clock change detected. Flushing caches.
May 16 06:02:44.654740 containerd[1477]: time="2025-05-16T06:02:44.654584063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 16 06:02:45.391010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1592034217.mount: Deactivated successfully.
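With the daemon now listening on `/run/docker.sock` (per the "API listen" line above), its REST API can be probed over the Unix socket; `/_ping` is Docker's documented liveness endpoint. A small sketch of such a probe, assuming `curl` with Unix-socket support is available:

```shell
# Socket path from the log line; overridable for testing.
DOCKER_SOCK="${DOCKER_SOCK:-/run/docker.sock}"

# Returns 0 and prints "OK" when the daemon answers GET /_ping.
ping_docker() {
    curl -sf --unix-socket "$DOCKER_SOCK" http://localhost/_ping
}
```

On this host one would follow up with e.g. `ping_docker && echo "daemon up"`.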
May 16 06:02:47.237058 containerd[1477]: time="2025-05-16T06:02:47.237010213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:47.238605 containerd[1477]: time="2025-05-16T06:02:47.238573675Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797819"
May 16 06:02:47.239108 containerd[1477]: time="2025-05-16T06:02:47.239084323Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:47.242617 containerd[1477]: time="2025-05-16T06:02:47.242567245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:47.243976 containerd[1477]: time="2025-05-16T06:02:47.243945780Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 2.589283361s"
May 16 06:02:47.244060 containerd[1477]: time="2025-05-16T06:02:47.244044295Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 16 06:02:47.244956 containerd[1477]: time="2025-05-16T06:02:47.244901683Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 16 06:02:48.854386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 16 06:02:48.863935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 06:02:49.003870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 06:02:49.004536 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 06:02:49.081837 kubelet[1976]: E0516 06:02:49.081597 1976 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 06:02:49.085260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 06:02:49.085399 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 06:02:49.085713 systemd[1]: kubelet.service: Consumed 147ms CPU time, 110.3M memory peak.
May 16 06:02:49.408371 containerd[1477]: time="2025-05-16T06:02:49.408304034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:49.409894 containerd[1477]: time="2025-05-16T06:02:49.409834785Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782531"
May 16 06:02:49.410936 containerd[1477]: time="2025-05-16T06:02:49.410901175Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:49.414752 containerd[1477]: time="2025-05-16T06:02:49.414689490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:49.416180 containerd[1477]: time="2025-05-16T06:02:49.415798450Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 2.170737187s"
May 16 06:02:49.416180 containerd[1477]: time="2025-05-16T06:02:49.415831512Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 16 06:02:49.416484 containerd[1477]: time="2025-05-16T06:02:49.416441065Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 16 06:02:51.585750 containerd[1477]: time="2025-05-16T06:02:51.585637432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:51.587157 containerd[1477]: time="2025-05-16T06:02:51.586939955Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176071"
May 16 06:02:51.588630 containerd[1477]: time="2025-05-16T06:02:51.588555154Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:51.591834 containerd[1477]: time="2025-05-16T06:02:51.591791744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:51.593714 containerd[1477]: time="2025-05-16T06:02:51.592941070Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 2.176473175s"
May 16 06:02:51.593714 containerd[1477]: time="2025-05-16T06:02:51.592983019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 16 06:02:51.593714 containerd[1477]: time="2025-05-16T06:02:51.593510258Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 16 06:02:53.154089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2131707427.mount: Deactivated successfully.
May 16 06:02:53.741867 containerd[1477]: time="2025-05-16T06:02:53.741780197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:53.743051 containerd[1477]: time="2025-05-16T06:02:53.742848731Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892880"
May 16 06:02:53.744452 containerd[1477]: time="2025-05-16T06:02:53.744395643Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:53.747297 containerd[1477]: time="2025-05-16T06:02:53.747236571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:53.748018 containerd[1477]: time="2025-05-16T06:02:53.747986237Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 2.154451283s"
May 16 06:02:53.748072 containerd[1477]: time="2025-05-16T06:02:53.748017756Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 16 06:02:53.749233 containerd[1477]: time="2025-05-16T06:02:53.749190736Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 16 06:02:54.543830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295760622.mount: Deactivated successfully.
May 16 06:02:55.835714 containerd[1477]: time="2025-05-16T06:02:55.835604224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:55.837487 containerd[1477]: time="2025-05-16T06:02:55.837019518Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
May 16 06:02:55.840196 containerd[1477]: time="2025-05-16T06:02:55.840162623Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:55.845922 containerd[1477]: time="2025-05-16T06:02:55.845861932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:55.847702 containerd[1477]: time="2025-05-16T06:02:55.847037236Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.097814591s"
May 16 06:02:55.847702 containerd[1477]: time="2025-05-16T06:02:55.847071911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 16 06:02:55.849185 containerd[1477]: time="2025-05-16T06:02:55.849141924Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 06:02:56.425586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758783058.mount: Deactivated successfully.
May 16 06:02:56.450084 containerd[1477]: time="2025-05-16T06:02:56.450024048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:56.454524 containerd[1477]: time="2025-05-16T06:02:56.454472591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
May 16 06:02:56.457145 containerd[1477]: time="2025-05-16T06:02:56.457062288Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:56.461627 containerd[1477]: time="2025-05-16T06:02:56.461585882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:02:56.464114 containerd[1477]: time="2025-05-16T06:02:56.463811627Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 614.538477ms"
May 16 06:02:56.464114 containerd[1477]: time="2025-05-16T06:02:56.463909440Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 16 06:02:56.465166 containerd[1477]: time="2025-05-16T06:02:56.465103109Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 16 06:02:57.113418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2210656456.mount: Deactivated successfully.
May 16 06:02:59.286328 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 16 06:02:59.294909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 06:02:59.413908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 06:02:59.422942 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 06:02:59.625353 kubelet[2112]: E0516 06:02:59.625203 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 06:02:59.627754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 06:02:59.627934 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 06:02:59.628269 systemd[1]: kubelet.service: Consumed 169ms CPU time, 110.6M memory peak.
May 16 06:03:00.264317 containerd[1477]: time="2025-05-16T06:03:00.264138028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:03:00.273930 containerd[1477]: time="2025-05-16T06:03:00.273818984Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368"
May 16 06:03:00.278742 containerd[1477]: time="2025-05-16T06:03:00.278183811Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:03:00.288237 containerd[1477]: time="2025-05-16T06:03:00.288176231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:03:00.293089 containerd[1477]: time="2025-05-16T06:03:00.292987074Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.827827289s"
May 16 06:03:00.293089 containerd[1477]: time="2025-05-16T06:03:00.293079377Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 16 06:03:03.987220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 06:03:03.989131 systemd[1]: kubelet.service: Consumed 169ms CPU time, 110.6M memory peak.
May 16 06:03:03.999974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 06:03:04.044291 systemd[1]: Reload requested from client PID 2148 ('systemctl') (unit session-9.scope)...
May 16 06:03:04.044550 systemd[1]: Reloading...
May 16 06:03:04.158756 zram_generator::config[2200]: No configuration found.
May 16 06:03:04.308230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 06:03:04.435975 systemd[1]: Reloading finished in 390 ms.
May 16 06:03:04.502482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 06:03:04.506497 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 06:03:04.514192 systemd[1]: kubelet.service: Deactivated successfully.
May 16 06:03:04.514417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 06:03:04.514462 systemd[1]: kubelet.service: Consumed 149ms CPU time, 98.2M memory peak.
May 16 06:03:04.520818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 06:03:04.694528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 06:03:04.700726 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 06:03:04.753710 kubelet[2262]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 06:03:04.753710 kubelet[2262]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 16 06:03:04.753710 kubelet[2262]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 06:03:04.754462 kubelet[2262]: I0516 06:03:04.753744 2262 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 06:03:05.299152 kubelet[2262]: I0516 06:03:05.298203 2262 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 16 06:03:05.302646 kubelet[2262]: I0516 06:03:05.300479 2262 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 06:03:05.302646 kubelet[2262]: I0516 06:03:05.301138 2262 server.go:954] "Client rotation is on, will bootstrap in background"
May 16 06:03:05.338438 kubelet[2262]: E0516 06:03:05.338363 2262 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError"
May 16 06:03:05.339549 kubelet[2262]: I0516 06:03:05.339370 2262 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 06:03:05.351259 kubelet[2262]: E0516 06:03:05.351173 2262 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 16 06:03:05.351259 kubelet[2262]: I0516 06:03:05.351261 2262 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 16 06:03:05.359293 kubelet[2262]: I0516 06:03:05.359259 2262 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 06:03:05.359943 kubelet[2262]: I0516 06:03:05.359871 2262 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 06:03:05.360362 kubelet[2262]: I0516 06:03:05.359942 2262 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-n-15f3e1d893.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 06:03:05.361331 kubelet[2262]: I0516 06:03:05.360379 2262 topology_manager.go:138] "Creating topology manager with none policy"
May 16 06:03:05.361331 kubelet[2262]: I0516 06:03:05.360406 2262 container_manager_linux.go:304] "Creating device plugin manager"
May 16 06:03:05.361331 kubelet[2262]: I0516 06:03:05.360642 2262 state_mem.go:36] "Initialized new in-memory state store"
May 16 06:03:05.370698 kubelet[2262]: I0516 06:03:05.370602 2262 kubelet.go:446] "Attempting to sync node with API server"
May 16 06:03:05.370698 kubelet[2262]: I0516 06:03:05.370662 2262 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 06:03:05.370788 kubelet[2262]: I0516 06:03:05.370741 2262 kubelet.go:352] "Adding apiserver pod source"
May 16 06:03:05.370788 kubelet[2262]: I0516 06:03:05.370766 2262 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 06:03:05.381048 kubelet[2262]: W0516 06:03:05.380276 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-15f3e1d893.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.222:6443: connect: connection refused
May 16 06:03:05.381048 kubelet[2262]: E0516 06:03:05.380337 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-15f3e1d893.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError"
May 16 06:03:05.381048 kubelet[2262]: W0516 06:03:05.380685 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.222:6443: connect: connection refused
May 16 06:03:05.381048 kubelet[2262]: E0516 06:03:05.380719 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError"
May 16 06:03:05.381444 kubelet[2262]: I0516 06:03:05.381427 2262 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 16 06:03:05.381991 kubelet[2262]: I0516 06:03:05.381977 2262 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 06:03:05.382159 kubelet[2262]: W0516 06:03:05.382148 2262 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 16 06:03:05.386324 kubelet[2262]: I0516 06:03:05.386307 2262 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 16 06:03:05.386415 kubelet[2262]: I0516 06:03:05.386406 2262 server.go:1287] "Started kubelet"
May 16 06:03:05.396696 kubelet[2262]: I0516 06:03:05.395111 2262 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 16 06:03:05.396825 kubelet[2262]: I0516 06:03:05.396812 2262 server.go:479] "Adding debug handlers to kubelet server"
May 16 06:03:05.397040 kubelet[2262]: I0516 06:03:05.396997 2262 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 16 06:03:05.397368 kubelet[2262]: I0516 06:03:05.397355 2262 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 16 06:03:05.398420 kubelet[2262]: I0516 06:03:05.398406 2262 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 16 06:03:05.406898 kubelet[2262]: I0516 06:03:05.406874 2262 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 16 06:03:05.408929 kubelet[2262]: E0516 06:03:05.404513 2262 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.222:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.222:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-1-1-n-15f3e1d893.novalocal.183feca7af377e36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-n-15f3e1d893.novalocal,UID:ci-4230-1-1-n-15f3e1d893.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-n-15f3e1d893.novalocal,},FirstTimestamp:2025-05-16 06:03:05.38638495 +0000 UTC m=+0.681585665,LastTimestamp:2025-05-16 06:03:05.38638495 +0000 UTC m=+0.681585665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-n-15f3e1d893.novalocal,}"
May 16 06:03:05.410078 kubelet[2262]: I0516 06:03:05.410064 2262 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 16 06:03:05.410417 kubelet[2262]: E0516 06:03:05.410400 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found"
May 16 06:03:05.415073 kubelet[2262]: I0516 06:03:05.415056 2262 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 16 06:03:05.415219 kubelet[2262]: I0516 06:03:05.415209 2262 reconciler.go:26] "Reconciler: start to sync state"
May 16 06:03:05.420616 kubelet[2262]: W0516 06:03:05.420497 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.222:6443: connect: connection refused
May 16 06:03:05.420773 kubelet[2262]: E0516 06:03:05.420669 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError"
May 16 06:03:05.421044 kubelet[2262]: E0516 06:03:05.420903 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-15f3e1d893.novalocal?timeout=10s\": dial tcp 172.24.4.222:6443: connect: connection refused" interval="200ms"
May 16 06:03:05.421529 kubelet[2262]: I0516 06:03:05.421481 2262 factory.go:221] Registration of the systemd container factory successfully
May 16 06:03:05.421696 kubelet[2262]: I0516 06:03:05.421645 2262 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 06:03:05.432846 kubelet[2262]: I0516 06:03:05.432805 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 16 06:03:05.432846 kubelet[2262]: I0516 06:03:05.433247 2262 factory.go:221] Registration of the containerd container factory successfully
May 16 06:03:05.436773 kubelet[2262]: I0516 06:03:05.435132 2262 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 16 06:03:05.436773 kubelet[2262]: I0516 06:03:05.435175 2262 status_manager.go:227] "Starting to sync pod status with apiserver"
May 16 06:03:05.436773 kubelet[2262]: I0516 06:03:05.435649 2262 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 16 06:03:05.436773 kubelet[2262]: I0516 06:03:05.436099 2262 kubelet.go:2382] "Starting kubelet main sync loop"
May 16 06:03:05.436773 kubelet[2262]: E0516 06:03:05.436163 2262 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 06:03:05.446101 kubelet[2262]: E0516 06:03:05.446077 2262 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 06:03:05.446351 kubelet[2262]: W0516 06:03:05.446313 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.222:6443: connect: connection refused
May 16 06:03:05.446444 kubelet[2262]: E0516 06:03:05.446424 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError"
May 16 06:03:05.462510 kubelet[2262]: I0516 06:03:05.462475 2262 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 16 06:03:05.462765 kubelet[2262]: I0516 06:03:05.462752 2262 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 16 06:03:05.462869 kubelet[2262]: I0516 06:03:05.462858 2262 state_mem.go:36] "Initialized new in-memory state store"
May 16 06:03:05.467915 kubelet[2262]: I0516 06:03:05.467899 2262 policy_none.go:49] "None policy: Start"
May 16 06:03:05.468003 kubelet[2262]: I0516 06:03:05.467993 2262 memory_manager.go:186] "Starting memorymanager" policy="None"
May 16 06:03:05.468066 kubelet[2262]: I0516 06:03:05.468057 2262 state_mem.go:35] "Initializing new in-memory state store"
May 16 06:03:05.489961 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 16 06:03:05.505214 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 16 06:03:05.510565 kubelet[2262]: E0516 06:03:05.510545 2262 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found"
May 16 06:03:05.510885 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 16 06:03:05.517862 kubelet[2262]: I0516 06:03:05.517818 2262 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 16 06:03:05.518022 kubelet[2262]: I0516 06:03:05.517996 2262 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 06:03:05.518089 kubelet[2262]: I0516 06:03:05.518008 2262 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 06:03:05.518553 kubelet[2262]: I0516 06:03:05.518499 2262 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 06:03:05.520619 kubelet[2262]: E0516 06:03:05.520573 2262 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 16 06:03:05.520619 kubelet[2262]: E0516 06:03:05.520611 2262 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found"
May 16 06:03:05.550245 systemd[1]: Created slice kubepods-burstable-podef7941034a3b867f2177c17b019b6b38.slice - libcontainer container kubepods-burstable-podef7941034a3b867f2177c17b019b6b38.slice.
May 16 06:03:05.566115 kubelet[2262]: E0516 06:03:05.566071 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.570942 systemd[1]: Created slice kubepods-burstable-pod5af8902586b26be017b3746884c2afeb.slice - libcontainer container kubepods-burstable-pod5af8902586b26be017b3746884c2afeb.slice.
May 16 06:03:05.573727 kubelet[2262]: E0516 06:03:05.573530 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.576097 systemd[1]: Created slice kubepods-burstable-podb2ba21b679c79329ac524e93e19f3ae0.slice - libcontainer container kubepods-burstable-podb2ba21b679c79329ac524e93e19f3ae0.slice.
May 16 06:03:05.578380 kubelet[2262]: E0516 06:03:05.578212 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.616497 kubelet[2262]: I0516 06:03:05.616368 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef7941034a3b867f2177c17b019b6b38-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"ef7941034a3b867f2177c17b019b6b38\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.616497 kubelet[2262]: I0516 06:03:05.616410 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5af8902586b26be017b3746884c2afeb-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"5af8902586b26be017b3746884c2afeb\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.616497 kubelet[2262]: I0516 06:03:05.616438 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2ba21b679c79329ac524e93e19f3ae0-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"b2ba21b679c79329ac524e93e19f3ae0\") " pod="kube-system/kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.616497 kubelet[2262]: I0516 06:03:05.616462 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5af8902586b26be017b3746884c2afeb-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"5af8902586b26be017b3746884c2afeb\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.616497 kubelet[2262]: I0516 06:03:05.616484 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5af8902586b26be017b3746884c2afeb-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"5af8902586b26be017b3746884c2afeb\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.616994 kubelet[2262]: I0516 06:03:05.616504 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5af8902586b26be017b3746884c2afeb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"5af8902586b26be017b3746884c2afeb\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.616994 kubelet[2262]: I0516 06:03:05.616525 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef7941034a3b867f2177c17b019b6b38-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"ef7941034a3b867f2177c17b019b6b38\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.616994 kubelet[2262]: I0516 06:03:05.616544 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef7941034a3b867f2177c17b019b6b38-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"ef7941034a3b867f2177c17b019b6b38\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.616994 kubelet[2262]: I0516 06:03:05.616563 2262 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5af8902586b26be017b3746884c2afeb-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"5af8902586b26be017b3746884c2afeb\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.619874 kubelet[2262]: I0516 06:03:05.619802 2262 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.620523 kubelet[2262]: E0516 06:03:05.620452 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.222:6443/api/v1/nodes\": dial tcp 172.24.4.222:6443: connect: connection refused" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.621798 kubelet[2262]: E0516 06:03:05.621722 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-15f3e1d893.novalocal?timeout=10s\": dial tcp 172.24.4.222:6443: connect: connection refused" interval="400ms"
May 16 06:03:05.826197 kubelet[2262]: I0516 06:03:05.825896 2262 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.828630 kubelet[2262]: E0516 06:03:05.828555 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.222:6443/api/v1/nodes\": dial tcp 172.24.4.222:6443: connect: connection refused" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:05.869106 containerd[1477]: time="2025-05-16T06:03:05.868641923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal,Uid:ef7941034a3b867f2177c17b019b6b38,Namespace:kube-system,Attempt:0,}"
May 16 06:03:05.876533 containerd[1477]: time="2025-05-16T06:03:05.876433366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal,Uid:5af8902586b26be017b3746884c2afeb,Namespace:kube-system,Attempt:0,}"
May 16 06:03:05.881116 containerd[1477]: time="2025-05-16T06:03:05.880471279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal,Uid:b2ba21b679c79329ac524e93e19f3ae0,Namespace:kube-system,Attempt:0,}"
May 16 06:03:06.022858 kubelet[2262]: E0516 06:03:06.022768 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-15f3e1d893.novalocal?timeout=10s\": dial tcp 172.24.4.222:6443: connect: connection refused" interval="800ms"
May 16 06:03:06.232359 kubelet[2262]: I0516 06:03:06.232186 2262 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:06.233176 kubelet[2262]: E0516 06:03:06.232812 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.222:6443/api/v1/nodes\": dial tcp 172.24.4.222:6443: connect: connection refused" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:06.310736 kubelet[2262]: W0516 06:03:06.310579 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.222:6443: connect: connection refused
May 16 06:03:06.310902 kubelet[2262]: E0516 06:03:06.310747 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError"
May 16 06:03:06.496894 kubelet[2262]: W0516 06:03:06.496510 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.222:6443: connect: connection refused
May 16 06:03:06.496894 kubelet[2262]: E0516 06:03:06.496620 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.222:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError"
May 16 06:03:06.778861 kubelet[2262]: W0516 06:03:06.778468 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-15f3e1d893.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.222:6443: connect: connection refused
May 16 06:03:06.778861 kubelet[2262]: E0516 06:03:06.778638 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.222:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-1-1-n-15f3e1d893.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError"
May 16 06:03:06.823797 kubelet[2262]: E0516 06:03:06.823732 2262 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.222:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-1-1-n-15f3e1d893.novalocal?timeout=10s\": dial tcp 172.24.4.222:6443: connect: connection refused" interval="1.6s"
May 16 06:03:06.857129 kubelet[2262]: W0516 06:03:06.857043 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.222:6443: connect: connection refused
May 16 06:03:06.857814 kubelet[2262]: E0516 06:03:06.857132 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.222:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError"
May 16 06:03:07.037131 kubelet[2262]: I0516 06:03:07.037085 2262 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:07.038031 kubelet[2262]: E0516 06:03:07.037947 2262 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.222:6443/api/v1/nodes\": dial tcp 172.24.4.222:6443: connect: connection refused" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:07.499600 kubelet[2262]: E0516 06:03:07.499041 2262 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.222:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError"
May 16 06:03:07.520348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940843524.mount: Deactivated successfully.
May 16 06:03:07.531231 containerd[1477]: time="2025-05-16T06:03:07.531112937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 06:03:07.535885 containerd[1477]: time="2025-05-16T06:03:07.535743552Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 06:03:07.555746 containerd[1477]: time="2025-05-16T06:03:07.553937084Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
May 16 06:03:07.555746 containerd[1477]: time="2025-05-16T06:03:07.554356801Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 16 06:03:07.555746 containerd[1477]: time="2025-05-16T06:03:07.554740982Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 16 06:03:07.556825 containerd[1477]: time="2025-05-16T06:03:07.556747475Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 06:03:07.568811 containerd[1477]: time="2025-05-16T06:03:07.568621024Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 06:03:07.572539 containerd[1477]: time="2025-05-16T06:03:07.571429471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.701776973s" May 16 06:03:07.577168 containerd[1477]: time="2025-05-16T06:03:07.577119653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 06:03:07.578237 containerd[1477]: time="2025-05-16T06:03:07.578201742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.697573289s" May 16 06:03:07.589626 containerd[1477]: time="2025-05-16T06:03:07.589585663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.712992918s" May 16 06:03:07.765803 containerd[1477]: time="2025-05-16T06:03:07.763398218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 06:03:07.765803 containerd[1477]: time="2025-05-16T06:03:07.764968573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 06:03:07.765803 containerd[1477]: time="2025-05-16T06:03:07.764988411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:03:07.765803 containerd[1477]: time="2025-05-16T06:03:07.765073500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:03:07.768707 containerd[1477]: time="2025-05-16T06:03:07.768016570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 06:03:07.768707 containerd[1477]: time="2025-05-16T06:03:07.768065712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 06:03:07.768707 containerd[1477]: time="2025-05-16T06:03:07.768084988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:03:07.768707 containerd[1477]: time="2025-05-16T06:03:07.768157975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:03:07.768707 containerd[1477]: time="2025-05-16T06:03:07.766434483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 06:03:07.768707 containerd[1477]: time="2025-05-16T06:03:07.766502060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 06:03:07.768707 containerd[1477]: time="2025-05-16T06:03:07.766644877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:03:07.768707 containerd[1477]: time="2025-05-16T06:03:07.766748562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:03:07.794968 systemd[1]: Started cri-containerd-28b8d001c3401e5e5a80b0f67974c78aa07074e7b0e6136cd58cd16f15a8e15e.scope - libcontainer container 28b8d001c3401e5e5a80b0f67974c78aa07074e7b0e6136cd58cd16f15a8e15e. May 16 06:03:07.806155 systemd[1]: Started cri-containerd-740f20cde057760ff86c30a9f10f69561dcdcbb71360844f15693bc3ec87fd8e.scope - libcontainer container 740f20cde057760ff86c30a9f10f69561dcdcbb71360844f15693bc3ec87fd8e. May 16 06:03:07.816069 systemd[1]: Started cri-containerd-d8b8478841a847560eb3ee338cc135264de4d304a58a3e1d4773a19f1a9af17d.scope - libcontainer container d8b8478841a847560eb3ee338cc135264de4d304a58a3e1d4773a19f1a9af17d. 
May 16 06:03:07.868639 containerd[1477]: time="2025-05-16T06:03:07.868451836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal,Uid:5af8902586b26be017b3746884c2afeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"740f20cde057760ff86c30a9f10f69561dcdcbb71360844f15693bc3ec87fd8e\"" May 16 06:03:07.870543 containerd[1477]: time="2025-05-16T06:03:07.867693394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal,Uid:ef7941034a3b867f2177c17b019b6b38,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8b8478841a847560eb3ee338cc135264de4d304a58a3e1d4773a19f1a9af17d\"" May 16 06:03:07.873247 containerd[1477]: time="2025-05-16T06:03:07.872894799Z" level=info msg="CreateContainer within sandbox \"d8b8478841a847560eb3ee338cc135264de4d304a58a3e1d4773a19f1a9af17d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 06:03:07.873818 containerd[1477]: time="2025-05-16T06:03:07.873797041Z" level=info msg="CreateContainer within sandbox \"740f20cde057760ff86c30a9f10f69561dcdcbb71360844f15693bc3ec87fd8e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 06:03:07.892056 containerd[1477]: time="2025-05-16T06:03:07.891934728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal,Uid:b2ba21b679c79329ac524e93e19f3ae0,Namespace:kube-system,Attempt:0,} returns sandbox id \"28b8d001c3401e5e5a80b0f67974c78aa07074e7b0e6136cd58cd16f15a8e15e\"" May 16 06:03:07.896665 containerd[1477]: time="2025-05-16T06:03:07.896538733Z" level=info msg="CreateContainer within sandbox \"28b8d001c3401e5e5a80b0f67974c78aa07074e7b0e6136cd58cd16f15a8e15e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 06:03:07.914806 containerd[1477]: time="2025-05-16T06:03:07.914758224Z" level=info msg="CreateContainer within sandbox 
\"d8b8478841a847560eb3ee338cc135264de4d304a58a3e1d4773a19f1a9af17d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5cd6eafb33683adcbb23143b51da0496627a5b03c0b5c9caf8fb999dab30be4c\"" May 16 06:03:07.916311 containerd[1477]: time="2025-05-16T06:03:07.916037904Z" level=info msg="StartContainer for \"5cd6eafb33683adcbb23143b51da0496627a5b03c0b5c9caf8fb999dab30be4c\"" May 16 06:03:07.922414 containerd[1477]: time="2025-05-16T06:03:07.922155828Z" level=info msg="CreateContainer within sandbox \"740f20cde057760ff86c30a9f10f69561dcdcbb71360844f15693bc3ec87fd8e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6840bc7bf404d660473c866bd1f8b40633c809d032cae6216c4b2cf57c20b1ec\"" May 16 06:03:07.925740 containerd[1477]: time="2025-05-16T06:03:07.925711006Z" level=info msg="StartContainer for \"6840bc7bf404d660473c866bd1f8b40633c809d032cae6216c4b2cf57c20b1ec\"" May 16 06:03:07.929716 kubelet[2262]: W0516 06:03:07.929592 2262 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.222:6443: connect: connection refused May 16 06:03:07.930545 kubelet[2262]: E0516 06:03:07.929840 2262 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.222:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.222:6443: connect: connection refused" logger="UnhandledError" May 16 06:03:07.941269 containerd[1477]: time="2025-05-16T06:03:07.940879054Z" level=info msg="CreateContainer within sandbox \"28b8d001c3401e5e5a80b0f67974c78aa07074e7b0e6136cd58cd16f15a8e15e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"ac93646463e048ad20d58f97b576940e2673b4d398e4994e53ecb54fd352ac59\"" May 16 06:03:07.942580 containerd[1477]: time="2025-05-16T06:03:07.942516014Z" level=info msg="StartContainer for \"ac93646463e048ad20d58f97b576940e2673b4d398e4994e53ecb54fd352ac59\"" May 16 06:03:07.945938 systemd[1]: Started cri-containerd-5cd6eafb33683adcbb23143b51da0496627a5b03c0b5c9caf8fb999dab30be4c.scope - libcontainer container 5cd6eafb33683adcbb23143b51da0496627a5b03c0b5c9caf8fb999dab30be4c. May 16 06:03:07.967924 systemd[1]: Started cri-containerd-6840bc7bf404d660473c866bd1f8b40633c809d032cae6216c4b2cf57c20b1ec.scope - libcontainer container 6840bc7bf404d660473c866bd1f8b40633c809d032cae6216c4b2cf57c20b1ec. May 16 06:03:07.991843 systemd[1]: Started cri-containerd-ac93646463e048ad20d58f97b576940e2673b4d398e4994e53ecb54fd352ac59.scope - libcontainer container ac93646463e048ad20d58f97b576940e2673b4d398e4994e53ecb54fd352ac59. May 16 06:03:08.019646 containerd[1477]: time="2025-05-16T06:03:08.019413835Z" level=info msg="StartContainer for \"5cd6eafb33683adcbb23143b51da0496627a5b03c0b5c9caf8fb999dab30be4c\" returns successfully" May 16 06:03:08.060549 containerd[1477]: time="2025-05-16T06:03:08.060501966Z" level=info msg="StartContainer for \"6840bc7bf404d660473c866bd1f8b40633c809d032cae6216c4b2cf57c20b1ec\" returns successfully" May 16 06:03:08.080007 containerd[1477]: time="2025-05-16T06:03:08.079955722Z" level=info msg="StartContainer for \"ac93646463e048ad20d58f97b576940e2673b4d398e4994e53ecb54fd352ac59\" returns successfully" May 16 06:03:08.467475 kubelet[2262]: E0516 06:03:08.467049 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" node="ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:08.467475 kubelet[2262]: E0516 06:03:08.467233 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" node="ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:08.469704 kubelet[2262]: E0516 06:03:08.468913 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" node="ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:08.523759 update_engine[1460]: I20250516 06:03:08.523701 1460 update_attempter.cc:509] Updating boot flags... May 16 06:03:08.571813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2543) May 16 06:03:08.646682 kubelet[2262]: I0516 06:03:08.646639 2262 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:08.684927 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2542) May 16 06:03:08.794757 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2542) May 16 06:03:09.476690 kubelet[2262]: E0516 06:03:09.474345 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" node="ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:09.483475 kubelet[2262]: E0516 06:03:09.483446 2262 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" node="ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:10.266699 kubelet[2262]: E0516 06:03:10.266533 2262 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" node="ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:10.384668 kubelet[2262]: I0516 06:03:10.384386 2262 apiserver.go:52] "Watching apiserver" May 16 06:03:10.413046 kubelet[2262]: E0516 06:03:10.412849 2262 event.go:359] "Server 
rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-1-1-n-15f3e1d893.novalocal.183feca7af377e36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-1-1-n-15f3e1d893.novalocal,UID:ci-4230-1-1-n-15f3e1d893.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-1-1-n-15f3e1d893.novalocal,},FirstTimestamp:2025-05-16 06:03:05.38638495 +0000 UTC m=+0.681585665,LastTimestamp:2025-05-16 06:03:05.38638495 +0000 UTC m=+0.681585665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-1-1-n-15f3e1d893.novalocal,}" May 16 06:03:10.415933 kubelet[2262]: I0516 06:03:10.415882 2262 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 06:03:10.452917 kubelet[2262]: I0516 06:03:10.452771 2262 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:10.452917 kubelet[2262]: E0516 06:03:10.452830 2262 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-1-1-n-15f3e1d893.novalocal\": node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" May 16 06:03:10.510708 kubelet[2262]: I0516 06:03:10.510629 2262 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:10.525301 kubelet[2262]: E0516 06:03:10.525166 2262 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:10.525301 kubelet[2262]: I0516 06:03:10.525226 2262 
kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:10.527611 kubelet[2262]: E0516 06:03:10.527568 2262 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:10.527703 kubelet[2262]: I0516 06:03:10.527617 2262 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:10.529326 kubelet[2262]: E0516 06:03:10.529283 2262 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:12.115037 kubelet[2262]: I0516 06:03:12.114271 2262 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal" May 16 06:03:12.128246 kubelet[2262]: W0516 06:03:12.128193 2262 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 16 06:03:13.109805 systemd[1]: Reload requested from client PID 2553 ('systemctl') (unit session-9.scope)... May 16 06:03:13.109843 systemd[1]: Reloading... May 16 06:03:13.243705 zram_generator::config[2595]: No configuration found. May 16 06:03:13.421394 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 06:03:13.573426 systemd[1]: Reloading finished in 461 ms. 
May 16 06:03:13.600743 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 06:03:13.616179 systemd[1]: kubelet.service: Deactivated successfully. May 16 06:03:13.616456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 06:03:13.616506 systemd[1]: kubelet.service: Consumed 1.332s CPU time, 132.2M memory peak. May 16 06:03:13.622987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 06:03:13.885448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 06:03:13.899469 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 06:03:14.026223 kubelet[2662]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 06:03:14.026223 kubelet[2662]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 06:03:14.026223 kubelet[2662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 06:03:14.026223 kubelet[2662]: I0516 06:03:14.025967 2662 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 06:03:14.039775 kubelet[2662]: I0516 06:03:14.039718 2662 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 06:03:14.040540 kubelet[2662]: I0516 06:03:14.040033 2662 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 06:03:14.040540 kubelet[2662]: I0516 06:03:14.040543 2662 server.go:954] "Client rotation is on, will bootstrap in background" May 16 06:03:14.043034 kubelet[2662]: I0516 06:03:14.043002 2662 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 06:03:14.047886 kubelet[2662]: I0516 06:03:14.047684 2662 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 06:03:14.053405 kubelet[2662]: E0516 06:03:14.053227 2662 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 06:03:14.053405 kubelet[2662]: I0516 06:03:14.053477 2662 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 06:03:14.063168 kubelet[2662]: I0516 06:03:14.060800 2662 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 06:03:14.063168 kubelet[2662]: I0516 06:03:14.061748 2662 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 06:03:14.063168 kubelet[2662]: I0516 06:03:14.061793 2662 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-1-1-n-15f3e1d893.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 06:03:14.063168 kubelet[2662]: I0516 06:03:14.062293 2662 topology_manager.go:138] "Creating topology 
manager with none policy" May 16 06:03:14.063454 kubelet[2662]: I0516 06:03:14.062314 2662 container_manager_linux.go:304] "Creating device plugin manager" May 16 06:03:14.063454 kubelet[2662]: I0516 06:03:14.062386 2662 state_mem.go:36] "Initialized new in-memory state store" May 16 06:03:14.063454 kubelet[2662]: I0516 06:03:14.062629 2662 kubelet.go:446] "Attempting to sync node with API server" May 16 06:03:14.063454 kubelet[2662]: I0516 06:03:14.062720 2662 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 06:03:14.063454 kubelet[2662]: I0516 06:03:14.062762 2662 kubelet.go:352] "Adding apiserver pod source" May 16 06:03:14.063454 kubelet[2662]: I0516 06:03:14.062779 2662 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 06:03:14.070709 kubelet[2662]: I0516 06:03:14.070021 2662 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 16 06:03:14.070709 kubelet[2662]: I0516 06:03:14.070488 2662 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 06:03:14.071052 kubelet[2662]: I0516 06:03:14.070937 2662 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 06:03:14.071052 kubelet[2662]: I0516 06:03:14.070974 2662 server.go:1287] "Started kubelet" May 16 06:03:14.077046 kubelet[2662]: I0516 06:03:14.075236 2662 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 06:03:14.084332 kubelet[2662]: I0516 06:03:14.084100 2662 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 06:03:14.085454 kubelet[2662]: I0516 06:03:14.085107 2662 server.go:479] "Adding debug handlers to kubelet server" May 16 06:03:14.086863 kubelet[2662]: I0516 06:03:14.086100 2662 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 06:03:14.086863 kubelet[2662]: I0516 06:03:14.086309 2662 server.go:243] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 06:03:14.086863 kubelet[2662]: I0516 06:03:14.086532 2662 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 06:03:14.089128 kubelet[2662]: I0516 06:03:14.088549 2662 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 06:03:14.089128 kubelet[2662]: E0516 06:03:14.088795 2662 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-1-1-n-15f3e1d893.novalocal\" not found" May 16 06:03:14.091486 kubelet[2662]: I0516 06:03:14.091019 2662 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 06:03:14.091486 kubelet[2662]: I0516 06:03:14.091140 2662 reconciler.go:26] "Reconciler: start to sync state" May 16 06:03:14.094056 kubelet[2662]: I0516 06:03:14.093044 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 06:03:14.094056 kubelet[2662]: I0516 06:03:14.094032 2662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 06:03:14.094056 kubelet[2662]: I0516 06:03:14.094056 2662 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 06:03:14.094223 kubelet[2662]: I0516 06:03:14.094075 2662 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 06:03:14.094223 kubelet[2662]: I0516 06:03:14.094084 2662 kubelet.go:2382] "Starting kubelet main sync loop"
May 16 06:03:14.094223 kubelet[2662]: E0516 06:03:14.094126 2662 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 16 06:03:14.106880 kubelet[2662]: I0516 06:03:14.106581 2662 factory.go:221] Registration of the systemd container factory successfully
May 16 06:03:14.107474 kubelet[2662]: I0516 06:03:14.107219 2662 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 16 06:03:14.116328 kubelet[2662]: I0516 06:03:14.116294 2662 factory.go:221] Registration of the containerd container factory successfully
May 16 06:03:14.118728 kubelet[2662]: E0516 06:03:14.116993 2662 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 16 06:03:14.127116 sudo[2692]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 16 06:03:14.127437 sudo[2692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 16 06:03:14.179776 kubelet[2662]: I0516 06:03:14.179636 2662 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 16 06:03:14.179947 kubelet[2662]: I0516 06:03:14.179932 2662 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 16 06:03:14.180027 kubelet[2662]: I0516 06:03:14.180017 2662 state_mem.go:36] "Initialized new in-memory state store"
May 16 06:03:14.180253 kubelet[2662]: I0516 06:03:14.180236 2662 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 16 06:03:14.180335 kubelet[2662]: I0516 06:03:14.180307 2662 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 16 06:03:14.180400 kubelet[2662]: I0516 06:03:14.180392 2662 policy_none.go:49] "None policy: Start"
May 16 06:03:14.180481 kubelet[2662]: I0516 06:03:14.180472 2662 memory_manager.go:186] "Starting memorymanager" policy="None"
May 16 06:03:14.180550 kubelet[2662]: I0516 06:03:14.180540 2662 state_mem.go:35] "Initializing new in-memory state store"
May 16 06:03:14.181201 kubelet[2662]: I0516 06:03:14.180827 2662 state_mem.go:75] "Updated machine memory state"
May 16 06:03:14.186834 kubelet[2662]: I0516 06:03:14.186220 2662 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 16 06:03:14.186834 kubelet[2662]: I0516 06:03:14.186391 2662 eviction_manager.go:189] "Eviction manager: starting control loop"
May 16 06:03:14.186834 kubelet[2662]: I0516 06:03:14.186414 2662 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 16 06:03:14.186834 kubelet[2662]: I0516 06:03:14.186759 2662 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 16 06:03:14.189962 kubelet[2662]: E0516 06:03:14.189649 2662 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 16 06:03:14.197468 kubelet[2662]: I0516 06:03:14.196993 2662 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.197468 kubelet[2662]: I0516 06:03:14.197345 2662 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.197823 kubelet[2662]: I0516 06:03:14.197810 2662 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.209933 kubelet[2662]: W0516 06:03:14.209907 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 06:03:14.212146 kubelet[2662]: W0516 06:03:14.211445 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 06:03:14.215294 kubelet[2662]: W0516 06:03:14.215176 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 06:03:14.215294 kubelet[2662]: E0516 06:03:14.215261 2662 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.291088 kubelet[2662]: I0516 06:03:14.291043 2662 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.306050 kubelet[2662]: I0516 06:03:14.305663 2662 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.306050 kubelet[2662]: I0516 06:03:14.305751 2662 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.392492 kubelet[2662]: I0516 06:03:14.392134 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef7941034a3b867f2177c17b019b6b38-k8s-certs\") pod \"kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"ef7941034a3b867f2177c17b019b6b38\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.392492 kubelet[2662]: I0516 06:03:14.392178 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5af8902586b26be017b3746884c2afeb-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"5af8902586b26be017b3746884c2afeb\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.392492 kubelet[2662]: I0516 06:03:14.392205 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef7941034a3b867f2177c17b019b6b38-ca-certs\") pod \"kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"ef7941034a3b867f2177c17b019b6b38\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.392492 kubelet[2662]: I0516 06:03:14.392225 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5af8902586b26be017b3746884c2afeb-ca-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"5af8902586b26be017b3746884c2afeb\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.392492 kubelet[2662]: I0516 06:03:14.392246 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5af8902586b26be017b3746884c2afeb-k8s-certs\") pod \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"5af8902586b26be017b3746884c2afeb\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.392824 kubelet[2662]: I0516 06:03:14.392267 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5af8902586b26be017b3746884c2afeb-kubeconfig\") pod \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"5af8902586b26be017b3746884c2afeb\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.392824 kubelet[2662]: I0516 06:03:14.392290 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5af8902586b26be017b3746884c2afeb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"5af8902586b26be017b3746884c2afeb\") " pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.392824 kubelet[2662]: I0516 06:03:14.392311 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2ba21b679c79329ac524e93e19f3ae0-kubeconfig\") pod \"kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"b2ba21b679c79329ac524e93e19f3ae0\") " pod="kube-system/kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.392824 kubelet[2662]: I0516 06:03:14.392331 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef7941034a3b867f2177c17b019b6b38-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal\" (UID: \"ef7941034a3b867f2177c17b019b6b38\") " pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:14.745176 sudo[2692]: pam_unix(sudo:session): session closed for user root
May 16 06:03:15.069571 kubelet[2662]: I0516 06:03:15.067488 2662 apiserver.go:52] "Watching apiserver"
May 16 06:03:15.092057 kubelet[2662]: I0516 06:03:15.091954 2662 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 16 06:03:15.161786 kubelet[2662]: I0516 06:03:15.158824 2662 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:15.161786 kubelet[2662]: I0516 06:03:15.159463 2662 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:15.162615 kubelet[2662]: I0516 06:03:15.162583 2662 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:15.198976 kubelet[2662]: W0516 06:03:15.198190 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 06:03:15.198976 kubelet[2662]: W0516 06:03:15.198281 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 06:03:15.198976 kubelet[2662]: E0516 06:03:15.198360 2662 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:15.200465 kubelet[2662]: W0516 06:03:15.199822 2662 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
May 16 06:03:15.200465 kubelet[2662]: E0516 06:03:15.199904 2662 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:15.200465 kubelet[2662]: E0516 06:03:15.200045 2662 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal"
May 16 06:03:15.254465 kubelet[2662]: I0516 06:03:15.253277 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-1-1-n-15f3e1d893.novalocal" podStartSLOduration=1.253256636 podStartE2EDuration="1.253256636s" podCreationTimestamp="2025-05-16 06:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 06:03:15.234824727 +0000 UTC m=+1.327358922" watchObservedRunningTime="2025-05-16 06:03:15.253256636 +0000 UTC m=+1.345790831"
May 16 06:03:15.264147 kubelet[2662]: I0516 06:03:15.264082 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-1-1-n-15f3e1d893.novalocal" podStartSLOduration=3.264063765 podStartE2EDuration="3.264063765s" podCreationTimestamp="2025-05-16 06:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 06:03:15.253565195 +0000 UTC m=+1.346099400" watchObservedRunningTime="2025-05-16 06:03:15.264063765 +0000 UTC m=+1.356597960"
May 16 06:03:16.870764 sudo[1705]: pam_unix(sudo:session): session closed for user root
May 16 06:03:17.063163 sshd[1704]: Connection closed by 172.24.4.1 port 35578
May 16 06:03:17.063003 sshd-session[1701]: pam_unix(sshd:session): session closed for user core
May 16 06:03:17.069589 systemd[1]: sshd@6-172.24.4.222:22-172.24.4.1:35578.service: Deactivated successfully.
May 16 06:03:17.074560 systemd[1]: session-9.scope: Deactivated successfully.
May 16 06:03:17.075290 systemd[1]: session-9.scope: Consumed 6.661s CPU time, 264.4M memory peak.
May 16 06:03:17.082917 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit.
May 16 06:03:17.085452 systemd-logind[1455]: Removed session 9.
May 16 06:03:18.007227 kubelet[2662]: I0516 06:03:18.007126 2662 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 16 06:03:18.009407 containerd[1477]: time="2025-05-16T06:03:18.009337741Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 16 06:03:18.010048 kubelet[2662]: I0516 06:03:18.009716 2662 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 16 06:03:18.945884 kubelet[2662]: I0516 06:03:18.945802 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-1-1-n-15f3e1d893.novalocal" podStartSLOduration=4.945779527 podStartE2EDuration="4.945779527s" podCreationTimestamp="2025-05-16 06:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 06:03:15.265039074 +0000 UTC m=+1.357573269" watchObservedRunningTime="2025-05-16 06:03:18.945779527 +0000 UTC m=+5.038313742"
May 16 06:03:18.959539 systemd[1]: Created slice kubepods-besteffort-pod94019fb0_fa44_43b1_be35_43b1dcfe61e6.slice - libcontainer container kubepods-besteffort-pod94019fb0_fa44_43b1_be35_43b1dcfe61e6.slice.
May 16 06:03:18.966302 kubelet[2662]: I0516 06:03:18.966067 2662 status_manager.go:890] "Failed to get status for pod" podUID="94019fb0-fa44-43b1-be35-43b1dcfe61e6" pod="kube-system/kube-proxy-prvsp" err="pods \"kube-proxy-prvsp\" is forbidden: User \"system:node:ci-4230-1-1-n-15f3e1d893.novalocal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-15f3e1d893.novalocal' and this object"
May 16 06:03:18.966302 kubelet[2662]: W0516 06:03:18.966182 2662 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4230-1-1-n-15f3e1d893.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-15f3e1d893.novalocal' and this object
May 16 06:03:18.966302 kubelet[2662]: E0516 06:03:18.966259 2662 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-4230-1-1-n-15f3e1d893.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-15f3e1d893.novalocal' and this object" logger="UnhandledError"
May 16 06:03:18.966739 kubelet[2662]: W0516 06:03:18.966715 2662 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-1-1-n-15f3e1d893.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-15f3e1d893.novalocal' and this object
May 16 06:03:18.966838 kubelet[2662]: E0516 06:03:18.966814 2662 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4230-1-1-n-15f3e1d893.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-15f3e1d893.novalocal' and this object" logger="UnhandledError"
May 16 06:03:18.971906 kubelet[2662]: W0516 06:03:18.971803 2662 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-1-1-n-15f3e1d893.novalocal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-1-1-n-15f3e1d893.novalocal' and this object
May 16 06:03:18.971906 kubelet[2662]: E0516 06:03:18.971839 2662 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230-1-1-n-15f3e1d893.novalocal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-1-1-n-15f3e1d893.novalocal' and this object" logger="UnhandledError"
May 16 06:03:18.977944 systemd[1]: Created slice kubepods-burstable-pod8069b4e8_57dd_493e_97dd_2560a89bac2a.slice - libcontainer container kubepods-burstable-pod8069b4e8_57dd_493e_97dd_2560a89bac2a.slice.
May 16 06:03:19.026940 kubelet[2662]: I0516 06:03:19.026414 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-host-proc-sys-net\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.026940 kubelet[2662]: I0516 06:03:19.026451 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-bpf-maps\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.026940 kubelet[2662]: I0516 06:03:19.026471 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-etc-cni-netd\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.026940 kubelet[2662]: I0516 06:03:19.026488 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nbkv\" (UniqueName: \"kubernetes.io/projected/8069b4e8-57dd-493e-97dd-2560a89bac2a-kube-api-access-4nbkv\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.026940 kubelet[2662]: I0516 06:03:19.026509 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8069b4e8-57dd-493e-97dd-2560a89bac2a-hubble-tls\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.026940 kubelet[2662]: I0516 06:03:19.026526 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-cgroup\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.027652 kubelet[2662]: I0516 06:03:19.026545 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94019fb0-fa44-43b1-be35-43b1dcfe61e6-lib-modules\") pod \"kube-proxy-prvsp\" (UID: \"94019fb0-fa44-43b1-be35-43b1dcfe61e6\") " pod="kube-system/kube-proxy-prvsp"
May 16 06:03:19.027652 kubelet[2662]: I0516 06:03:19.026590 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wt7j\" (UniqueName: \"kubernetes.io/projected/94019fb0-fa44-43b1-be35-43b1dcfe61e6-kube-api-access-8wt7j\") pod \"kube-proxy-prvsp\" (UID: \"94019fb0-fa44-43b1-be35-43b1dcfe61e6\") " pod="kube-system/kube-proxy-prvsp"
May 16 06:03:19.027652 kubelet[2662]: I0516 06:03:19.026608 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-run\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.027652 kubelet[2662]: I0516 06:03:19.026626 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-lib-modules\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.027652 kubelet[2662]: I0516 06:03:19.026646 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-config-path\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.027876 kubelet[2662]: I0516 06:03:19.026663 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/94019fb0-fa44-43b1-be35-43b1dcfe61e6-kube-proxy\") pod \"kube-proxy-prvsp\" (UID: \"94019fb0-fa44-43b1-be35-43b1dcfe61e6\") " pod="kube-system/kube-proxy-prvsp"
May 16 06:03:19.027876 kubelet[2662]: I0516 06:03:19.026695 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-hostproc\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.027876 kubelet[2662]: I0516 06:03:19.026713 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cni-path\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.027876 kubelet[2662]: I0516 06:03:19.026729 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-host-proc-sys-kernel\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.027876 kubelet[2662]: I0516 06:03:19.026756 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94019fb0-fa44-43b1-be35-43b1dcfe61e6-xtables-lock\") pod \"kube-proxy-prvsp\" (UID: \"94019fb0-fa44-43b1-be35-43b1dcfe61e6\") " pod="kube-system/kube-proxy-prvsp"
May 16 06:03:19.027876 kubelet[2662]: I0516 06:03:19.026773 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-xtables-lock\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.028088 kubelet[2662]: I0516 06:03:19.026795 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8069b4e8-57dd-493e-97dd-2560a89bac2a-clustermesh-secrets\") pod \"cilium-bjkfn\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") " pod="kube-system/cilium-bjkfn"
May 16 06:03:19.110939 systemd[1]: Created slice kubepods-besteffort-poddf3bf59b_50c8_468c_b13d_4976f127f15f.slice - libcontainer container kubepods-besteffort-poddf3bf59b_50c8_468c_b13d_4976f127f15f.slice.
May 16 06:03:19.128897 kubelet[2662]: I0516 06:03:19.128850 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj4sk\" (UniqueName: \"kubernetes.io/projected/df3bf59b-50c8-468c-b13d-4976f127f15f-kube-api-access-zj4sk\") pod \"cilium-operator-6c4d7847fc-4fcmk\" (UID: \"df3bf59b-50c8-468c-b13d-4976f127f15f\") " pod="kube-system/cilium-operator-6c4d7847fc-4fcmk"
May 16 06:03:19.129424 kubelet[2662]: I0516 06:03:19.128954 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df3bf59b-50c8-468c-b13d-4976f127f15f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4fcmk\" (UID: \"df3bf59b-50c8-468c-b13d-4976f127f15f\") " pod="kube-system/cilium-operator-6c4d7847fc-4fcmk"
May 16 06:03:20.157170 kubelet[2662]: E0516 06:03:20.156863 2662 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
May 16 06:03:20.157170 kubelet[2662]: E0516 06:03:20.156922 2662 projected.go:194] Error preparing data for projected volume kube-api-access-4nbkv for pod kube-system/cilium-bjkfn: failed to sync configmap cache: timed out waiting for the condition
May 16 06:03:20.157170 kubelet[2662]: E0516 06:03:20.156952 2662 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
May 16 06:03:20.157170 kubelet[2662]: E0516 06:03:20.157010 2662 projected.go:194] Error preparing data for projected volume kube-api-access-8wt7j for pod kube-system/kube-proxy-prvsp: failed to sync configmap cache: timed out waiting for the condition
May 16 06:03:20.157170 kubelet[2662]: E0516 06:03:20.157049 2662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8069b4e8-57dd-493e-97dd-2560a89bac2a-kube-api-access-4nbkv podName:8069b4e8-57dd-493e-97dd-2560a89bac2a nodeName:}" failed. No retries permitted until 2025-05-16 06:03:20.657004982 +0000 UTC m=+6.749539227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4nbkv" (UniqueName: "kubernetes.io/projected/8069b4e8-57dd-493e-97dd-2560a89bac2a-kube-api-access-4nbkv") pod "cilium-bjkfn" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a") : failed to sync configmap cache: timed out waiting for the condition
May 16 06:03:20.159196 kubelet[2662]: E0516 06:03:20.157112 2662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/94019fb0-fa44-43b1-be35-43b1dcfe61e6-kube-api-access-8wt7j podName:94019fb0-fa44-43b1-be35-43b1dcfe61e6 nodeName:}" failed. No retries permitted until 2025-05-16 06:03:20.657074242 +0000 UTC m=+6.749608487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8wt7j" (UniqueName: "kubernetes.io/projected/94019fb0-fa44-43b1-be35-43b1dcfe61e6-kube-api-access-8wt7j") pod "kube-proxy-prvsp" (UID: "94019fb0-fa44-43b1-be35-43b1dcfe61e6") : failed to sync configmap cache: timed out waiting for the condition
May 16 06:03:20.317105 containerd[1477]: time="2025-05-16T06:03:20.316983275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4fcmk,Uid:df3bf59b-50c8-468c-b13d-4976f127f15f,Namespace:kube-system,Attempt:0,}"
May 16 06:03:20.374390 containerd[1477]: time="2025-05-16T06:03:20.373867820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 06:03:20.374390 containerd[1477]: time="2025-05-16T06:03:20.374013974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 06:03:20.374390 containerd[1477]: time="2025-05-16T06:03:20.374059390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 06:03:20.375468 containerd[1477]: time="2025-05-16T06:03:20.375250243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 06:03:20.412348 systemd[1]: Started cri-containerd-b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814.scope - libcontainer container b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814.
May 16 06:03:20.465741 containerd[1477]: time="2025-05-16T06:03:20.465701558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4fcmk,Uid:df3bf59b-50c8-468c-b13d-4976f127f15f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\""
May 16 06:03:20.468573 containerd[1477]: time="2025-05-16T06:03:20.468324949Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 16 06:03:20.772290 containerd[1477]: time="2025-05-16T06:03:20.771900379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prvsp,Uid:94019fb0-fa44-43b1-be35-43b1dcfe61e6,Namespace:kube-system,Attempt:0,}"
May 16 06:03:20.782662 containerd[1477]: time="2025-05-16T06:03:20.782412927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bjkfn,Uid:8069b4e8-57dd-493e-97dd-2560a89bac2a,Namespace:kube-system,Attempt:0,}"
May 16 06:03:20.852491 containerd[1477]: time="2025-05-16T06:03:20.850262630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 06:03:20.852491 containerd[1477]: time="2025-05-16T06:03:20.850394868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 06:03:20.852491 containerd[1477]: time="2025-05-16T06:03:20.850429563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 06:03:20.852491 containerd[1477]: time="2025-05-16T06:03:20.850581297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 06:03:20.856106 containerd[1477]: time="2025-05-16T06:03:20.855869015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 06:03:20.856106 containerd[1477]: time="2025-05-16T06:03:20.856006944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 06:03:20.856106 containerd[1477]: time="2025-05-16T06:03:20.856056577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 06:03:20.857920 containerd[1477]: time="2025-05-16T06:03:20.857533207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 06:03:20.881915 systemd[1]: Started cri-containerd-175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc.scope - libcontainer container 175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc.
May 16 06:03:20.886877 systemd[1]: Started cri-containerd-dea9bacbf1c007f1e7960ee31c11c9764fa8768386ebac41599e3d4b919032be.scope - libcontainer container dea9bacbf1c007f1e7960ee31c11c9764fa8768386ebac41599e3d4b919032be.
May 16 06:03:20.917520 containerd[1477]: time="2025-05-16T06:03:20.917482591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bjkfn,Uid:8069b4e8-57dd-493e-97dd-2560a89bac2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\""
May 16 06:03:20.930071 containerd[1477]: time="2025-05-16T06:03:20.930010288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prvsp,Uid:94019fb0-fa44-43b1-be35-43b1dcfe61e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"dea9bacbf1c007f1e7960ee31c11c9764fa8768386ebac41599e3d4b919032be\""
May 16 06:03:20.932975 containerd[1477]: time="2025-05-16T06:03:20.932867839Z" level=info msg="CreateContainer within sandbox \"dea9bacbf1c007f1e7960ee31c11c9764fa8768386ebac41599e3d4b919032be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 16 06:03:20.964873 containerd[1477]: time="2025-05-16T06:03:20.964821742Z" level=info msg="CreateContainer within sandbox \"dea9bacbf1c007f1e7960ee31c11c9764fa8768386ebac41599e3d4b919032be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dfacfbe308e6d8aefbcf934c5316a24c9825b2d9cf58fd6af18edf44399ae4b3\""
May 16 06:03:20.966840 containerd[1477]: time="2025-05-16T06:03:20.966798440Z" level=info msg="StartContainer for \"dfacfbe308e6d8aefbcf934c5316a24c9825b2d9cf58fd6af18edf44399ae4b3\""
May 16 06:03:20.995835 systemd[1]: Started cri-containerd-dfacfbe308e6d8aefbcf934c5316a24c9825b2d9cf58fd6af18edf44399ae4b3.scope - libcontainer container dfacfbe308e6d8aefbcf934c5316a24c9825b2d9cf58fd6af18edf44399ae4b3.
May 16 06:03:21.027814 containerd[1477]: time="2025-05-16T06:03:21.027528539Z" level=info msg="StartContainer for \"dfacfbe308e6d8aefbcf934c5316a24c9825b2d9cf58fd6af18edf44399ae4b3\" returns successfully"
May 16 06:03:21.189417 kubelet[2662]: I0516 06:03:21.189354 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-prvsp" podStartSLOduration=3.189336331 podStartE2EDuration="3.189336331s" podCreationTimestamp="2025-05-16 06:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 06:03:21.188337778 +0000 UTC m=+7.280871983" watchObservedRunningTime="2025-05-16 06:03:21.189336331 +0000 UTC m=+7.281870536"
May 16 06:03:22.326324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1814108225.mount: Deactivated successfully.
May 16 06:03:22.908195 containerd[1477]: time="2025-05-16T06:03:22.908127005Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:03:22.909529 containerd[1477]: time="2025-05-16T06:03:22.909465597Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 16 06:03:22.911074 containerd[1477]: time="2025-05-16T06:03:22.911029380Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 06:03:22.913454 containerd[1477]: time="2025-05-16T06:03:22.912632055Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.444270268s"
May 16 06:03:22.913454 containerd[1477]: time="2025-05-16T06:03:22.912688261Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 16 06:03:22.915435 containerd[1477]: time="2025-05-16T06:03:22.915367666Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 16 06:03:22.916498 containerd[1477]: time="2025-05-16T06:03:22.916467288Z" level=info msg="CreateContainer within sandbox \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 16 06:03:22.945936 containerd[1477]: time="2025-05-16T06:03:22.945889834Z" level=info msg="CreateContainer within sandbox \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\""
May 16 06:03:22.946538 containerd[1477]: time="2025-05-16T06:03:22.946401264Z" level=info msg="StartContainer for \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\""
May 16 06:03:22.980826 systemd[1]: Started cri-containerd-38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b.scope - libcontainer container 38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b.
May 16 06:03:23.008652 containerd[1477]: time="2025-05-16T06:03:23.008621455Z" level=info msg="StartContainer for \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\" returns successfully" May 16 06:03:24.192806 kubelet[2662]: I0516 06:03:24.192653 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4fcmk" podStartSLOduration=2.745361958 podStartE2EDuration="5.192617324s" podCreationTimestamp="2025-05-16 06:03:19 +0000 UTC" firstStartedPulling="2025-05-16 06:03:20.467286832 +0000 UTC m=+6.559821027" lastFinishedPulling="2025-05-16 06:03:22.914542188 +0000 UTC m=+9.007076393" observedRunningTime="2025-05-16 06:03:23.226888462 +0000 UTC m=+9.319422657" watchObservedRunningTime="2025-05-16 06:03:24.192617324 +0000 UTC m=+10.285151609" May 16 06:03:28.001607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount868852178.mount: Deactivated successfully. May 16 06:03:30.798306 containerd[1477]: time="2025-05-16T06:03:30.798120677Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 06:03:30.801229 containerd[1477]: time="2025-05-16T06:03:30.801088683Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 16 06:03:30.802654 containerd[1477]: time="2025-05-16T06:03:30.802543321Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 06:03:30.807946 containerd[1477]: time="2025-05-16T06:03:30.807640292Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.892183068s" May 16 06:03:30.807946 containerd[1477]: time="2025-05-16T06:03:30.807743065Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 16 06:03:30.815240 containerd[1477]: time="2025-05-16T06:03:30.815124428Z" level=info msg="CreateContainer within sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 06:03:30.845122 containerd[1477]: time="2025-05-16T06:03:30.844979854Z" level=info msg="CreateContainer within sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\"" May 16 06:03:30.848159 containerd[1477]: time="2025-05-16T06:03:30.847990061Z" level=info msg="StartContainer for \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\"" May 16 06:03:30.898030 systemd[1]: Started cri-containerd-8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a.scope - libcontainer container 8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a. May 16 06:03:30.932995 containerd[1477]: time="2025-05-16T06:03:30.932944699Z" level=info msg="StartContainer for \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\" returns successfully" May 16 06:03:30.942976 systemd[1]: cri-containerd-8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a.scope: Deactivated successfully. 
May 16 06:03:31.832736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a-rootfs.mount: Deactivated successfully. May 16 06:03:32.050956 containerd[1477]: time="2025-05-16T06:03:32.050813083Z" level=info msg="shim disconnected" id=8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a namespace=k8s.io May 16 06:03:32.050956 containerd[1477]: time="2025-05-16T06:03:32.050917348Z" level=warning msg="cleaning up after shim disconnected" id=8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a namespace=k8s.io May 16 06:03:32.050956 containerd[1477]: time="2025-05-16T06:03:32.050940081Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 06:03:32.078983 containerd[1477]: time="2025-05-16T06:03:32.078853384Z" level=warning msg="cleanup warnings time=\"2025-05-16T06:03:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 16 06:03:32.237658 containerd[1477]: time="2025-05-16T06:03:32.237472886Z" level=info msg="CreateContainer within sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 06:03:32.277698 containerd[1477]: time="2025-05-16T06:03:32.277494991Z" level=info msg="CreateContainer within sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\"" May 16 06:03:32.279282 containerd[1477]: time="2025-05-16T06:03:32.279214656Z" level=info msg="StartContainer for \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\"" May 16 06:03:32.330840 systemd[1]: Started cri-containerd-e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e.scope - 
libcontainer container e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e. May 16 06:03:32.356846 containerd[1477]: time="2025-05-16T06:03:32.356781348Z" level=info msg="StartContainer for \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\" returns successfully" May 16 06:03:32.369336 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 06:03:32.369610 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 06:03:32.370245 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 16 06:03:32.378639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 06:03:32.378914 systemd[1]: cri-containerd-e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e.scope: Deactivated successfully. May 16 06:03:32.399023 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 06:03:32.407510 containerd[1477]: time="2025-05-16T06:03:32.407457955Z" level=info msg="shim disconnected" id=e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e namespace=k8s.io May 16 06:03:32.407711 containerd[1477]: time="2025-05-16T06:03:32.407664733Z" level=warning msg="cleaning up after shim disconnected" id=e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e namespace=k8s.io May 16 06:03:32.407817 containerd[1477]: time="2025-05-16T06:03:32.407800808Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 06:03:32.833310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e-rootfs.mount: Deactivated successfully. 
May 16 06:03:33.246509 containerd[1477]: time="2025-05-16T06:03:33.243347022Z" level=info msg="CreateContainer within sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 06:03:33.296168 containerd[1477]: time="2025-05-16T06:03:33.296080156Z" level=info msg="CreateContainer within sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\"" May 16 06:03:33.299077 containerd[1477]: time="2025-05-16T06:03:33.299018226Z" level=info msg="StartContainer for \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\"" May 16 06:03:33.353868 systemd[1]: Started cri-containerd-308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4.scope - libcontainer container 308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4. May 16 06:03:33.385094 systemd[1]: cri-containerd-308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4.scope: Deactivated successfully. 
May 16 06:03:33.390052 containerd[1477]: time="2025-05-16T06:03:33.389950916Z" level=info msg="StartContainer for \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\" returns successfully" May 16 06:03:33.415815 containerd[1477]: time="2025-05-16T06:03:33.415759222Z" level=info msg="shim disconnected" id=308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4 namespace=k8s.io May 16 06:03:33.415815 containerd[1477]: time="2025-05-16T06:03:33.415808564Z" level=warning msg="cleaning up after shim disconnected" id=308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4 namespace=k8s.io May 16 06:03:33.415996 containerd[1477]: time="2025-05-16T06:03:33.415817742Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 06:03:33.833974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4-rootfs.mount: Deactivated successfully. May 16 06:03:34.256504 containerd[1477]: time="2025-05-16T06:03:34.251912951Z" level=info msg="CreateContainer within sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 06:03:34.303388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536183366.mount: Deactivated successfully. 
May 16 06:03:34.315571 containerd[1477]: time="2025-05-16T06:03:34.315375950Z" level=info msg="CreateContainer within sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\"" May 16 06:03:34.318018 containerd[1477]: time="2025-05-16T06:03:34.317778596Z" level=info msg="StartContainer for \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\"" May 16 06:03:34.353831 systemd[1]: Started cri-containerd-685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f.scope - libcontainer container 685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f. May 16 06:03:34.375140 systemd[1]: cri-containerd-685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f.scope: Deactivated successfully. May 16 06:03:34.379599 containerd[1477]: time="2025-05-16T06:03:34.379529834Z" level=info msg="StartContainer for \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\" returns successfully" May 16 06:03:34.408372 containerd[1477]: time="2025-05-16T06:03:34.408294884Z" level=info msg="shim disconnected" id=685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f namespace=k8s.io May 16 06:03:34.408372 containerd[1477]: time="2025-05-16T06:03:34.408351580Z" level=warning msg="cleaning up after shim disconnected" id=685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f namespace=k8s.io May 16 06:03:34.408372 containerd[1477]: time="2025-05-16T06:03:34.408362551Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 06:03:34.833413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f-rootfs.mount: Deactivated successfully. 
May 16 06:03:35.262796 containerd[1477]: time="2025-05-16T06:03:35.262529592Z" level=info msg="CreateContainer within sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 06:03:35.316548 containerd[1477]: time="2025-05-16T06:03:35.316102090Z" level=info msg="CreateContainer within sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\"" May 16 06:03:35.319951 containerd[1477]: time="2025-05-16T06:03:35.319880027Z" level=info msg="StartContainer for \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\"" May 16 06:03:35.368824 systemd[1]: Started cri-containerd-758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b.scope - libcontainer container 758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b. May 16 06:03:35.401932 containerd[1477]: time="2025-05-16T06:03:35.401609915Z" level=info msg="StartContainer for \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\" returns successfully" May 16 06:03:35.475119 kubelet[2662]: I0516 06:03:35.473797 2662 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 06:03:35.512729 systemd[1]: Created slice kubepods-burstable-podbe8e5016_c4f8_4cbf_a219_21bef05d89ef.slice - libcontainer container kubepods-burstable-podbe8e5016_c4f8_4cbf_a219_21bef05d89ef.slice. May 16 06:03:35.520558 systemd[1]: Created slice kubepods-burstable-poddfbffc64_ab71_47be_b2cc_a0534f2f14ed.slice - libcontainer container kubepods-burstable-poddfbffc64_ab71_47be_b2cc_a0534f2f14ed.slice. 
May 16 06:03:35.550485 kubelet[2662]: I0516 06:03:35.550445 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfbffc64-ab71-47be-b2cc-a0534f2f14ed-config-volume\") pod \"coredns-668d6bf9bc-zg4vz\" (UID: \"dfbffc64-ab71-47be-b2cc-a0534f2f14ed\") " pod="kube-system/coredns-668d6bf9bc-zg4vz" May 16 06:03:35.550485 kubelet[2662]: I0516 06:03:35.550487 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzhq2\" (UniqueName: \"kubernetes.io/projected/be8e5016-c4f8-4cbf-a219-21bef05d89ef-kube-api-access-mzhq2\") pod \"coredns-668d6bf9bc-q8w6d\" (UID: \"be8e5016-c4f8-4cbf-a219-21bef05d89ef\") " pod="kube-system/coredns-668d6bf9bc-q8w6d" May 16 06:03:35.550640 kubelet[2662]: I0516 06:03:35.550508 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4w86\" (UniqueName: \"kubernetes.io/projected/dfbffc64-ab71-47be-b2cc-a0534f2f14ed-kube-api-access-q4w86\") pod \"coredns-668d6bf9bc-zg4vz\" (UID: \"dfbffc64-ab71-47be-b2cc-a0534f2f14ed\") " pod="kube-system/coredns-668d6bf9bc-zg4vz" May 16 06:03:35.550640 kubelet[2662]: I0516 06:03:35.550531 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be8e5016-c4f8-4cbf-a219-21bef05d89ef-config-volume\") pod \"coredns-668d6bf9bc-q8w6d\" (UID: \"be8e5016-c4f8-4cbf-a219-21bef05d89ef\") " pod="kube-system/coredns-668d6bf9bc-q8w6d" May 16 06:03:35.818708 containerd[1477]: time="2025-05-16T06:03:35.818482704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8w6d,Uid:be8e5016-c4f8-4cbf-a219-21bef05d89ef,Namespace:kube-system,Attempt:0,}" May 16 06:03:35.826639 containerd[1477]: time="2025-05-16T06:03:35.826610578Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-zg4vz,Uid:dfbffc64-ab71-47be-b2cc-a0534f2f14ed,Namespace:kube-system,Attempt:0,}" May 16 06:03:36.311302 kubelet[2662]: I0516 06:03:36.309279 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bjkfn" podStartSLOduration=8.418763937 podStartE2EDuration="18.309251456s" podCreationTimestamp="2025-05-16 06:03:18 +0000 UTC" firstStartedPulling="2025-05-16 06:03:20.920523094 +0000 UTC m=+7.013057299" lastFinishedPulling="2025-05-16 06:03:30.811010563 +0000 UTC m=+16.903544818" observedRunningTime="2025-05-16 06:03:36.308037971 +0000 UTC m=+22.400572236" watchObservedRunningTime="2025-05-16 06:03:36.309251456 +0000 UTC m=+22.401785742" May 16 06:03:37.557455 systemd-networkd[1382]: cilium_host: Link UP May 16 06:03:37.559012 systemd-networkd[1382]: cilium_net: Link UP May 16 06:03:37.559024 systemd-networkd[1382]: cilium_net: Gained carrier May 16 06:03:37.559425 systemd-networkd[1382]: cilium_host: Gained carrier May 16 06:03:37.559925 systemd-networkd[1382]: cilium_host: Gained IPv6LL May 16 06:03:37.672765 systemd-networkd[1382]: cilium_vxlan: Link UP May 16 06:03:37.672892 systemd-networkd[1382]: cilium_vxlan: Gained carrier May 16 06:03:37.973732 kernel: NET: Registered PF_ALG protocol family May 16 06:03:38.216814 systemd-networkd[1382]: cilium_net: Gained IPv6LL May 16 06:03:38.809613 systemd-networkd[1382]: lxc_health: Link UP May 16 06:03:38.832267 systemd-networkd[1382]: lxc_health: Gained carrier May 16 06:03:39.422552 systemd-networkd[1382]: lxc7736c35cf8b9: Link UP May 16 06:03:39.427191 kernel: eth0: renamed from tmp8904b May 16 06:03:39.439783 systemd-networkd[1382]: lxc7736c35cf8b9: Gained carrier May 16 06:03:39.463559 systemd-networkd[1382]: lxce7c0af5ce413: Link UP May 16 06:03:39.473779 kernel: eth0: renamed from tmpbc4e6 May 16 06:03:39.482078 systemd-networkd[1382]: lxce7c0af5ce413: Gained carrier May 16 06:03:39.689866 systemd-networkd[1382]: cilium_vxlan: 
Gained IPv6LL May 16 06:03:39.880863 systemd-networkd[1382]: lxc_health: Gained IPv6LL May 16 06:03:40.712922 systemd-networkd[1382]: lxce7c0af5ce413: Gained IPv6LL May 16 06:03:41.096878 systemd-networkd[1382]: lxc7736c35cf8b9: Gained IPv6LL May 16 06:03:43.829354 containerd[1477]: time="2025-05-16T06:03:43.829245536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 06:03:43.829354 containerd[1477]: time="2025-05-16T06:03:43.829304336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 06:03:43.829354 containerd[1477]: time="2025-05-16T06:03:43.829326227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:03:43.832112 containerd[1477]: time="2025-05-16T06:03:43.829400597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:03:43.863874 systemd[1]: Started cri-containerd-bc4e6a4f91a1b8dd67e506b5b7b48ebe1416be156cfbfe570b53ce32b5396b3a.scope - libcontainer container bc4e6a4f91a1b8dd67e506b5b7b48ebe1416be156cfbfe570b53ce32b5396b3a. 
May 16 06:03:43.926918 containerd[1477]: time="2025-05-16T06:03:43.926862159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8w6d,Uid:be8e5016-c4f8-4cbf-a219-21bef05d89ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc4e6a4f91a1b8dd67e506b5b7b48ebe1416be156cfbfe570b53ce32b5396b3a\"" May 16 06:03:43.934049 containerd[1477]: time="2025-05-16T06:03:43.933249368Z" level=info msg="CreateContainer within sandbox \"bc4e6a4f91a1b8dd67e506b5b7b48ebe1416be156cfbfe570b53ce32b5396b3a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 06:03:43.957528 containerd[1477]: time="2025-05-16T06:03:43.957390656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 06:03:43.957747 containerd[1477]: time="2025-05-16T06:03:43.957716738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 06:03:43.962322 containerd[1477]: time="2025-05-16T06:03:43.957822967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:03:43.962322 containerd[1477]: time="2025-05-16T06:03:43.958550682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:03:43.961481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425329392.mount: Deactivated successfully. 
May 16 06:03:43.965470 containerd[1477]: time="2025-05-16T06:03:43.964964671Z" level=info msg="CreateContainer within sandbox \"bc4e6a4f91a1b8dd67e506b5b7b48ebe1416be156cfbfe570b53ce32b5396b3a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6342f269abe945b2033ab5a558b46faef7332ac4f4b55d326ec58046173998f4\"" May 16 06:03:43.966587 containerd[1477]: time="2025-05-16T06:03:43.965723785Z" level=info msg="StartContainer for \"6342f269abe945b2033ab5a558b46faef7332ac4f4b55d326ec58046173998f4\"" May 16 06:03:43.992430 systemd[1]: Started cri-containerd-8904b87546605a106e5eed0f5d6c525cd47c61822fec57c5b83ffa2ce841c1dd.scope - libcontainer container 8904b87546605a106e5eed0f5d6c525cd47c61822fec57c5b83ffa2ce841c1dd. May 16 06:03:44.022855 systemd[1]: Started cri-containerd-6342f269abe945b2033ab5a558b46faef7332ac4f4b55d326ec58046173998f4.scope - libcontainer container 6342f269abe945b2033ab5a558b46faef7332ac4f4b55d326ec58046173998f4. May 16 06:03:44.057944 containerd[1477]: time="2025-05-16T06:03:44.057895516Z" level=info msg="StartContainer for \"6342f269abe945b2033ab5a558b46faef7332ac4f4b55d326ec58046173998f4\" returns successfully" May 16 06:03:44.084905 containerd[1477]: time="2025-05-16T06:03:44.084756625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zg4vz,Uid:dfbffc64-ab71-47be-b2cc-a0534f2f14ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"8904b87546605a106e5eed0f5d6c525cd47c61822fec57c5b83ffa2ce841c1dd\"" May 16 06:03:44.090608 containerd[1477]: time="2025-05-16T06:03:44.090423393Z" level=info msg="CreateContainer within sandbox \"8904b87546605a106e5eed0f5d6c525cd47c61822fec57c5b83ffa2ce841c1dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 06:03:44.115649 containerd[1477]: time="2025-05-16T06:03:44.115599211Z" level=info msg="CreateContainer within sandbox \"8904b87546605a106e5eed0f5d6c525cd47c61822fec57c5b83ffa2ce841c1dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns 
container id \"9433d38ce59bb985047b20fb16caf4c19335a8c08b134668e1fddcfbbc5ddea7\"" May 16 06:03:44.116646 containerd[1477]: time="2025-05-16T06:03:44.116618192Z" level=info msg="StartContainer for \"9433d38ce59bb985047b20fb16caf4c19335a8c08b134668e1fddcfbbc5ddea7\"" May 16 06:03:44.152832 systemd[1]: Started cri-containerd-9433d38ce59bb985047b20fb16caf4c19335a8c08b134668e1fddcfbbc5ddea7.scope - libcontainer container 9433d38ce59bb985047b20fb16caf4c19335a8c08b134668e1fddcfbbc5ddea7. May 16 06:03:44.181099 containerd[1477]: time="2025-05-16T06:03:44.180977400Z" level=info msg="StartContainer for \"9433d38ce59bb985047b20fb16caf4c19335a8c08b134668e1fddcfbbc5ddea7\" returns successfully" May 16 06:03:44.336513 kubelet[2662]: I0516 06:03:44.336165 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q8w6d" podStartSLOduration=25.336104446 podStartE2EDuration="25.336104446s" podCreationTimestamp="2025-05-16 06:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 06:03:44.333108958 +0000 UTC m=+30.425643193" watchObservedRunningTime="2025-05-16 06:03:44.336104446 +0000 UTC m=+30.428638731" May 16 06:03:44.396058 kubelet[2662]: I0516 06:03:44.395996 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zg4vz" podStartSLOduration=25.395962982 podStartE2EDuration="25.395962982s" podCreationTimestamp="2025-05-16 06:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 06:03:44.394038703 +0000 UTC m=+30.486572908" watchObservedRunningTime="2025-05-16 06:03:44.395962982 +0000 UTC m=+30.488497197" May 16 06:07:38.697637 systemd[1]: Started sshd@7-172.24.4.222:22-172.24.4.1:36340.service - OpenSSH per-connection server daemon (172.24.4.1:36340). 
May 16 06:07:40.065592 sshd[4063]: Accepted publickey for core from 172.24.4.1 port 36340 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY May 16 06:07:40.070628 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 06:07:40.096741 systemd-logind[1455]: New session 10 of user core. May 16 06:07:40.108100 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 06:07:40.997231 sshd[4068]: Connection closed by 172.24.4.1 port 36340 May 16 06:07:40.998198 sshd-session[4063]: pam_unix(sshd:session): session closed for user core May 16 06:07:41.006104 systemd[1]: sshd@7-172.24.4.222:22-172.24.4.1:36340.service: Deactivated successfully. May 16 06:07:41.013735 systemd[1]: session-10.scope: Deactivated successfully. May 16 06:07:41.018805 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. May 16 06:07:41.021883 systemd-logind[1455]: Removed session 10. May 16 06:07:46.035310 systemd[1]: Started sshd@8-172.24.4.222:22-172.24.4.1:40424.service - OpenSSH per-connection server daemon (172.24.4.1:40424). May 16 06:07:47.293793 sshd[4080]: Accepted publickey for core from 172.24.4.1 port 40424 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY May 16 06:07:47.297308 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 06:07:47.315862 systemd-logind[1455]: New session 11 of user core. May 16 06:07:47.325089 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 06:07:48.252436 sshd[4082]: Connection closed by 172.24.4.1 port 40424 May 16 06:07:48.253360 sshd-session[4080]: pam_unix(sshd:session): session closed for user core May 16 06:07:48.259032 systemd[1]: sshd@8-172.24.4.222:22-172.24.4.1:40424.service: Deactivated successfully. May 16 06:07:48.261802 systemd[1]: session-11.scope: Deactivated successfully. May 16 06:07:48.263202 systemd-logind[1455]: Session 11 logged out. 
Waiting for processes to exit. May 16 06:07:48.265313 systemd-logind[1455]: Removed session 11. May 16 06:07:53.284930 systemd[1]: Started sshd@9-172.24.4.222:22-172.24.4.1:40440.service - OpenSSH per-connection server daemon (172.24.4.1:40440). May 16 06:07:54.681426 sshd[4097]: Accepted publickey for core from 172.24.4.1 port 40440 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY May 16 06:07:54.684890 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 06:07:54.700577 systemd-logind[1455]: New session 12 of user core. May 16 06:07:54.709091 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 06:07:55.525910 sshd[4099]: Connection closed by 172.24.4.1 port 40440 May 16 06:07:55.527353 sshd-session[4097]: pam_unix(sshd:session): session closed for user core May 16 06:07:55.535170 systemd[1]: sshd@9-172.24.4.222:22-172.24.4.1:40440.service: Deactivated successfully. May 16 06:07:55.541282 systemd[1]: session-12.scope: Deactivated successfully. May 16 06:07:55.543882 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. May 16 06:07:55.546952 systemd-logind[1455]: Removed session 12. May 16 06:08:00.569529 systemd[1]: Started sshd@10-172.24.4.222:22-172.24.4.1:49190.service - OpenSSH per-connection server daemon (172.24.4.1:49190). May 16 06:08:01.972057 sshd[4111]: Accepted publickey for core from 172.24.4.1 port 49190 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY May 16 06:08:01.975979 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 06:08:01.992926 systemd-logind[1455]: New session 13 of user core. May 16 06:08:02.007115 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 16 06:08:02.649722 sshd[4113]: Connection closed by 172.24.4.1 port 49190 May 16 06:08:02.651026 sshd-session[4111]: pam_unix(sshd:session): session closed for user core May 16 06:08:02.689921 systemd[1]: sshd@10-172.24.4.222:22-172.24.4.1:49190.service: Deactivated successfully. May 16 06:08:02.697889 systemd[1]: session-13.scope: Deactivated successfully. May 16 06:08:02.702595 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. May 16 06:08:02.718309 systemd[1]: Started sshd@11-172.24.4.222:22-172.24.4.1:49200.service - OpenSSH per-connection server daemon (172.24.4.1:49200). May 16 06:08:02.721366 systemd-logind[1455]: Removed session 13. May 16 06:08:03.819919 sshd[4125]: Accepted publickey for core from 172.24.4.1 port 49200 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY May 16 06:08:03.822238 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 06:08:03.830765 systemd-logind[1455]: New session 14 of user core. May 16 06:08:03.839878 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 06:08:04.723910 sshd[4128]: Connection closed by 172.24.4.1 port 49200 May 16 06:08:04.727221 sshd-session[4125]: pam_unix(sshd:session): session closed for user core May 16 06:08:04.746043 systemd[1]: sshd@11-172.24.4.222:22-172.24.4.1:49200.service: Deactivated successfully. May 16 06:08:04.750398 systemd[1]: session-14.scope: Deactivated successfully. May 16 06:08:04.753381 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. May 16 06:08:04.768532 systemd[1]: Started sshd@12-172.24.4.222:22-172.24.4.1:35730.service - OpenSSH per-connection server daemon (172.24.4.1:35730). May 16 06:08:04.773911 systemd-logind[1455]: Removed session 14. 
May 16 06:08:06.109988 sshd[4138]: Accepted publickey for core from 172.24.4.1 port 35730 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:06.112662 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:06.128457 systemd-logind[1455]: New session 15 of user core.
May 16 06:08:06.138038 systemd[1]: Started session-15.scope - Session 15 of User core.
May 16 06:08:06.990952 sshd[4141]: Connection closed by 172.24.4.1 port 35730
May 16 06:08:06.992325 sshd-session[4138]: pam_unix(sshd:session): session closed for user core
May 16 06:08:07.000386 systemd[1]: sshd@12-172.24.4.222:22-172.24.4.1:35730.service: Deactivated successfully.
May 16 06:08:07.005534 systemd[1]: session-15.scope: Deactivated successfully.
May 16 06:08:07.010947 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit.
May 16 06:08:07.013590 systemd-logind[1455]: Removed session 15.
May 16 06:08:12.023739 systemd[1]: Started sshd@13-172.24.4.222:22-172.24.4.1:35740.service - OpenSSH per-connection server daemon (172.24.4.1:35740).
May 16 06:08:12.976051 sshd[4154]: Accepted publickey for core from 172.24.4.1 port 35740 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:12.980387 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:12.996820 systemd-logind[1455]: New session 16 of user core.
May 16 06:08:13.005112 systemd[1]: Started session-16.scope - Session 16 of User core.
May 16 06:08:13.821932 sshd[4156]: Connection closed by 172.24.4.1 port 35740
May 16 06:08:13.822630 sshd-session[4154]: pam_unix(sshd:session): session closed for user core
May 16 06:08:13.851635 systemd[1]: sshd@13-172.24.4.222:22-172.24.4.1:35740.service: Deactivated successfully.
May 16 06:08:13.862899 systemd[1]: session-16.scope: Deactivated successfully.
May 16 06:08:13.865994 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit.
May 16 06:08:13.879551 systemd[1]: Started sshd@14-172.24.4.222:22-172.24.4.1:37762.service - OpenSSH per-connection server daemon (172.24.4.1:37762).
May 16 06:08:13.885805 systemd-logind[1455]: Removed session 16.
May 16 06:08:15.123753 sshd[4167]: Accepted publickey for core from 172.24.4.1 port 37762 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:15.126995 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:15.145840 systemd-logind[1455]: New session 17 of user core.
May 16 06:08:15.161032 systemd[1]: Started session-17.scope - Session 17 of User core.
May 16 06:08:15.952131 sshd[4172]: Connection closed by 172.24.4.1 port 37762
May 16 06:08:15.954611 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
May 16 06:08:15.964625 systemd[1]: sshd@14-172.24.4.222:22-172.24.4.1:37762.service: Deactivated successfully.
May 16 06:08:15.967350 systemd[1]: session-17.scope: Deactivated successfully.
May 16 06:08:15.968512 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
May 16 06:08:15.982230 systemd[1]: Started sshd@15-172.24.4.222:22-172.24.4.1:37776.service - OpenSSH per-connection server daemon (172.24.4.1:37776).
May 16 06:08:15.988185 systemd-logind[1455]: Removed session 17.
May 16 06:08:17.131564 sshd[4181]: Accepted publickey for core from 172.24.4.1 port 37776 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:17.135429 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:17.149848 systemd-logind[1455]: New session 18 of user core.
May 16 06:08:17.160093 systemd[1]: Started session-18.scope - Session 18 of User core.
May 16 06:08:19.260725 sshd[4184]: Connection closed by 172.24.4.1 port 37776
May 16 06:08:19.264416 sshd-session[4181]: pam_unix(sshd:session): session closed for user core
May 16 06:08:19.286059 systemd[1]: sshd@15-172.24.4.222:22-172.24.4.1:37776.service: Deactivated successfully.
May 16 06:08:19.290976 systemd[1]: session-18.scope: Deactivated successfully.
May 16 06:08:19.295434 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit.
May 16 06:08:19.311534 systemd[1]: Started sshd@16-172.24.4.222:22-172.24.4.1:37784.service - OpenSSH per-connection server daemon (172.24.4.1:37784).
May 16 06:08:19.318369 systemd-logind[1455]: Removed session 18.
May 16 06:08:20.891931 sshd[4200]: Accepted publickey for core from 172.24.4.1 port 37784 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:20.895781 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:20.909030 systemd-logind[1455]: New session 19 of user core.
May 16 06:08:20.920083 systemd[1]: Started session-19.scope - Session 19 of User core.
May 16 06:08:21.895271 sshd[4203]: Connection closed by 172.24.4.1 port 37784
May 16 06:08:21.896829 sshd-session[4200]: pam_unix(sshd:session): session closed for user core
May 16 06:08:21.915955 systemd[1]: sshd@16-172.24.4.222:22-172.24.4.1:37784.service: Deactivated successfully.
May 16 06:08:21.921318 systemd[1]: session-19.scope: Deactivated successfully.
May 16 06:08:21.925991 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit.
May 16 06:08:21.936464 systemd[1]: Started sshd@17-172.24.4.222:22-172.24.4.1:37800.service - OpenSSH per-connection server daemon (172.24.4.1:37800).
May 16 06:08:21.939388 systemd-logind[1455]: Removed session 19.
May 16 06:08:23.179428 sshd[4215]: Accepted publickey for core from 172.24.4.1 port 37800 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:23.184280 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:23.204126 systemd-logind[1455]: New session 20 of user core.
May 16 06:08:23.219272 systemd[1]: Started session-20.scope - Session 20 of User core.
May 16 06:08:24.008833 sshd[4218]: Connection closed by 172.24.4.1 port 37800
May 16 06:08:24.010341 sshd-session[4215]: pam_unix(sshd:session): session closed for user core
May 16 06:08:24.020557 systemd[1]: sshd@17-172.24.4.222:22-172.24.4.1:37800.service: Deactivated successfully.
May 16 06:08:24.028796 systemd[1]: session-20.scope: Deactivated successfully.
May 16 06:08:24.032509 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit.
May 16 06:08:24.036365 systemd-logind[1455]: Removed session 20.
May 16 06:08:29.052455 systemd[1]: Started sshd@18-172.24.4.222:22-172.24.4.1:39608.service - OpenSSH per-connection server daemon (172.24.4.1:39608).
May 16 06:08:30.212287 sshd[4231]: Accepted publickey for core from 172.24.4.1 port 39608 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:30.216313 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:30.233958 systemd-logind[1455]: New session 21 of user core.
May 16 06:08:30.248070 systemd[1]: Started session-21.scope - Session 21 of User core.
May 16 06:08:31.035862 sshd[4233]: Connection closed by 172.24.4.1 port 39608
May 16 06:08:31.037557 sshd-session[4231]: pam_unix(sshd:session): session closed for user core
May 16 06:08:31.047660 systemd[1]: sshd@18-172.24.4.222:22-172.24.4.1:39608.service: Deactivated successfully.
May 16 06:08:31.055440 systemd[1]: session-21.scope: Deactivated successfully.
May 16 06:08:31.058899 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit.
May 16 06:08:31.063511 systemd-logind[1455]: Removed session 21.
May 16 06:08:36.073649 systemd[1]: Started sshd@19-172.24.4.222:22-172.24.4.1:36598.service - OpenSSH per-connection server daemon (172.24.4.1:36598).
May 16 06:08:37.257475 sshd[4245]: Accepted publickey for core from 172.24.4.1 port 36598 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:37.261092 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:37.275910 systemd-logind[1455]: New session 22 of user core.
May 16 06:08:37.284654 systemd[1]: Started session-22.scope - Session 22 of User core.
May 16 06:08:38.119530 sshd[4247]: Connection closed by 172.24.4.1 port 36598
May 16 06:08:38.120329 sshd-session[4245]: pam_unix(sshd:session): session closed for user core
May 16 06:08:38.131801 systemd[1]: sshd@19-172.24.4.222:22-172.24.4.1:36598.service: Deactivated successfully.
May 16 06:08:38.140207 systemd[1]: session-22.scope: Deactivated successfully.
May 16 06:08:38.143286 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit.
May 16 06:08:38.146978 systemd-logind[1455]: Removed session 22.
May 16 06:08:43.162434 systemd[1]: Started sshd@20-172.24.4.222:22-172.24.4.1:36606.service - OpenSSH per-connection server daemon (172.24.4.1:36606).
May 16 06:08:44.287895 sshd[4259]: Accepted publickey for core from 172.24.4.1 port 36606 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:44.291273 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:44.317638 systemd-logind[1455]: New session 23 of user core.
May 16 06:08:44.324175 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 06:08:45.114973 sshd[4261]: Connection closed by 172.24.4.1 port 36606
May 16 06:08:45.117180 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
May 16 06:08:45.133128 systemd[1]: sshd@20-172.24.4.222:22-172.24.4.1:36606.service: Deactivated successfully.
May 16 06:08:45.140079 systemd[1]: session-23.scope: Deactivated successfully.
May 16 06:08:45.142334 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit.
May 16 06:08:45.152426 systemd[1]: Started sshd@21-172.24.4.222:22-172.24.4.1:38492.service - OpenSSH per-connection server daemon (172.24.4.1:38492).
May 16 06:08:45.156962 systemd-logind[1455]: Removed session 23.
May 16 06:08:46.327782 sshd[4272]: Accepted publickey for core from 172.24.4.1 port 38492 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:46.330802 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:46.344790 systemd-logind[1455]: New session 24 of user core.
May 16 06:08:46.357146 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 06:08:48.658452 containerd[1477]: time="2025-05-16T06:08:48.656549223Z" level=info msg="StopContainer for \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\" with timeout 30 (s)"
May 16 06:08:48.663984 containerd[1477]: time="2025-05-16T06:08:48.660221719Z" level=info msg="Stop container \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\" with signal terminated"
May 16 06:08:48.686481 systemd[1]: cri-containerd-38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b.scope: Deactivated successfully.
May 16 06:08:48.687480 systemd[1]: cri-containerd-38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b.scope: Consumed 1.234s CPU time, 28.9M memory peak, 4K written to disk.
May 16 06:08:48.702502 systemd[1]: run-containerd-runc-k8s.io-758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b-runc.61EVgK.mount: Deactivated successfully.
May 16 06:08:48.738574 containerd[1477]: time="2025-05-16T06:08:48.738183543Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 06:08:48.758207 containerd[1477]: time="2025-05-16T06:08:48.757812927Z" level=info msg="StopContainer for \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\" with timeout 2 (s)"
May 16 06:08:48.759073 containerd[1477]: time="2025-05-16T06:08:48.758989443Z" level=info msg="Stop container \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\" with signal terminated"
May 16 06:08:48.763529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b-rootfs.mount: Deactivated successfully.
May 16 06:08:48.778055 systemd-networkd[1382]: lxc_health: Link DOWN
May 16 06:08:48.778067 systemd-networkd[1382]: lxc_health: Lost carrier
May 16 06:08:48.789346 containerd[1477]: time="2025-05-16T06:08:48.785094298Z" level=info msg="shim disconnected" id=38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b namespace=k8s.io
May 16 06:08:48.789346 containerd[1477]: time="2025-05-16T06:08:48.785235634Z" level=warning msg="cleaning up after shim disconnected" id=38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b namespace=k8s.io
May 16 06:08:48.789346 containerd[1477]: time="2025-05-16T06:08:48.785259238Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 06:08:48.801780 systemd[1]: cri-containerd-758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b.scope: Deactivated successfully.
May 16 06:08:48.802084 systemd[1]: cri-containerd-758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b.scope: Consumed 10.416s CPU time, 122.4M memory peak, 136K read from disk, 13.3M written to disk.
May 16 06:08:48.835891 containerd[1477]: time="2025-05-16T06:08:48.835842138Z" level=info msg="StopContainer for \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\" returns successfully"
May 16 06:08:48.837650 containerd[1477]: time="2025-05-16T06:08:48.837624269Z" level=info msg="StopPodSandbox for \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\""
May 16 06:08:48.838143 containerd[1477]: time="2025-05-16T06:08:48.837826519Z" level=info msg="Container to stop \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 06:08:48.841121 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814-shm.mount: Deactivated successfully.
May 16 06:08:48.853736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b-rootfs.mount: Deactivated successfully.
May 16 06:08:48.856459 systemd[1]: cri-containerd-b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814.scope: Deactivated successfully.
May 16 06:08:48.883841 containerd[1477]: time="2025-05-16T06:08:48.883750674Z" level=info msg="shim disconnected" id=758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b namespace=k8s.io
May 16 06:08:48.884983 containerd[1477]: time="2025-05-16T06:08:48.884475313Z" level=warning msg="cleaning up after shim disconnected" id=758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b namespace=k8s.io
May 16 06:08:48.885243 containerd[1477]: time="2025-05-16T06:08:48.885225038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 06:08:48.901576 containerd[1477]: time="2025-05-16T06:08:48.901509701Z" level=info msg="shim disconnected" id=b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814 namespace=k8s.io
May 16 06:08:48.903298 containerd[1477]: time="2025-05-16T06:08:48.903262988Z" level=warning msg="cleaning up after shim disconnected" id=b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814 namespace=k8s.io
May 16 06:08:48.903430 containerd[1477]: time="2025-05-16T06:08:48.903404884Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 06:08:48.926563 containerd[1477]: time="2025-05-16T06:08:48.926421529Z" level=info msg="StopContainer for \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\" returns successfully"
May 16 06:08:48.928261 containerd[1477]: time="2025-05-16T06:08:48.928234819Z" level=info msg="TearDown network for sandbox \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\" successfully"
May 16 06:08:48.928411 containerd[1477]: time="2025-05-16T06:08:48.928389689Z" level=info msg="StopPodSandbox for \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\" returns successfully"
May 16 06:08:48.928992 containerd[1477]: time="2025-05-16T06:08:48.928969286Z" level=info msg="StopPodSandbox for \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\""
May 16 06:08:48.929193 containerd[1477]: time="2025-05-16T06:08:48.929124888Z" level=info msg="Container to stop \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 06:08:48.929318 containerd[1477]: time="2025-05-16T06:08:48.929280941Z" level=info msg="Container to stop \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 06:08:48.929421 containerd[1477]: time="2025-05-16T06:08:48.929402248Z" level=info msg="Container to stop \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 06:08:48.929508 containerd[1477]: time="2025-05-16T06:08:48.929491075Z" level=info msg="Container to stop \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 06:08:48.929605 containerd[1477]: time="2025-05-16T06:08:48.929586694Z" level=info msg="Container to stop \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 06:08:48.938918 systemd[1]: cri-containerd-175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc.scope: Deactivated successfully.
May 16 06:08:48.999854 containerd[1477]: time="2025-05-16T06:08:48.999770395Z" level=info msg="shim disconnected" id=175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc namespace=k8s.io
May 16 06:08:49.000558 containerd[1477]: time="2025-05-16T06:08:49.000081759Z" level=warning msg="cleaning up after shim disconnected" id=175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc namespace=k8s.io
May 16 06:08:49.000558 containerd[1477]: time="2025-05-16T06:08:49.000101546Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 06:08:49.023542 containerd[1477]: time="2025-05-16T06:08:49.023462506Z" level=info msg="TearDown network for sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" successfully"
May 16 06:08:49.023542 containerd[1477]: time="2025-05-16T06:08:49.023516888Z" level=info msg="StopPodSandbox for \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" returns successfully"
May 16 06:08:49.119794 kubelet[2662]: I0516 06:08:49.119242 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zj4sk\" (UniqueName: \"kubernetes.io/projected/df3bf59b-50c8-468c-b13d-4976f127f15f-kube-api-access-zj4sk\") pod \"df3bf59b-50c8-468c-b13d-4976f127f15f\" (UID: \"df3bf59b-50c8-468c-b13d-4976f127f15f\") "
May 16 06:08:49.119794 kubelet[2662]: I0516 06:08:49.119457 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df3bf59b-50c8-468c-b13d-4976f127f15f-cilium-config-path\") pod \"df3bf59b-50c8-468c-b13d-4976f127f15f\" (UID: \"df3bf59b-50c8-468c-b13d-4976f127f15f\") "
May 16 06:08:49.125732 kubelet[2662]: I0516 06:08:49.124903 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df3bf59b-50c8-468c-b13d-4976f127f15f-kube-api-access-zj4sk" (OuterVolumeSpecName: "kube-api-access-zj4sk") pod "df3bf59b-50c8-468c-b13d-4976f127f15f" (UID: "df3bf59b-50c8-468c-b13d-4976f127f15f"). InnerVolumeSpecName "kube-api-access-zj4sk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 06:08:49.128388 kubelet[2662]: I0516 06:08:49.128325 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df3bf59b-50c8-468c-b13d-4976f127f15f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "df3bf59b-50c8-468c-b13d-4976f127f15f" (UID: "df3bf59b-50c8-468c-b13d-4976f127f15f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 16 06:08:49.220669 kubelet[2662]: I0516 06:08:49.220464 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8069b4e8-57dd-493e-97dd-2560a89bac2a-clustermesh-secrets\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.221885 kubelet[2662]: I0516 06:08:49.221156 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-lib-modules\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.221885 kubelet[2662]: I0516 06:08:49.221371 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-run\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.221885 kubelet[2662]: I0516 06:08:49.221511 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-host-proc-sys-kernel\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.221885 kubelet[2662]: I0516 06:08:49.221623 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-host-proc-sys-net\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.222890 kubelet[2662]: I0516 06:08:49.221831 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-bpf-maps\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.222890 kubelet[2662]: I0516 06:08:49.222222 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-config-path\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.223732 kubelet[2662]: I0516 06:08:49.223111 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-hostproc\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.223732 kubelet[2662]: I0516 06:08:49.223185 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-cgroup\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.223732 kubelet[2662]: I0516 06:08:49.223247 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8069b4e8-57dd-493e-97dd-2560a89bac2a-hubble-tls\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.223732 kubelet[2662]: I0516 06:08:49.223288 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cni-path\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.223732 kubelet[2662]: I0516 06:08:49.223345 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-etc-cni-netd\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.223732 kubelet[2662]: I0516 06:08:49.223401 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-xtables-lock\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.224327 kubelet[2662]: I0516 06:08:49.223495 2662 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nbkv\" (UniqueName: \"kubernetes.io/projected/8069b4e8-57dd-493e-97dd-2560a89bac2a-kube-api-access-4nbkv\") pod \"8069b4e8-57dd-493e-97dd-2560a89bac2a\" (UID: \"8069b4e8-57dd-493e-97dd-2560a89bac2a\") "
May 16 06:08:49.224600 kubelet[2662]: I0516 06:08:49.223669 2662 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zj4sk\" (UniqueName: \"kubernetes.io/projected/df3bf59b-50c8-468c-b13d-4976f127f15f-kube-api-access-zj4sk\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.225606 kubelet[2662]: I0516 06:08:49.224924 2662 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/df3bf59b-50c8-468c-b13d-4976f127f15f-cilium-config-path\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.226929 kubelet[2662]: I0516 06:08:49.226874 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 06:08:49.227305 kubelet[2662]: I0516 06:08:49.227260 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 06:08:49.227572 kubelet[2662]: I0516 06:08:49.227499 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 06:08:49.227891 kubelet[2662]: I0516 06:08:49.227846 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 06:08:49.229716 kubelet[2662]: I0516 06:08:49.228233 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 06:08:49.229716 kubelet[2662]: I0516 06:08:49.228778 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8069b4e8-57dd-493e-97dd-2560a89bac2a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 16 06:08:49.234379 kubelet[2662]: I0516 06:08:49.234301 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8069b4e8-57dd-493e-97dd-2560a89bac2a-kube-api-access-4nbkv" (OuterVolumeSpecName: "kube-api-access-4nbkv") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "kube-api-access-4nbkv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 06:08:49.235505 kubelet[2662]: I0516 06:08:49.235448 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 16 06:08:49.235911 kubelet[2662]: I0516 06:08:49.235843 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cni-path" (OuterVolumeSpecName: "cni-path") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 06:08:49.236207 kubelet[2662]: I0516 06:08:49.236165 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 06:08:49.236440 kubelet[2662]: I0516 06:08:49.236401 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 06:08:49.236737 kubelet[2662]: I0516 06:08:49.236638 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-hostproc" (OuterVolumeSpecName: "hostproc") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 06:08:49.237017 kubelet[2662]: I0516 06:08:49.236975 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 06:08:49.240066 kubelet[2662]: I0516 06:08:49.240001 2662 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8069b4e8-57dd-493e-97dd-2560a89bac2a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8069b4e8-57dd-493e-97dd-2560a89bac2a" (UID: "8069b4e8-57dd-493e-97dd-2560a89bac2a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 06:08:49.325383 kubelet[2662]: I0516 06:08:49.325271 2662 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-cgroup\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.325383 kubelet[2662]: I0516 06:08:49.325349 2662 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8069b4e8-57dd-493e-97dd-2560a89bac2a-hubble-tls\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.325383 kubelet[2662]: I0516 06:08:49.325381 2662 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cni-path\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.325383 kubelet[2662]: I0516 06:08:49.325407 2662 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-etc-cni-netd\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.326074 kubelet[2662]: I0516 06:08:49.325437 2662 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-xtables-lock\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.326074 kubelet[2662]: I0516 06:08:49.325464 2662 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4nbkv\" (UniqueName: \"kubernetes.io/projected/8069b4e8-57dd-493e-97dd-2560a89bac2a-kube-api-access-4nbkv\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.326074 kubelet[2662]: I0516 06:08:49.325491 2662 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8069b4e8-57dd-493e-97dd-2560a89bac2a-clustermesh-secrets\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.326074 kubelet[2662]: I0516 06:08:49.325516 2662 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-lib-modules\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.326074 kubelet[2662]: I0516 06:08:49.325541 2662 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-run\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.326074 kubelet[2662]: I0516 06:08:49.325568 2662 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-host-proc-sys-kernel\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.326074 kubelet[2662]: I0516 06:08:49.325592 2662 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-host-proc-sys-net\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.326783 kubelet[2662]: I0516 06:08:49.325616 2662 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-bpf-maps\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.326783 kubelet[2662]: I0516 06:08:49.325640 2662 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8069b4e8-57dd-493e-97dd-2560a89bac2a-cilium-config-path\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.326783 kubelet[2662]: I0516 06:08:49.325793 2662 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8069b4e8-57dd-493e-97dd-2560a89bac2a-hostproc\") on node \"ci-4230-1-1-n-15f3e1d893.novalocal\" DevicePath \"\""
May 16 06:08:49.354608 kubelet[2662]: E0516 06:08:49.354458 2662 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 06:08:49.610747 kubelet[2662]: I0516 06:08:49.610242 2662 scope.go:117] "RemoveContainer" containerID="758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b"
May 16 06:08:49.620747 containerd[1477]: time="2025-05-16T06:08:49.620506162Z" level=info msg="RemoveContainer for \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\""
May 16 06:08:49.645774 systemd[1]: Removed slice kubepods-burstable-pod8069b4e8_57dd_493e_97dd_2560a89bac2a.slice - libcontainer container kubepods-burstable-pod8069b4e8_57dd_493e_97dd_2560a89bac2a.slice.
May 16 06:08:49.646817 systemd[1]: kubepods-burstable-pod8069b4e8_57dd_493e_97dd_2560a89bac2a.slice: Consumed 10.493s CPU time, 122.8M memory peak, 136K read from disk, 13.3M written to disk. May 16 06:08:49.655215 containerd[1477]: time="2025-05-16T06:08:49.654957096Z" level=info msg="RemoveContainer for \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\" returns successfully" May 16 06:08:49.662807 kubelet[2662]: I0516 06:08:49.659998 2662 scope.go:117] "RemoveContainer" containerID="685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f" May 16 06:08:49.662191 systemd[1]: Removed slice kubepods-besteffort-poddf3bf59b_50c8_468c_b13d_4976f127f15f.slice - libcontainer container kubepods-besteffort-poddf3bf59b_50c8_468c_b13d_4976f127f15f.slice. May 16 06:08:49.663279 containerd[1477]: time="2025-05-16T06:08:49.661709035Z" level=info msg="RemoveContainer for \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\"" May 16 06:08:49.664297 systemd[1]: kubepods-besteffort-poddf3bf59b_50c8_468c_b13d_4976f127f15f.slice: Consumed 1.261s CPU time, 29.1M memory peak, 4K written to disk. 
May 16 06:08:49.668669 containerd[1477]: time="2025-05-16T06:08:49.667937213Z" level=info msg="RemoveContainer for \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\" returns successfully" May 16 06:08:49.669793 kubelet[2662]: I0516 06:08:49.668131 2662 scope.go:117] "RemoveContainer" containerID="308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4" May 16 06:08:49.673790 containerd[1477]: time="2025-05-16T06:08:49.671626330Z" level=info msg="RemoveContainer for \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\"" May 16 06:08:49.678853 containerd[1477]: time="2025-05-16T06:08:49.678389921Z" level=info msg="RemoveContainer for \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\" returns successfully" May 16 06:08:49.679025 kubelet[2662]: I0516 06:08:49.678724 2662 scope.go:117] "RemoveContainer" containerID="e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e" May 16 06:08:49.680626 containerd[1477]: time="2025-05-16T06:08:49.680282850Z" level=info msg="RemoveContainer for \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\"" May 16 06:08:49.684916 containerd[1477]: time="2025-05-16T06:08:49.684845715Z" level=info msg="RemoveContainer for \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\" returns successfully" May 16 06:08:49.687753 kubelet[2662]: I0516 06:08:49.687701 2662 scope.go:117] "RemoveContainer" containerID="8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a" May 16 06:08:49.689600 containerd[1477]: time="2025-05-16T06:08:49.689120761Z" level=info msg="RemoveContainer for \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\"" May 16 06:08:49.694952 containerd[1477]: time="2025-05-16T06:08:49.694919103Z" level=info msg="RemoveContainer for \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\" returns successfully" May 16 06:08:49.696610 kubelet[2662]: I0516 06:08:49.696511 2662 scope.go:117] 
"RemoveContainer" containerID="758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b" May 16 06:08:49.697319 containerd[1477]: time="2025-05-16T06:08:49.697153552Z" level=error msg="ContainerStatus for \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\": not found" May 16 06:08:49.697942 kubelet[2662]: E0516 06:08:49.697568 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\": not found" containerID="758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b" May 16 06:08:49.697942 kubelet[2662]: I0516 06:08:49.697647 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b"} err="failed to get container status \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\": rpc error: code = NotFound desc = an error occurred when try to find container \"758f0ff5a917b69d04c9414989b529bcd9024a707cd63427ba1643bb4bc3509b\": not found" May 16 06:08:49.697942 kubelet[2662]: I0516 06:08:49.697868 2662 scope.go:117] "RemoveContainer" containerID="685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f" May 16 06:08:49.698207 containerd[1477]: time="2025-05-16T06:08:49.698177111Z" level=error msg="ContainerStatus for \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\": not found" May 16 06:08:49.698643 kubelet[2662]: E0516 06:08:49.698510 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\": not found" containerID="685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f" May 16 06:08:49.698643 kubelet[2662]: I0516 06:08:49.698579 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f"} err="failed to get container status \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\": rpc error: code = NotFound desc = an error occurred when try to find container \"685094a71a31e08ba2968400f18fc2cc096fc45a3758289664a1f6a4b00cd51f\": not found" May 16 06:08:49.698643 kubelet[2662]: I0516 06:08:49.698598 2662 scope.go:117] "RemoveContainer" containerID="308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4" May 16 06:08:49.699382 containerd[1477]: time="2025-05-16T06:08:49.698995105Z" level=error msg="ContainerStatus for \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\": not found" May 16 06:08:49.699369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc-rootfs.mount: Deactivated successfully. 
May 16 06:08:49.699739 kubelet[2662]: E0516 06:08:49.699174 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\": not found" containerID="308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4" May 16 06:08:49.699739 kubelet[2662]: I0516 06:08:49.699195 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4"} err="failed to get container status \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\": rpc error: code = NotFound desc = an error occurred when try to find container \"308395bca46455a3c56d37279f9ee89c43c5adfd68451c18588d85262247bac4\": not found" May 16 06:08:49.699739 kubelet[2662]: I0516 06:08:49.699293 2662 scope.go:117] "RemoveContainer" containerID="e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e" May 16 06:08:49.700072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc-shm.mount: Deactivated successfully. 
May 16 06:08:49.701153 kubelet[2662]: E0516 06:08:49.700326 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\": not found" containerID="e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e" May 16 06:08:49.701153 kubelet[2662]: I0516 06:08:49.700346 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e"} err="failed to get container status \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\": not found" May 16 06:08:49.701153 kubelet[2662]: I0516 06:08:49.700362 2662 scope.go:117] "RemoveContainer" containerID="8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a" May 16 06:08:49.701153 kubelet[2662]: E0516 06:08:49.700642 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\": not found" containerID="8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a" May 16 06:08:49.701153 kubelet[2662]: I0516 06:08:49.700662 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a"} err="failed to get container status \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\": not found" May 16 06:08:49.701153 kubelet[2662]: I0516 06:08:49.700706 2662 scope.go:117] 
"RemoveContainer" containerID="38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b" May 16 06:08:49.701360 containerd[1477]: time="2025-05-16T06:08:49.700203891Z" level=error msg="ContainerStatus for \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e90000559617bb60fe2b232beb89655c28b652c3e4280a038c490c15d9ce893e\": not found" May 16 06:08:49.701360 containerd[1477]: time="2025-05-16T06:08:49.700541975Z" level=error msg="ContainerStatus for \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8798981885fa35dd7342f63cb9bf0028e8680eb01d6c67dfad71397024e71a7a\": not found" May 16 06:08:49.700282 systemd[1]: var-lib-kubelet-pods-8069b4e8\x2d57dd\x2d493e\x2d97dd\x2d2560a89bac2a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nbkv.mount: Deactivated successfully. May 16 06:08:49.700429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814-rootfs.mount: Deactivated successfully. May 16 06:08:49.700566 systemd[1]: var-lib-kubelet-pods-df3bf59b\x2d50c8\x2d468c\x2db13d\x2d4976f127f15f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzj4sk.mount: Deactivated successfully. May 16 06:08:49.700760 systemd[1]: var-lib-kubelet-pods-8069b4e8\x2d57dd\x2d493e\x2d97dd\x2d2560a89bac2a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 06:08:49.700908 systemd[1]: var-lib-kubelet-pods-8069b4e8\x2d57dd\x2d493e\x2d97dd\x2d2560a89bac2a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 16 06:08:49.707821 containerd[1477]: time="2025-05-16T06:08:49.707182065Z" level=info msg="RemoveContainer for \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\"" May 16 06:08:49.713409 containerd[1477]: time="2025-05-16T06:08:49.713359337Z" level=info msg="RemoveContainer for \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\" returns successfully" May 16 06:08:49.714378 kubelet[2662]: I0516 06:08:49.714041 2662 scope.go:117] "RemoveContainer" containerID="38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b" May 16 06:08:49.714890 containerd[1477]: time="2025-05-16T06:08:49.714639036Z" level=error msg="ContainerStatus for \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\": not found" May 16 06:08:49.715309 kubelet[2662]: E0516 06:08:49.715161 2662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\": not found" containerID="38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b" May 16 06:08:49.715309 kubelet[2662]: I0516 06:08:49.715285 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b"} err="failed to get container status \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\": rpc error: code = NotFound desc = an error occurred when try to find container \"38c97c768b18356e39083d67aeb28f3d0d8404707f616d029afe961b1806ba4b\": not found" May 16 06:08:50.101842 kubelet[2662]: I0516 06:08:50.101002 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8069b4e8-57dd-493e-97dd-2560a89bac2a" 
path="/var/lib/kubelet/pods/8069b4e8-57dd-493e-97dd-2560a89bac2a/volumes" May 16 06:08:50.103345 kubelet[2662]: I0516 06:08:50.103243 2662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df3bf59b-50c8-468c-b13d-4976f127f15f" path="/var/lib/kubelet/pods/df3bf59b-50c8-468c-b13d-4976f127f15f/volumes" May 16 06:08:50.675804 sshd[4275]: Connection closed by 172.24.4.1 port 38492 May 16 06:08:50.677906 sshd-session[4272]: pam_unix(sshd:session): session closed for user core May 16 06:08:50.705218 systemd[1]: sshd@21-172.24.4.222:22-172.24.4.1:38492.service: Deactivated successfully. May 16 06:08:50.711951 systemd[1]: session-24.scope: Deactivated successfully. May 16 06:08:50.712780 systemd[1]: session-24.scope: Consumed 1.247s CPU time, 23.7M memory peak. May 16 06:08:50.718172 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit. May 16 06:08:50.729454 systemd[1]: Started sshd@22-172.24.4.222:22-172.24.4.1:38506.service - OpenSSH per-connection server daemon (172.24.4.1:38506). May 16 06:08:50.734410 systemd-logind[1455]: Removed session 24. May 16 06:08:50.956876 kubelet[2662]: I0516 06:08:50.954845 2662 setters.go:602] "Node became not ready" node="ci-4230-1-1-n-15f3e1d893.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T06:08:50Z","lastTransitionTime":"2025-05-16T06:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 16 06:08:51.851705 sshd[4434]: Accepted publickey for core from 172.24.4.1 port 38506 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY May 16 06:08:51.853250 sshd-session[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 06:08:51.865866 systemd-logind[1455]: New session 25 of user core. May 16 06:08:51.874983 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 16 06:08:53.205192 kubelet[2662]: I0516 06:08:53.204328 2662 memory_manager.go:355] "RemoveStaleState removing state" podUID="df3bf59b-50c8-468c-b13d-4976f127f15f" containerName="cilium-operator" May 16 06:08:53.205192 kubelet[2662]: I0516 06:08:53.204378 2662 memory_manager.go:355] "RemoveStaleState removing state" podUID="8069b4e8-57dd-493e-97dd-2560a89bac2a" containerName="cilium-agent" May 16 06:08:53.219533 systemd[1]: Created slice kubepods-burstable-podb9c26b9b_414f_4f31_80b1_863308413c35.slice - libcontainer container kubepods-burstable-podb9c26b9b_414f_4f31_80b1_863308413c35.slice. May 16 06:08:53.337309 sshd[4439]: Connection closed by 172.24.4.1 port 38506 May 16 06:08:53.338209 sshd-session[4434]: pam_unix(sshd:session): session closed for user core May 16 06:08:53.356526 kubelet[2662]: I0516 06:08:53.355939 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b9c26b9b-414f-4f31-80b1-863308413c35-hubble-tls\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.356526 kubelet[2662]: I0516 06:08:53.356040 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b9c26b9b-414f-4f31-80b1-863308413c35-bpf-maps\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.356526 kubelet[2662]: I0516 06:08:53.356127 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b9c26b9b-414f-4f31-80b1-863308413c35-cilium-cgroup\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.356526 kubelet[2662]: I0516 06:08:53.356289 2662 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b9c26b9b-414f-4f31-80b1-863308413c35-cilium-ipsec-secrets\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.356526 kubelet[2662]: I0516 06:08:53.356462 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b9c26b9b-414f-4f31-80b1-863308413c35-etc-cni-netd\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.356526 kubelet[2662]: I0516 06:08:53.356525 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9c26b9b-414f-4f31-80b1-863308413c35-xtables-lock\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.358335 kubelet[2662]: I0516 06:08:53.356582 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b9c26b9b-414f-4f31-80b1-863308413c35-clustermesh-secrets\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.358335 kubelet[2662]: I0516 06:08:53.356637 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b9c26b9b-414f-4f31-80b1-863308413c35-host-proc-sys-kernel\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.358335 kubelet[2662]: I0516 06:08:53.356826 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjbw\" 
(UniqueName: \"kubernetes.io/projected/b9c26b9b-414f-4f31-80b1-863308413c35-kube-api-access-4kjbw\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.358335 kubelet[2662]: I0516 06:08:53.356925 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b9c26b9b-414f-4f31-80b1-863308413c35-cilium-run\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.358335 kubelet[2662]: I0516 06:08:53.356986 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b9c26b9b-414f-4f31-80b1-863308413c35-hostproc\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.358335 kubelet[2662]: I0516 06:08:53.357083 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9c26b9b-414f-4f31-80b1-863308413c35-lib-modules\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.358990 kubelet[2662]: I0516 06:08:53.357241 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b9c26b9b-414f-4f31-80b1-863308413c35-cni-path\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.358990 kubelet[2662]: I0516 06:08:53.357298 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b9c26b9b-414f-4f31-80b1-863308413c35-cilium-config-path\") pod \"cilium-7s9kr\" (UID: 
\"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.358990 kubelet[2662]: I0516 06:08:53.357345 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b9c26b9b-414f-4f31-80b1-863308413c35-host-proc-sys-net\") pod \"cilium-7s9kr\" (UID: \"b9c26b9b-414f-4f31-80b1-863308413c35\") " pod="kube-system/cilium-7s9kr" May 16 06:08:53.361500 systemd[1]: sshd@22-172.24.4.222:22-172.24.4.1:38506.service: Deactivated successfully. May 16 06:08:53.368013 systemd[1]: session-25.scope: Deactivated successfully. May 16 06:08:53.370741 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit. May 16 06:08:53.384543 systemd[1]: Started sshd@23-172.24.4.222:22-172.24.4.1:38514.service - OpenSSH per-connection server daemon (172.24.4.1:38514). May 16 06:08:53.388997 systemd-logind[1455]: Removed session 25. May 16 06:08:53.826417 containerd[1477]: time="2025-05-16T06:08:53.826321939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7s9kr,Uid:b9c26b9b-414f-4f31-80b1-863308413c35,Namespace:kube-system,Attempt:0,}" May 16 06:08:53.892795 containerd[1477]: time="2025-05-16T06:08:53.891219549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 06:08:53.892795 containerd[1477]: time="2025-05-16T06:08:53.891470188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 06:08:53.892795 containerd[1477]: time="2025-05-16T06:08:53.891537775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:08:53.892795 containerd[1477]: time="2025-05-16T06:08:53.891808954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 06:08:53.949139 systemd[1]: Started cri-containerd-9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e.scope - libcontainer container 9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e. May 16 06:08:53.976518 containerd[1477]: time="2025-05-16T06:08:53.976213509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7s9kr,Uid:b9c26b9b-414f-4f31-80b1-863308413c35,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\"" May 16 06:08:53.981443 containerd[1477]: time="2025-05-16T06:08:53.981387180Z" level=info msg="CreateContainer within sandbox \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 06:08:53.999701 containerd[1477]: time="2025-05-16T06:08:53.999590891Z" level=info msg="CreateContainer within sandbox \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3629418ae604e5ebde0435c685daca0cd415762c992dce8dabfbbc3b9efd4351\"" May 16 06:08:54.000696 containerd[1477]: time="2025-05-16T06:08:54.000450582Z" level=info msg="StartContainer for \"3629418ae604e5ebde0435c685daca0cd415762c992dce8dabfbbc3b9efd4351\"" May 16 06:08:54.027852 systemd[1]: Started cri-containerd-3629418ae604e5ebde0435c685daca0cd415762c992dce8dabfbbc3b9efd4351.scope - libcontainer container 3629418ae604e5ebde0435c685daca0cd415762c992dce8dabfbbc3b9efd4351. May 16 06:08:54.057165 containerd[1477]: time="2025-05-16T06:08:54.057065694Z" level=info msg="StartContainer for \"3629418ae604e5ebde0435c685daca0cd415762c992dce8dabfbbc3b9efd4351\" returns successfully" May 16 06:08:54.074386 systemd[1]: cri-containerd-3629418ae604e5ebde0435c685daca0cd415762c992dce8dabfbbc3b9efd4351.scope: Deactivated successfully. 
May 16 06:08:54.120329 containerd[1477]: time="2025-05-16T06:08:54.120042422Z" level=info msg="shim disconnected" id=3629418ae604e5ebde0435c685daca0cd415762c992dce8dabfbbc3b9efd4351 namespace=k8s.io May 16 06:08:54.120329 containerd[1477]: time="2025-05-16T06:08:54.120097034Z" level=warning msg="cleaning up after shim disconnected" id=3629418ae604e5ebde0435c685daca0cd415762c992dce8dabfbbc3b9efd4351 namespace=k8s.io May 16 06:08:54.120329 containerd[1477]: time="2025-05-16T06:08:54.120106101Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 06:08:54.356818 kubelet[2662]: E0516 06:08:54.356734 2662 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 06:08:54.671229 containerd[1477]: time="2025-05-16T06:08:54.671056822Z" level=info msg="CreateContainer within sandbox \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 06:08:54.715557 containerd[1477]: time="2025-05-16T06:08:54.715460427Z" level=info msg="CreateContainer within sandbox \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d504f71d4a5765f484f1b82e4e28d7c8ff61f941df333f8f1cdc4feb204a1197\"" May 16 06:08:54.718025 containerd[1477]: time="2025-05-16T06:08:54.717883951Z" level=info msg="StartContainer for \"d504f71d4a5765f484f1b82e4e28d7c8ff61f941df333f8f1cdc4feb204a1197\"" May 16 06:08:54.768910 systemd[1]: Started cri-containerd-d504f71d4a5765f484f1b82e4e28d7c8ff61f941df333f8f1cdc4feb204a1197.scope - libcontainer container d504f71d4a5765f484f1b82e4e28d7c8ff61f941df333f8f1cdc4feb204a1197. 
May 16 06:08:54.811998 containerd[1477]: time="2025-05-16T06:08:54.811925285Z" level=info msg="StartContainer for \"d504f71d4a5765f484f1b82e4e28d7c8ff61f941df333f8f1cdc4feb204a1197\" returns successfully" May 16 06:08:54.820617 systemd[1]: cri-containerd-d504f71d4a5765f484f1b82e4e28d7c8ff61f941df333f8f1cdc4feb204a1197.scope: Deactivated successfully. May 16 06:08:54.845453 containerd[1477]: time="2025-05-16T06:08:54.845391482Z" level=info msg="shim disconnected" id=d504f71d4a5765f484f1b82e4e28d7c8ff61f941df333f8f1cdc4feb204a1197 namespace=k8s.io May 16 06:08:54.845453 containerd[1477]: time="2025-05-16T06:08:54.845447356Z" level=warning msg="cleaning up after shim disconnected" id=d504f71d4a5765f484f1b82e4e28d7c8ff61f941df333f8f1cdc4feb204a1197 namespace=k8s.io May 16 06:08:54.845453 containerd[1477]: time="2025-05-16T06:08:54.845457345Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 16 06:08:54.925741 sshd[4448]: Accepted publickey for core from 172.24.4.1 port 38514 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY May 16 06:08:54.926515 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 06:08:54.940120 systemd-logind[1455]: New session 26 of user core. May 16 06:08:54.948024 systemd[1]: Started session-26.scope - Session 26 of User core. May 16 06:08:55.481499 sshd[4620]: Connection closed by 172.24.4.1 port 38514 May 16 06:08:55.480662 sshd-session[4448]: pam_unix(sshd:session): session closed for user core May 16 06:08:55.484197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d504f71d4a5765f484f1b82e4e28d7c8ff61f941df333f8f1cdc4feb204a1197-rootfs.mount: Deactivated successfully. May 16 06:08:55.498315 systemd[1]: sshd@23-172.24.4.222:22-172.24.4.1:38514.service: Deactivated successfully. May 16 06:08:55.501613 systemd[1]: session-26.scope: Deactivated successfully. May 16 06:08:55.504747 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit. 
May 16 06:08:55.513126 systemd[1]: Started sshd@24-172.24.4.222:22-172.24.4.1:43412.service - OpenSSH per-connection server daemon (172.24.4.1:43412). May 16 06:08:55.517626 systemd-logind[1455]: Removed session 26. May 16 06:08:55.679732 containerd[1477]: time="2025-05-16T06:08:55.678081924Z" level=info msg="CreateContainer within sandbox \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 06:08:55.716407 containerd[1477]: time="2025-05-16T06:08:55.715545928Z" level=info msg="CreateContainer within sandbox \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"51e513a4344e09252c5e42ef6231f8e7256706b9d6906fc6b80b876edd581021\"" May 16 06:08:55.717054 containerd[1477]: time="2025-05-16T06:08:55.716852157Z" level=info msg="StartContainer for \"51e513a4344e09252c5e42ef6231f8e7256706b9d6906fc6b80b876edd581021\"" May 16 06:08:55.776852 systemd[1]: Started cri-containerd-51e513a4344e09252c5e42ef6231f8e7256706b9d6906fc6b80b876edd581021.scope - libcontainer container 51e513a4344e09252c5e42ef6231f8e7256706b9d6906fc6b80b876edd581021. May 16 06:08:55.822028 containerd[1477]: time="2025-05-16T06:08:55.820626591Z" level=info msg="StartContainer for \"51e513a4344e09252c5e42ef6231f8e7256706b9d6906fc6b80b876edd581021\" returns successfully" May 16 06:08:55.821401 systemd[1]: cri-containerd-51e513a4344e09252c5e42ef6231f8e7256706b9d6906fc6b80b876edd581021.scope: Deactivated successfully. 
May 16 06:08:55.861232 containerd[1477]: time="2025-05-16T06:08:55.860972397Z" level=info msg="shim disconnected" id=51e513a4344e09252c5e42ef6231f8e7256706b9d6906fc6b80b876edd581021 namespace=k8s.io
May 16 06:08:55.861232 containerd[1477]: time="2025-05-16T06:08:55.861201277Z" level=warning msg="cleaning up after shim disconnected" id=51e513a4344e09252c5e42ef6231f8e7256706b9d6906fc6b80b876edd581021 namespace=k8s.io
May 16 06:08:55.862000 containerd[1477]: time="2025-05-16T06:08:55.861385561Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 06:08:56.487313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51e513a4344e09252c5e42ef6231f8e7256706b9d6906fc6b80b876edd581021-rootfs.mount: Deactivated successfully.
May 16 06:08:56.743129 containerd[1477]: time="2025-05-16T06:08:56.742498186Z" level=info msg="CreateContainer within sandbox \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 06:08:56.743383 sshd[4626]: Accepted publickey for core from 172.24.4.1 port 43412 ssh2: RSA SHA256:z1LWItCHXBSMgXvA9vEtKueYwpTpeMyEci+7MaEaswY
May 16 06:08:56.750825 sshd-session[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 06:08:56.775776 systemd-logind[1455]: New session 27 of user core.
May 16 06:08:56.781661 systemd[1]: Started session-27.scope - Session 27 of User core.
May 16 06:08:56.798871 containerd[1477]: time="2025-05-16T06:08:56.798829475Z" level=info msg="CreateContainer within sandbox \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7ad30fb8cc3cc41411499f37416313913ee31cafeb42f6832b33c3901aef4432\""
May 16 06:08:56.801837 containerd[1477]: time="2025-05-16T06:08:56.800701485Z" level=info msg="StartContainer for \"7ad30fb8cc3cc41411499f37416313913ee31cafeb42f6832b33c3901aef4432\""
May 16 06:08:56.862867 systemd[1]: Started cri-containerd-7ad30fb8cc3cc41411499f37416313913ee31cafeb42f6832b33c3901aef4432.scope - libcontainer container 7ad30fb8cc3cc41411499f37416313913ee31cafeb42f6832b33c3901aef4432.
May 16 06:08:56.901945 systemd[1]: cri-containerd-7ad30fb8cc3cc41411499f37416313913ee31cafeb42f6832b33c3901aef4432.scope: Deactivated successfully.
May 16 06:08:56.907512 containerd[1477]: time="2025-05-16T06:08:56.907476995Z" level=info msg="StartContainer for \"7ad30fb8cc3cc41411499f37416313913ee31cafeb42f6832b33c3901aef4432\" returns successfully"
May 16 06:08:56.951019 containerd[1477]: time="2025-05-16T06:08:56.950931731Z" level=info msg="shim disconnected" id=7ad30fb8cc3cc41411499f37416313913ee31cafeb42f6832b33c3901aef4432 namespace=k8s.io
May 16 06:08:56.951019 containerd[1477]: time="2025-05-16T06:08:56.951008234Z" level=warning msg="cleaning up after shim disconnected" id=7ad30fb8cc3cc41411499f37416313913ee31cafeb42f6832b33c3901aef4432 namespace=k8s.io
May 16 06:08:56.951335 containerd[1477]: time="2025-05-16T06:08:56.951022241Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 16 06:08:57.485333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ad30fb8cc3cc41411499f37416313913ee31cafeb42f6832b33c3901aef4432-rootfs.mount: Deactivated successfully.
May 16 06:08:57.728381 containerd[1477]: time="2025-05-16T06:08:57.727540446Z" level=info msg="CreateContainer within sandbox \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 06:08:57.918814 containerd[1477]: time="2025-05-16T06:08:57.916310022Z" level=info msg="CreateContainer within sandbox \"9d1602f18898e4aa76f31f52aa159824e53478223e4f51efc1f2c6ada534e24e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5ad54667548c82fbd10d9c42bc6a56a75fd411f14c0b927a18e804fb3af4c446\""
May 16 06:08:57.924829 containerd[1477]: time="2025-05-16T06:08:57.923498240Z" level=info msg="StartContainer for \"5ad54667548c82fbd10d9c42bc6a56a75fd411f14c0b927a18e804fb3af4c446\""
May 16 06:08:57.923889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082790618.mount: Deactivated successfully.
May 16 06:08:57.986963 systemd[1]: Started cri-containerd-5ad54667548c82fbd10d9c42bc6a56a75fd411f14c0b927a18e804fb3af4c446.scope - libcontainer container 5ad54667548c82fbd10d9c42bc6a56a75fd411f14c0b927a18e804fb3af4c446.
May 16 06:08:58.032651 containerd[1477]: time="2025-05-16T06:08:58.032604089Z" level=info msg="StartContainer for \"5ad54667548c82fbd10d9c42bc6a56a75fd411f14c0b927a18e804fb3af4c446\" returns successfully"
May 16 06:08:58.564740 kernel: cryptd: max_cpu_qlen set to 1000
May 16 06:08:58.639847 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
May 16 06:08:58.793140 kubelet[2662]: I0516 06:08:58.792916 2662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7s9kr" podStartSLOduration=5.792818006 podStartE2EDuration="5.792818006s" podCreationTimestamp="2025-05-16 06:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 06:08:58.788573087 +0000 UTC m=+344.881107292" watchObservedRunningTime="2025-05-16 06:08:58.792818006 +0000 UTC m=+344.885352221"
May 16 06:09:01.645126 systemd[1]: run-containerd-runc-k8s.io-5ad54667548c82fbd10d9c42bc6a56a75fd411f14c0b927a18e804fb3af4c446-runc.Ebqw5S.mount: Deactivated successfully.
May 16 06:09:02.228782 systemd-networkd[1382]: lxc_health: Link UP
May 16 06:09:02.237821 systemd-networkd[1382]: lxc_health: Gained carrier
May 16 06:09:03.912857 systemd-networkd[1382]: lxc_health: Gained IPv6LL
May 16 06:09:08.294097 systemd[1]: run-containerd-runc-k8s.io-5ad54667548c82fbd10d9c42bc6a56a75fd411f14c0b927a18e804fb3af4c446-runc.GYULxA.mount: Deactivated successfully.
May 16 06:09:08.679260 sshd[4686]: Connection closed by 172.24.4.1 port 43412
May 16 06:09:08.682341 sshd-session[4626]: pam_unix(sshd:session): session closed for user core
May 16 06:09:08.695153 systemd[1]: sshd@24-172.24.4.222:22-172.24.4.1:43412.service: Deactivated successfully.
May 16 06:09:08.704954 systemd[1]: session-27.scope: Deactivated successfully.
May 16 06:09:08.708086 systemd-logind[1455]: Session 27 logged out. Waiting for processes to exit.
May 16 06:09:08.713090 systemd-logind[1455]: Removed session 27.
May 16 06:09:14.149121 containerd[1477]: time="2025-05-16T06:09:14.148891007Z" level=info msg="StopPodSandbox for \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\""
May 16 06:09:14.150429 containerd[1477]: time="2025-05-16T06:09:14.149446908Z" level=info msg="TearDown network for sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" successfully"
May 16 06:09:14.150429 containerd[1477]: time="2025-05-16T06:09:14.149485691Z" level=info msg="StopPodSandbox for \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" returns successfully"
May 16 06:09:14.151527 containerd[1477]: time="2025-05-16T06:09:14.151425829Z" level=info msg="RemovePodSandbox for \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\""
May 16 06:09:14.151728 containerd[1477]: time="2025-05-16T06:09:14.151552406Z" level=info msg="Forcibly stopping sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\""
May 16 06:09:14.152052 containerd[1477]: time="2025-05-16T06:09:14.151736411Z" level=info msg="TearDown network for sandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" successfully"
May 16 06:09:14.159530 containerd[1477]: time="2025-05-16T06:09:14.159432061Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 06:09:14.159772 containerd[1477]: time="2025-05-16T06:09:14.159666130Z" level=info msg="RemovePodSandbox \"175531a849fb792fb08bdcf4f13623afdfef3c1edfa2c1c1e73ee4ddf67c31dc\" returns successfully"
May 16 06:09:14.161262 containerd[1477]: time="2025-05-16T06:09:14.160819783Z" level=info msg="StopPodSandbox for \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\""
May 16 06:09:14.161262 containerd[1477]: time="2025-05-16T06:09:14.161089248Z" level=info msg="TearDown network for sandbox \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\" successfully"
May 16 06:09:14.161262 containerd[1477]: time="2025-05-16T06:09:14.161122750Z" level=info msg="StopPodSandbox for \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\" returns successfully"
May 16 06:09:14.162855 containerd[1477]: time="2025-05-16T06:09:14.162808201Z" level=info msg="RemovePodSandbox for \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\""
May 16 06:09:14.164744 containerd[1477]: time="2025-05-16T06:09:14.163187492Z" level=info msg="Forcibly stopping sandbox \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\""
May 16 06:09:14.164744 containerd[1477]: time="2025-05-16T06:09:14.163361909Z" level=info msg="TearDown network for sandbox \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\" successfully"
May 16 06:09:14.170382 containerd[1477]: time="2025-05-16T06:09:14.170314085Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 16 06:09:14.170818 containerd[1477]: time="2025-05-16T06:09:14.170766684Z" level=info msg="RemovePodSandbox \"b792e775f2dd347f4343ba7fcaa4abae976dfcaf353032c827d97ba5ebf6b814\" returns successfully"
May 16 06:10:50.628308 update_engine[1460]: I20250516 06:10:50.623958 1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 16 06:10:50.628308 update_engine[1460]: I20250516 06:10:50.624274 1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 16 06:10:50.628308 update_engine[1460]: I20250516 06:10:50.625289 1460 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 16 06:10:50.634071 update_engine[1460]: I20250516 06:10:50.631111 1460 omaha_request_params.cc:62] Current group set to beta
May 16 06:10:50.634071 update_engine[1460]: I20250516 06:10:50.633100 1460 update_attempter.cc:499] Already updated boot flags. Skipping.
May 16 06:10:50.634071 update_engine[1460]: I20250516 06:10:50.633148 1460 update_attempter.cc:643] Scheduling an action processor start.
May 16 06:10:50.634071 update_engine[1460]: I20250516 06:10:50.633471 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 16 06:10:50.634071 update_engine[1460]: I20250516 06:10:50.633911 1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 16 06:10:50.637003 update_engine[1460]: I20250516 06:10:50.634260 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 16 06:10:50.637003 update_engine[1460]: I20250516 06:10:50.634307 1460 omaha_request_action.cc:272] Request:
May 16 06:10:50.637003 update_engine[1460]:
May 16 06:10:50.637003 update_engine[1460]:
May 16 06:10:50.637003 update_engine[1460]:
May 16 06:10:50.637003 update_engine[1460]:
May 16 06:10:50.637003 update_engine[1460]:
May 16 06:10:50.637003 update_engine[1460]:
May 16 06:10:50.637003 update_engine[1460]:
May 16 06:10:50.637003 update_engine[1460]:
May 16 06:10:50.637003 update_engine[1460]: I20250516 06:10:50.634341 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 16 06:10:50.642938 locksmithd[1500]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 16 06:10:50.644051 update_engine[1460]: I20250516 06:10:50.643150 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 16 06:10:50.644914 update_engine[1460]: I20250516 06:10:50.644730 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 16 06:10:50.650786 update_engine[1460]: E20250516 06:10:50.650599 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 16 06:10:50.651006 update_engine[1460]: I20250516 06:10:50.650897 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 16 06:11:00.531728 update_engine[1460]: I20250516 06:11:00.531505 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 16 06:11:00.533361 update_engine[1460]: I20250516 06:11:00.532596 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 16 06:11:00.533670 update_engine[1460]: I20250516 06:11:00.533521 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 16 06:11:00.540013 update_engine[1460]: E20250516 06:11:00.539889 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 16 06:11:00.540289 update_engine[1460]: I20250516 06:11:00.540237 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2