Jun 21 06:14:17.984487 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 23:59:04 -00 2025 Jun 21 06:14:17.984513 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 06:14:17.984523 kernel: BIOS-provided physical RAM map: Jun 21 06:14:17.984533 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 21 06:14:17.984540 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 21 06:14:17.984548 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 21 06:14:17.984556 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Jun 21 06:14:17.984564 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Jun 21 06:14:17.984572 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 21 06:14:17.984580 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 21 06:14:17.984587 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Jun 21 06:14:17.984595 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 21 06:14:17.984604 kernel: NX (Execute Disable) protection: active Jun 21 06:14:17.984612 kernel: APIC: Static calls initialized Jun 21 06:14:17.984621 kernel: SMBIOS 3.0.0 present. Jun 21 06:14:17.984630 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Jun 21 06:14:17.984637 kernel: DMI: Memory slots populated: 1/1 Jun 21 06:14:17.984647 kernel: Hypervisor detected: KVM Jun 21 06:14:17.984655 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 21 06:14:17.984663 kernel: kvm-clock: using sched offset of 5737859909 cycles Jun 21 06:14:17.984671 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 21 06:14:17.984680 kernel: tsc: Detected 1996.249 MHz processor Jun 21 06:14:17.984688 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 21 06:14:17.984697 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 21 06:14:17.984705 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Jun 21 06:14:17.984714 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 21 06:14:17.984724 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 21 06:14:17.984732 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Jun 21 06:14:17.984740 kernel: ACPI: Early table checksum verification disabled Jun 21 06:14:17.984749 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Jun 21 06:14:17.984757 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 06:14:17.984765 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 06:14:17.984774 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 06:14:17.984782 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Jun 21 06:14:17.984790 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 
06:14:17.984800 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 21 06:14:17.984808 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Jun 21 06:14:17.984816 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Jun 21 06:14:17.984824 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Jun 21 06:14:17.984833 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Jun 21 06:14:17.984844 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Jun 21 06:14:17.984853 kernel: No NUMA configuration found Jun 21 06:14:17.984862 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Jun 21 06:14:17.984871 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff] Jun 21 06:14:17.984880 kernel: Zone ranges: Jun 21 06:14:17.984889 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 21 06:14:17.984897 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jun 21 06:14:17.984906 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Jun 21 06:14:17.984914 kernel: Device empty Jun 21 06:14:17.984923 kernel: Movable zone start for each node Jun 21 06:14:17.984933 kernel: Early memory node ranges Jun 21 06:14:17.984941 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 21 06:14:17.984949 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Jun 21 06:14:17.984958 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Jun 21 06:14:17.984966 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Jun 21 06:14:17.984975 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 21 06:14:17.984983 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 21 06:14:17.984992 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Jun 21 06:14:17.985001 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 21 06:14:17.985011 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 21 06:14:17.985020 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 21 06:14:17.985028 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 21 06:14:17.985037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 21 06:14:17.985045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 21 06:14:17.985054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 21 06:14:17.985063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 21 06:14:17.985071 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 21 06:14:17.985080 kernel: CPU topo: Max. logical packages: 2 Jun 21 06:14:17.985089 kernel: CPU topo: Max. logical dies: 2 Jun 21 06:14:17.985098 kernel: CPU topo: Max. dies per package: 1 Jun 21 06:14:17.985106 kernel: CPU topo: Max. threads per core: 1 Jun 21 06:14:17.985115 kernel: CPU topo: Num. cores per package: 1 Jun 21 06:14:17.985123 kernel: CPU topo: Num. 
threads per package: 1 Jun 21 06:14:17.985132 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Jun 21 06:14:17.985140 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 21 06:14:17.985207 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Jun 21 06:14:17.985217 kernel: Booting paravirtualized kernel on KVM Jun 21 06:14:17.985228 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 21 06:14:17.985237 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 21 06:14:17.985246 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Jun 21 06:14:17.985254 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Jun 21 06:14:17.985263 kernel: pcpu-alloc: [0] 0 1 Jun 21 06:14:17.985271 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 21 06:14:17.985281 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 06:14:17.985290 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 21 06:14:17.985301 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 21 06:14:17.985310 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 21 06:14:17.985318 kernel: Fallback order for Node 0: 0 Jun 21 06:14:17.985327 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Jun 21 06:14:17.985335 kernel: Policy zone: Normal Jun 21 06:14:17.985344 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 21 06:14:17.985352 kernel: software IO TLB: area num 2. Jun 21 06:14:17.985361 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 21 06:14:17.985369 kernel: ftrace: allocating 40093 entries in 157 pages Jun 21 06:14:17.985380 kernel: ftrace: allocated 157 pages with 5 groups Jun 21 06:14:17.985388 kernel: Dynamic Preempt: voluntary Jun 21 06:14:17.985397 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 21 06:14:17.985406 kernel: rcu: RCU event tracing is enabled. Jun 21 06:14:17.985415 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 21 06:14:17.985424 kernel: Trampoline variant of Tasks RCU enabled. Jun 21 06:14:17.985433 kernel: Rude variant of Tasks RCU enabled. Jun 21 06:14:17.985441 kernel: Tracing variant of Tasks RCU enabled. Jun 21 06:14:17.985450 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 21 06:14:17.985458 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 21 06:14:17.985469 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 06:14:17.985478 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 06:14:17.985487 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 21 06:14:17.985495 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 21 06:14:17.985504 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jun 21 06:14:17.985512 kernel: Console: colour VGA+ 80x25 Jun 21 06:14:17.985521 kernel: printk: legacy console [tty0] enabled Jun 21 06:14:17.985529 kernel: printk: legacy console [ttyS0] enabled Jun 21 06:14:17.985539 kernel: ACPI: Core revision 20240827 Jun 21 06:14:17.985548 kernel: APIC: Switch to symmetric I/O mode setup Jun 21 06:14:17.985556 kernel: x2apic enabled Jun 21 06:14:17.985565 kernel: APIC: Switched APIC routing to: physical x2apic Jun 21 06:14:17.985573 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 21 06:14:17.985582 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 21 06:14:17.985611 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249) Jun 21 06:14:17.985622 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jun 21 06:14:17.985631 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jun 21 06:14:17.985640 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 21 06:14:17.985649 kernel: Spectre V2 : Mitigation: Retpolines Jun 21 06:14:17.985658 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 21 06:14:17.985669 kernel: Speculative Store Bypass: Vulnerable Jun 21 06:14:17.985678 kernel: x86/fpu: x87 FPU will use FXSAVE Jun 21 06:14:17.985687 kernel: Freeing SMP alternatives memory: 32K Jun 21 06:14:17.985696 kernel: pid_max: default: 32768 minimum: 301 Jun 21 06:14:17.985705 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jun 21 06:14:17.985715 kernel: landlock: Up and running. Jun 21 06:14:17.985724 kernel: SELinux: Initializing. Jun 21 06:14:17.985733 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 21 06:14:17.985742 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 21 06:14:17.985751 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Jun 21 06:14:17.985760 kernel: Performance Events: AMD PMU driver. Jun 21 06:14:17.985769 kernel: ... version: 0 Jun 21 06:14:17.985778 kernel: ... bit width: 48 Jun 21 06:14:17.985787 kernel: ... generic registers: 4 Jun 21 06:14:17.985798 kernel: ... value mask: 0000ffffffffffff Jun 21 06:14:17.985807 kernel: ... max period: 00007fffffffffff Jun 21 06:14:17.985816 kernel: ... fixed-purpose events: 0 Jun 21 06:14:17.985825 kernel: ... event mask: 000000000000000f Jun 21 06:14:17.985834 kernel: signal: max sigframe size: 1440 Jun 21 06:14:17.985842 kernel: rcu: Hierarchical SRCU implementation. Jun 21 06:14:17.985852 kernel: rcu: Max phase no-delay instances is 400. Jun 21 06:14:17.985861 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jun 21 06:14:17.985870 kernel: smp: Bringing up secondary CPUs ... Jun 21 06:14:17.985880 kernel: smpboot: x86: Booting SMP configuration: Jun 21 06:14:17.985889 kernel: .... 
node #0, CPUs: #1 Jun 21 06:14:17.985898 kernel: smp: Brought up 1 node, 2 CPUs Jun 21 06:14:17.985907 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Jun 21 06:14:17.985916 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 227296K reserved, 0K cma-reserved) Jun 21 06:14:17.985925 kernel: devtmpfs: initialized Jun 21 06:14:17.985934 kernel: x86/mm: Memory block size: 128MB Jun 21 06:14:17.985943 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 21 06:14:17.985953 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 21 06:14:17.985963 kernel: pinctrl core: initialized pinctrl subsystem Jun 21 06:14:17.985972 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 21 06:14:17.985981 kernel: audit: initializing netlink subsys (disabled) Jun 21 06:14:17.985990 kernel: audit: type=2000 audit(1750486454.544:1): state=initialized audit_enabled=0 res=1 Jun 21 06:14:17.985999 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 21 06:14:17.986008 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 21 06:14:17.986017 kernel: cpuidle: using governor menu Jun 21 06:14:17.986026 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 21 06:14:17.986035 kernel: dca service started, version 1.12.1 Jun 21 06:14:17.986046 kernel: PCI: Using configuration type 1 for base access Jun 21 06:14:17.986055 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 21 06:14:17.986064 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 21 06:14:17.986073 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 21 06:14:17.986082 kernel: ACPI: Added _OSI(Module Device) Jun 21 06:14:17.986091 kernel: ACPI: Added _OSI(Processor Device) Jun 21 06:14:17.986100 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 21 06:14:17.986109 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 21 06:14:17.986118 kernel: ACPI: Interpreter enabled Jun 21 06:14:17.986129 kernel: ACPI: PM: (supports S0 S3 S5) Jun 21 06:14:17.986138 kernel: ACPI: Using IOAPIC for interrupt routing Jun 21 06:14:17.986159 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 21 06:14:17.986169 kernel: PCI: Using E820 reservations for host bridge windows Jun 21 06:14:17.986178 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jun 21 06:14:17.986187 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 21 06:14:17.986318 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 21 06:14:17.986408 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 21 06:14:17.986496 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 21 06:14:17.986510 kernel: acpiphp: Slot [3] registered Jun 21 06:14:17.986519 kernel: acpiphp: Slot [4] registered Jun 21 06:14:17.986528 kernel: acpiphp: Slot [5] registered Jun 21 06:14:17.986537 kernel: acpiphp: Slot [6] registered Jun 21 06:14:17.986546 kernel: acpiphp: Slot [7] registered Jun 21 06:14:17.986555 kernel: acpiphp: Slot [8] registered Jun 21 06:14:17.986563 kernel: acpiphp: Slot [9] registered Jun 21 06:14:17.986572 kernel: acpiphp: Slot [10] registered Jun 21 06:14:17.986584 kernel: acpiphp: Slot [11] registered Jun 
21 06:14:17.986592 kernel: acpiphp: Slot [12] registered Jun 21 06:14:17.986601 kernel: acpiphp: Slot [13] registered Jun 21 06:14:17.986610 kernel: acpiphp: Slot [14] registered Jun 21 06:14:17.986619 kernel: acpiphp: Slot [15] registered Jun 21 06:14:17.986628 kernel: acpiphp: Slot [16] registered Jun 21 06:14:17.986637 kernel: acpiphp: Slot [17] registered Jun 21 06:14:17.986645 kernel: acpiphp: Slot [18] registered Jun 21 06:14:17.986654 kernel: acpiphp: Slot [19] registered Jun 21 06:14:17.986665 kernel: acpiphp: Slot [20] registered Jun 21 06:14:17.986674 kernel: acpiphp: Slot [21] registered Jun 21 06:14:17.986683 kernel: acpiphp: Slot [22] registered Jun 21 06:14:17.986692 kernel: acpiphp: Slot [23] registered Jun 21 06:14:17.986701 kernel: acpiphp: Slot [24] registered Jun 21 06:14:17.986709 kernel: acpiphp: Slot [25] registered Jun 21 06:14:17.986718 kernel: acpiphp: Slot [26] registered Jun 21 06:14:17.986727 kernel: acpiphp: Slot [27] registered Jun 21 06:14:17.986736 kernel: acpiphp: Slot [28] registered Jun 21 06:14:17.986745 kernel: acpiphp: Slot [29] registered Jun 21 06:14:17.986756 kernel: acpiphp: Slot [30] registered Jun 21 06:14:17.986764 kernel: acpiphp: Slot [31] registered Jun 21 06:14:17.986773 kernel: PCI host bridge to bus 0000:00 Jun 21 06:14:17.986865 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 21 06:14:17.986943 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 21 06:14:17.987018 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 21 06:14:17.987092 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 21 06:14:17.988684 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Jun 21 06:14:17.989312 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 21 06:14:17.989427 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Jun 21 06:14:17.989528 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint Jun 21 06:14:17.989645 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint Jun 21 06:14:17.989737 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f] Jun 21 06:14:17.989831 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk Jun 21 06:14:17.989917 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk Jun 21 06:14:17.990001 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk Jun 21 06:14:17.990086 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk Jun 21 06:14:17.993392 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Jun 21 06:14:17.993499 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jun 21 06:14:17.993589 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jun 21 06:14:17.993716 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Jun 21 06:14:17.993810 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref] Jun 21 06:14:17.993918 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref] Jun 21 06:14:17.994007 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff] Jun 21 06:14:17.994094 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref] Jun 21 06:14:17.994235 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 21 06:14:17.994332 kernel: pci 0000:00:03.0: [1af4:1000] type 00 
class 0x020000 conventional PCI endpoint Jun 21 06:14:17.994442 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf] Jun 21 06:14:17.994538 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff] Jun 21 06:14:17.994631 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref] Jun 21 06:14:17.994723 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref] Jun 21 06:14:17.994822 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jun 21 06:14:17.994916 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jun 21 06:14:17.995050 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff] Jun 21 06:14:17.997202 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref] Jun 21 06:14:17.997375 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint Jun 21 06:14:17.997494 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff] Jun 21 06:14:17.997616 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref] Jun 21 06:14:17.997728 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jun 21 06:14:17.997831 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f] Jun 21 06:14:17.997938 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff] Jun 21 06:14:17.998037 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref] Jun 21 06:14:17.998053 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 21 06:14:17.998064 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 21 06:14:17.998076 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 21 06:14:17.998087 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 21 06:14:17.998098 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 21 06:14:17.998109 kernel: iommu: Default domain type: Translated Jun 21 06:14:17.998121 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 21 06:14:17.998135 kernel: PCI: Using ACPI for IRQ routing Jun 21 06:14:17.998170 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 21 06:14:17.998182 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 21 06:14:17.998193 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Jun 21 06:14:17.998297 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jun 21 06:14:17.998397 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jun 21 06:14:17.998504 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 21 06:14:17.998519 kernel: vgaarb: loaded Jun 21 06:14:17.998531 kernel: clocksource: Switched to clocksource kvm-clock Jun 21 06:14:17.998546 kernel: VFS: Disk quotas dquot_6.6.0 Jun 21 06:14:17.998557 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 21 06:14:17.998568 kernel: pnp: PnP ACPI init Jun 21 06:14:17.998665 kernel: pnp 00:03: [dma 2] Jun 21 06:14:17.998681 kernel: pnp: PnP ACPI: found 5 devices Jun 21 06:14:17.998691 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 21 06:14:17.998702 kernel: NET: Registered PF_INET protocol family Jun 21 06:14:17.998712 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 21 06:14:17.998726 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 21 06:14:17.998736 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 21 06:14:17.998746 kernel: TCP established hash table 
entries: 32768 (order: 6, 262144 bytes, linear) Jun 21 06:14:17.998757 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 21 06:14:17.998767 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 21 06:14:17.998777 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 21 06:14:17.998788 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 21 06:14:17.998798 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 21 06:14:17.998808 kernel: NET: Registered PF_XDP protocol family Jun 21 06:14:17.998896 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 21 06:14:17.998978 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 21 06:14:17.999057 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 21 06:14:17.999137 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Jun 21 06:14:18.002787 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Jun 21 06:14:18.002880 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jun 21 06:14:18.002970 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 21 06:14:18.002984 kernel: PCI: CLS 0 bytes, default 64 Jun 21 06:14:18.002997 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jun 21 06:14:18.003007 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Jun 21 06:14:18.003016 kernel: Initialise system trusted keyrings Jun 21 06:14:18.003026 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 21 06:14:18.003035 kernel: Key type asymmetric registered Jun 21 06:14:18.003044 kernel: Asymmetric key parser 'x509' registered Jun 21 06:14:18.003053 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 21 06:14:18.003063 kernel: io scheduler mq-deadline registered Jun 21 06:14:18.003074 kernel: io scheduler kyber registered Jun 21 06:14:18.003083 kernel: io scheduler bfq registered Jun 21 06:14:18.003092 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 21 06:14:18.003102 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jun 21 06:14:18.003111 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jun 21 06:14:18.003121 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 21 06:14:18.003130 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jun 21 06:14:18.003139 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 21 06:14:18.003190 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 21 06:14:18.003200 kernel: random: crng init done Jun 21 06:14:18.003212 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 21 06:14:18.003221 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 21 06:14:18.003230 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 21 06:14:18.003321 kernel: rtc_cmos 00:04: RTC can wake from S4 Jun 21 06:14:18.003336 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 21 06:14:18.003411 kernel: rtc_cmos 00:04: registered as rtc0 Jun 21 06:14:18.003489 kernel: rtc_cmos 00:04: setting system clock to 2025-06-21T06:14:17 UTC (1750486457) Jun 21 06:14:18.003567 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Jun 21 06:14:18.003584 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 21 06:14:18.003615 kernel: NET: Registered PF_INET6 protocol family Jun 21 06:14:18.003629 kernel: Segment Routing with IPv6 Jun 21 
06:14:18.003638 kernel: In-situ OAM (IOAM) with IPv6 Jun 21 06:14:18.003647 kernel: NET: Registered PF_PACKET protocol family Jun 21 06:14:18.003656 kernel: Key type dns_resolver registered Jun 21 06:14:18.003665 kernel: IPI shorthand broadcast: enabled Jun 21 06:14:18.003674 kernel: sched_clock: Marking stable (3640020105, 182893229)->(3861802960, -38889626) Jun 21 06:14:18.003685 kernel: registered taskstats version 1 Jun 21 06:14:18.003694 kernel: Loading compiled-in X.509 certificates Jun 21 06:14:18.003704 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: ec4617d162e00e1890f71f252cdf44036a7b66f7' Jun 21 06:14:18.003713 kernel: Demotion targets for Node 0: null Jun 21 06:14:18.003722 kernel: Key type .fscrypt registered Jun 21 06:14:18.003731 kernel: Key type fscrypt-provisioning registered Jun 21 06:14:18.003740 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 21 06:14:18.003749 kernel: ima: Allocated hash algorithm: sha1 Jun 21 06:14:18.003758 kernel: ima: No architecture policies found Jun 21 06:14:18.003769 kernel: clk: Disabling unused clocks Jun 21 06:14:18.003778 kernel: Warning: unable to open an initial console. Jun 21 06:14:18.003787 kernel: Freeing unused kernel image (initmem) memory: 54424K Jun 21 06:14:18.003797 kernel: Write protecting the kernel read-only data: 24576k Jun 21 06:14:18.003806 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jun 21 06:14:18.003815 kernel: Run /init as init process Jun 21 06:14:18.003843 kernel: with arguments: Jun 21 06:14:18.003853 kernel: /init Jun 21 06:14:18.003862 kernel: with environment: Jun 21 06:14:18.003873 kernel: HOME=/ Jun 21 06:14:18.003882 kernel: TERM=linux Jun 21 06:14:18.003891 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 21 06:14:18.003901 systemd[1]: Successfully made /usr/ read-only. Jun 21 06:14:18.003914 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 06:14:18.003925 systemd[1]: Detected virtualization kvm. Jun 21 06:14:18.003935 systemd[1]: Detected architecture x86-64. Jun 21 06:14:18.003953 systemd[1]: Running in initrd. Jun 21 06:14:18.003964 systemd[1]: No hostname configured, using default hostname. Jun 21 06:14:18.003974 systemd[1]: Hostname set to . Jun 21 06:14:18.003985 systemd[1]: Initializing machine ID from VM UUID. Jun 21 06:14:18.003995 systemd[1]: Queued start job for default target initrd.target. Jun 21 06:14:18.004005 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 06:14:18.004017 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 06:14:18.004027 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 21 06:14:18.004037 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 06:14:18.004048 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 21 06:14:18.004059 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jun 21 06:14:18.004070 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 21 06:14:18.004080 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 21 06:14:18.004092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 06:14:18.004102 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 06:14:18.004112 systemd[1]: Reached target paths.target - Path Units. Jun 21 06:14:18.004122 systemd[1]: Reached target slices.target - Slice Units. Jun 21 06:14:18.004132 systemd[1]: Reached target swap.target - Swaps. Jun 21 06:14:18.004142 systemd[1]: Reached target timers.target - Timer Units. Jun 21 06:14:18.004168 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 06:14:18.004178 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 06:14:18.004188 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 21 06:14:18.004200 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 21 06:14:18.004210 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 06:14:18.004220 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 06:14:18.004231 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 06:14:18.004241 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 06:14:18.004251 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 21 06:14:18.004261 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 06:14:18.004271 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 21 06:14:18.004283 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 21 06:14:18.004293 systemd[1]: Starting systemd-fsck-usr.service... Jun 21 06:14:18.004305 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 06:14:18.004315 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 06:14:18.004325 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:14:18.004337 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 21 06:14:18.004348 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 06:14:18.004358 systemd[1]: Finished systemd-fsck-usr.service. Jun 21 06:14:18.004368 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 21 06:14:18.004402 systemd-journald[211]: Collecting audit messages is disabled. Jun 21 06:14:18.004429 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 21 06:14:18.004440 systemd-journald[211]: Journal started Jun 21 06:14:18.004465 systemd-journald[211]: Runtime Journal (/run/log/journal/b8a9dbdd2607479ab5bbcbdadf9c8d64) is 8M, max 78.5M, 70.5M free. Jun 21 06:14:18.003894 systemd-modules-load[214]: Inserted module 'overlay' Jun 21 06:14:18.008755 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 06:14:18.012378 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jun 21 06:14:18.018307 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 06:14:18.066930 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 21 06:14:18.066963 kernel: Bridge firewalling registered Jun 21 06:14:18.039581 systemd-modules-load[214]: Inserted module 'br_netfilter' Jun 21 06:14:18.066283 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 06:14:18.071656 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:14:18.074251 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 21 06:14:18.080760 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 06:14:18.084121 systemd-tmpfiles[230]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 21 06:14:18.087132 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 06:14:18.095249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 06:14:18.102215 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:14:18.103698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 06:14:18.106238 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 21 06:14:18.109249 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 06:14:18.134427 dracut-cmdline[252]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=d3c0be6f64121476b0313f5d7d7bbd73e21bc1a219aacd38b8006b291898eca1 Jun 21 06:14:18.158912 systemd-resolved[253]: Positive Trust Anchors: Jun 21 06:14:18.159375 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 06:14:18.159418 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 06:14:18.165553 systemd-resolved[253]: Defaulting to hostname 'linux'. Jun 21 06:14:18.166909 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 06:14:18.167814 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 06:14:18.221180 kernel: SCSI subsystem initialized Jun 21 06:14:18.231247 kernel: Loading iSCSI transport class v2.0-870. 
Jun 21 06:14:18.243212 kernel: iscsi: registered transport (tcp) Jun 21 06:14:18.266401 kernel: iscsi: registered transport (qla4xxx) Jun 21 06:14:18.266501 kernel: QLogic iSCSI HBA Driver Jun 21 06:14:18.291513 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 06:14:18.311136 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 06:14:18.312451 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 06:14:18.388346 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 21 06:14:18.390891 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 21 06:14:18.459252 kernel: raid6: sse2x4 gen() 12914 MB/s Jun 21 06:14:18.477242 kernel: raid6: sse2x2 gen() 14586 MB/s Jun 21 06:14:18.495667 kernel: raid6: sse2x1 gen() 9851 MB/s Jun 21 06:14:18.495741 kernel: raid6: using algorithm sse2x2 gen() 14586 MB/s Jun 21 06:14:18.514637 kernel: raid6: .... xor() 9380 MB/s, rmw enabled Jun 21 06:14:18.514700 kernel: raid6: using ssse3x2 recovery algorithm Jun 21 06:14:18.538322 kernel: xor: measuring software checksum speed Jun 21 06:14:18.538389 kernel: prefetch64-sse : 18538 MB/sec Jun 21 06:14:18.539224 kernel: generic_sse : 15457 MB/sec Jun 21 06:14:18.541662 kernel: xor: using function: prefetch64-sse (18538 MB/sec) Jun 21 06:14:18.742230 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 21 06:14:18.751141 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 21 06:14:18.756845 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 06:14:18.782226 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jun 21 06:14:18.788281 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 06:14:18.794643 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 21 06:14:18.821229 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Jun 21 06:14:18.860918 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 06:14:18.866454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 06:14:18.950592 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 06:14:18.952873 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 21 06:14:19.050169 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Jun 21 06:14:19.062578 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Jun 21 06:14:19.085249 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 21 06:14:19.085307 kernel: GPT:17805311 != 20971519 Jun 21 06:14:19.085321 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 21 06:14:19.085332 kernel: GPT:17805311 != 20971519 Jun 21 06:14:19.085351 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 21 06:14:19.085363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 06:14:19.084122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 06:14:19.084275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:14:19.089470 kernel: libata version 3.00 loaded. Jun 21 06:14:19.086239 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 21 06:14:19.090293 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:14:19.092058 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 06:14:19.094182 kernel: ata_piix 0000:00:01.1: version 2.13 Jun 21 06:14:19.094466 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jun 21 06:14:19.100196 kernel: scsi host0: ata_piix Jun 21 06:14:19.102208 kernel: scsi host1: ata_piix Jun 21 06:14:19.108104 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0 Jun 21 06:14:19.108134 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0 Jun 21 06:14:19.175845 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 21 06:14:19.177427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:14:19.188932 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 06:14:19.200062 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 21 06:14:19.209216 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 21 06:14:19.209812 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 21 06:14:19.214286 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 21 06:14:19.239206 disk-uuid[562]: Primary Header is updated. Jun 21 06:14:19.239206 disk-uuid[562]: Secondary Entries is updated. Jun 21 06:14:19.239206 disk-uuid[562]: Secondary Header is updated. Jun 21 06:14:19.247216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 06:14:19.355011 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 21 06:14:19.395346 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 06:14:19.395968 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 06:14:19.398312 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 06:14:19.402247 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 21 06:14:19.439027 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 21 06:14:20.266252 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 21 06:14:20.266355 disk-uuid[563]: The operation has completed successfully. Jun 21 06:14:20.358712 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 21 06:14:20.358929 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 21 06:14:20.402534 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 21 06:14:20.437411 sh[587]: Success Jun 21 06:14:20.462457 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 21 06:14:20.462554 kernel: device-mapper: uevent: version 1.0.3 Jun 21 06:14:20.466241 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 21 06:14:20.480236 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3" Jun 21 06:14:20.581856 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 21 06:14:20.588788 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jun 21 06:14:20.612935 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 21 06:14:20.639220 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 21 06:14:20.640283 kernel: BTRFS: device fsid bfb8168c-5be0-428c-83e7-820ccaf1f8e9 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (599) Jun 21 06:14:20.654674 kernel: BTRFS info (device dm-0): first mount of filesystem bfb8168c-5be0-428c-83e7-820ccaf1f8e9 Jun 21 06:14:20.654736 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 21 06:14:20.658891 kernel: BTRFS info (device dm-0): using free-space-tree Jun 21 06:14:20.681218 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 21 06:14:20.683274 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 21 06:14:20.685223 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 21 06:14:20.688369 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 21 06:14:20.693357 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 21 06:14:20.751284 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (636) Jun 21 06:14:20.764004 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 06:14:20.764114 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 06:14:20.764199 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 06:14:20.787242 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 06:14:20.788716 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 21 06:14:20.793070 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 21 06:14:20.882543 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 06:14:20.886269 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 06:14:20.932059 systemd-networkd[768]: lo: Link UP Jun 21 06:14:20.932069 systemd-networkd[768]: lo: Gained carrier Jun 21 06:14:20.933113 systemd-networkd[768]: Enumeration completed Jun 21 06:14:20.933212 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 06:14:20.933931 systemd[1]: Reached target network.target - Network. Jun 21 06:14:20.934439 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 06:14:20.934443 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 21 06:14:20.935539 systemd-networkd[768]: eth0: Link UP Jun 21 06:14:20.935542 systemd-networkd[768]: eth0: Gained carrier Jun 21 06:14:20.935551 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 21 06:14:20.946253 systemd-networkd[768]: eth0: DHCPv4 address 172.24.4.45/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jun 21 06:14:21.013562 ignition[683]: Ignition 2.21.0 Jun 21 06:14:21.013576 ignition[683]: Stage: fetch-offline Jun 21 06:14:21.013626 ignition[683]: no configs at "/usr/lib/ignition/base.d" Jun 21 06:14:21.013635 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 21 06:14:21.013716 ignition[683]: parsed url from cmdline: "" Jun 21 06:14:21.013719 ignition[683]: no config URL provided Jun 21 06:14:21.013724 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 06:14:21.013731 ignition[683]: no config at "/usr/lib/ignition/user.ign" Jun 21 06:14:21.018304 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 06:14:21.013738 ignition[683]: failed to fetch config: resource requires networking Jun 21 06:14:21.014625 ignition[683]: Ignition finished successfully Jun 21 06:14:21.022401 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 21 06:14:21.054850 ignition[779]: Ignition 2.21.0 Jun 21 06:14:21.055624 ignition[779]: Stage: fetch Jun 21 06:14:21.055759 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jun 21 06:14:21.055769 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 21 06:14:21.055838 ignition[779]: parsed url from cmdline: "" Jun 21 06:14:21.055842 ignition[779]: no config URL provided Jun 21 06:14:21.055847 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Jun 21 06:14:21.055854 ignition[779]: no config at "/usr/lib/ignition/user.ign" Jun 21 06:14:21.055949 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jun 21 06:14:21.056026 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jun 21 06:14:21.056917 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jun 21 06:14:21.346341 ignition[779]: GET result: OK Jun 21 06:14:21.346522 ignition[779]: parsing config with SHA512: 71263ef0d0e6c4a69a04336e1f624a41b116db73ddbfb35179729797bb9818992f7331788cc16f698c5653fdaa3b59e39f2f2316fa5be90c71cfae0828970c3b Jun 21 06:14:21.355659 unknown[779]: fetched base config from "system" Jun 21 06:14:21.355683 unknown[779]: fetched base config from "system" Jun 21 06:14:21.356850 ignition[779]: fetch: fetch complete Jun 21 06:14:21.355696 unknown[779]: fetched user config from "openstack" Jun 21 06:14:21.356863 ignition[779]: fetch: fetch passed Jun 21 06:14:21.362234 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 21 06:14:21.356948 ignition[779]: Ignition finished successfully Jun 21 06:14:21.366439 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 21 06:14:21.418603 ignition[786]: Ignition 2.21.0 Jun 21 06:14:21.418635 ignition[786]: Stage: kargs Jun 21 06:14:21.418952 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jun 21 06:14:21.418977 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 21 06:14:21.424597 ignition[786]: kargs: kargs passed Jun 21 06:14:21.427432 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 21 06:14:21.424721 ignition[786]: Ignition finished successfully Jun 21 06:14:21.432529 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 21 06:14:21.477898 ignition[792]: Ignition 2.21.0 Jun 21 06:14:21.477931 ignition[792]: Stage: disks Jun 21 06:14:21.478240 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jun 21 06:14:21.478261 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 21 06:14:21.481679 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 21 06:14:21.479715 ignition[792]: disks: disks passed Jun 21 06:14:21.483352 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 21 06:14:21.479786 ignition[792]: Ignition finished successfully Jun 21 06:14:21.485015 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 21 06:14:21.486907 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 06:14:21.488500 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 06:14:21.490608 systemd[1]: Reached target basic.target - Basic System. Jun 21 06:14:21.492970 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 21 06:14:21.529517 systemd-fsck[800]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jun 21 06:14:21.545980 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 21 06:14:21.551804 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 21 06:14:21.766223 kernel: EXT4-fs (vda9): mounted filesystem 6d18c974-0fd6-4e4a-98cf-62524fcf9e99 r/w with ordered data mode. Quota mode: none. Jun 21 06:14:21.767729 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 21 06:14:21.769780 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 21 06:14:21.774776 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 06:14:21.779061 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 21 06:14:21.792394 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 21 06:14:21.797130 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jun 21 06:14:21.802936 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 21 06:14:21.805632 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 06:14:21.812026 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 21 06:14:21.815692 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (808) Jun 21 06:14:21.815737 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 06:14:21.829428 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 21 06:14:21.841092 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 06:14:21.841139 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 06:14:21.856252 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 06:14:21.931773 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:21.947325 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Jun 21 06:14:21.951993 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jun 21 06:14:21.956699 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jun 21 06:14:21.962189 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jun 21 06:14:22.100142 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 21 06:14:22.103623 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 21 06:14:22.105745 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 21 06:14:22.132044 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 21 06:14:22.137674 kernel: BTRFS info (device vda6): last unmount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 06:14:22.166020 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 21 06:14:22.171375 ignition[927]: INFO : Ignition 2.21.0 Jun 21 06:14:22.171375 ignition[927]: INFO : Stage: mount Jun 21 06:14:22.172447 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 06:14:22.172447 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 21 06:14:22.173782 ignition[927]: INFO : mount: mount passed Jun 21 06:14:22.173782 ignition[927]: INFO : Ignition finished successfully Jun 21 06:14:22.174889 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 21 06:14:22.893461 systemd-networkd[768]: eth0: Gained IPv6LL Jun 21 06:14:22.970308 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:24.982236 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:28.997261 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:29.005236 coreos-metadata[810]: Jun 21 06:14:29.005 WARN failed to locate config-drive, using the metadata service API instead Jun 21 06:14:29.046576 coreos-metadata[810]: Jun 21 06:14:29.046 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 21 06:14:29.064384 coreos-metadata[810]: Jun 21 06:14:29.064 INFO Fetch successful Jun 21 06:14:29.064384 coreos-metadata[810]: Jun 21 06:14:29.064 INFO wrote hostname ci-4372-0-0-b-cad5e61be6.novalocal to /sysroot/etc/hostname Jun 21 06:14:29.068866 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jun 21 06:14:29.069116 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jun 21 06:14:29.077393 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 21 06:14:29.115949 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 21 06:14:29.164217 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (943) Jun 21 06:14:29.172244 kernel: BTRFS info (device vda6): first mount of filesystem 57d2b200-37a8-4067-8765-910d3ed0182c Jun 21 06:14:29.172326 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 21 06:14:29.172356 kernel: BTRFS info (device vda6): using free-space-tree Jun 21 06:14:29.181305 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 21 06:14:29.228114 ignition[960]: INFO : Ignition 2.21.0 Jun 21 06:14:29.228114 ignition[960]: INFO : Stage: files Jun 21 06:14:29.231210 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 06:14:29.231210 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 21 06:14:29.231210 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jun 21 06:14:29.236530 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 21 06:14:29.236530 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 21 06:14:29.240283 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 21 06:14:29.240283 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 21 06:14:29.240283 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 21 06:14:29.240125 unknown[960]: wrote ssh authorized keys file for user: core Jun 21 06:14:29.247859 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 21 06:14:29.247859 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jun 21 06:14:29.324776 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 21 06:14:29.677629 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jun 21 06:14:29.677629 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 06:14:29.677629 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 21 06:14:30.377723 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 21 06:14:30.789963 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 21 06:14:30.789963 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 21 06:14:30.792383 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 21 06:14:30.792383 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 21 06:14:30.792383 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 21 06:14:30.792383 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 06:14:30.792383 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 21 06:14:30.792383 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 06:14:30.792383 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 21 06:14:30.798992 
ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 06:14:30.798992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 21 06:14:30.798992 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 06:14:30.802303 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 06:14:30.802303 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 06:14:30.802303 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jun 21 06:14:31.353650 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 21 06:14:32.944697 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jun 21 06:14:32.944697 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 21 06:14:32.950523 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 06:14:32.954886 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 21 06:14:32.954886 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 21 06:14:32.954886 ignition[960]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jun 21 06:14:32.963305 ignition[960]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jun 21 06:14:32.963305 ignition[960]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 21 06:14:32.963305 ignition[960]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 21 06:14:32.963305 ignition[960]: INFO : files: files passed Jun 21 06:14:32.963305 ignition[960]: INFO : Ignition finished successfully Jun 21 06:14:32.956882 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 21 06:14:32.962254 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 21 06:14:32.964499 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 21 06:14:32.983089 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 21 06:14:32.983806 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
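The files stage above amounts to ordinary filesystem operations performed under /sysroot: writing downloaded files, creating the /etc/extensions/kubernetes.raw symlink to the sysext image, installing the prepare-helm.service unit, and marking it enabled via a preset. A rough sketch of the link and preset steps; the preset file name is a hypothetical placeholder, and Ignition's own implementation differs in detail.

import os

SYSROOT = "/sysroot"

def write_link(target: str, link_path: str) -> None:
    # Create a symlink relative to the target root, as op(a) above describes.
    full = SYSROOT + link_path
    os.makedirs(os.path.dirname(full), exist_ok=True)
    if os.path.lexists(full):
        os.remove(full)
    os.symlink(target, full)

def enable_preset(unit: str,
                  preset_file: str = "/etc/systemd/system-preset/20-ignition.preset") -> None:
    # Record an "enable <unit>" preset so systemd enables the unit on first boot
    # (the preset path above is an assumption for illustration).
    full = SYSROOT + preset_file
    os.makedirs(os.path.dirname(full), exist_ok=True)
    with open(full, "a") as f:
        f.write(f"enable {unit}\n")

write_link("/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
           "/etc/extensions/kubernetes.raw")
enable_preset("prepare-helm.service")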
Jun 21 06:14:32.995565 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 06:14:32.995565 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 21 06:14:32.999493 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 21 06:14:33.002069 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 06:14:33.002916 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 21 06:14:33.005832 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 21 06:14:33.053359 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 21 06:14:33.053634 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 21 06:14:33.056445 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 21 06:14:33.058331 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 21 06:14:33.060843 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 21 06:14:33.064525 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 21 06:14:33.105423 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 06:14:33.110952 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 21 06:14:33.175070 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 21 06:14:33.176976 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 06:14:33.180309 systemd[1]: Stopped target timers.target - Timer Units. Jun 21 06:14:33.183400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 21 06:14:33.183848 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 21 06:14:33.187206 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 21 06:14:33.189442 systemd[1]: Stopped target basic.target - Basic System. Jun 21 06:14:33.192736 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 21 06:14:33.195662 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 21 06:14:33.198392 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 21 06:14:33.201444 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 21 06:14:33.204585 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 21 06:14:33.207559 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 21 06:14:33.211265 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 21 06:14:33.214612 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 21 06:14:33.218002 systemd[1]: Stopped target swap.target - Swaps. Jun 21 06:14:33.220693 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 21 06:14:33.220976 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 21 06:14:33.224219 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 21 06:14:33.226349 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 06:14:33.228955 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jun 21 06:14:33.230350 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 21 06:14:33.232113 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 21 06:14:33.232567 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 21 06:14:33.236454 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 21 06:14:33.236873 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 21 06:14:33.239955 systemd[1]: ignition-files.service: Deactivated successfully. Jun 21 06:14:33.240372 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 21 06:14:33.245504 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 21 06:14:33.248959 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 21 06:14:33.250442 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 06:14:33.260545 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 21 06:14:33.262770 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 21 06:14:33.263142 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 06:14:33.266720 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 21 06:14:33.267000 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 21 06:14:33.283520 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 21 06:14:33.284132 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 21 06:14:33.293359 ignition[1014]: INFO : Ignition 2.21.0 Jun 21 06:14:33.293359 ignition[1014]: INFO : Stage: umount Jun 21 06:14:33.296131 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 21 06:14:33.296131 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jun 21 06:14:33.296131 ignition[1014]: INFO : umount: umount passed Jun 21 06:14:33.296131 ignition[1014]: INFO : Ignition finished successfully Jun 21 06:14:33.297022 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 21 06:14:33.297137 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 21 06:14:33.298623 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 21 06:14:33.298693 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 21 06:14:33.301011 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 21 06:14:33.301052 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 21 06:14:33.303246 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 21 06:14:33.303307 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 21 06:14:33.304004 systemd[1]: Stopped target network.target - Network. Jun 21 06:14:33.304529 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 21 06:14:33.304574 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 21 06:14:33.305683 systemd[1]: Stopped target paths.target - Path Units. Jun 21 06:14:33.306652 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 21 06:14:33.310244 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 06:14:33.311325 systemd[1]: Stopped target slices.target - Slice Units. Jun 21 06:14:33.312572 systemd[1]: Stopped target sockets.target - Socket Units. 
Jun 21 06:14:33.313881 systemd[1]: iscsid.socket: Deactivated successfully. Jun 21 06:14:33.313918 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 21 06:14:33.314897 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 21 06:14:33.314926 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 21 06:14:33.315900 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 21 06:14:33.315948 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 21 06:14:33.316933 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 21 06:14:33.316972 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 21 06:14:33.321745 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 21 06:14:33.322804 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 21 06:14:33.324857 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 21 06:14:33.325534 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 21 06:14:33.325642 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 21 06:14:33.328456 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 21 06:14:33.328564 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 21 06:14:33.331622 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 21 06:14:33.331846 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 21 06:14:33.331952 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 21 06:14:33.334053 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 21 06:14:33.334655 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 21 06:14:33.335865 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 21 06:14:33.335913 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 21 06:14:33.336855 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 21 06:14:33.336905 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 21 06:14:33.338980 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 21 06:14:33.340444 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 21 06:14:33.340490 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 21 06:14:33.341731 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 06:14:33.341772 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:14:33.343939 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 21 06:14:33.343983 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 21 06:14:33.345276 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 21 06:14:33.345318 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 21 06:14:33.346779 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 06:14:33.348530 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 06:14:33.348587 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 21 06:14:33.354894 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jun 21 06:14:33.356459 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 06:14:33.358952 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 21 06:14:33.359008 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 21 06:14:33.359551 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 21 06:14:33.359580 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 06:14:33.360117 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 21 06:14:33.360184 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 21 06:14:33.361813 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 21 06:14:33.361854 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 21 06:14:33.362948 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 21 06:14:33.362989 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 21 06:14:33.366251 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 21 06:14:33.367352 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 21 06:14:33.367405 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 06:14:33.368711 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 21 06:14:33.368755 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 06:14:33.370846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 06:14:33.370890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:14:33.374142 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 21 06:14:33.374254 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 21 06:14:33.374320 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 21 06:14:33.374758 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 21 06:14:33.374888 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 21 06:14:33.381638 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 21 06:14:33.381752 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 21 06:14:33.382550 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 21 06:14:33.384402 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 21 06:14:33.402537 systemd[1]: Switching root. Jun 21 06:14:33.438143 systemd-journald[211]: Journal stopped Jun 21 06:14:35.293454 systemd-journald[211]: Received SIGTERM from PID 1 (systemd). 
Jun 21 06:14:35.293516 kernel: SELinux: policy capability network_peer_controls=1 Jun 21 06:14:35.293533 kernel: SELinux: policy capability open_perms=1 Jun 21 06:14:35.293545 kernel: SELinux: policy capability extended_socket_class=1 Jun 21 06:14:35.293590 kernel: SELinux: policy capability always_check_network=0 Jun 21 06:14:35.293605 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 21 06:14:35.293617 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 21 06:14:35.293628 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 21 06:14:35.293639 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 21 06:14:35.293650 kernel: SELinux: policy capability userspace_initial_context=0 Jun 21 06:14:35.293661 kernel: audit: type=1403 audit(1750486474.172:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 21 06:14:35.293674 systemd[1]: Successfully loaded SELinux policy in 93.711ms. Jun 21 06:14:35.293695 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.072ms. Jun 21 06:14:35.293709 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 21 06:14:35.293723 systemd[1]: Detected virtualization kvm. Jun 21 06:14:35.293735 systemd[1]: Detected architecture x86-64. Jun 21 06:14:35.293757 systemd[1]: Detected first boot. Jun 21 06:14:35.293770 systemd[1]: Hostname set to . Jun 21 06:14:35.293790 systemd[1]: Initializing machine ID from VM UUID. Jun 21 06:14:35.293820 zram_generator::config[1057]: No configuration found. Jun 21 06:14:35.293833 kernel: Guest personality initialized and is inactive Jun 21 06:14:35.293844 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 21 06:14:35.293858 kernel: Initialized host personality Jun 21 06:14:35.293869 kernel: NET: Registered PF_VSOCK protocol family Jun 21 06:14:35.293881 systemd[1]: Populated /etc with preset unit settings. Jun 21 06:14:35.293894 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 21 06:14:35.293906 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 21 06:14:35.293918 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 21 06:14:35.293930 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 21 06:14:35.293942 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 21 06:14:35.293955 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 21 06:14:35.293968 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 21 06:14:35.293984 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 21 06:14:35.293997 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 21 06:14:35.294009 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 21 06:14:35.294020 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 21 06:14:35.294032 systemd[1]: Created slice user.slice - User and Session Slice. Jun 21 06:14:35.294044 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jun 21 06:14:35.294056 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 21 06:14:35.294070 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 21 06:14:35.294082 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 21 06:14:35.294095 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 21 06:14:35.294107 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 21 06:14:35.294119 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 21 06:14:35.294131 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 21 06:14:35.294162 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 21 06:14:35.294176 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 21 06:14:35.294187 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 21 06:14:35.294200 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 21 06:14:35.294213 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 21 06:14:35.294225 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 21 06:14:35.294237 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 21 06:14:35.294249 systemd[1]: Reached target slices.target - Slice Units. Jun 21 06:14:35.294261 systemd[1]: Reached target swap.target - Swaps. Jun 21 06:14:35.294272 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 21 06:14:35.294286 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 21 06:14:35.294298 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 21 06:14:35.294311 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 21 06:14:35.294323 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 21 06:14:35.294335 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 21 06:14:35.294347 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 21 06:14:35.294359 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 21 06:14:35.294371 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 21 06:14:35.294383 systemd[1]: Mounting media.mount - External Media Directory... Jun 21 06:14:35.294398 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:14:35.294411 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 21 06:14:35.294423 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 21 06:14:35.294435 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 21 06:14:35.294447 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 21 06:14:35.294460 systemd[1]: Reached target machines.target - Containers. Jun 21 06:14:35.294472 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jun 21 06:14:35.294484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 06:14:35.294498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 21 06:14:35.294510 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 21 06:14:35.294522 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 06:14:35.294534 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 06:14:35.294548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 06:14:35.294560 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 21 06:14:35.294572 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 06:14:35.294584 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 21 06:14:35.294601 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 21 06:14:35.294613 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 21 06:14:35.294625 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 21 06:14:35.294637 systemd[1]: Stopped systemd-fsck-usr.service. Jun 21 06:14:35.294649 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 06:14:35.294661 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 21 06:14:35.294673 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 21 06:14:35.294684 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 21 06:14:35.294695 kernel: loop: module loaded Jun 21 06:14:35.294709 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 21 06:14:35.294724 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 21 06:14:35.294737 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 21 06:14:35.294749 systemd[1]: verity-setup.service: Deactivated successfully. Jun 21 06:14:35.294761 systemd[1]: Stopped verity-setup.service. Jun 21 06:14:35.294774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:14:35.294786 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 21 06:14:35.294798 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 21 06:14:35.294812 systemd[1]: Mounted media.mount - External Media Directory. Jun 21 06:14:35.294823 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 21 06:14:35.294838 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 21 06:14:35.294850 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 21 06:14:35.294862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 21 06:14:35.294873 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 21 06:14:35.294885 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Jun 21 06:14:35.294920 systemd-journald[1140]: Collecting audit messages is disabled. Jun 21 06:14:35.294946 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 06:14:35.294958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 06:14:35.294973 systemd-journald[1140]: Journal started Jun 21 06:14:35.294998 systemd-journald[1140]: Runtime Journal (/run/log/journal/b8a9dbdd2607479ab5bbcbdadf9c8d64) is 8M, max 78.5M, 70.5M free. Jun 21 06:14:34.932288 systemd[1]: Queued start job for default target multi-user.target. Jun 21 06:14:34.954359 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 21 06:14:34.954806 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 21 06:14:35.299317 systemd[1]: Started systemd-journald.service - Journal Service. Jun 21 06:14:35.300963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 06:14:35.301907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 06:14:35.303707 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 21 06:14:35.303852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 06:14:35.304603 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 21 06:14:35.307449 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 21 06:14:35.308232 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 21 06:14:35.319187 kernel: ACPI: bus type drm_connector registered Jun 21 06:14:35.320015 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 21 06:14:35.322930 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 06:14:35.325328 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 06:14:35.334633 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 21 06:14:35.335195 kernel: fuse: init (API version 7.41) Jun 21 06:14:35.338318 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 21 06:14:35.338946 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 21 06:14:35.338983 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 21 06:14:35.342044 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 21 06:14:35.346285 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 21 06:14:35.350866 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 06:14:35.355524 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 21 06:14:35.360303 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 21 06:14:35.360990 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 06:14:35.361946 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 21 06:14:35.362551 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 06:14:35.363373 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jun 21 06:14:35.367279 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 21 06:14:35.372365 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 21 06:14:35.375610 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 21 06:14:35.375804 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 21 06:14:35.380380 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 21 06:14:35.381282 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 21 06:14:35.388914 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 21 06:14:35.407174 kernel: loop0: detected capacity change from 0 to 224512 Jun 21 06:14:35.409481 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 21 06:14:35.427483 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 21 06:14:35.432292 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 21 06:14:35.432708 systemd-journald[1140]: Time spent on flushing to /var/log/journal/b8a9dbdd2607479ab5bbcbdadf9c8d64 is 29.843ms for 978 entries. Jun 21 06:14:35.432708 systemd-journald[1140]: System Journal (/var/log/journal/b8a9dbdd2607479ab5bbcbdadf9c8d64) is 8M, max 584.8M, 576.8M free. Jun 21 06:14:35.490213 systemd-journald[1140]: Received client request to flush runtime journal. Jun 21 06:14:35.435643 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 21 06:14:35.438875 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 21 06:14:35.460835 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:14:35.493039 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 21 06:14:35.501205 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 21 06:14:35.523512 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 21 06:14:35.527196 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 21 06:14:35.531315 kernel: loop1: detected capacity change from 0 to 146240 Jun 21 06:14:35.529602 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 21 06:14:35.579848 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jun 21 06:14:35.579867 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Jun 21 06:14:35.586332 kernel: loop2: detected capacity change from 0 to 8 Jun 21 06:14:35.588844 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 21 06:14:35.600264 kernel: loop3: detected capacity change from 0 to 113872 Jun 21 06:14:35.672209 kernel: loop4: detected capacity change from 0 to 224512 Jun 21 06:14:35.854543 kernel: loop5: detected capacity change from 0 to 146240 Jun 21 06:14:35.941229 kernel: loop6: detected capacity change from 0 to 8 Jun 21 06:14:35.950290 kernel: loop7: detected capacity change from 0 to 113872 Jun 21 06:14:35.956334 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 21 06:14:36.024676 (sd-merge)[1218]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jun 21 06:14:36.025710 (sd-merge)[1218]: Merged extensions into '/usr'. 
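The (sd-merge) lines above show systemd-sysext picking up the extension images staged earlier (for example the kubernetes.raw link under /etc/extensions) and overlaying them onto /usr and /opt. A sketch of just the discovery step, assuming the conventional search directories documented in systemd-sysext(8); the actual merge is an overlayfs mount performed by systemd itself.

import os

# Directories systemd-sysext conventionally scans for extension images or trees
# (assumed here for illustration; see systemd-sysext(8) for the authoritative list).
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[str]:
    found = []
    for d in SEARCH_DIRS:
        if not os.path.isdir(d):
            continue
        for entry in sorted(os.listdir(d)):
            path = os.path.join(d, entry)
            # Raw disk images (*.raw) and plain directory trees both count as extensions.
            if entry.endswith(".raw") or os.path.isdir(path):
                found.append(path)
    return found

print(discover_extensions())   # e.g. ['/etc/extensions/kubernetes.raw', ...]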
Jun 21 06:14:36.035932 systemd[1]: Reload requested from client PID 1192 ('systemd-sysext') (unit systemd-sysext.service)... Jun 21 06:14:36.035952 systemd[1]: Reloading... Jun 21 06:14:36.108173 zram_generator::config[1240]: No configuration found. Jun 21 06:14:36.310585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 06:14:36.422846 systemd[1]: Reloading finished in 386 ms. Jun 21 06:14:36.444503 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 21 06:14:36.445350 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 21 06:14:36.454227 systemd[1]: Starting ensure-sysext.service... Jun 21 06:14:36.455585 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 21 06:14:36.459305 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 21 06:14:36.495346 systemd-udevd[1302]: Using default interface naming scheme 'v255'. Jun 21 06:14:36.508756 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 21 06:14:36.508802 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 21 06:14:36.509066 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 21 06:14:36.509335 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 21 06:14:36.510128 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 21 06:14:36.510474 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jun 21 06:14:36.510530 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jun 21 06:14:36.526298 systemd[1]: Reload requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)... Jun 21 06:14:36.526326 systemd[1]: Reloading... Jun 21 06:14:36.551020 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 06:14:36.551042 systemd-tmpfiles[1301]: Skipping /boot Jun 21 06:14:36.574518 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Jun 21 06:14:36.574544 systemd-tmpfiles[1301]: Skipping /boot Jun 21 06:14:36.620180 zram_generator::config[1331]: No configuration found. Jun 21 06:14:36.778805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 06:14:36.832693 ldconfig[1187]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 21 06:14:36.976668 systemd[1]: Reloading finished in 449 ms. Jun 21 06:14:36.983626 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 21 06:14:36.986650 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 21 06:14:36.999424 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jun 21 06:14:37.037180 kernel: mousedev: PS/2 mouse device common for all mice Jun 21 06:14:37.063174 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 21 06:14:37.064321 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 21 06:14:37.067676 systemd[1]: Finished ensure-sysext.service. Jun 21 06:14:37.071165 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jun 21 06:14:37.072315 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 21 06:14:37.076725 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:14:37.078167 kernel: ACPI: button: Power Button [PWRF] Jun 21 06:14:37.078874 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 06:14:37.084317 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 21 06:14:37.085467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 21 06:14:37.087971 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 21 06:14:37.092929 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 21 06:14:37.095361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 21 06:14:37.109060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 21 06:14:37.109905 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 21 06:14:37.111874 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 21 06:14:37.112906 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 21 06:14:37.114646 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 21 06:14:37.117960 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 21 06:14:37.125169 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 21 06:14:37.124218 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 21 06:14:37.143392 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 21 06:14:37.146324 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 21 06:14:37.146930 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 21 06:14:37.147989 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 21 06:14:37.148509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 21 06:14:37.149379 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 21 06:14:37.149660 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 21 06:14:37.150975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 21 06:14:37.151291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 21 06:14:37.152062 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jun 21 06:14:37.153392 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 21 06:14:37.158298 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 21 06:14:37.158372 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 21 06:14:37.165470 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 21 06:14:37.172109 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 21 06:14:37.194020 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 21 06:14:37.217655 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 21 06:14:37.219827 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 21 06:14:37.222903 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 21 06:14:37.257972 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 21 06:14:37.265285 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 21 06:14:37.266838 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 21 06:14:37.268901 augenrules[1481]: No rules Jun 21 06:14:37.269190 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 21 06:14:37.270692 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 06:14:37.270913 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 06:14:37.300871 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 21 06:14:37.326172 kernel: Console: switching to colour dummy device 80x25 Jun 21 06:14:37.327733 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 21 06:14:37.327769 kernel: [drm] features: -context_init Jun 21 06:14:37.338542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:14:37.344183 kernel: [drm] number of scanouts: 1 Jun 21 06:14:37.352230 kernel: [drm] number of cap sets: 0 Jun 21 06:14:37.354172 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jun 21 06:14:37.361979 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 21 06:14:37.362281 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:14:37.363641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 21 06:14:37.520496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 21 06:14:37.533242 systemd-networkd[1434]: lo: Link UP Jun 21 06:14:37.533251 systemd-networkd[1434]: lo: Gained carrier Jun 21 06:14:37.534464 systemd-networkd[1434]: Enumeration completed Jun 21 06:14:37.534543 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 21 06:14:37.535538 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 06:14:37.535548 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jun 21 06:14:37.537339 systemd-networkd[1434]: eth0: Link UP Jun 21 06:14:37.537706 systemd-networkd[1434]: eth0: Gained carrier Jun 21 06:14:37.537747 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 21 06:14:37.539610 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 21 06:14:37.545401 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 21 06:14:37.545723 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 21 06:14:37.545872 systemd[1]: Reached target time-set.target - System Time Set. Jun 21 06:14:37.550272 systemd-networkd[1434]: eth0: DHCPv4 address 172.24.4.45/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jun 21 06:14:37.551919 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Jun 21 06:14:37.560312 systemd-resolved[1435]: Positive Trust Anchors: Jun 21 06:14:37.560639 systemd-resolved[1435]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 21 06:14:37.560759 systemd-resolved[1435]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 21 06:14:37.569070 systemd-resolved[1435]: Using system hostname 'ci-4372-0-0-b-cad5e61be6.novalocal'. Jun 21 06:14:37.571351 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 21 06:14:37.571528 systemd[1]: Reached target network.target - Network. Jun 21 06:14:37.571603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 21 06:14:37.571701 systemd[1]: Reached target sysinit.target - System Initialization. Jun 21 06:14:37.571870 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 21 06:14:37.571983 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 21 06:14:37.572065 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 21 06:14:37.572308 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 21 06:14:37.572460 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 21 06:14:37.572545 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 21 06:14:37.572623 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 21 06:14:37.572672 systemd[1]: Reached target paths.target - Path Units. Jun 21 06:14:37.572748 systemd[1]: Reached target timers.target - Timer Units. Jun 21 06:14:37.573953 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 21 06:14:37.575685 systemd[1]: Starting docker.socket - Docker Socket for the API... 
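The DHCP lease reported above (172.24.4.45/24 with gateway 172.24.4.1, handed out by 172.24.4.1) can be unpacked with the standard ipaddress module; a small sketch of what the /24 prefix implies:

import ipaddress

iface = ipaddress.ip_interface("172.24.4.45/24")      # address and prefix from the lease
gateway = ipaddress.ip_address("172.24.4.1")

print(iface.network)                     # 172.24.4.0/24
print(iface.network.broadcast_address)   # 172.24.4.255
print(gateway in iface.network)          # True: the gateway is reachable on-link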
Jun 21 06:14:37.579425 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 21 06:14:37.579649 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 21 06:14:37.579746 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 21 06:14:37.581573 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 21 06:14:37.581986 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 21 06:14:37.582948 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 21 06:14:37.583199 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 21 06:14:37.584650 systemd[1]: Reached target sockets.target - Socket Units. Jun 21 06:14:37.584745 systemd[1]: Reached target basic.target - Basic System. Jun 21 06:14:37.584892 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 21 06:14:37.584936 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 21 06:14:37.585997 systemd[1]: Starting containerd.service - containerd container runtime... Jun 21 06:14:37.589297 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 21 06:14:37.591311 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 21 06:14:37.596169 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 21 06:14:37.598722 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 21 06:14:37.601161 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:37.601129 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 21 06:14:37.601264 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 21 06:14:37.604610 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 21 06:14:37.609274 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 21 06:14:37.613353 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 21 06:14:37.624782 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 21 06:14:37.633212 jq[1516]: false Jun 21 06:14:37.633513 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache Jun 21 06:14:37.633392 oslogin_cache_refresh[1519]: Refreshing passwd entry cache Jun 21 06:14:37.634461 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 21 06:14:37.638432 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 21 06:14:37.640022 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 21 06:14:37.640540 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 21 06:14:37.642209 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting Jun 21 06:14:37.642209 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jun 21 06:14:37.642201 oslogin_cache_refresh[1519]: Failure getting users, quitting Jun 21 06:14:37.642339 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache Jun 21 06:14:37.642218 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 21 06:14:37.642263 oslogin_cache_refresh[1519]: Refreshing group entry cache Jun 21 06:14:37.642593 systemd[1]: Starting update-engine.service - Update Engine... Jun 21 06:14:37.645581 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 21 06:14:37.652320 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 21 06:14:37.652688 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 21 06:14:37.652863 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 21 06:14:37.653593 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting Jun 21 06:14:37.653593 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 06:14:37.653587 oslogin_cache_refresh[1519]: Failure getting groups, quitting Jun 21 06:14:37.653598 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 21 06:14:37.654717 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 21 06:14:37.654881 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 21 06:14:37.660888 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 21 06:14:37.665595 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 21 06:14:37.666424 extend-filesystems[1518]: Found /dev/vda6 Jun 21 06:14:37.681008 extend-filesystems[1518]: Found /dev/vda9 Jun 21 06:14:37.687463 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 21 06:14:37.688943 extend-filesystems[1518]: Checking size of /dev/vda9 Jun 21 06:14:37.689072 jq[1530]: true Jun 21 06:14:37.700815 tar[1533]: linux-amd64/LICENSE Jun 21 06:14:37.700815 tar[1533]: linux-amd64/helm Jun 21 06:14:37.713915 update_engine[1529]: I20250621 06:14:37.701280 1529 main.cc:92] Flatcar Update Engine starting Jun 21 06:14:37.724417 systemd[1]: motdgen.service: Deactivated successfully. Jun 21 06:14:37.728087 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 21 06:14:37.733335 extend-filesystems[1518]: Resized partition /dev/vda9 Jun 21 06:14:37.741503 extend-filesystems[1560]: resize2fs 1.47.2 (1-Jan-2025) Jun 21 06:14:37.745172 jq[1554]: true Jun 21 06:14:37.753290 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jun 21 06:14:37.754415 dbus-daemon[1514]: [system] SELinux support is enabled Jun 21 06:14:37.755398 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 21 06:14:37.760368 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 21 06:14:37.760401 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jun 21 06:14:37.760501 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 21 06:14:37.760518 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 21 06:14:37.767084 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jun 21 06:14:38.375444 systemd-resolved[1435]: Clock change detected. Flushing caches. Jun 21 06:14:38.379396 systemd[1]: Started update-engine.service - Update Engine. Jun 21 06:14:38.379685 update_engine[1529]: I20250621 06:14:38.379630 1529 update_check_scheduler.cc:74] Next update check in 6m33s Jun 21 06:14:38.383888 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 21 06:14:38.431040 systemd-timesyncd[1439]: Contacted time server 66.59.198.178:123 (0.flatcar.pool.ntp.org). Jun 21 06:14:38.431222 systemd-timesyncd[1439]: Initial clock synchronization to Sat 2025-06-21 06:14:38.374973 UTC. Jun 21 06:14:38.433549 extend-filesystems[1560]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 21 06:14:38.433549 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 21 06:14:38.433549 extend-filesystems[1560]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jun 21 06:14:38.433966 extend-filesystems[1518]: Resized filesystem in /dev/vda9 Jun 21 06:14:38.434891 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 21 06:14:38.435797 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 21 06:14:38.511607 systemd-logind[1528]: New seat seat0. Jun 21 06:14:38.517861 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (Power Button) Jun 21 06:14:38.517878 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 21 06:14:38.518089 systemd[1]: Started systemd-logind.service - User Login Management. Jun 21 06:14:38.533666 bash[1581]: Updated "/home/core/.ssh/authorized_keys" Jun 21 06:14:38.535160 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 21 06:14:38.539330 systemd[1]: Starting sshkeys.service... Jun 21 06:14:38.605206 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 21 06:14:38.607920 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
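The extend-filesystems entries above record an online resize of /dev/vda9 (ext4, mounted on /) from 1617920 to 2014203 blocks. As a rough sketch only, the same grow-to-partition-size step could be reproduced by hand with the commands below; the device name comes from the log, but the commands are illustrative and not what the unit literally executes:

lsblk /dev/vda9        # confirm the enlarged partition as seen by the kernel
resize2fs /dev/vda9    # grow the mounted ext4 filesystem online to fill the partition
df -h /                # verify the new size is visible on /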
Jun 21 06:14:38.612363 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 21 06:14:38.626430 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:38.798466 containerd[1545]: time="2025-06-21T06:14:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 21 06:14:38.799377 containerd[1545]: time="2025-06-21T06:14:38.799344229Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.818896399Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.652µs" Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.818935212Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.818957483Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.819149023Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.819168359Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.819194989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.819256394Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.819270440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.819522172Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.819538132Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.819549985Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 21 06:14:38.820003 containerd[1545]: time="2025-06-21T06:14:38.819560614Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 21 06:14:38.820352 containerd[1545]: time="2025-06-21T06:14:38.819632970Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 21 06:14:38.820352 containerd[1545]: time="2025-06-21T06:14:38.819832554Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 06:14:38.820352 containerd[1545]: 
time="2025-06-21T06:14:38.819863452Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 21 06:14:38.820352 containerd[1545]: time="2025-06-21T06:14:38.819876196Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 21 06:14:38.820352 containerd[1545]: time="2025-06-21T06:14:38.819900161Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 21 06:14:38.820352 containerd[1545]: time="2025-06-21T06:14:38.820159728Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 21 06:14:38.820352 containerd[1545]: time="2025-06-21T06:14:38.820225031Z" level=info msg="metadata content store policy set" policy=shared Jun 21 06:14:38.832110 containerd[1545]: time="2025-06-21T06:14:38.832076658Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 21 06:14:38.832193 containerd[1545]: time="2025-06-21T06:14:38.832125360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 21 06:14:38.832193 containerd[1545]: time="2025-06-21T06:14:38.832141680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 21 06:14:38.832193 containerd[1545]: time="2025-06-21T06:14:38.832160766Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 21 06:14:38.832193 containerd[1545]: time="2025-06-21T06:14:38.832175604Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jun 21 06:14:38.832193 containerd[1545]: time="2025-06-21T06:14:38.832187797Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 21 06:14:38.832299 containerd[1545]: time="2025-06-21T06:14:38.832202915Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 21 06:14:38.832299 containerd[1545]: time="2025-06-21T06:14:38.832216931Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 21 06:14:38.832299 containerd[1545]: time="2025-06-21T06:14:38.832229846Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 21 06:14:38.832299 containerd[1545]: time="2025-06-21T06:14:38.832242109Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 21 06:14:38.832299 containerd[1545]: time="2025-06-21T06:14:38.832256516Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 21 06:14:38.832299 containerd[1545]: time="2025-06-21T06:14:38.832274199Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 21 06:14:38.832426 containerd[1545]: time="2025-06-21T06:14:38.832380999Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 21 06:14:38.832426 containerd[1545]: time="2025-06-21T06:14:38.832404253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 21 06:14:38.832426 containerd[1545]: time="2025-06-21T06:14:38.832420223Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 21 06:14:38.832489 containerd[1545]: time="2025-06-21T06:14:38.832437395Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 21 06:14:38.832489 containerd[1545]: time="2025-06-21T06:14:38.832451231Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 21 06:14:38.832489 containerd[1545]: time="2025-06-21T06:14:38.832462282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 21 06:14:38.832489 containerd[1545]: time="2025-06-21T06:14:38.832474174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 21 06:14:38.832489 containerd[1545]: time="2025-06-21T06:14:38.832484754Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 21 06:14:38.832628 containerd[1545]: time="2025-06-21T06:14:38.832497007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 21 06:14:38.832628 containerd[1545]: time="2025-06-21T06:14:38.832509159Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 21 06:14:38.832628 containerd[1545]: time="2025-06-21T06:14:38.832525490Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 21 06:14:38.832628 containerd[1545]: time="2025-06-21T06:14:38.832583118Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 21 06:14:38.832628 containerd[1545]: time="2025-06-21T06:14:38.832597806Z" level=info msg="Start snapshots syncer" Jun 21 06:14:38.832628 containerd[1545]: time="2025-06-21T06:14:38.832617182Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 21 06:14:38.832945 containerd[1545]: time="2025-06-21T06:14:38.832865468Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jun 21 06:14:38.832945 containerd[1545]: time="2025-06-21T06:14:38.832943855Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jun 21 06:14:38.834127 containerd[1545]: time="2025-06-21T06:14:38.834095134Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jun 21 06:14:38.834242 containerd[1545]: time="2025-06-21T06:14:38.834215560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jun 21 06:14:38.834284 containerd[1545]: time="2025-06-21T06:14:38.834271915Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jun 21 06:14:38.834313 containerd[1545]: time="2025-06-21T06:14:38.834287695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jun 21 06:14:38.834313 containerd[1545]: time="2025-06-21T06:14:38.834301220Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jun 21 06:14:38.834359 containerd[1545]: time="2025-06-21T06:14:38.834313594Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jun 21 06:14:38.834359 containerd[1545]: time="2025-06-21T06:14:38.834326287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jun 21 06:14:38.834414 containerd[1545]: time="2025-06-21T06:14:38.834356945Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jun 21 06:14:38.834414 containerd[1545]: time="2025-06-21T06:14:38.834398373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jun 21 06:14:38.834460 containerd[1545]: 
time="2025-06-21T06:14:38.834411678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jun 21 06:14:38.834484 containerd[1545]: time="2025-06-21T06:14:38.834461251Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jun 21 06:14:38.837058 containerd[1545]: time="2025-06-21T06:14:38.837034607Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 06:14:38.837116 containerd[1545]: time="2025-06-21T06:14:38.837063231Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jun 21 06:14:38.837116 containerd[1545]: time="2025-06-21T06:14:38.837075003Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 06:14:38.837116 containerd[1545]: time="2025-06-21T06:14:38.837106021Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jun 21 06:14:38.837191 containerd[1545]: time="2025-06-21T06:14:38.837117252Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jun 21 06:14:38.837191 containerd[1545]: time="2025-06-21T06:14:38.837128443Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jun 21 06:14:38.837191 containerd[1545]: time="2025-06-21T06:14:38.837140666Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jun 21 06:14:38.837191 containerd[1545]: time="2025-06-21T06:14:38.837157858Z" level=info msg="runtime interface created" Jun 21 06:14:38.837191 containerd[1545]: time="2025-06-21T06:14:38.837163909Z" level=info msg="created NRI interface" Jun 21 06:14:38.837293 containerd[1545]: time="2025-06-21T06:14:38.837191982Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jun 21 06:14:38.837293 containerd[1545]: time="2025-06-21T06:14:38.837206800Z" level=info msg="Connect containerd service" Jun 21 06:14:38.837293 containerd[1545]: time="2025-06-21T06:14:38.837239702Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 21 06:14:38.838272 containerd[1545]: time="2025-06-21T06:14:38.838231993Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 06:14:38.868690 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 21 06:14:38.919320 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 21 06:14:38.922253 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 21 06:14:38.954483 systemd[1]: issuegen.service: Deactivated successfully. Jun 21 06:14:38.954693 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 21 06:14:38.957752 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 21 06:14:38.986839 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 21 06:14:38.990408 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 21 06:14:38.994425 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Jun 21 06:14:38.994828 systemd[1]: Reached target getty.target - Login Prompts. Jun 21 06:14:39.063434 containerd[1545]: time="2025-06-21T06:14:39.062917881Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 21 06:14:39.063434 containerd[1545]: time="2025-06-21T06:14:39.063177738Z" level=info msg="Start subscribing containerd event" Jun 21 06:14:39.063434 containerd[1545]: time="2025-06-21T06:14:39.063241127Z" level=info msg="Start recovering state" Jun 21 06:14:39.064505 containerd[1545]: time="2025-06-21T06:14:39.063359629Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 21 06:14:39.064505 containerd[1545]: time="2025-06-21T06:14:39.063668819Z" level=info msg="Start event monitor" Jun 21 06:14:39.064505 containerd[1545]: time="2025-06-21T06:14:39.063693195Z" level=info msg="Start cni network conf syncer for default" Jun 21 06:14:39.064505 containerd[1545]: time="2025-06-21T06:14:39.063703184Z" level=info msg="Start streaming server" Jun 21 06:14:39.064505 containerd[1545]: time="2025-06-21T06:14:39.063739372Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jun 21 06:14:39.064505 containerd[1545]: time="2025-06-21T06:14:39.063751224Z" level=info msg="runtime interface starting up..." Jun 21 06:14:39.064505 containerd[1545]: time="2025-06-21T06:14:39.063762946Z" level=info msg="starting plugins..." Jun 21 06:14:39.064505 containerd[1545]: time="2025-06-21T06:14:39.063778275Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jun 21 06:14:39.064505 containerd[1545]: time="2025-06-21T06:14:39.063975304Z" level=info msg="containerd successfully booted in 0.266406s" Jun 21 06:14:39.064956 systemd[1]: Started containerd.service - containerd container runtime. Jun 21 06:14:39.148402 tar[1533]: linux-amd64/README.md Jun 21 06:14:39.150221 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 21 06:14:39.153300 systemd[1]: Started sshd@0-172.24.4.45:22-172.24.4.1:37996.service - OpenSSH per-connection server daemon (172.24.4.1:37996). Jun 21 06:14:39.178427 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 21 06:14:39.233024 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:39.646213 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:40.074270 systemd-networkd[1434]: eth0: Gained IPv6LL Jun 21 06:14:40.078932 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 21 06:14:40.080884 systemd[1]: Reached target network-online.target - Network is Online. Jun 21 06:14:40.085254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:14:40.088603 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 21 06:14:40.163703 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 21 06:14:40.524317 sshd[1627]: Accepted publickey for core from 172.24.4.1 port 37996 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:40.529296 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:40.579160 systemd-logind[1528]: New session 1 of user core. Jun 21 06:14:40.581123 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 21 06:14:40.584187 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 21 06:14:40.612578 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jun 21 06:14:40.617217 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 21 06:14:40.629437 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 21 06:14:40.636135 systemd-logind[1528]: New session c1 of user core. Jun 21 06:14:40.794641 systemd[1648]: Queued start job for default target default.target. Jun 21 06:14:40.799107 systemd[1648]: Created slice app.slice - User Application Slice. Jun 21 06:14:40.799136 systemd[1648]: Reached target paths.target - Paths. Jun 21 06:14:40.799173 systemd[1648]: Reached target timers.target - Timers. Jun 21 06:14:40.802076 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 21 06:14:40.811251 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 21 06:14:40.811355 systemd[1648]: Reached target sockets.target - Sockets. Jun 21 06:14:40.811392 systemd[1648]: Reached target basic.target - Basic System. Jun 21 06:14:40.811426 systemd[1648]: Reached target default.target - Main User Target. Jun 21 06:14:40.811451 systemd[1648]: Startup finished in 166ms. Jun 21 06:14:40.812474 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 21 06:14:40.820189 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 21 06:14:41.251480 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:41.277389 systemd[1]: Started sshd@1-172.24.4.45:22-172.24.4.1:38000.service - OpenSSH per-connection server daemon (172.24.4.1:38000). Jun 21 06:14:41.684191 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:42.039325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:14:42.062629 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 06:14:42.503536 sshd[1660]: Accepted publickey for core from 172.24.4.1 port 38000 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:42.507302 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:42.514363 systemd-logind[1528]: New session 2 of user core. Jun 21 06:14:42.520286 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 21 06:14:43.092127 sshd[1673]: Connection closed by 172.24.4.1 port 38000 Jun 21 06:14:43.091818 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:43.107621 systemd[1]: sshd@1-172.24.4.45:22-172.24.4.1:38000.service: Deactivated successfully. Jun 21 06:14:43.112482 systemd[1]: session-2.scope: Deactivated successfully. Jun 21 06:14:43.116858 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit. Jun 21 06:14:43.124530 systemd[1]: Started sshd@2-172.24.4.45:22-172.24.4.1:38014.service - OpenSSH per-connection server daemon (172.24.4.1:38014). Jun 21 06:14:43.127520 systemd-logind[1528]: Removed session 2. Jun 21 06:14:43.377882 kubelet[1668]: E0621 06:14:43.377763 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 06:14:43.382958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 06:14:43.383348 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 21 06:14:43.384096 systemd[1]: kubelet.service: Consumed 2.144s CPU time, 266.4M memory peak. Jun 21 06:14:44.065529 login[1621]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 21 06:14:44.067343 login[1620]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jun 21 06:14:44.076068 systemd-logind[1528]: New session 3 of user core. Jun 21 06:14:44.085364 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 21 06:14:44.090570 systemd-logind[1528]: New session 4 of user core. Jun 21 06:14:44.100025 sshd[1680]: Accepted publickey for core from 172.24.4.1 port 38014 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:44.101396 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 21 06:14:44.101833 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:44.118248 systemd-logind[1528]: New session 5 of user core. Jun 21 06:14:44.126762 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 21 06:14:44.901040 sshd[1696]: Connection closed by 172.24.4.1 port 38014 Jun 21 06:14:44.899666 sshd-session[1680]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:44.907568 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit. Jun 21 06:14:44.908068 systemd[1]: sshd@2-172.24.4.45:22-172.24.4.1:38014.service: Deactivated successfully. Jun 21 06:14:44.911740 systemd[1]: session-5.scope: Deactivated successfully. Jun 21 06:14:44.915277 systemd-logind[1528]: Removed session 5. Jun 21 06:14:45.279059 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:45.295169 coreos-metadata[1513]: Jun 21 06:14:45.295 WARN failed to locate config-drive, using the metadata service API instead Jun 21 06:14:45.520661 coreos-metadata[1513]: Jun 21 06:14:45.520 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jun 21 06:14:45.721049 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jun 21 06:14:45.726081 coreos-metadata[1513]: Jun 21 06:14:45.725 INFO Fetch successful Jun 21 06:14:45.727188 coreos-metadata[1513]: Jun 21 06:14:45.727 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jun 21 06:14:45.739437 coreos-metadata[1591]: Jun 21 06:14:45.739 WARN failed to locate config-drive, using the metadata service API instead Jun 21 06:14:45.742613 coreos-metadata[1513]: Jun 21 06:14:45.742 INFO Fetch successful Jun 21 06:14:45.742613 coreos-metadata[1513]: Jun 21 06:14:45.742 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jun 21 06:14:45.758284 coreos-metadata[1513]: Jun 21 06:14:45.758 INFO Fetch successful Jun 21 06:14:45.758814 coreos-metadata[1513]: Jun 21 06:14:45.758 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jun 21 06:14:45.772091 coreos-metadata[1513]: Jun 21 06:14:45.772 INFO Fetch successful Jun 21 06:14:45.772523 coreos-metadata[1513]: Jun 21 06:14:45.772 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jun 21 06:14:45.778881 coreos-metadata[1591]: Jun 21 06:14:45.778 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jun 21 06:14:45.790181 coreos-metadata[1513]: Jun 21 06:14:45.790 INFO Fetch successful Jun 21 06:14:45.790594 coreos-metadata[1513]: Jun 21 06:14:45.790 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jun 21 06:14:45.797901 coreos-metadata[1591]: Jun 21 
06:14:45.797 INFO Fetch successful Jun 21 06:14:45.797901 coreos-metadata[1591]: Jun 21 06:14:45.797 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 21 06:14:45.807161 coreos-metadata[1513]: Jun 21 06:14:45.807 INFO Fetch successful Jun 21 06:14:45.811909 coreos-metadata[1591]: Jun 21 06:14:45.811 INFO Fetch successful Jun 21 06:14:45.820728 unknown[1591]: wrote ssh authorized keys file for user: core Jun 21 06:14:45.866147 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 21 06:14:45.868416 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 21 06:14:45.871870 update-ssh-keys[1722]: Updated "/home/core/.ssh/authorized_keys" Jun 21 06:14:45.872972 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 21 06:14:45.877246 systemd[1]: Finished sshkeys.service. Jun 21 06:14:45.882686 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 21 06:14:45.883232 systemd[1]: Startup finished in 3.782s (kernel) + 16.404s (initrd) + 11.197s (userspace) = 31.384s. Jun 21 06:14:53.637382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 21 06:14:53.647750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:14:54.190742 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:14:54.201306 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 06:14:54.307473 kubelet[1736]: E0621 06:14:54.307255 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 06:14:54.315828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 06:14:54.316214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 06:14:54.317427 systemd[1]: kubelet.service: Consumed 480ms CPU time, 110.4M memory peak. Jun 21 06:14:54.930430 systemd[1]: Started sshd@3-172.24.4.45:22-172.24.4.1:41042.service - OpenSSH per-connection server daemon (172.24.4.1:41042). Jun 21 06:14:56.053363 sshd[1744]: Accepted publickey for core from 172.24.4.1 port 41042 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:56.057097 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:56.108848 systemd-logind[1528]: New session 6 of user core. Jun 21 06:14:56.122726 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 21 06:14:56.647921 sshd[1746]: Connection closed by 172.24.4.1 port 41042 Jun 21 06:14:56.647716 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:56.684351 systemd[1]: sshd@3-172.24.4.45:22-172.24.4.1:41042.service: Deactivated successfully. Jun 21 06:14:56.691936 systemd[1]: session-6.scope: Deactivated successfully. Jun 21 06:14:56.696427 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit. Jun 21 06:14:56.706701 systemd[1]: Started sshd@4-172.24.4.45:22-172.24.4.1:41050.service - OpenSSH per-connection server daemon (172.24.4.1:41050). 
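No config-drive was found (the recurring "/dev/disk/by-label/config-2: Can't lookup blockdev" messages), so both metadata agents fell back to the OpenStack metadata service, as logged above. A minimal sketch of the same lookups from a shell, using the exact endpoints in the log (curl usage is illustrative; the agents use their own HTTP client):

curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
curl -s http://169.254.169.254/latest/meta-data/hostname
curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key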
Jun 21 06:14:56.710897 systemd-logind[1528]: Removed session 6. Jun 21 06:14:57.901871 sshd[1752]: Accepted publickey for core from 172.24.4.1 port 41050 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:57.905617 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:57.925508 systemd-logind[1528]: New session 7 of user core. Jun 21 06:14:57.940449 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 21 06:14:58.531060 sshd[1754]: Connection closed by 172.24.4.1 port 41050 Jun 21 06:14:58.532675 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Jun 21 06:14:58.555576 systemd[1]: sshd@4-172.24.4.45:22-172.24.4.1:41050.service: Deactivated successfully. Jun 21 06:14:58.559523 systemd[1]: session-7.scope: Deactivated successfully. Jun 21 06:14:58.562637 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit. Jun 21 06:14:58.573108 systemd[1]: Started sshd@5-172.24.4.45:22-172.24.4.1:41054.service - OpenSSH per-connection server daemon (172.24.4.1:41054). Jun 21 06:14:58.576148 systemd-logind[1528]: Removed session 7. Jun 21 06:14:59.713237 sshd[1760]: Accepted publickey for core from 172.24.4.1 port 41054 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:14:59.716647 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:14:59.735177 systemd-logind[1528]: New session 8 of user core. Jun 21 06:14:59.747567 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 21 06:15:00.333033 sshd[1762]: Connection closed by 172.24.4.1 port 41054 Jun 21 06:15:00.334518 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Jun 21 06:15:00.355706 systemd[1]: sshd@5-172.24.4.45:22-172.24.4.1:41054.service: Deactivated successfully. Jun 21 06:15:00.359727 systemd[1]: session-8.scope: Deactivated successfully. Jun 21 06:15:00.361884 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit. Jun 21 06:15:00.369423 systemd[1]: Started sshd@6-172.24.4.45:22-172.24.4.1:41064.service - OpenSSH per-connection server daemon (172.24.4.1:41064). Jun 21 06:15:00.371483 systemd-logind[1528]: Removed session 8. Jun 21 06:15:01.563383 sshd[1768]: Accepted publickey for core from 172.24.4.1 port 41064 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:15:01.566846 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:15:01.580669 systemd-logind[1528]: New session 9 of user core. Jun 21 06:15:01.602533 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 21 06:15:02.065856 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 21 06:15:02.066646 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 06:15:02.094270 sudo[1771]: pam_unix(sudo:session): session closed for user root Jun 21 06:15:02.291898 sshd[1770]: Connection closed by 172.24.4.1 port 41064 Jun 21 06:15:02.292452 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Jun 21 06:15:02.306158 systemd[1]: sshd@6-172.24.4.45:22-172.24.4.1:41064.service: Deactivated successfully. Jun 21 06:15:02.309940 systemd[1]: session-9.scope: Deactivated successfully. Jun 21 06:15:02.312464 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit. 
Jun 21 06:15:02.319165 systemd[1]: Started sshd@7-172.24.4.45:22-172.24.4.1:41078.service - OpenSSH per-connection server daemon (172.24.4.1:41078). Jun 21 06:15:02.321957 systemd-logind[1528]: Removed session 9. Jun 21 06:15:03.586645 sshd[1777]: Accepted publickey for core from 172.24.4.1 port 41078 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:15:03.590281 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:15:03.608934 systemd-logind[1528]: New session 10 of user core. Jun 21 06:15:03.622411 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 21 06:15:04.049092 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 21 06:15:04.049853 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 06:15:04.066121 sudo[1781]: pam_unix(sudo:session): session closed for user root Jun 21 06:15:04.081619 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 21 06:15:04.082343 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 06:15:04.112915 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 21 06:15:04.232754 augenrules[1803]: No rules Jun 21 06:15:04.235535 systemd[1]: audit-rules.service: Deactivated successfully. Jun 21 06:15:04.236366 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 21 06:15:04.239709 sudo[1780]: pam_unix(sudo:session): session closed for user root Jun 21 06:15:04.391856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 21 06:15:04.395952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:15:04.490619 sshd[1779]: Connection closed by 172.24.4.1 port 41078 Jun 21 06:15:04.488783 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jun 21 06:15:04.514732 systemd[1]: sshd@7-172.24.4.45:22-172.24.4.1:41078.service: Deactivated successfully. Jun 21 06:15:04.524140 systemd[1]: session-10.scope: Deactivated successfully. Jun 21 06:15:04.529698 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit. Jun 21 06:15:04.542788 systemd[1]: Started sshd@8-172.24.4.45:22-172.24.4.1:53794.service - OpenSSH per-connection server daemon (172.24.4.1:53794). Jun 21 06:15:04.549630 systemd-logind[1528]: Removed session 10. Jun 21 06:15:04.960405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:15:04.971526 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 06:15:05.064576 kubelet[1821]: E0621 06:15:05.064511 1821 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 06:15:05.069691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 06:15:05.069914 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 06:15:05.070266 systemd[1]: kubelet.service: Consumed 613ms CPU time, 109.9M memory peak. 
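The kubelet keeps exiting with the same error because /var/lib/kubelet/config.yaml does not exist yet; judging by the bootstrap flags that appear later in this log, that file is expected to be written during cluster bootstrap (e.g. by kubeadm), so the restart loop above is normal until then. A quick way to confirm the state on the host (paths from the log; the commands are only a sketch):

systemctl status kubelet --no-pager      # shows the restart counter and the last exit status
ls -l /var/lib/kubelet/config.yaml       # absent until bootstrap writes the kubelet config
journalctl -u kubelet -n 20 --no-pager   # repeats the "failed to load Kubelet config file" error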
Jun 21 06:15:05.951390 sshd[1815]: Accepted publickey for core from 172.24.4.1 port 53794 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:15:05.956187 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:15:05.974848 systemd-logind[1528]: New session 11 of user core. Jun 21 06:15:05.988404 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 21 06:15:06.352833 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 21 06:15:06.353678 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 21 06:15:07.441453 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 21 06:15:07.471608 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 21 06:15:07.979421 dockerd[1850]: time="2025-06-21T06:15:07.978377518Z" level=info msg="Starting up" Jun 21 06:15:07.980282 dockerd[1850]: time="2025-06-21T06:15:07.980240562Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jun 21 06:15:08.050154 systemd[1]: var-lib-docker-metacopy\x2dcheck2891720583-merged.mount: Deactivated successfully. Jun 21 06:15:08.073963 dockerd[1850]: time="2025-06-21T06:15:08.073820612Z" level=info msg="Loading containers: start." Jun 21 06:15:08.098133 kernel: Initializing XFRM netlink socket Jun 21 06:15:08.474467 systemd-networkd[1434]: docker0: Link UP Jun 21 06:15:08.481347 dockerd[1850]: time="2025-06-21T06:15:08.481278990Z" level=info msg="Loading containers: done." Jun 21 06:15:08.504241 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2652730539-merged.mount: Deactivated successfully. Jun 21 06:15:08.506572 dockerd[1850]: time="2025-06-21T06:15:08.506537372Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 21 06:15:08.506774 dockerd[1850]: time="2025-06-21T06:15:08.506744881Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jun 21 06:15:08.507044 dockerd[1850]: time="2025-06-21T06:15:08.507017803Z" level=info msg="Initializing buildkit" Jun 21 06:15:08.563446 dockerd[1850]: time="2025-06-21T06:15:08.563311652Z" level=info msg="Completed buildkit initialization" Jun 21 06:15:08.583131 dockerd[1850]: time="2025-06-21T06:15:08.582922262Z" level=info msg="Daemon has completed initialization" Jun 21 06:15:08.583447 dockerd[1850]: time="2025-06-21T06:15:08.583170758Z" level=info msg="API listen on /run/docker.sock" Jun 21 06:15:08.583762 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 21 06:15:10.263498 containerd[1545]: time="2025-06-21T06:15:10.263394006Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jun 21 06:15:10.957380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594187873.mount: Deactivated successfully. 
Jun 21 06:15:12.777594 containerd[1545]: time="2025-06-21T06:15:12.777541596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:12.779238 containerd[1545]: time="2025-06-21T06:15:12.779203693Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053" Jun 21 06:15:12.780808 containerd[1545]: time="2025-06-21T06:15:12.780760490Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:12.785280 containerd[1545]: time="2025-06-21T06:15:12.785235769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:12.786457 containerd[1545]: time="2025-06-21T06:15:12.786212986Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.522702833s" Jun 21 06:15:12.786457 containerd[1545]: time="2025-06-21T06:15:12.786269402Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jun 21 06:15:12.787346 containerd[1545]: time="2025-06-21T06:15:12.787153014Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jun 21 06:15:14.884022 containerd[1545]: time="2025-06-21T06:15:14.883182867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:14.887876 containerd[1545]: time="2025-06-21T06:15:14.886373837Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jun 21 06:15:14.887876 containerd[1545]: time="2025-06-21T06:15:14.887745465Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:14.892532 containerd[1545]: time="2025-06-21T06:15:14.892452206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:14.894974 containerd[1545]: time="2025-06-21T06:15:14.894742733Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 2.107553841s" Jun 21 06:15:14.894974 containerd[1545]: time="2025-06-21T06:15:14.894865944Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 21 06:15:14.897324 
containerd[1545]: time="2025-06-21T06:15:14.897279863Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 21 06:15:15.147165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 21 06:15:15.161911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:15:16.095876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:15:16.114683 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 06:15:16.242949 kubelet[2118]: E0621 06:15:16.242838 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 06:15:16.248631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 06:15:16.248952 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 06:15:16.250759 systemd[1]: kubelet.service: Consumed 623ms CPU time, 108.6M memory peak. Jun 21 06:15:17.326728 containerd[1545]: time="2025-06-21T06:15:17.326671481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:17.328159 containerd[1545]: time="2025-06-21T06:15:17.328132096Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jun 21 06:15:17.329334 containerd[1545]: time="2025-06-21T06:15:17.329288920Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:17.332831 containerd[1545]: time="2025-06-21T06:15:17.332781234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:17.334151 containerd[1545]: time="2025-06-21T06:15:17.333958796Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 2.436614652s" Jun 21 06:15:17.334151 containerd[1545]: time="2025-06-21T06:15:17.334021274Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 21 06:15:17.334852 containerd[1545]: time="2025-06-21T06:15:17.334734846Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 21 06:15:18.724459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3759785686.mount: Deactivated successfully. 
Jun 21 06:15:19.329728 containerd[1545]: time="2025-06-21T06:15:19.329610808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:19.331442 containerd[1545]: time="2025-06-21T06:15:19.331413686Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jun 21 06:15:19.333006 containerd[1545]: time="2025-06-21T06:15:19.332814437Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:19.336085 containerd[1545]: time="2025-06-21T06:15:19.336058413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:19.336729 containerd[1545]: time="2025-06-21T06:15:19.336679721Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.001886886s" Jun 21 06:15:19.336913 containerd[1545]: time="2025-06-21T06:15:19.336731157Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 21 06:15:19.338033 containerd[1545]: time="2025-06-21T06:15:19.338007545Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 21 06:15:19.929803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626823208.mount: Deactivated successfully. 
Jun 21 06:15:21.460308 containerd[1545]: time="2025-06-21T06:15:21.458569433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:21.468536 containerd[1545]: time="2025-06-21T06:15:21.462056684Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jun 21 06:15:21.468536 containerd[1545]: time="2025-06-21T06:15:21.466496645Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:21.485956 containerd[1545]: time="2025-06-21T06:15:21.485740636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:21.488964 containerd[1545]: time="2025-06-21T06:15:21.488131226Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.150075851s" Jun 21 06:15:21.488964 containerd[1545]: time="2025-06-21T06:15:21.488293811Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 21 06:15:21.491037 containerd[1545]: time="2025-06-21T06:15:21.490570248Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 21 06:15:22.077503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276885795.mount: Deactivated successfully. 
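The PullImage/ImageCreate pairs above show containerd fetching the control-plane images into its k8s.io namespace (the namespace is registered earlier in this log). Roughly the same pull can be done by hand with ctr; the image reference is taken from the log, and the invocation below is illustrative rather than anything the node actually runs:

ctr --namespace k8s.io images pull registry.k8s.io/pause:3.10
ctr --namespace k8s.io images ls | grep pause   # confirm the image landed in the k8s.io namespace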
Jun 21 06:15:22.094504 containerd[1545]: time="2025-06-21T06:15:22.094410336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 06:15:22.096702 containerd[1545]: time="2025-06-21T06:15:22.096515861Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 21 06:15:22.098838 containerd[1545]: time="2025-06-21T06:15:22.098674586Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 06:15:22.104734 containerd[1545]: time="2025-06-21T06:15:22.104212438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 21 06:15:22.107359 containerd[1545]: time="2025-06-21T06:15:22.106449389Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 615.803751ms" Jun 21 06:15:22.107359 containerd[1545]: time="2025-06-21T06:15:22.106547083Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 21 06:15:22.108635 containerd[1545]: time="2025-06-21T06:15:22.108492377Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 21 06:15:22.758275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2041486271.mount: Deactivated successfully. Jun 21 06:15:23.516797 update_engine[1529]: I20250621 06:15:23.515473 1529 update_attempter.cc:509] Updating boot flags... 
Jun 21 06:15:25.796721 containerd[1545]: time="2025-06-21T06:15:25.796576669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:25.798330 containerd[1545]: time="2025-06-21T06:15:25.798097544Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jun 21 06:15:25.800306 containerd[1545]: time="2025-06-21T06:15:25.800242072Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:25.804769 containerd[1545]: time="2025-06-21T06:15:25.804743184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:25.805630 containerd[1545]: time="2025-06-21T06:15:25.805586718Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.69698726s" Jun 21 06:15:25.805794 containerd[1545]: time="2025-06-21T06:15:25.805764271Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 21 06:15:26.394619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 21 06:15:26.404747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:15:26.896151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:15:26.907507 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 21 06:15:26.961845 kubelet[2293]: E0621 06:15:26.961788 2293 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 21 06:15:26.964557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 21 06:15:26.964925 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 21 06:15:26.965702 systemd[1]: kubelet.service: Consumed 486ms CPU time, 110.1M memory peak. Jun 21 06:15:30.621434 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:15:30.622029 systemd[1]: kubelet.service: Consumed 486ms CPU time, 110.1M memory peak. Jun 21 06:15:30.634443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:15:30.694112 systemd[1]: Reload requested from client PID 2307 ('systemctl') (unit session-11.scope)... Jun 21 06:15:30.694160 systemd[1]: Reloading... Jun 21 06:15:30.858150 zram_generator::config[2352]: No configuration found. Jun 21 06:15:31.227143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 06:15:31.395181 systemd[1]: Reloading finished in 700 ms. 
Jun 21 06:15:31.468534 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 21 06:15:31.468635 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 21 06:15:31.469279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:15:31.469430 systemd[1]: kubelet.service: Consumed 367ms CPU time, 98.3M memory peak. Jun 21 06:15:31.471838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:15:31.987520 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:15:32.005689 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 06:15:32.125478 kubelet[2418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 06:15:32.128012 kubelet[2418]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 06:15:32.128012 kubelet[2418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 06:15:32.128012 kubelet[2418]: I0621 06:15:32.126145 2418 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 06:15:32.838791 kubelet[2418]: I0621 06:15:32.838729 2418 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 21 06:15:32.839042 kubelet[2418]: I0621 06:15:32.839028 2418 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 06:15:32.839508 kubelet[2418]: I0621 06:15:32.839492 2418 server.go:954] "Client rotation is on, will bootstrap in background" Jun 21 06:15:32.876252 kubelet[2418]: I0621 06:15:32.876210 2418 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 06:15:32.876588 kubelet[2418]: E0621 06:15:32.876259 2418 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.45:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.45:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:15:32.906006 kubelet[2418]: I0621 06:15:32.905932 2418 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 06:15:32.916547 kubelet[2418]: I0621 06:15:32.916492 2418 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 06:15:32.917363 kubelet[2418]: I0621 06:15:32.917279 2418 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 06:15:32.917973 kubelet[2418]: I0621 06:15:32.917360 2418 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-0-b-cad5e61be6.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 06:15:32.918248 kubelet[2418]: I0621 06:15:32.918087 2418 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 06:15:32.918248 kubelet[2418]: I0621 06:15:32.918122 2418 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 06:15:32.918668 kubelet[2418]: I0621 06:15:32.918620 2418 state_mem.go:36] "Initialized new in-memory state store" Jun 21 06:15:32.927698 kubelet[2418]: I0621 06:15:32.927645 2418 kubelet.go:446] "Attempting to sync node with API server" Jun 21 06:15:32.927923 kubelet[2418]: I0621 06:15:32.927745 2418 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 06:15:32.927923 kubelet[2418]: I0621 06:15:32.927873 2418 kubelet.go:352] "Adding apiserver pod source" Jun 21 06:15:32.928067 kubelet[2418]: I0621 06:15:32.927954 2418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 06:15:32.934834 kubelet[2418]: W0621 06:15:32.934760 2418 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-0-b-cad5e61be6.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 21 06:15:32.935414 kubelet[2418]: E0621 06:15:32.935353 2418 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.45:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4372-0-0-b-cad5e61be6.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.45:6443: connect: connection refused" logger="UnhandledError" Jun 
21 06:15:32.936078 kubelet[2418]: I0621 06:15:32.935661 2418 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 06:15:32.936463 kubelet[2418]: I0621 06:15:32.936445 2418 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 06:15:32.936650 kubelet[2418]: W0621 06:15:32.936636 2418 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 21 06:15:32.941378 kubelet[2418]: I0621 06:15:32.941359 2418 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 06:15:32.941499 kubelet[2418]: I0621 06:15:32.941487 2418 server.go:1287] "Started kubelet" Jun 21 06:15:32.953348 kubelet[2418]: I0621 06:15:32.953323 2418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 06:15:32.964012 kubelet[2418]: E0621 06:15:32.956838 2418 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.45:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.45:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4372-0-0-b-cad5e61be6.novalocal.184afa3a2c14868f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4372-0-0-b-cad5e61be6.novalocal,UID:ci-4372-0-0-b-cad5e61be6.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4372-0-0-b-cad5e61be6.novalocal,},FirstTimestamp:2025-06-21 06:15:32.941448847 +0000 UTC m=+0.900403841,LastTimestamp:2025-06-21 06:15:32.941448847 +0000 UTC m=+0.900403841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4372-0-0-b-cad5e61be6.novalocal,}" Jun 21 06:15:32.964561 kubelet[2418]: I0621 06:15:32.964501 2418 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 06:15:32.966565 kubelet[2418]: W0621 06:15:32.966504 2418 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 21 06:15:32.966875 kubelet[2418]: E0621 06:15:32.966754 2418 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.45:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.45:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:15:32.968330 kubelet[2418]: I0621 06:15:32.967778 2418 server.go:479] "Adding debug handlers to kubelet server" Jun 21 06:15:32.969901 kubelet[2418]: I0621 06:15:32.969878 2418 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 06:15:32.970390 kubelet[2418]: E0621 06:15:32.970204 2418 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" Jun 21 06:15:32.970830 kubelet[2418]: I0621 06:15:32.970578 2418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 06:15:32.971276 kubelet[2418]: I0621 06:15:32.971244 2418 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 06:15:32.972739 kubelet[2418]: I0621 
06:15:32.971554 2418 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 06:15:32.972998 kubelet[2418]: I0621 06:15:32.971942 2418 reconciler.go:26] "Reconciler: start to sync state" Jun 21 06:15:32.972998 kubelet[2418]: I0621 06:15:32.972677 2418 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 06:15:32.973307 kubelet[2418]: W0621 06:15:32.973257 2418 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 21 06:15:32.973469 kubelet[2418]: E0621 06:15:32.973315 2418 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.45:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.45:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:15:32.973469 kubelet[2418]: E0621 06:15:32.973394 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-0-b-cad5e61be6.novalocal?timeout=10s\": dial tcp 172.24.4.45:6443: connect: connection refused" interval="200ms" Jun 21 06:15:32.974028 kubelet[2418]: I0621 06:15:32.973970 2418 factory.go:221] Registration of the systemd container factory successfully Jun 21 06:15:32.974377 kubelet[2418]: I0621 06:15:32.974251 2418 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 06:15:32.976261 kubelet[2418]: E0621 06:15:32.976228 2418 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 06:15:32.977387 kubelet[2418]: I0621 06:15:32.977356 2418 factory.go:221] Registration of the containerd container factory successfully Jun 21 06:15:33.000149 kubelet[2418]: I0621 06:15:32.998627 2418 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 06:15:33.000149 kubelet[2418]: I0621 06:15:32.998646 2418 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 06:15:33.000149 kubelet[2418]: I0621 06:15:32.998702 2418 state_mem.go:36] "Initialized new in-memory state store" Jun 21 06:15:33.004656 kubelet[2418]: I0621 06:15:33.004484 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 06:15:33.008205 kubelet[2418]: I0621 06:15:33.007522 2418 policy_none.go:49] "None policy: Start" Jun 21 06:15:33.008205 kubelet[2418]: I0621 06:15:33.007568 2418 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 06:15:33.008205 kubelet[2418]: I0621 06:15:33.007582 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 06:15:33.008205 kubelet[2418]: I0621 06:15:33.007638 2418 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 06:15:33.008205 kubelet[2418]: I0621 06:15:33.007679 2418 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jun 21 06:15:33.008205 kubelet[2418]: I0621 06:15:33.007693 2418 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 06:15:33.008205 kubelet[2418]: I0621 06:15:33.007710 2418 state_mem.go:35] "Initializing new in-memory state store" Jun 21 06:15:33.008205 kubelet[2418]: E0621 06:15:33.007764 2418 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 06:15:33.015873 kubelet[2418]: W0621 06:15:33.015776 2418 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.45:6443: connect: connection refused Jun 21 06:15:33.015957 kubelet[2418]: E0621 06:15:33.015886 2418 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.45:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.45:6443: connect: connection refused" logger="UnhandledError" Jun 21 06:15:33.021651 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 21 06:15:33.032059 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 21 06:15:33.036113 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 21 06:15:33.054889 kubelet[2418]: I0621 06:15:33.054796 2418 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 06:15:33.055076 kubelet[2418]: I0621 06:15:33.055055 2418 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 06:15:33.055157 kubelet[2418]: I0621 06:15:33.055080 2418 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 06:15:33.057158 kubelet[2418]: I0621 06:15:33.057113 2418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 06:15:33.058667 kubelet[2418]: E0621 06:15:33.058628 2418 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 21 06:15:33.058741 kubelet[2418]: E0621 06:15:33.058704 2418 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" Jun 21 06:15:33.143436 systemd[1]: Created slice kubepods-burstable-pod35453708fb3f74e8db8405c67c26581a.slice - libcontainer container kubepods-burstable-pod35453708fb3f74e8db8405c67c26581a.slice. 
Jun 21 06:15:33.160215 kubelet[2418]: I0621 06:15:33.159435 2418 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.161470 kubelet[2418]: E0621 06:15:33.161377 2418 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.45:6443/api/v1/nodes\": dial tcp 172.24.4.45:6443: connect: connection refused" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.163428 kubelet[2418]: E0621 06:15:33.163294 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.166529 systemd[1]: Created slice kubepods-burstable-pod671e5fe25c3bffc05585c700cd4eb7ab.slice - libcontainer container kubepods-burstable-pod671e5fe25c3bffc05585c700cd4eb7ab.slice. Jun 21 06:15:33.172706 kubelet[2418]: E0621 06:15:33.171771 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.174908 kubelet[2418]: E0621 06:15:33.174818 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-0-b-cad5e61be6.novalocal?timeout=10s\": dial tcp 172.24.4.45:6443: connect: connection refused" interval="400ms" Jun 21 06:15:33.178971 systemd[1]: Created slice kubepods-burstable-pod3cb07023912c11e10e2ac9d5698ee164.slice - libcontainer container kubepods-burstable-pod3cb07023912c11e10e2ac9d5698ee164.slice. Jun 21 06:15:33.183776 kubelet[2418]: E0621 06:15:33.183721 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.275088 kubelet[2418]: I0621 06:15:33.274776 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/671e5fe25c3bffc05585c700cd4eb7ab-ca-certs\") pod \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"671e5fe25c3bffc05585c700cd4eb7ab\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.275088 kubelet[2418]: I0621 06:15:33.274859 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/671e5fe25c3bffc05585c700cd4eb7ab-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"671e5fe25c3bffc05585c700cd4eb7ab\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.275088 kubelet[2418]: I0621 06:15:33.274911 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/671e5fe25c3bffc05585c700cd4eb7ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"671e5fe25c3bffc05585c700cd4eb7ab\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.275088 kubelet[2418]: I0621 06:15:33.274973 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/3cb07023912c11e10e2ac9d5698ee164-kubeconfig\") pod \"kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"3cb07023912c11e10e2ac9d5698ee164\") " pod="kube-system/kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.275543 kubelet[2418]: I0621 06:15:33.275147 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35453708fb3f74e8db8405c67c26581a-ca-certs\") pod \"kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"35453708fb3f74e8db8405c67c26581a\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.275543 kubelet[2418]: I0621 06:15:33.275301 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35453708fb3f74e8db8405c67c26581a-k8s-certs\") pod \"kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"35453708fb3f74e8db8405c67c26581a\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.275543 kubelet[2418]: I0621 06:15:33.275419 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35453708fb3f74e8db8405c67c26581a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"35453708fb3f74e8db8405c67c26581a\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.275790 kubelet[2418]: I0621 06:15:33.275540 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/671e5fe25c3bffc05585c700cd4eb7ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"671e5fe25c3bffc05585c700cd4eb7ab\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.275790 kubelet[2418]: I0621 06:15:33.275628 2418 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/671e5fe25c3bffc05585c700cd4eb7ab-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"671e5fe25c3bffc05585c700cd4eb7ab\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.366645 kubelet[2418]: I0621 06:15:33.366548 2418 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.367470 kubelet[2418]: E0621 06:15:33.367370 2418 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.45:6443/api/v1/nodes\": dial tcp 172.24.4.45:6443: connect: connection refused" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.467795 containerd[1545]: time="2025-06-21T06:15:33.467358351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal,Uid:35453708fb3f74e8db8405c67c26581a,Namespace:kube-system,Attempt:0,}" Jun 21 06:15:33.474208 containerd[1545]: time="2025-06-21T06:15:33.474128188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal,Uid:671e5fe25c3bffc05585c700cd4eb7ab,Namespace:kube-system,Attempt:0,}" Jun 21 06:15:33.487833 containerd[1545]: time="2025-06-21T06:15:33.487680624Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal,Uid:3cb07023912c11e10e2ac9d5698ee164,Namespace:kube-system,Attempt:0,}" Jun 21 06:15:33.587267 kubelet[2418]: E0621 06:15:33.586625 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.45:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4372-0-0-b-cad5e61be6.novalocal?timeout=10s\": dial tcp 172.24.4.45:6443: connect: connection refused" interval="800ms" Jun 21 06:15:33.617358 containerd[1545]: time="2025-06-21T06:15:33.617014150Z" level=info msg="connecting to shim b9cc65d313ed62816f72aa25530eee91f0c87ab6844dcf9a3b663ffd95ccdf41" address="unix:///run/containerd/s/f7788d96b5f7630225819f2e1a11e225710199d5f1ef999b815c407f296f5264" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:15:33.625121 containerd[1545]: time="2025-06-21T06:15:33.625018724Z" level=info msg="connecting to shim 604659d21a1df3d5043fa43450e19df339f9c322134a8ad5407b3a00b1ecb324" address="unix:///run/containerd/s/595471de76ae334fb442fd272fde98421a7373178f7b4619caa4006e91adba05" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:15:33.647429 containerd[1545]: time="2025-06-21T06:15:33.646977719Z" level=info msg="connecting to shim 467f4da216400ad94208c90822d21f0c8c92ed69438e10db36fee642fc44376b" address="unix:///run/containerd/s/d892a38619daf50a0af90e68d429677aa51642ab0994d2e25d8be870e1842f91" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:15:33.692230 systemd[1]: Started cri-containerd-604659d21a1df3d5043fa43450e19df339f9c322134a8ad5407b3a00b1ecb324.scope - libcontainer container 604659d21a1df3d5043fa43450e19df339f9c322134a8ad5407b3a00b1ecb324. Jun 21 06:15:33.706200 systemd[1]: Started cri-containerd-b9cc65d313ed62816f72aa25530eee91f0c87ab6844dcf9a3b663ffd95ccdf41.scope - libcontainer container b9cc65d313ed62816f72aa25530eee91f0c87ab6844dcf9a3b663ffd95ccdf41. Jun 21 06:15:33.717205 systemd[1]: Started cri-containerd-467f4da216400ad94208c90822d21f0c8c92ed69438e10db36fee642fc44376b.scope - libcontainer container 467f4da216400ad94208c90822d21f0c8c92ed69438e10db36fee642fc44376b. 
Jun 21 06:15:33.776770 kubelet[2418]: I0621 06:15:33.775108 2418 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.777455 kubelet[2418]: E0621 06:15:33.777383 2418 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.45:6443/api/v1/nodes\": dial tcp 172.24.4.45:6443: connect: connection refused" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:33.816649 containerd[1545]: time="2025-06-21T06:15:33.816560268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal,Uid:671e5fe25c3bffc05585c700cd4eb7ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"604659d21a1df3d5043fa43450e19df339f9c322134a8ad5407b3a00b1ecb324\"" Jun 21 06:15:33.817451 containerd[1545]: time="2025-06-21T06:15:33.817377552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal,Uid:3cb07023912c11e10e2ac9d5698ee164,Namespace:kube-system,Attempt:0,} returns sandbox id \"467f4da216400ad94208c90822d21f0c8c92ed69438e10db36fee642fc44376b\"" Jun 21 06:15:33.824871 containerd[1545]: time="2025-06-21T06:15:33.824826873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal,Uid:35453708fb3f74e8db8405c67c26581a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9cc65d313ed62816f72aa25530eee91f0c87ab6844dcf9a3b663ffd95ccdf41\"" Jun 21 06:15:33.825144 containerd[1545]: time="2025-06-21T06:15:33.825112330Z" level=info msg="CreateContainer within sandbox \"604659d21a1df3d5043fa43450e19df339f9c322134a8ad5407b3a00b1ecb324\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 21 06:15:33.826428 containerd[1545]: time="2025-06-21T06:15:33.826401498Z" level=info msg="CreateContainer within sandbox \"467f4da216400ad94208c90822d21f0c8c92ed69438e10db36fee642fc44376b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 21 06:15:33.829905 containerd[1545]: time="2025-06-21T06:15:33.829809665Z" level=info msg="CreateContainer within sandbox \"b9cc65d313ed62816f72aa25530eee91f0c87ab6844dcf9a3b663ffd95ccdf41\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 21 06:15:33.858740 containerd[1545]: time="2025-06-21T06:15:33.858659895Z" level=info msg="Container bfa781e59019c1b9ef0f7b25017443a6e796e56518cd337fda4cb9bf8c08c5b6: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:15:33.871717 containerd[1545]: time="2025-06-21T06:15:33.871653553Z" level=info msg="Container 970f201ce81a377830804765a845e9432f2716036b76b652c4bc985f1d4e4c59: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:15:33.879932 containerd[1545]: time="2025-06-21T06:15:33.879864603Z" level=info msg="Container 11cfd3d3dad0bb6a2cc4f9a903cde290d38ed5c01e8baec696b2ff5eaf5d42e4: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:15:33.892380 containerd[1545]: time="2025-06-21T06:15:33.892315293Z" level=info msg="CreateContainer within sandbox \"b9cc65d313ed62816f72aa25530eee91f0c87ab6844dcf9a3b663ffd95ccdf41\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bfa781e59019c1b9ef0f7b25017443a6e796e56518cd337fda4cb9bf8c08c5b6\"" Jun 21 06:15:33.893634 containerd[1545]: time="2025-06-21T06:15:33.893592460Z" level=info msg="StartContainer for \"bfa781e59019c1b9ef0f7b25017443a6e796e56518cd337fda4cb9bf8c08c5b6\"" Jun 21 06:15:33.897019 containerd[1545]: time="2025-06-21T06:15:33.896947406Z" level=info 
msg="connecting to shim bfa781e59019c1b9ef0f7b25017443a6e796e56518cd337fda4cb9bf8c08c5b6" address="unix:///run/containerd/s/f7788d96b5f7630225819f2e1a11e225710199d5f1ef999b815c407f296f5264" protocol=ttrpc version=3 Jun 21 06:15:33.906820 containerd[1545]: time="2025-06-21T06:15:33.906757469Z" level=info msg="CreateContainer within sandbox \"467f4da216400ad94208c90822d21f0c8c92ed69438e10db36fee642fc44376b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"11cfd3d3dad0bb6a2cc4f9a903cde290d38ed5c01e8baec696b2ff5eaf5d42e4\"" Jun 21 06:15:33.908427 containerd[1545]: time="2025-06-21T06:15:33.908404680Z" level=info msg="StartContainer for \"11cfd3d3dad0bb6a2cc4f9a903cde290d38ed5c01e8baec696b2ff5eaf5d42e4\"" Jun 21 06:15:33.909810 containerd[1545]: time="2025-06-21T06:15:33.909782316Z" level=info msg="CreateContainer within sandbox \"604659d21a1df3d5043fa43450e19df339f9c322134a8ad5407b3a00b1ecb324\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"970f201ce81a377830804765a845e9432f2716036b76b652c4bc985f1d4e4c59\"" Jun 21 06:15:33.911006 containerd[1545]: time="2025-06-21T06:15:33.910890246Z" level=info msg="connecting to shim 11cfd3d3dad0bb6a2cc4f9a903cde290d38ed5c01e8baec696b2ff5eaf5d42e4" address="unix:///run/containerd/s/d892a38619daf50a0af90e68d429677aa51642ab0994d2e25d8be870e1842f91" protocol=ttrpc version=3 Jun 21 06:15:33.911747 containerd[1545]: time="2025-06-21T06:15:33.911696158Z" level=info msg="StartContainer for \"970f201ce81a377830804765a845e9432f2716036b76b652c4bc985f1d4e4c59\"" Jun 21 06:15:33.915874 containerd[1545]: time="2025-06-21T06:15:33.915782779Z" level=info msg="connecting to shim 970f201ce81a377830804765a845e9432f2716036b76b652c4bc985f1d4e4c59" address="unix:///run/containerd/s/595471de76ae334fb442fd272fde98421a7373178f7b4619caa4006e91adba05" protocol=ttrpc version=3 Jun 21 06:15:33.929849 systemd[1]: Started cri-containerd-bfa781e59019c1b9ef0f7b25017443a6e796e56518cd337fda4cb9bf8c08c5b6.scope - libcontainer container bfa781e59019c1b9ef0f7b25017443a6e796e56518cd337fda4cb9bf8c08c5b6. Jun 21 06:15:33.964259 systemd[1]: Started cri-containerd-11cfd3d3dad0bb6a2cc4f9a903cde290d38ed5c01e8baec696b2ff5eaf5d42e4.scope - libcontainer container 11cfd3d3dad0bb6a2cc4f9a903cde290d38ed5c01e8baec696b2ff5eaf5d42e4. Jun 21 06:15:33.975327 systemd[1]: Started cri-containerd-970f201ce81a377830804765a845e9432f2716036b76b652c4bc985f1d4e4c59.scope - libcontainer container 970f201ce81a377830804765a845e9432f2716036b76b652c4bc985f1d4e4c59. 
Jun 21 06:15:34.087159 containerd[1545]: time="2025-06-21T06:15:34.085177007Z" level=info msg="StartContainer for \"bfa781e59019c1b9ef0f7b25017443a6e796e56518cd337fda4cb9bf8c08c5b6\" returns successfully" Jun 21 06:15:34.087159 containerd[1545]: time="2025-06-21T06:15:34.085436023Z" level=info msg="StartContainer for \"11cfd3d3dad0bb6a2cc4f9a903cde290d38ed5c01e8baec696b2ff5eaf5d42e4\" returns successfully" Jun 21 06:15:34.102169 containerd[1545]: time="2025-06-21T06:15:34.102119735Z" level=info msg="StartContainer for \"970f201ce81a377830804765a845e9432f2716036b76b652c4bc985f1d4e4c59\" returns successfully" Jun 21 06:15:34.580423 kubelet[2418]: I0621 06:15:34.580389 2418 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:35.059355 kubelet[2418]: E0621 06:15:35.059315 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:35.061188 kubelet[2418]: E0621 06:15:35.061147 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:35.065355 kubelet[2418]: E0621 06:15:35.065184 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:36.074199 kubelet[2418]: E0621 06:15:36.074160 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:36.074812 kubelet[2418]: E0621 06:15:36.074764 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:36.076250 kubelet[2418]: E0621 06:15:36.076221 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.012656 kubelet[2418]: E0621 06:15:37.012477 2418 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.074629 kubelet[2418]: E0621 06:15:37.074515 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.074629 kubelet[2418]: E0621 06:15:37.074612 2418 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.081814 kubelet[2418]: I0621 06:15:37.081737 2418 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.081814 kubelet[2418]: E0621 06:15:37.081779 2418 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4372-0-0-b-cad5e61be6.novalocal\": node 
\"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" Jun 21 06:15:37.171766 kubelet[2418]: I0621 06:15:37.171669 2418 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.178694 kubelet[2418]: E0621 06:15:37.178577 2418 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.178694 kubelet[2418]: I0621 06:15:37.178642 2418 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.183978 kubelet[2418]: E0621 06:15:37.183627 2418 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.183978 kubelet[2418]: I0621 06:15:37.183676 2418 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.186342 kubelet[2418]: E0621 06:15:37.186277 2418 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.607549 kubelet[2418]: I0621 06:15:37.607455 2418 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.614740 kubelet[2418]: E0621 06:15:37.613799 2418 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:37.955060 kubelet[2418]: I0621 06:15:37.953349 2418 apiserver.go:52] "Watching apiserver" Jun 21 06:15:37.973925 kubelet[2418]: I0621 06:15:37.973707 2418 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 06:15:39.557436 systemd[1]: Reload requested from client PID 2687 ('systemctl') (unit session-11.scope)... Jun 21 06:15:39.557544 systemd[1]: Reloading... Jun 21 06:15:39.693041 zram_generator::config[2732]: No configuration found. Jun 21 06:15:39.859504 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 21 06:15:40.025278 systemd[1]: Reloading finished in 466 ms. Jun 21 06:15:40.051945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:15:40.074521 systemd[1]: kubelet.service: Deactivated successfully. Jun 21 06:15:40.074865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 21 06:15:40.074951 systemd[1]: kubelet.service: Consumed 1.695s CPU time, 131.9M memory peak. Jun 21 06:15:40.077767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 21 06:15:40.517976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 21 06:15:40.530598 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 21 06:15:40.598021 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 06:15:40.598021 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 21 06:15:40.598021 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 21 06:15:40.598021 kubelet[2796]: I0621 06:15:40.596962 2796 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 21 06:15:40.611404 kubelet[2796]: I0621 06:15:40.611369 2796 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 21 06:15:40.612016 kubelet[2796]: I0621 06:15:40.611639 2796 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 21 06:15:40.612761 kubelet[2796]: I0621 06:15:40.612383 2796 server.go:954] "Client rotation is on, will bootstrap in background" Jun 21 06:15:40.616502 kubelet[2796]: I0621 06:15:40.615051 2796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 21 06:15:40.618841 kubelet[2796]: I0621 06:15:40.618582 2796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 21 06:15:40.630347 kubelet[2796]: I0621 06:15:40.630321 2796 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 21 06:15:40.636404 kubelet[2796]: I0621 06:15:40.634948 2796 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 21 06:15:40.636404 kubelet[2796]: I0621 06:15:40.635341 2796 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 21 06:15:40.636404 kubelet[2796]: I0621 06:15:40.635372 2796 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4372-0-0-b-cad5e61be6.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 21 06:15:40.636404 kubelet[2796]: I0621 06:15:40.636203 2796 topology_manager.go:138] "Creating topology manager with none policy" Jun 21 06:15:40.636794 kubelet[2796]: I0621 06:15:40.636220 2796 container_manager_linux.go:304] "Creating device plugin manager" Jun 21 06:15:40.636991 kubelet[2796]: I0621 06:15:40.636951 2796 state_mem.go:36] "Initialized new in-memory state store" Jun 21 06:15:40.637190 kubelet[2796]: I0621 06:15:40.637172 2796 kubelet.go:446] "Attempting to sync node with API server" Jun 21 06:15:40.637261 kubelet[2796]: I0621 06:15:40.637209 2796 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 21 06:15:40.637303 kubelet[2796]: I0621 06:15:40.637259 2796 kubelet.go:352] "Adding apiserver pod source" Jun 21 06:15:40.637303 kubelet[2796]: I0621 06:15:40.637283 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 21 06:15:40.640374 kubelet[2796]: I0621 06:15:40.640265 2796 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 21 06:15:40.640837 kubelet[2796]: I0621 06:15:40.640821 2796 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 21 06:15:40.641546 kubelet[2796]: I0621 06:15:40.641524 2796 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 21 06:15:40.641692 kubelet[2796]: I0621 06:15:40.641672 2796 server.go:1287] "Started kubelet" Jun 21 06:15:40.644346 kubelet[2796]: I0621 06:15:40.644314 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 21 06:15:40.652703 kubelet[2796]: I0621 06:15:40.652564 2796 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 21 06:15:40.654176 kubelet[2796]: I0621 06:15:40.654143 2796 server.go:479] "Adding debug handlers to kubelet server" Jun 21 06:15:40.656050 kubelet[2796]: I0621 06:15:40.655670 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 21 06:15:40.656050 kubelet[2796]: I0621 06:15:40.655937 2796 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 21 06:15:40.656444 kubelet[2796]: I0621 06:15:40.656425 2796 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 21 06:15:40.661095 kubelet[2796]: I0621 06:15:40.661075 2796 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 21 06:15:40.663005 kubelet[2796]: E0621 06:15:40.662025 2796 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4372-0-0-b-cad5e61be6.novalocal\" not found" Jun 21 06:15:40.666671 kubelet[2796]: I0621 06:15:40.666653 2796 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 21 06:15:40.666880 kubelet[2796]: I0621 06:15:40.666867 2796 reconciler.go:26] "Reconciler: start to sync state" Jun 21 06:15:40.670342 kubelet[2796]: I0621 06:15:40.670246 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 21 06:15:40.672747 kubelet[2796]: I0621 06:15:40.672723 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 21 06:15:40.673039 kubelet[2796]: I0621 06:15:40.672979 2796 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 21 06:15:40.673139 kubelet[2796]: I0621 06:15:40.673127 2796 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 21 06:15:40.673210 kubelet[2796]: I0621 06:15:40.673201 2796 kubelet.go:2382] "Starting kubelet main sync loop" Jun 21 06:15:40.673347 kubelet[2796]: E0621 06:15:40.673311 2796 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 21 06:15:40.674511 kubelet[2796]: I0621 06:15:40.674479 2796 factory.go:221] Registration of the systemd container factory successfully Jun 21 06:15:40.675028 kubelet[2796]: I0621 06:15:40.674607 2796 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 21 06:15:40.693115 kubelet[2796]: I0621 06:15:40.691370 2796 factory.go:221] Registration of the containerd container factory successfully Jun 21 06:15:40.697596 kubelet[2796]: E0621 06:15:40.697516 2796 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 21 06:15:40.746618 sudo[2829]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 21 06:15:40.747162 sudo[2829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 21 06:15:40.772678 kubelet[2796]: I0621 06:15:40.772583 2796 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 21 06:15:40.774173 kubelet[2796]: I0621 06:15:40.772813 2796 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 21 06:15:40.774173 kubelet[2796]: I0621 06:15:40.772868 2796 state_mem.go:36] "Initialized new in-memory state store" Jun 21 06:15:40.774173 kubelet[2796]: I0621 06:15:40.773111 2796 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 21 06:15:40.774173 kubelet[2796]: I0621 06:15:40.773125 2796 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 21 06:15:40.774173 kubelet[2796]: I0621 06:15:40.773188 2796 policy_none.go:49] "None policy: Start" Jun 21 06:15:40.774173 kubelet[2796]: I0621 06:15:40.773239 2796 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 21 06:15:40.774173 kubelet[2796]: I0621 06:15:40.773294 2796 state_mem.go:35] "Initializing new in-memory state store" Jun 21 06:15:40.774173 kubelet[2796]: I0621 06:15:40.773432 2796 state_mem.go:75] "Updated machine memory state" Jun 21 06:15:40.774712 kubelet[2796]: E0621 06:15:40.774693 2796 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 21 06:15:40.781532 kubelet[2796]: I0621 06:15:40.780909 2796 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 21 06:15:40.782307 kubelet[2796]: I0621 06:15:40.782268 2796 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 21 06:15:40.785192 kubelet[2796]: I0621 06:15:40.782792 2796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 21 06:15:40.788189 kubelet[2796]: I0621 06:15:40.787884 2796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 21 06:15:40.790586 kubelet[2796]: E0621 06:15:40.789870 2796 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 21 06:15:40.907259 kubelet[2796]: I0621 06:15:40.907190 2796 kubelet_node_status.go:75] "Attempting to register node" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:40.931836 kubelet[2796]: I0621 06:15:40.931788 2796 kubelet_node_status.go:124] "Node was previously registered" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:40.932250 kubelet[2796]: I0621 06:15:40.932168 2796 kubelet_node_status.go:78] "Successfully registered node" node="ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:40.976623 kubelet[2796]: I0621 06:15:40.976379 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:40.977935 kubelet[2796]: I0621 06:15:40.977920 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:40.979052 kubelet[2796]: I0621 06:15:40.979037 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:40.990831 kubelet[2796]: W0621 06:15:40.990720 2796 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 06:15:40.995555 kubelet[2796]: W0621 06:15:40.995344 2796 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 06:15:40.996269 kubelet[2796]: W0621 06:15:40.996046 2796 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 06:15:41.069667 kubelet[2796]: I0621 06:15:41.069337 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/671e5fe25c3bffc05585c700cd4eb7ab-ca-certs\") pod \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"671e5fe25c3bffc05585c700cd4eb7ab\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.070399 kubelet[2796]: I0621 06:15:41.069593 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/671e5fe25c3bffc05585c700cd4eb7ab-flexvolume-dir\") pod \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"671e5fe25c3bffc05585c700cd4eb7ab\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.070399 kubelet[2796]: I0621 06:15:41.070144 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/671e5fe25c3bffc05585c700cd4eb7ab-kubeconfig\") pod \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"671e5fe25c3bffc05585c700cd4eb7ab\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.070399 kubelet[2796]: I0621 06:15:41.070212 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cb07023912c11e10e2ac9d5698ee164-kubeconfig\") pod \"kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"3cb07023912c11e10e2ac9d5698ee164\") " 
pod="kube-system/kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.070399 kubelet[2796]: I0621 06:15:41.070236 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35453708fb3f74e8db8405c67c26581a-k8s-certs\") pod \"kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"35453708fb3f74e8db8405c67c26581a\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.070895 kubelet[2796]: I0621 06:15:41.070269 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/671e5fe25c3bffc05585c700cd4eb7ab-k8s-certs\") pod \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"671e5fe25c3bffc05585c700cd4eb7ab\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.070895 kubelet[2796]: I0621 06:15:41.070732 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/671e5fe25c3bffc05585c700cd4eb7ab-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"671e5fe25c3bffc05585c700cd4eb7ab\") " pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.070895 kubelet[2796]: I0621 06:15:41.070758 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35453708fb3f74e8db8405c67c26581a-ca-certs\") pod \"kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"35453708fb3f74e8db8405c67c26581a\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.071267 kubelet[2796]: I0621 06:15:41.071128 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35453708fb3f74e8db8405c67c26581a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal\" (UID: \"35453708fb3f74e8db8405c67c26581a\") " pod="kube-system/kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.326182 sudo[2829]: pam_unix(sudo:session): session closed for user root Jun 21 06:15:41.650551 kubelet[2796]: I0621 06:15:41.650091 2796 apiserver.go:52] "Watching apiserver" Jun 21 06:15:41.667702 kubelet[2796]: I0621 06:15:41.667625 2796 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 21 06:15:41.744379 kubelet[2796]: I0621 06:15:41.744333 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.770042 kubelet[2796]: W0621 06:15:41.769253 2796 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 21 06:15:41.770042 kubelet[2796]: E0621 06:15:41.769354 2796 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal" Jun 21 06:15:41.802400 kubelet[2796]: I0621 06:15:41.802312 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4372-0-0-b-cad5e61be6.novalocal" 
podStartSLOduration=1.802157695 podStartE2EDuration="1.802157695s" podCreationTimestamp="2025-06-21 06:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:15:41.802043601 +0000 UTC m=+1.261304826" watchObservedRunningTime="2025-06-21 06:15:41.802157695 +0000 UTC m=+1.261418910" Jun 21 06:15:41.830374 kubelet[2796]: I0621 06:15:41.829919 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4372-0-0-b-cad5e61be6.novalocal" podStartSLOduration=1.829899478 podStartE2EDuration="1.829899478s" podCreationTimestamp="2025-06-21 06:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:15:41.817235256 +0000 UTC m=+1.276496471" watchObservedRunningTime="2025-06-21 06:15:41.829899478 +0000 UTC m=+1.289160703" Jun 21 06:15:41.830374 kubelet[2796]: I0621 06:15:41.830038 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4372-0-0-b-cad5e61be6.novalocal" podStartSLOduration=1.8300311150000002 podStartE2EDuration="1.830031115s" podCreationTimestamp="2025-06-21 06:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:15:41.829718969 +0000 UTC m=+1.288980194" watchObservedRunningTime="2025-06-21 06:15:41.830031115 +0000 UTC m=+1.289292340" Jun 21 06:15:44.096744 sudo[1830]: pam_unix(sudo:session): session closed for user root Jun 21 06:15:44.369021 sshd[1829]: Connection closed by 172.24.4.1 port 53794 Jun 21 06:15:44.375337 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Jun 21 06:15:44.384488 kubelet[2796]: I0621 06:15:44.384405 2796 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 21 06:15:44.385854 containerd[1545]: time="2025-06-21T06:15:44.385794412Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 21 06:15:44.388357 kubelet[2796]: I0621 06:15:44.388183 2796 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 21 06:15:44.390651 systemd[1]: sshd@8-172.24.4.45:22-172.24.4.1:53794.service: Deactivated successfully. Jun 21 06:15:44.397111 systemd[1]: session-11.scope: Deactivated successfully. Jun 21 06:15:44.397966 systemd[1]: session-11.scope: Consumed 8.467s CPU time, 272.6M memory peak. Jun 21 06:15:44.412289 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit. Jun 21 06:15:44.420750 systemd-logind[1528]: Removed session 11. Jun 21 06:15:45.450871 systemd[1]: Created slice kubepods-besteffort-pod98d15183_ef30_4e77_a25b_02d87deb5843.slice - libcontainer container kubepods-besteffort-pod98d15183_ef30_4e77_a25b_02d87deb5843.slice. Jun 21 06:15:45.473904 systemd[1]: Created slice kubepods-burstable-pod38a7c875_4432_45d7_b2fb_00042dfc15e8.slice - libcontainer container kubepods-burstable-pod38a7c875_4432_45d7_b2fb_00042dfc15e8.slice. Jun 21 06:15:45.500290 systemd[1]: Created slice kubepods-besteffort-pod6f63e500_c310_4c34_82d7_72167afde656.slice - libcontainer container kubepods-besteffort-pod6f63e500_c310_4c34_82d7_72167afde656.slice. 
Jun 21 06:15:45.519440 kubelet[2796]: I0621 06:15:45.519288 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-hostproc\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.519440 kubelet[2796]: I0621 06:15:45.519446 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98d15183-ef30-4e77-a25b-02d87deb5843-xtables-lock\") pod \"kube-proxy-dzfrk\" (UID: \"98d15183-ef30-4e77-a25b-02d87deb5843\") " pod="kube-system/kube-proxy-dzfrk" Jun 21 06:15:45.520062 kubelet[2796]: I0621 06:15:45.519539 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cni-path\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520062 kubelet[2796]: I0621 06:15:45.519613 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-run\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520062 kubelet[2796]: I0621 06:15:45.519647 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-bpf-maps\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520062 kubelet[2796]: I0621 06:15:45.519667 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-etc-cni-netd\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520062 kubelet[2796]: I0621 06:15:45.519686 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbxdv\" (UniqueName: \"kubernetes.io/projected/38a7c875-4432-45d7-b2fb-00042dfc15e8-kube-api-access-rbxdv\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520287 kubelet[2796]: I0621 06:15:45.519711 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw6pw\" (UniqueName: \"kubernetes.io/projected/6f63e500-c310-4c34-82d7-72167afde656-kube-api-access-tw6pw\") pod \"cilium-operator-6c4d7847fc-sprmb\" (UID: \"6f63e500-c310-4c34-82d7-72167afde656\") " pod="kube-system/cilium-operator-6c4d7847fc-sprmb" Jun 21 06:15:45.520287 kubelet[2796]: I0621 06:15:45.519730 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/98d15183-ef30-4e77-a25b-02d87deb5843-kube-proxy\") pod \"kube-proxy-dzfrk\" (UID: \"98d15183-ef30-4e77-a25b-02d87deb5843\") " pod="kube-system/kube-proxy-dzfrk" Jun 21 06:15:45.520287 kubelet[2796]: I0621 06:15:45.519748 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-wlbn7\" (UniqueName: \"kubernetes.io/projected/98d15183-ef30-4e77-a25b-02d87deb5843-kube-api-access-wlbn7\") pod \"kube-proxy-dzfrk\" (UID: \"98d15183-ef30-4e77-a25b-02d87deb5843\") " pod="kube-system/kube-proxy-dzfrk" Jun 21 06:15:45.520287 kubelet[2796]: I0621 06:15:45.519767 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38a7c875-4432-45d7-b2fb-00042dfc15e8-clustermesh-secrets\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520287 kubelet[2796]: I0621 06:15:45.519785 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38a7c875-4432-45d7-b2fb-00042dfc15e8-hubble-tls\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520506 kubelet[2796]: I0621 06:15:45.519803 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-xtables-lock\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520506 kubelet[2796]: I0621 06:15:45.519828 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-config-path\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520506 kubelet[2796]: I0621 06:15:45.519855 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98d15183-ef30-4e77-a25b-02d87deb5843-lib-modules\") pod \"kube-proxy-dzfrk\" (UID: \"98d15183-ef30-4e77-a25b-02d87deb5843\") " pod="kube-system/kube-proxy-dzfrk" Jun 21 06:15:45.520506 kubelet[2796]: I0621 06:15:45.519873 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f63e500-c310-4c34-82d7-72167afde656-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-sprmb\" (UID: \"6f63e500-c310-4c34-82d7-72167afde656\") " pod="kube-system/cilium-operator-6c4d7847fc-sprmb" Jun 21 06:15:45.520506 kubelet[2796]: I0621 06:15:45.519892 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-host-proc-sys-net\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520692 kubelet[2796]: I0621 06:15:45.520249 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-host-proc-sys-kernel\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520692 kubelet[2796]: I0621 06:15:45.520275 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-cgroup\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.520692 kubelet[2796]: I0621 06:15:45.520509 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-lib-modules\") pod \"cilium-9sjxq\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " pod="kube-system/cilium-9sjxq" Jun 21 06:15:45.771006 containerd[1545]: time="2025-06-21T06:15:45.770776815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dzfrk,Uid:98d15183-ef30-4e77-a25b-02d87deb5843,Namespace:kube-system,Attempt:0,}" Jun 21 06:15:45.779919 containerd[1545]: time="2025-06-21T06:15:45.779871548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9sjxq,Uid:38a7c875-4432-45d7-b2fb-00042dfc15e8,Namespace:kube-system,Attempt:0,}" Jun 21 06:15:45.805777 containerd[1545]: time="2025-06-21T06:15:45.805658629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sprmb,Uid:6f63e500-c310-4c34-82d7-72167afde656,Namespace:kube-system,Attempt:0,}" Jun 21 06:15:45.810892 containerd[1545]: time="2025-06-21T06:15:45.810759769Z" level=info msg="connecting to shim bb820656b4f9f190edf15591374bd0f5d88fb15d258c5a050717b5d08884ada0" address="unix:///run/containerd/s/b0af813797ce949eac1409b475fa76a0b3b5489434a86a2b4e4ba07f52838e36" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:15:45.860218 systemd[1]: Started cri-containerd-bb820656b4f9f190edf15591374bd0f5d88fb15d258c5a050717b5d08884ada0.scope - libcontainer container bb820656b4f9f190edf15591374bd0f5d88fb15d258c5a050717b5d08884ada0. Jun 21 06:15:45.893387 containerd[1545]: time="2025-06-21T06:15:45.893309936Z" level=info msg="connecting to shim 66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a" address="unix:///run/containerd/s/f204997b1a9645c5bae891b5c3e0d656fda090607cdc66376ad96355b754ddc8" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:15:45.909549 containerd[1545]: time="2025-06-21T06:15:45.909491244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dzfrk,Uid:98d15183-ef30-4e77-a25b-02d87deb5843,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb820656b4f9f190edf15591374bd0f5d88fb15d258c5a050717b5d08884ada0\"" Jun 21 06:15:45.915742 containerd[1545]: time="2025-06-21T06:15:45.915333936Z" level=info msg="CreateContainer within sandbox \"bb820656b4f9f190edf15591374bd0f5d88fb15d258c5a050717b5d08884ada0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 21 06:15:45.923938 containerd[1545]: time="2025-06-21T06:15:45.923891400Z" level=info msg="connecting to shim 8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8" address="unix:///run/containerd/s/215df9706456362c6fec5214ca018146dca9e33a882348f8655628f2b97b9c6a" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:15:45.936326 systemd[1]: Started cri-containerd-66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a.scope - libcontainer container 66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a. 
Jun 21 06:15:45.940061 containerd[1545]: time="2025-06-21T06:15:45.940014600Z" level=info msg="Container a0a7456c33b2e4e7070a7957a999d6def0091b9d5bb0d5d4574430c3536d1625: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:15:45.966232 systemd[1]: Started cri-containerd-8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8.scope - libcontainer container 8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8. Jun 21 06:15:45.972767 containerd[1545]: time="2025-06-21T06:15:45.972693951Z" level=info msg="CreateContainer within sandbox \"bb820656b4f9f190edf15591374bd0f5d88fb15d258c5a050717b5d08884ada0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0a7456c33b2e4e7070a7957a999d6def0091b9d5bb0d5d4574430c3536d1625\"" Jun 21 06:15:45.974952 containerd[1545]: time="2025-06-21T06:15:45.974176192Z" level=info msg="StartContainer for \"a0a7456c33b2e4e7070a7957a999d6def0091b9d5bb0d5d4574430c3536d1625\"" Jun 21 06:15:45.980437 containerd[1545]: time="2025-06-21T06:15:45.980393487Z" level=info msg="connecting to shim a0a7456c33b2e4e7070a7957a999d6def0091b9d5bb0d5d4574430c3536d1625" address="unix:///run/containerd/s/b0af813797ce949eac1409b475fa76a0b3b5489434a86a2b4e4ba07f52838e36" protocol=ttrpc version=3 Jun 21 06:15:45.987139 containerd[1545]: time="2025-06-21T06:15:45.987079089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9sjxq,Uid:38a7c875-4432-45d7-b2fb-00042dfc15e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\"" Jun 21 06:15:45.991684 containerd[1545]: time="2025-06-21T06:15:45.991650426Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 21 06:15:46.022244 systemd[1]: Started cri-containerd-a0a7456c33b2e4e7070a7957a999d6def0091b9d5bb0d5d4574430c3536d1625.scope - libcontainer container a0a7456c33b2e4e7070a7957a999d6def0091b9d5bb0d5d4574430c3536d1625. Jun 21 06:15:46.056921 containerd[1545]: time="2025-06-21T06:15:46.056865757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-sprmb,Uid:6f63e500-c310-4c34-82d7-72167afde656,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\"" Jun 21 06:15:46.102014 containerd[1545]: time="2025-06-21T06:15:46.101939881Z" level=info msg="StartContainer for \"a0a7456c33b2e4e7070a7957a999d6def0091b9d5bb0d5d4574430c3536d1625\" returns successfully" Jun 21 06:15:51.272479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460620275.mount: Deactivated successfully. 
Jun 21 06:15:54.487744 containerd[1545]: time="2025-06-21T06:15:54.487445641Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:54.493059 containerd[1545]: time="2025-06-21T06:15:54.492908429Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 21 06:15:54.495492 containerd[1545]: time="2025-06-21T06:15:54.495375816Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:54.500624 containerd[1545]: time="2025-06-21T06:15:54.500346821Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.508435566s" Jun 21 06:15:54.500624 containerd[1545]: time="2025-06-21T06:15:54.500465704Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 21 06:15:54.509886 containerd[1545]: time="2025-06-21T06:15:54.509373976Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 21 06:15:54.515418 containerd[1545]: time="2025-06-21T06:15:54.515274264Z" level=info msg="CreateContainer within sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 06:15:54.595193 containerd[1545]: time="2025-06-21T06:15:54.591904260Z" level=info msg="Container 0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:15:54.666037 containerd[1545]: time="2025-06-21T06:15:54.665882774Z" level=info msg="CreateContainer within sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\"" Jun 21 06:15:54.668332 containerd[1545]: time="2025-06-21T06:15:54.668254773Z" level=info msg="StartContainer for \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\"" Jun 21 06:15:54.671166 containerd[1545]: time="2025-06-21T06:15:54.671065506Z" level=info msg="connecting to shim 0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3" address="unix:///run/containerd/s/f204997b1a9645c5bae891b5c3e0d656fda090607cdc66376ad96355b754ddc8" protocol=ttrpc version=3 Jun 21 06:15:54.754551 systemd[1]: Started cri-containerd-0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3.scope - libcontainer container 0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3. 
Jun 21 06:15:54.822013 containerd[1545]: time="2025-06-21T06:15:54.821926119Z" level=info msg="StartContainer for \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\" returns successfully" Jun 21 06:15:54.835961 systemd[1]: cri-containerd-0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3.scope: Deactivated successfully. Jun 21 06:15:54.845785 containerd[1545]: time="2025-06-21T06:15:54.845621659Z" level=info msg="received exit event container_id:\"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\" id:\"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\" pid:3212 exited_at:{seconds:1750486554 nanos:844609631}" Jun 21 06:15:54.847105 containerd[1545]: time="2025-06-21T06:15:54.846507941Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\" id:\"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\" pid:3212 exited_at:{seconds:1750486554 nanos:844609631}" Jun 21 06:15:54.880559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3-rootfs.mount: Deactivated successfully. Jun 21 06:15:54.897300 kubelet[2796]: I0621 06:15:54.896614 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dzfrk" podStartSLOduration=9.895789607 podStartE2EDuration="9.895789607s" podCreationTimestamp="2025-06-21 06:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:15:46.792523919 +0000 UTC m=+6.251785144" watchObservedRunningTime="2025-06-21 06:15:54.895789607 +0000 UTC m=+14.355050822" Jun 21 06:15:55.871807 containerd[1545]: time="2025-06-21T06:15:55.871521504Z" level=info msg="CreateContainer within sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 06:15:55.921808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896218950.mount: Deactivated successfully. Jun 21 06:15:55.926918 containerd[1545]: time="2025-06-21T06:15:55.925715694Z" level=info msg="Container bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:15:55.938702 containerd[1545]: time="2025-06-21T06:15:55.938598710Z" level=info msg="CreateContainer within sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\"" Jun 21 06:15:55.939734 containerd[1545]: time="2025-06-21T06:15:55.939620867Z" level=info msg="StartContainer for \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\"" Jun 21 06:15:55.941226 containerd[1545]: time="2025-06-21T06:15:55.941152220Z" level=info msg="connecting to shim bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e" address="unix:///run/containerd/s/f204997b1a9645c5bae891b5c3e0d656fda090607cdc66376ad96355b754ddc8" protocol=ttrpc version=3 Jun 21 06:15:55.971150 systemd[1]: Started cri-containerd-bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e.scope - libcontainer container bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e. 
Jun 21 06:15:56.011604 containerd[1545]: time="2025-06-21T06:15:56.011550336Z" level=info msg="StartContainer for \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\" returns successfully" Jun 21 06:15:56.028656 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 21 06:15:56.029417 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:15:56.029852 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 21 06:15:56.034077 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 21 06:15:56.036588 containerd[1545]: time="2025-06-21T06:15:56.036533249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\" id:\"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\" pid:3254 exited_at:{seconds:1750486556 nanos:35496654}" Jun 21 06:15:56.037200 containerd[1545]: time="2025-06-21T06:15:56.036774692Z" level=info msg="received exit event container_id:\"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\" id:\"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\" pid:3254 exited_at:{seconds:1750486556 nanos:35496654}" Jun 21 06:15:56.038078 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 21 06:15:56.038621 systemd[1]: cri-containerd-bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e.scope: Deactivated successfully. Jun 21 06:15:56.078492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 21 06:15:56.895136 containerd[1545]: time="2025-06-21T06:15:56.894046637Z" level=info msg="CreateContainer within sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 06:15:56.922729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e-rootfs.mount: Deactivated successfully. Jun 21 06:15:56.957068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759006402.mount: Deactivated successfully. Jun 21 06:15:57.004903 containerd[1545]: time="2025-06-21T06:15:57.004846978Z" level=info msg="Container 521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:15:57.032116 containerd[1545]: time="2025-06-21T06:15:57.031965837Z" level=info msg="CreateContainer within sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\"" Jun 21 06:15:57.034187 containerd[1545]: time="2025-06-21T06:15:57.034150856Z" level=info msg="StartContainer for \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\"" Jun 21 06:15:57.039982 containerd[1545]: time="2025-06-21T06:15:57.039928904Z" level=info msg="connecting to shim 521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca" address="unix:///run/containerd/s/f204997b1a9645c5bae891b5c3e0d656fda090607cdc66376ad96355b754ddc8" protocol=ttrpc version=3 Jun 21 06:15:57.081594 systemd[1]: Started cri-containerd-521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca.scope - libcontainer container 521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca. 
Jun 21 06:15:57.146960 systemd[1]: cri-containerd-521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca.scope: Deactivated successfully. Jun 21 06:15:57.152141 containerd[1545]: time="2025-06-21T06:15:57.152105195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\" id:\"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\" pid:3313 exited_at:{seconds:1750486557 nanos:150456031}" Jun 21 06:15:57.153358 containerd[1545]: time="2025-06-21T06:15:57.153230565Z" level=info msg="received exit event container_id:\"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\" id:\"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\" pid:3313 exited_at:{seconds:1750486557 nanos:150456031}" Jun 21 06:15:57.157012 containerd[1545]: time="2025-06-21T06:15:57.156912933Z" level=info msg="StartContainer for \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\" returns successfully" Jun 21 06:15:57.884839 containerd[1545]: time="2025-06-21T06:15:57.884777820Z" level=info msg="CreateContainer within sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 06:15:57.923755 containerd[1545]: time="2025-06-21T06:15:57.923713555Z" level=info msg="Container 0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:15:57.930478 containerd[1545]: time="2025-06-21T06:15:57.930437718Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:57.932936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2117671477.mount: Deactivated successfully. 
Jun 21 06:15:57.945010 containerd[1545]: time="2025-06-21T06:15:57.944509304Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 21 06:15:57.945895 containerd[1545]: time="2025-06-21T06:15:57.945857392Z" level=info msg="CreateContainer within sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\"" Jun 21 06:15:57.946671 containerd[1545]: time="2025-06-21T06:15:57.946277470Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 21 06:15:57.948787 containerd[1545]: time="2025-06-21T06:15:57.948756881Z" level=info msg="StartContainer for \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\"" Jun 21 06:15:57.949159 containerd[1545]: time="2025-06-21T06:15:57.949097340Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.439567322s" Jun 21 06:15:57.949159 containerd[1545]: time="2025-06-21T06:15:57.949138808Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 21 06:15:57.950753 containerd[1545]: time="2025-06-21T06:15:57.950643109Z" level=info msg="connecting to shim 0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd" address="unix:///run/containerd/s/f204997b1a9645c5bae891b5c3e0d656fda090607cdc66376ad96355b754ddc8" protocol=ttrpc version=3 Jun 21 06:15:57.952895 containerd[1545]: time="2025-06-21T06:15:57.952846372Z" level=info msg="CreateContainer within sandbox \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 21 06:15:57.983512 containerd[1545]: time="2025-06-21T06:15:57.983457913Z" level=info msg="Container a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:15:57.991212 systemd[1]: Started cri-containerd-0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd.scope - libcontainer container 0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd. 
Jun 21 06:15:58.000134 containerd[1545]: time="2025-06-21T06:15:58.000073380Z" level=info msg="CreateContainer within sandbox \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\"" Jun 21 06:15:58.001422 containerd[1545]: time="2025-06-21T06:15:58.001167853Z" level=info msg="StartContainer for \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\"" Jun 21 06:15:58.002474 containerd[1545]: time="2025-06-21T06:15:58.002441202Z" level=info msg="connecting to shim a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c" address="unix:///run/containerd/s/215df9706456362c6fec5214ca018146dca9e33a882348f8655628f2b97b9c6a" protocol=ttrpc version=3 Jun 21 06:15:58.028289 systemd[1]: Started cri-containerd-a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c.scope - libcontainer container a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c. Jun 21 06:15:58.045642 systemd[1]: cri-containerd-0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd.scope: Deactivated successfully. Jun 21 06:15:58.048661 containerd[1545]: time="2025-06-21T06:15:58.048621516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\" id:\"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\" pid:3357 exited_at:{seconds:1750486558 nanos:48373660}" Jun 21 06:15:58.053474 containerd[1545]: time="2025-06-21T06:15:58.053337252Z" level=info msg="received exit event container_id:\"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\" id:\"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\" pid:3357 exited_at:{seconds:1750486558 nanos:48373660}" Jun 21 06:15:58.068396 containerd[1545]: time="2025-06-21T06:15:58.068355913Z" level=info msg="StartContainer for \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\" returns successfully" Jun 21 06:15:58.090658 containerd[1545]: time="2025-06-21T06:15:58.090584288Z" level=info msg="StartContainer for \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" returns successfully" Jun 21 06:15:58.902932 containerd[1545]: time="2025-06-21T06:15:58.902860379Z" level=info msg="CreateContainer within sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 06:15:58.922600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd-rootfs.mount: Deactivated successfully. 
Jun 21 06:15:58.928168 containerd[1545]: time="2025-06-21T06:15:58.928126823Z" level=info msg="Container d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:15:58.949097 containerd[1545]: time="2025-06-21T06:15:58.948787648Z" level=info msg="CreateContainer within sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\"" Jun 21 06:15:58.950918 containerd[1545]: time="2025-06-21T06:15:58.950788852Z" level=info msg="StartContainer for \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\"" Jun 21 06:15:58.956836 containerd[1545]: time="2025-06-21T06:15:58.956701082Z" level=info msg="connecting to shim d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490" address="unix:///run/containerd/s/f204997b1a9645c5bae891b5c3e0d656fda090607cdc66376ad96355b754ddc8" protocol=ttrpc version=3 Jun 21 06:15:58.997205 systemd[1]: Started cri-containerd-d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490.scope - libcontainer container d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490. Jun 21 06:15:59.106077 containerd[1545]: time="2025-06-21T06:15:59.106004300Z" level=info msg="StartContainer for \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" returns successfully" Jun 21 06:15:59.175778 kubelet[2796]: I0621 06:15:59.175531 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-sprmb" podStartSLOduration=2.283401044 podStartE2EDuration="14.175500037s" podCreationTimestamp="2025-06-21 06:15:45 +0000 UTC" firstStartedPulling="2025-06-21 06:15:46.058240036 +0000 UTC m=+5.517501261" lastFinishedPulling="2025-06-21 06:15:57.950339029 +0000 UTC m=+17.409600254" observedRunningTime="2025-06-21 06:15:59.063120931 +0000 UTC m=+18.522382146" watchObservedRunningTime="2025-06-21 06:15:59.175500037 +0000 UTC m=+18.634761252" Jun 21 06:15:59.339250 containerd[1545]: time="2025-06-21T06:15:59.339198687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" id:\"212dba1677ae48cb836636b965842cab37cf994adb45296e93f9f0ba1739d55e\" pid:3458 exited_at:{seconds:1750486559 nanos:338708688}" Jun 21 06:15:59.401954 kubelet[2796]: I0621 06:15:59.401913 2796 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 21 06:15:59.476699 systemd[1]: Created slice kubepods-burstable-podfb1028b3_b206_4ea2_be02_57a3bf6a93f4.slice - libcontainer container kubepods-burstable-podfb1028b3_b206_4ea2_be02_57a3bf6a93f4.slice. Jun 21 06:15:59.485042 systemd[1]: Created slice kubepods-burstable-podf1f7439a_c17e_4fbb_bbd9_1acb2b7a272e.slice - libcontainer container kubepods-burstable-podf1f7439a_c17e_4fbb_bbd9_1acb2b7a272e.slice. 
Jun 21 06:15:59.548361 kubelet[2796]: I0621 06:15:59.548308 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1f7439a-c17e-4fbb-bbd9-1acb2b7a272e-config-volume\") pod \"coredns-668d6bf9bc-trx5g\" (UID: \"f1f7439a-c17e-4fbb-bbd9-1acb2b7a272e\") " pod="kube-system/coredns-668d6bf9bc-trx5g" Jun 21 06:15:59.548361 kubelet[2796]: I0621 06:15:59.548359 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxvmf\" (UniqueName: \"kubernetes.io/projected/fb1028b3-b206-4ea2-be02-57a3bf6a93f4-kube-api-access-rxvmf\") pod \"coredns-668d6bf9bc-22b45\" (UID: \"fb1028b3-b206-4ea2-be02-57a3bf6a93f4\") " pod="kube-system/coredns-668d6bf9bc-22b45" Jun 21 06:15:59.548546 kubelet[2796]: I0621 06:15:59.548394 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb1028b3-b206-4ea2-be02-57a3bf6a93f4-config-volume\") pod \"coredns-668d6bf9bc-22b45\" (UID: \"fb1028b3-b206-4ea2-be02-57a3bf6a93f4\") " pod="kube-system/coredns-668d6bf9bc-22b45" Jun 21 06:15:59.548546 kubelet[2796]: I0621 06:15:59.548417 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs8w5\" (UniqueName: \"kubernetes.io/projected/f1f7439a-c17e-4fbb-bbd9-1acb2b7a272e-kube-api-access-hs8w5\") pod \"coredns-668d6bf9bc-trx5g\" (UID: \"f1f7439a-c17e-4fbb-bbd9-1acb2b7a272e\") " pod="kube-system/coredns-668d6bf9bc-trx5g" Jun 21 06:15:59.783069 containerd[1545]: time="2025-06-21T06:15:59.782940261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-22b45,Uid:fb1028b3-b206-4ea2-be02-57a3bf6a93f4,Namespace:kube-system,Attempt:0,}" Jun 21 06:15:59.791380 containerd[1545]: time="2025-06-21T06:15:59.791275225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trx5g,Uid:f1f7439a-c17e-4fbb-bbd9-1acb2b7a272e,Namespace:kube-system,Attempt:0,}" Jun 21 06:15:59.954271 kubelet[2796]: I0621 06:15:59.954189 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9sjxq" podStartSLOduration=6.436902233 podStartE2EDuration="14.954169866s" podCreationTimestamp="2025-06-21 06:15:45 +0000 UTC" firstStartedPulling="2025-06-21 06:15:45.989271113 +0000 UTC m=+5.448532338" lastFinishedPulling="2025-06-21 06:15:54.506538696 +0000 UTC m=+13.965799971" observedRunningTime="2025-06-21 06:15:59.95396937 +0000 UTC m=+19.413230585" watchObservedRunningTime="2025-06-21 06:15:59.954169866 +0000 UTC m=+19.413431081" Jun 21 06:16:02.411426 systemd-networkd[1434]: cilium_host: Link UP Jun 21 06:16:02.412736 systemd-networkd[1434]: cilium_net: Link UP Jun 21 06:16:02.413702 systemd-networkd[1434]: cilium_net: Gained carrier Jun 21 06:16:02.413887 systemd-networkd[1434]: cilium_host: Gained carrier Jun 21 06:16:02.537146 systemd-networkd[1434]: cilium_vxlan: Link UP Jun 21 06:16:02.537155 systemd-networkd[1434]: cilium_vxlan: Gained carrier Jun 21 06:16:02.848067 kernel: NET: Registered PF_ALG protocol family Jun 21 06:16:03.148148 systemd-networkd[1434]: cilium_host: Gained IPv6LL Jun 21 06:16:03.338181 systemd-networkd[1434]: cilium_net: Gained IPv6LL Jun 21 06:16:03.782321 systemd-networkd[1434]: lxc_health: Link UP Jun 21 06:16:03.795181 systemd-networkd[1434]: lxc_health: Gained carrier Jun 21 06:16:03.978179 systemd-networkd[1434]: cilium_vxlan: Gained 
IPv6LL Jun 21 06:16:04.369709 kernel: eth0: renamed from tmp76f28 Jun 21 06:16:04.375482 systemd-networkd[1434]: lxc758501243f53: Link UP Jun 21 06:16:04.375785 systemd-networkd[1434]: lxc758501243f53: Gained carrier Jun 21 06:16:04.382404 systemd-networkd[1434]: lxcf44bd66b1e8d: Link UP Jun 21 06:16:04.393438 kernel: eth0: renamed from tmp8bc69 Jun 21 06:16:04.399279 systemd-networkd[1434]: lxcf44bd66b1e8d: Gained carrier Jun 21 06:16:05.066235 systemd-networkd[1434]: lxc_health: Gained IPv6LL Jun 21 06:16:05.514270 systemd-networkd[1434]: lxc758501243f53: Gained IPv6LL Jun 21 06:16:05.706581 systemd-networkd[1434]: lxcf44bd66b1e8d: Gained IPv6LL Jun 21 06:16:08.875191 containerd[1545]: time="2025-06-21T06:16:08.874255295Z" level=info msg="connecting to shim 8bc690f7af7d675f17a11d67a95fc6f9e5b74d5546a90e879029662ff0c3b96f" address="unix:///run/containerd/s/afb57e861aa2d4f986ed1183028af0574fa8414d3dd3f6365a796795ac32df70" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:16:08.908537 containerd[1545]: time="2025-06-21T06:16:08.908488574Z" level=info msg="connecting to shim 76f28846c19625ec7b0e395a2a574f704d68ca7985bc459fd7f04e2d94a9bf5e" address="unix:///run/containerd/s/7f12f4a2a5d4d15d7b9a5e3d77de6948bdb8a428e9ab9ec428212962a58a800d" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:16:08.947309 systemd[1]: Started cri-containerd-76f28846c19625ec7b0e395a2a574f704d68ca7985bc459fd7f04e2d94a9bf5e.scope - libcontainer container 76f28846c19625ec7b0e395a2a574f704d68ca7985bc459fd7f04e2d94a9bf5e. Jun 21 06:16:08.950055 systemd[1]: Started cri-containerd-8bc690f7af7d675f17a11d67a95fc6f9e5b74d5546a90e879029662ff0c3b96f.scope - libcontainer container 8bc690f7af7d675f17a11d67a95fc6f9e5b74d5546a90e879029662ff0c3b96f. Jun 21 06:16:09.024742 containerd[1545]: time="2025-06-21T06:16:09.024672924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-trx5g,Uid:f1f7439a-c17e-4fbb-bbd9-1acb2b7a272e,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bc690f7af7d675f17a11d67a95fc6f9e5b74d5546a90e879029662ff0c3b96f\"" Jun 21 06:16:09.029016 containerd[1545]: time="2025-06-21T06:16:09.028701380Z" level=info msg="CreateContainer within sandbox \"8bc690f7af7d675f17a11d67a95fc6f9e5b74d5546a90e879029662ff0c3b96f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 06:16:09.047937 containerd[1545]: time="2025-06-21T06:16:09.046422658Z" level=info msg="Container df50228667cec8cf140c88a50d382dda0c381d4bd8ef411a5813d2a09b093458: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:16:09.062303 containerd[1545]: time="2025-06-21T06:16:09.062169615Z" level=info msg="CreateContainer within sandbox \"8bc690f7af7d675f17a11d67a95fc6f9e5b74d5546a90e879029662ff0c3b96f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df50228667cec8cf140c88a50d382dda0c381d4bd8ef411a5813d2a09b093458\"" Jun 21 06:16:09.063396 containerd[1545]: time="2025-06-21T06:16:09.063318299Z" level=info msg="StartContainer for \"df50228667cec8cf140c88a50d382dda0c381d4bd8ef411a5813d2a09b093458\"" Jun 21 06:16:09.065608 containerd[1545]: time="2025-06-21T06:16:09.065576885Z" level=info msg="connecting to shim df50228667cec8cf140c88a50d382dda0c381d4bd8ef411a5813d2a09b093458" address="unix:///run/containerd/s/afb57e861aa2d4f986ed1183028af0574fa8414d3dd3f6365a796795ac32df70" protocol=ttrpc version=3 Jun 21 06:16:09.081287 containerd[1545]: time="2025-06-21T06:16:09.081203204Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-22b45,Uid:fb1028b3-b206-4ea2-be02-57a3bf6a93f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"76f28846c19625ec7b0e395a2a574f704d68ca7985bc459fd7f04e2d94a9bf5e\"" Jun 21 06:16:09.085818 containerd[1545]: time="2025-06-21T06:16:09.085611914Z" level=info msg="CreateContainer within sandbox \"76f28846c19625ec7b0e395a2a574f704d68ca7985bc459fd7f04e2d94a9bf5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 21 06:16:09.100351 containerd[1545]: time="2025-06-21T06:16:09.099698124Z" level=info msg="Container f0859e8e3f12de52714d539c74fea4676875620db64be956bce862e68007ce93: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:16:09.100154 systemd[1]: Started cri-containerd-df50228667cec8cf140c88a50d382dda0c381d4bd8ef411a5813d2a09b093458.scope - libcontainer container df50228667cec8cf140c88a50d382dda0c381d4bd8ef411a5813d2a09b093458. Jun 21 06:16:09.120371 containerd[1545]: time="2025-06-21T06:16:09.120326726Z" level=info msg="CreateContainer within sandbox \"76f28846c19625ec7b0e395a2a574f704d68ca7985bc459fd7f04e2d94a9bf5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0859e8e3f12de52714d539c74fea4676875620db64be956bce862e68007ce93\"" Jun 21 06:16:09.122772 containerd[1545]: time="2025-06-21T06:16:09.122734512Z" level=info msg="StartContainer for \"f0859e8e3f12de52714d539c74fea4676875620db64be956bce862e68007ce93\"" Jun 21 06:16:09.124405 containerd[1545]: time="2025-06-21T06:16:09.124344371Z" level=info msg="connecting to shim f0859e8e3f12de52714d539c74fea4676875620db64be956bce862e68007ce93" address="unix:///run/containerd/s/7f12f4a2a5d4d15d7b9a5e3d77de6948bdb8a428e9ab9ec428212962a58a800d" protocol=ttrpc version=3 Jun 21 06:16:09.152185 systemd[1]: Started cri-containerd-f0859e8e3f12de52714d539c74fea4676875620db64be956bce862e68007ce93.scope - libcontainer container f0859e8e3f12de52714d539c74fea4676875620db64be956bce862e68007ce93. Jun 21 06:16:09.155694 containerd[1545]: time="2025-06-21T06:16:09.155647585Z" level=info msg="StartContainer for \"df50228667cec8cf140c88a50d382dda0c381d4bd8ef411a5813d2a09b093458\" returns successfully" Jun 21 06:16:09.197669 containerd[1545]: time="2025-06-21T06:16:09.197625590Z" level=info msg="StartContainer for \"f0859e8e3f12de52714d539c74fea4676875620db64be956bce862e68007ce93\" returns successfully" Jun 21 06:16:09.854830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3285313447.mount: Deactivated successfully. 
Jun 21 06:16:09.994773 kubelet[2796]: I0621 06:16:09.993635 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-22b45" podStartSLOduration=24.993418268 podStartE2EDuration="24.993418268s" podCreationTimestamp="2025-06-21 06:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:16:09.99097709 +0000 UTC m=+29.450238355" watchObservedRunningTime="2025-06-21 06:16:09.993418268 +0000 UTC m=+29.452679533" Jun 21 06:16:10.062464 kubelet[2796]: I0621 06:16:10.062341 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-trx5g" podStartSLOduration=25.062304012 podStartE2EDuration="25.062304012s" podCreationTimestamp="2025-06-21 06:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:16:10.061926965 +0000 UTC m=+29.521188180" watchObservedRunningTime="2025-06-21 06:16:10.062304012 +0000 UTC m=+29.521565227" Jun 21 06:17:22.336588 systemd[1]: Started sshd@9-172.24.4.45:22-172.24.4.1:33404.service - OpenSSH per-connection server daemon (172.24.4.1:33404). Jun 21 06:17:23.561329 sshd[4110]: Accepted publickey for core from 172.24.4.1 port 33404 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:17:23.566807 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:17:23.589224 systemd-logind[1528]: New session 12 of user core. Jun 21 06:17:23.600584 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 21 06:17:24.369222 sshd[4112]: Connection closed by 172.24.4.1 port 33404 Jun 21 06:17:24.370068 sshd-session[4110]: pam_unix(sshd:session): session closed for user core Jun 21 06:17:24.391255 systemd[1]: sshd@9-172.24.4.45:22-172.24.4.1:33404.service: Deactivated successfully. Jun 21 06:17:24.403629 systemd[1]: session-12.scope: Deactivated successfully. Jun 21 06:17:24.408407 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit. Jun 21 06:17:24.413871 systemd-logind[1528]: Removed session 12. Jun 21 06:17:29.407363 systemd[1]: Started sshd@10-172.24.4.45:22-172.24.4.1:53806.service - OpenSSH per-connection server daemon (172.24.4.1:53806). Jun 21 06:17:30.860653 sshd[4126]: Accepted publickey for core from 172.24.4.1 port 53806 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:17:30.900974 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:17:30.921371 systemd-logind[1528]: New session 13 of user core. Jun 21 06:17:30.927230 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 21 06:17:31.532787 sshd[4128]: Connection closed by 172.24.4.1 port 53806 Jun 21 06:17:31.532317 sshd-session[4126]: pam_unix(sshd:session): session closed for user core Jun 21 06:17:31.543571 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit. Jun 21 06:17:31.544701 systemd[1]: sshd@10-172.24.4.45:22-172.24.4.1:53806.service: Deactivated successfully. Jun 21 06:17:31.552883 systemd[1]: session-13.scope: Deactivated successfully. Jun 21 06:17:31.557261 systemd-logind[1528]: Removed session 13. Jun 21 06:17:36.564726 systemd[1]: Started sshd@11-172.24.4.45:22-172.24.4.1:39288.service - OpenSSH per-connection server daemon (172.24.4.1:39288). 
Jun 21 06:17:37.759144 sshd[4141]: Accepted publickey for core from 172.24.4.1 port 39288 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:17:37.765443 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:17:37.792162 systemd-logind[1528]: New session 14 of user core. Jun 21 06:17:37.802439 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 21 06:17:38.538302 sshd[4143]: Connection closed by 172.24.4.1 port 39288 Jun 21 06:17:38.541214 sshd-session[4141]: pam_unix(sshd:session): session closed for user core Jun 21 06:17:38.557609 systemd[1]: sshd@11-172.24.4.45:22-172.24.4.1:39288.service: Deactivated successfully. Jun 21 06:17:38.565500 systemd[1]: session-14.scope: Deactivated successfully. Jun 21 06:17:38.569674 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit. Jun 21 06:17:38.575597 systemd-logind[1528]: Removed session 14. Jun 21 06:17:43.564634 systemd[1]: Started sshd@12-172.24.4.45:22-172.24.4.1:49350.service - OpenSSH per-connection server daemon (172.24.4.1:49350). Jun 21 06:17:45.023854 sshd[4160]: Accepted publickey for core from 172.24.4.1 port 49350 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:17:45.028216 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:17:45.043544 systemd-logind[1528]: New session 15 of user core. Jun 21 06:17:45.052435 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 21 06:17:46.191048 sshd[4162]: Connection closed by 172.24.4.1 port 49350 Jun 21 06:17:46.192462 sshd-session[4160]: pam_unix(sshd:session): session closed for user core Jun 21 06:17:46.209462 systemd[1]: sshd@12-172.24.4.45:22-172.24.4.1:49350.service: Deactivated successfully. Jun 21 06:17:46.215906 systemd[1]: session-15.scope: Deactivated successfully. Jun 21 06:17:46.218837 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit. Jun 21 06:17:46.227751 systemd-logind[1528]: Removed session 15. Jun 21 06:17:46.233612 systemd[1]: Started sshd@13-172.24.4.45:22-172.24.4.1:49364.service - OpenSSH per-connection server daemon (172.24.4.1:49364). Jun 21 06:17:47.532590 sshd[4174]: Accepted publickey for core from 172.24.4.1 port 49364 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:17:47.537504 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:17:47.555705 systemd-logind[1528]: New session 16 of user core. Jun 21 06:17:47.574542 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 21 06:17:48.247611 sshd[4178]: Connection closed by 172.24.4.1 port 49364 Jun 21 06:17:48.252494 sshd-session[4174]: pam_unix(sshd:session): session closed for user core Jun 21 06:17:48.268230 systemd[1]: sshd@13-172.24.4.45:22-172.24.4.1:49364.service: Deactivated successfully. Jun 21 06:17:48.272332 systemd[1]: session-16.scope: Deactivated successfully. Jun 21 06:17:48.275457 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit. Jun 21 06:17:48.280721 systemd-logind[1528]: Removed session 16. Jun 21 06:17:48.286122 systemd[1]: Started sshd@14-172.24.4.45:22-172.24.4.1:49370.service - OpenSSH per-connection server daemon (172.24.4.1:49370). 
Jun 21 06:17:49.730507 sshd[4188]: Accepted publickey for core from 172.24.4.1 port 49370 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:17:49.734165 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:17:49.748250 systemd-logind[1528]: New session 17 of user core. Jun 21 06:17:49.759507 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 21 06:17:50.596700 sshd[4190]: Connection closed by 172.24.4.1 port 49370 Jun 21 06:17:50.598147 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Jun 21 06:17:50.609226 systemd[1]: sshd@14-172.24.4.45:22-172.24.4.1:49370.service: Deactivated successfully. Jun 21 06:17:50.614313 systemd[1]: session-17.scope: Deactivated successfully. Jun 21 06:17:50.616811 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit. Jun 21 06:17:50.621089 systemd-logind[1528]: Removed session 17. Jun 21 06:17:55.627251 systemd[1]: Started sshd@15-172.24.4.45:22-172.24.4.1:55808.service - OpenSSH per-connection server daemon (172.24.4.1:55808). Jun 21 06:17:56.907183 sshd[4201]: Accepted publickey for core from 172.24.4.1 port 55808 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:17:56.910970 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:17:56.933619 systemd-logind[1528]: New session 18 of user core. Jun 21 06:17:56.942686 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 21 06:17:57.804037 sshd[4203]: Connection closed by 172.24.4.1 port 55808 Jun 21 06:17:57.805523 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Jun 21 06:17:57.825216 systemd[1]: sshd@15-172.24.4.45:22-172.24.4.1:55808.service: Deactivated successfully. Jun 21 06:17:57.832392 systemd[1]: session-18.scope: Deactivated successfully. Jun 21 06:17:57.836792 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit. Jun 21 06:17:57.845570 systemd[1]: Started sshd@16-172.24.4.45:22-172.24.4.1:55824.service - OpenSSH per-connection server daemon (172.24.4.1:55824). Jun 21 06:17:57.850201 systemd-logind[1528]: Removed session 18. Jun 21 06:17:59.200074 sshd[4215]: Accepted publickey for core from 172.24.4.1 port 55824 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:17:59.203299 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:17:59.215338 systemd-logind[1528]: New session 19 of user core. Jun 21 06:17:59.234852 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 21 06:17:59.995139 sshd[4217]: Connection closed by 172.24.4.1 port 55824 Jun 21 06:17:59.993554 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:00.016480 systemd[1]: sshd@16-172.24.4.45:22-172.24.4.1:55824.service: Deactivated successfully. Jun 21 06:18:00.023648 systemd[1]: session-19.scope: Deactivated successfully. Jun 21 06:18:00.026673 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit. Jun 21 06:18:00.034373 systemd[1]: Started sshd@17-172.24.4.45:22-172.24.4.1:55836.service - OpenSSH per-connection server daemon (172.24.4.1:55836). Jun 21 06:18:00.038527 systemd-logind[1528]: Removed session 19. 
Jun 21 06:18:01.201867 sshd[4227]: Accepted publickey for core from 172.24.4.1 port 55836 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:18:01.207521 sshd-session[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:18:01.224106 systemd-logind[1528]: New session 20 of user core. Jun 21 06:18:01.236507 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 21 06:18:03.659476 sshd[4229]: Connection closed by 172.24.4.1 port 55836 Jun 21 06:18:03.658543 sshd-session[4227]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:03.673719 systemd[1]: sshd@17-172.24.4.45:22-172.24.4.1:55836.service: Deactivated successfully. Jun 21 06:18:03.678404 systemd[1]: session-20.scope: Deactivated successfully. Jun 21 06:18:03.680116 systemd-logind[1528]: Session 20 logged out. Waiting for processes to exit. Jun 21 06:18:03.685431 systemd-logind[1528]: Removed session 20. Jun 21 06:18:03.689328 systemd[1]: Started sshd@18-172.24.4.45:22-172.24.4.1:47780.service - OpenSSH per-connection server daemon (172.24.4.1:47780). Jun 21 06:18:04.974789 sshd[4245]: Accepted publickey for core from 172.24.4.1 port 47780 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:18:04.977819 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:18:04.992836 systemd-logind[1528]: New session 21 of user core. Jun 21 06:18:04.999340 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 21 06:18:06.008242 sshd[4247]: Connection closed by 172.24.4.1 port 47780 Jun 21 06:18:06.011210 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:06.031208 systemd[1]: sshd@18-172.24.4.45:22-172.24.4.1:47780.service: Deactivated successfully. Jun 21 06:18:06.038805 systemd[1]: session-21.scope: Deactivated successfully. Jun 21 06:18:06.044137 systemd-logind[1528]: Session 21 logged out. Waiting for processes to exit. Jun 21 06:18:06.051490 systemd[1]: Started sshd@19-172.24.4.45:22-172.24.4.1:47792.service - OpenSSH per-connection server daemon (172.24.4.1:47792). Jun 21 06:18:06.053398 systemd-logind[1528]: Removed session 21. Jun 21 06:18:07.331359 sshd[4257]: Accepted publickey for core from 172.24.4.1 port 47792 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:18:07.335175 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:18:07.355909 systemd-logind[1528]: New session 22 of user core. Jun 21 06:18:07.369340 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 21 06:18:08.047328 sshd[4259]: Connection closed by 172.24.4.1 port 47792 Jun 21 06:18:08.048755 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:08.057185 systemd[1]: sshd@19-172.24.4.45:22-172.24.4.1:47792.service: Deactivated successfully. Jun 21 06:18:08.064809 systemd[1]: session-22.scope: Deactivated successfully. Jun 21 06:18:08.067978 systemd-logind[1528]: Session 22 logged out. Waiting for processes to exit. Jun 21 06:18:08.071969 systemd-logind[1528]: Removed session 22. Jun 21 06:18:13.079768 systemd[1]: Started sshd@20-172.24.4.45:22-172.24.4.1:47806.service - OpenSSH per-connection server daemon (172.24.4.1:47806). 
Jun 21 06:18:14.372051 sshd[4272]: Accepted publickey for core from 172.24.4.1 port 47806 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:18:14.375228 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:18:14.388079 systemd-logind[1528]: New session 23 of user core. Jun 21 06:18:14.404373 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 21 06:18:15.325903 sshd[4274]: Connection closed by 172.24.4.1 port 47806 Jun 21 06:18:15.328374 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:15.337515 systemd[1]: sshd@20-172.24.4.45:22-172.24.4.1:47806.service: Deactivated successfully. Jun 21 06:18:15.345431 systemd[1]: session-23.scope: Deactivated successfully. Jun 21 06:18:15.351491 systemd-logind[1528]: Session 23 logged out. Waiting for processes to exit. Jun 21 06:18:15.354982 systemd-logind[1528]: Removed session 23. Jun 21 06:18:20.356507 systemd[1]: Started sshd@21-172.24.4.45:22-172.24.4.1:34468.service - OpenSSH per-connection server daemon (172.24.4.1:34468). Jun 21 06:18:21.612755 sshd[4287]: Accepted publickey for core from 172.24.4.1 port 34468 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:18:21.615845 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:18:21.627802 systemd-logind[1528]: New session 24 of user core. Jun 21 06:18:21.647422 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 21 06:18:22.434962 sshd[4289]: Connection closed by 172.24.4.1 port 34468 Jun 21 06:18:22.436205 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:22.459540 systemd[1]: sshd@21-172.24.4.45:22-172.24.4.1:34468.service: Deactivated successfully. Jun 21 06:18:22.466313 systemd[1]: session-24.scope: Deactivated successfully. Jun 21 06:18:22.468621 systemd-logind[1528]: Session 24 logged out. Waiting for processes to exit. Jun 21 06:18:22.472345 systemd-logind[1528]: Removed session 24. Jun 21 06:18:27.457596 systemd[1]: Started sshd@22-172.24.4.45:22-172.24.4.1:46964.service - OpenSSH per-connection server daemon (172.24.4.1:46964). Jun 21 06:18:28.685527 sshd[4301]: Accepted publickey for core from 172.24.4.1 port 46964 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:18:28.690933 sshd-session[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:18:28.706816 systemd-logind[1528]: New session 25 of user core. Jun 21 06:18:28.718597 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 21 06:18:29.410042 sshd[4303]: Connection closed by 172.24.4.1 port 46964 Jun 21 06:18:29.409424 sshd-session[4301]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:29.431828 systemd[1]: sshd@22-172.24.4.45:22-172.24.4.1:46964.service: Deactivated successfully. Jun 21 06:18:29.437170 systemd[1]: session-25.scope: Deactivated successfully. Jun 21 06:18:29.442108 systemd-logind[1528]: Session 25 logged out. Waiting for processes to exit. Jun 21 06:18:29.447306 systemd-logind[1528]: Removed session 25. Jun 21 06:18:29.453047 systemd[1]: Started sshd@23-172.24.4.45:22-172.24.4.1:46968.service - OpenSSH per-connection server daemon (172.24.4.1:46968). 
Jun 21 06:18:30.635613 sshd[4315]: Accepted publickey for core from 172.24.4.1 port 46968 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:18:30.639164 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:18:30.653439 systemd-logind[1528]: New session 26 of user core. Jun 21 06:18:30.665330 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 21 06:18:32.712934 containerd[1545]: time="2025-06-21T06:18:32.712546208Z" level=info msg="StopContainer for \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" with timeout 30 (s)" Jun 21 06:18:32.719315 containerd[1545]: time="2025-06-21T06:18:32.719151954Z" level=info msg="Stop container \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" with signal terminated" Jun 21 06:18:32.741711 systemd[1]: cri-containerd-a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c.scope: Deactivated successfully. Jun 21 06:18:32.752909 containerd[1545]: time="2025-06-21T06:18:32.752796981Z" level=info msg="received exit event container_id:\"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" id:\"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" pid:3379 exited_at:{seconds:1750486712 nanos:751681997}" Jun 21 06:18:32.757017 containerd[1545]: time="2025-06-21T06:18:32.755015246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" id:\"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" pid:3379 exited_at:{seconds:1750486712 nanos:751681997}" Jun 21 06:18:32.794827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c-rootfs.mount: Deactivated successfully. 
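For reference, the exited_at field in the containerd TaskExit events above is a plain Unix epoch timestamp with a separate nanoseconds part. A quick check with the seconds value from the a0ed8593... exit event, using only the Python standard library, shows it decodes to the same wall-clock time as the journal line that carries it:

    from datetime import datetime, timezone

    # seconds value from "exited_at:{seconds:1750486712 nanos:751681997}" above
    exited_at = 1750486712
    print(datetime.fromtimestamp(exited_at, tz=timezone.utc).isoformat())
    # -> 2025-06-21T06:18:32+00:00, matching the "Jun 21 06:18:32" journal timestamp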
Jun 21 06:18:32.798492 containerd[1545]: time="2025-06-21T06:18:32.796646152Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 21 06:18:32.805645 containerd[1545]: time="2025-06-21T06:18:32.805408097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" id:\"2bb531e8511b654e0f4b2d6944df0cb8962bfc0e264bf482812e6235c915f6a1\" pid:4342 exited_at:{seconds:1750486712 nanos:804879965}" Jun 21 06:18:32.810486 containerd[1545]: time="2025-06-21T06:18:32.810443133Z" level=info msg="StopContainer for \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" with timeout 2 (s)" Jun 21 06:18:32.811047 containerd[1545]: time="2025-06-21T06:18:32.810884602Z" level=info msg="Stop container \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" with signal terminated" Jun 21 06:18:32.821969 containerd[1545]: time="2025-06-21T06:18:32.821919416Z" level=info msg="StopContainer for \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" returns successfully" Jun 21 06:18:32.825433 containerd[1545]: time="2025-06-21T06:18:32.824563572Z" level=info msg="StopPodSandbox for \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\"" Jun 21 06:18:32.825433 containerd[1545]: time="2025-06-21T06:18:32.824717071Z" level=info msg="Container to stop \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:18:32.827209 systemd-networkd[1434]: lxc_health: Link DOWN Jun 21 06:18:32.827216 systemd-networkd[1434]: lxc_health: Lost carrier Jun 21 06:18:32.845013 systemd[1]: cri-containerd-d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490.scope: Deactivated successfully. Jun 21 06:18:32.845339 systemd[1]: cri-containerd-d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490.scope: Consumed 9.172s CPU time, 125.1M memory peak, 128K read from disk, 13.3M written to disk. Jun 21 06:18:32.852024 containerd[1545]: time="2025-06-21T06:18:32.851418938Z" level=info msg="received exit event container_id:\"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" id:\"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" pid:3428 exited_at:{seconds:1750486712 nanos:849764942}" Jun 21 06:18:32.852024 containerd[1545]: time="2025-06-21T06:18:32.851539133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" id:\"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" pid:3428 exited_at:{seconds:1750486712 nanos:849764942}" Jun 21 06:18:32.856675 systemd[1]: cri-containerd-8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8.scope: Deactivated successfully. 
Jun 21 06:18:32.863329 containerd[1545]: time="2025-06-21T06:18:32.863261028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" id:\"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" pid:2997 exit_status:137 exited_at:{seconds:1750486712 nanos:862266120}" Jun 21 06:18:32.887709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490-rootfs.mount: Deactivated successfully. Jun 21 06:18:32.922572 containerd[1545]: time="2025-06-21T06:18:32.922517104Z" level=info msg="StopContainer for \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" returns successfully" Jun 21 06:18:32.923484 containerd[1545]: time="2025-06-21T06:18:32.923459053Z" level=info msg="StopPodSandbox for \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\"" Jun 21 06:18:32.923556 containerd[1545]: time="2025-06-21T06:18:32.923527852Z" level=info msg="Container to stop \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:18:32.923556 containerd[1545]: time="2025-06-21T06:18:32.923542349Z" level=info msg="Container to stop \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:18:32.923556 containerd[1545]: time="2025-06-21T06:18:32.923553049Z" level=info msg="Container to stop \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:18:32.923692 containerd[1545]: time="2025-06-21T06:18:32.923563879Z" level=info msg="Container to stop \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:18:32.923692 containerd[1545]: time="2025-06-21T06:18:32.923575211Z" level=info msg="Container to stop \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 21 06:18:32.929246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8-rootfs.mount: Deactivated successfully. Jun 21 06:18:32.936515 systemd[1]: cri-containerd-66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a.scope: Deactivated successfully. 
Jun 21 06:18:32.945753 containerd[1545]: time="2025-06-21T06:18:32.945711664Z" level=info msg="shim disconnected" id=8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8 namespace=k8s.io Jun 21 06:18:32.945911 containerd[1545]: time="2025-06-21T06:18:32.945758172Z" level=warning msg="cleaning up after shim disconnected" id=8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8 namespace=k8s.io Jun 21 06:18:32.945911 containerd[1545]: time="2025-06-21T06:18:32.945774913Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 21 06:18:32.972102 containerd[1545]: time="2025-06-21T06:18:32.971386432Z" level=info msg="received exit event sandbox_id:\"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" exit_status:137 exited_at:{seconds:1750486712 nanos:862266120}" Jun 21 06:18:32.972222 containerd[1545]: time="2025-06-21T06:18:32.971417472Z" level=info msg="TaskExit event in podsandbox handler container_id:\"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" id:\"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" pid:2974 exit_status:137 exited_at:{seconds:1750486712 nanos:944276008}" Jun 21 06:18:32.975478 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8-shm.mount: Deactivated successfully. Jun 21 06:18:32.977811 containerd[1545]: time="2025-06-21T06:18:32.977309908Z" level=info msg="TearDown network for sandbox \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" successfully" Jun 21 06:18:32.977811 containerd[1545]: time="2025-06-21T06:18:32.977337349Z" level=info msg="StopPodSandbox for \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" returns successfully" Jun 21 06:18:32.987088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a-rootfs.mount: Deactivated successfully. 
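A note on the exit_status:137 reported for both pod sandboxes above: by the usual wait-status convention this is 128 plus the signal number, i.e. the sandbox (pause) processes were terminated by signal 9 (SIGKILL) when the sandboxes were torn down. A one-line sanity check in Python:

    import signal

    # exit_status 137 = 128 + 9, i.e. killed by SIGKILL
    assert signal.Signals(137 - 128) is signal.SIGKILL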
Jun 21 06:18:33.006382 containerd[1545]: time="2025-06-21T06:18:33.006332914Z" level=info msg="received exit event sandbox_id:\"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" exit_status:137 exited_at:{seconds:1750486712 nanos:944276008}" Jun 21 06:18:33.007691 containerd[1545]: time="2025-06-21T06:18:33.007643726Z" level=info msg="shim disconnected" id=66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a namespace=k8s.io Jun 21 06:18:33.007691 containerd[1545]: time="2025-06-21T06:18:33.007670275Z" level=warning msg="cleaning up after shim disconnected" id=66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a namespace=k8s.io Jun 21 06:18:33.007691 containerd[1545]: time="2025-06-21T06:18:33.007679383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 21 06:18:33.009208 containerd[1545]: time="2025-06-21T06:18:33.009102295Z" level=info msg="TearDown network for sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" successfully" Jun 21 06:18:33.009208 containerd[1545]: time="2025-06-21T06:18:33.009188106Z" level=info msg="StopPodSandbox for \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" returns successfully" Jun 21 06:18:33.049021 kubelet[2796]: I0621 06:18:33.047406 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f63e500-c310-4c34-82d7-72167afde656-cilium-config-path\") pod \"6f63e500-c310-4c34-82d7-72167afde656\" (UID: \"6f63e500-c310-4c34-82d7-72167afde656\") " Jun 21 06:18:33.049021 kubelet[2796]: I0621 06:18:33.047510 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw6pw\" (UniqueName: \"kubernetes.io/projected/6f63e500-c310-4c34-82d7-72167afde656-kube-api-access-tw6pw\") pod \"6f63e500-c310-4c34-82d7-72167afde656\" (UID: \"6f63e500-c310-4c34-82d7-72167afde656\") " Jun 21 06:18:33.051836 kubelet[2796]: I0621 06:18:33.051774 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f63e500-c310-4c34-82d7-72167afde656-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f63e500-c310-4c34-82d7-72167afde656" (UID: "6f63e500-c310-4c34-82d7-72167afde656"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 21 06:18:33.053627 kubelet[2796]: I0621 06:18:33.053554 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f63e500-c310-4c34-82d7-72167afde656-kube-api-access-tw6pw" (OuterVolumeSpecName: "kube-api-access-tw6pw") pod "6f63e500-c310-4c34-82d7-72167afde656" (UID: "6f63e500-c310-4c34-82d7-72167afde656"). InnerVolumeSpecName "kube-api-access-tw6pw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 06:18:33.148603 kubelet[2796]: I0621 06:18:33.148504 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbxdv\" (UniqueName: \"kubernetes.io/projected/38a7c875-4432-45d7-b2fb-00042dfc15e8-kube-api-access-rbxdv\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.148603 kubelet[2796]: I0621 06:18:33.148598 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-host-proc-sys-net\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149087 kubelet[2796]: I0621 06:18:33.148640 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-host-proc-sys-kernel\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149087 kubelet[2796]: I0621 06:18:33.148684 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-lib-modules\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149087 kubelet[2796]: I0621 06:18:33.148749 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38a7c875-4432-45d7-b2fb-00042dfc15e8-clustermesh-secrets\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149087 kubelet[2796]: I0621 06:18:33.148799 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-xtables-lock\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149087 kubelet[2796]: I0621 06:18:33.148836 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-run\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149087 kubelet[2796]: I0621 06:18:33.148869 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-bpf-maps\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149663 kubelet[2796]: I0621 06:18:33.148907 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38a7c875-4432-45d7-b2fb-00042dfc15e8-hubble-tls\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149663 kubelet[2796]: I0621 06:18:33.148953 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-hostproc\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " 
Jun 21 06:18:33.149663 kubelet[2796]: I0621 06:18:33.149045 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cni-path\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149663 kubelet[2796]: I0621 06:18:33.149087 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-etc-cni-netd\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149663 kubelet[2796]: I0621 06:18:33.149130 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-config-path\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.149663 kubelet[2796]: I0621 06:18:33.149165 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-cgroup\") pod \"38a7c875-4432-45d7-b2fb-00042dfc15e8\" (UID: \"38a7c875-4432-45d7-b2fb-00042dfc15e8\") " Jun 21 06:18:33.150522 kubelet[2796]: I0621 06:18:33.149288 2796 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f63e500-c310-4c34-82d7-72167afde656-cilium-config-path\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.150522 kubelet[2796]: I0621 06:18:33.149318 2796 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tw6pw\" (UniqueName: \"kubernetes.io/projected/6f63e500-c310-4c34-82d7-72167afde656-kube-api-access-tw6pw\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.150522 kubelet[2796]: I0621 06:18:33.149423 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 06:18:33.150522 kubelet[2796]: I0621 06:18:33.150073 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 06:18:33.150522 kubelet[2796]: I0621 06:18:33.150131 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 06:18:33.151806 kubelet[2796]: I0621 06:18:33.150206 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 06:18:33.151806 kubelet[2796]: I0621 06:18:33.150248 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 06:18:33.154285 kubelet[2796]: I0621 06:18:33.154140 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 06:18:33.154285 kubelet[2796]: I0621 06:18:33.154251 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-hostproc" (OuterVolumeSpecName: "hostproc") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 06:18:33.154592 kubelet[2796]: I0621 06:18:33.154294 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 06:18:33.155255 kubelet[2796]: I0621 06:18:33.154971 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cni-path" (OuterVolumeSpecName: "cni-path") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 06:18:33.158077 kubelet[2796]: I0621 06:18:33.157105 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 21 06:18:33.170265 kubelet[2796]: I0621 06:18:33.170157 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38a7c875-4432-45d7-b2fb-00042dfc15e8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 21 06:18:33.172065 kubelet[2796]: I0621 06:18:33.171936 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a7c875-4432-45d7-b2fb-00042dfc15e8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 06:18:33.172756 kubelet[2796]: I0621 06:18:33.172669 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 21 06:18:33.175198 kubelet[2796]: I0621 06:18:33.175142 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38a7c875-4432-45d7-b2fb-00042dfc15e8-kube-api-access-rbxdv" (OuterVolumeSpecName: "kube-api-access-rbxdv") pod "38a7c875-4432-45d7-b2fb-00042dfc15e8" (UID: "38a7c875-4432-45d7-b2fb-00042dfc15e8"). InnerVolumeSpecName "kube-api-access-rbxdv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 21 06:18:33.250687 kubelet[2796]: I0621 06:18:33.250496 2796 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cni-path\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.250687 kubelet[2796]: I0621 06:18:33.250564 2796 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-etc-cni-netd\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.250687 kubelet[2796]: I0621 06:18:33.250594 2796 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-config-path\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.250687 kubelet[2796]: I0621 06:18:33.250620 2796 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-cgroup\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.250687 kubelet[2796]: I0621 06:18:33.250645 2796 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rbxdv\" (UniqueName: \"kubernetes.io/projected/38a7c875-4432-45d7-b2fb-00042dfc15e8-kube-api-access-rbxdv\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.250687 kubelet[2796]: I0621 06:18:33.250669 2796 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-host-proc-sys-net\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.250687 kubelet[2796]: I0621 06:18:33.250694 2796 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-host-proc-sys-kernel\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.252195 kubelet[2796]: I0621 06:18:33.250719 2796 
reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-lib-modules\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.252195 kubelet[2796]: I0621 06:18:33.250762 2796 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38a7c875-4432-45d7-b2fb-00042dfc15e8-clustermesh-secrets\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.252195 kubelet[2796]: I0621 06:18:33.250796 2796 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-xtables-lock\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.252195 kubelet[2796]: I0621 06:18:33.250820 2796 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-cilium-run\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.252195 kubelet[2796]: I0621 06:18:33.250842 2796 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-bpf-maps\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.252195 kubelet[2796]: I0621 06:18:33.250864 2796 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38a7c875-4432-45d7-b2fb-00042dfc15e8-hubble-tls\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.252195 kubelet[2796]: I0621 06:18:33.250884 2796 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38a7c875-4432-45d7-b2fb-00042dfc15e8-hostproc\") on node \"ci-4372-0-0-b-cad5e61be6.novalocal\" DevicePath \"\"" Jun 21 06:18:33.794575 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a-shm.mount: Deactivated successfully. Jun 21 06:18:33.794811 systemd[1]: var-lib-kubelet-pods-38a7c875\x2d4432\x2d45d7\x2db2fb\x2d00042dfc15e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drbxdv.mount: Deactivated successfully. Jun 21 06:18:33.795053 systemd[1]: var-lib-kubelet-pods-6f63e500\x2dc310\x2d4c34\x2d82d7\x2d72167afde656-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtw6pw.mount: Deactivated successfully. Jun 21 06:18:33.795254 systemd[1]: var-lib-kubelet-pods-38a7c875\x2d4432\x2d45d7\x2db2fb\x2d00042dfc15e8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 21 06:18:33.795447 systemd[1]: var-lib-kubelet-pods-38a7c875\x2d4432\x2d45d7\x2db2fb\x2d00042dfc15e8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 21 06:18:33.819859 kubelet[2796]: I0621 06:18:33.819775 2796 scope.go:117] "RemoveContainer" containerID="a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c" Jun 21 06:18:33.829631 containerd[1545]: time="2025-06-21T06:18:33.828961966Z" level=info msg="RemoveContainer for \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\"" Jun 21 06:18:33.843156 systemd[1]: Removed slice kubepods-besteffort-pod6f63e500_c310_4c34_82d7_72167afde656.slice - libcontainer container kubepods-besteffort-pod6f63e500_c310_4c34_82d7_72167afde656.slice. 
Jun 21 06:18:33.843677 systemd[1]: kubepods-besteffort-pod6f63e500_c310_4c34_82d7_72167afde656.slice: Consumed 1.019s CPU time, 29.8M memory peak, 4K written to disk. Jun 21 06:18:33.862354 containerd[1545]: time="2025-06-21T06:18:33.861464777Z" level=info msg="RemoveContainer for \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" returns successfully" Jun 21 06:18:33.866721 kubelet[2796]: I0621 06:18:33.866647 2796 scope.go:117] "RemoveContainer" containerID="a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c" Jun 21 06:18:33.868651 containerd[1545]: time="2025-06-21T06:18:33.868104817Z" level=error msg="ContainerStatus for \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\": not found" Jun 21 06:18:33.870128 kubelet[2796]: E0621 06:18:33.870080 2796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\": not found" containerID="a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c" Jun 21 06:18:33.870691 kubelet[2796]: I0621 06:18:33.870385 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c"} err="failed to get container status \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a0ed8593f8080193a150579645d5e2bc75401ed3c629c98e1a72540f205b035c\": not found" Jun 21 06:18:33.870827 kubelet[2796]: I0621 06:18:33.870805 2796 scope.go:117] "RemoveContainer" containerID="d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490" Jun 21 06:18:33.871846 systemd[1]: Removed slice kubepods-burstable-pod38a7c875_4432_45d7_b2fb_00042dfc15e8.slice - libcontainer container kubepods-burstable-pod38a7c875_4432_45d7_b2fb_00042dfc15e8.slice. Jun 21 06:18:33.872109 systemd[1]: kubepods-burstable-pod38a7c875_4432_45d7_b2fb_00042dfc15e8.slice: Consumed 9.288s CPU time, 125.5M memory peak, 128K read from disk, 13.3M written to disk. 
Jun 21 06:18:33.876237 containerd[1545]: time="2025-06-21T06:18:33.876203156Z" level=info msg="RemoveContainer for \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\"" Jun 21 06:18:33.885800 containerd[1545]: time="2025-06-21T06:18:33.885758981Z" level=info msg="RemoveContainer for \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" returns successfully" Jun 21 06:18:33.886287 kubelet[2796]: I0621 06:18:33.885959 2796 scope.go:117] "RemoveContainer" containerID="0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd" Jun 21 06:18:33.889150 containerd[1545]: time="2025-06-21T06:18:33.889115726Z" level=info msg="RemoveContainer for \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\"" Jun 21 06:18:33.896327 containerd[1545]: time="2025-06-21T06:18:33.896210550Z" level=info msg="RemoveContainer for \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\" returns successfully" Jun 21 06:18:33.896665 kubelet[2796]: I0621 06:18:33.896644 2796 scope.go:117] "RemoveContainer" containerID="521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca" Jun 21 06:18:33.900544 containerd[1545]: time="2025-06-21T06:18:33.900479317Z" level=info msg="RemoveContainer for \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\"" Jun 21 06:18:33.910871 containerd[1545]: time="2025-06-21T06:18:33.908953672Z" level=info msg="RemoveContainer for \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\" returns successfully" Jun 21 06:18:33.915292 kubelet[2796]: I0621 06:18:33.915152 2796 scope.go:117] "RemoveContainer" containerID="bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e" Jun 21 06:18:33.918138 containerd[1545]: time="2025-06-21T06:18:33.918109457Z" level=info msg="RemoveContainer for \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\"" Jun 21 06:18:33.922879 containerd[1545]: time="2025-06-21T06:18:33.922804214Z" level=info msg="RemoveContainer for \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\" returns successfully" Jun 21 06:18:33.923502 kubelet[2796]: I0621 06:18:33.923371 2796 scope.go:117] "RemoveContainer" containerID="0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3" Jun 21 06:18:33.925940 containerd[1545]: time="2025-06-21T06:18:33.925490649Z" level=info msg="RemoveContainer for \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\"" Jun 21 06:18:33.929088 containerd[1545]: time="2025-06-21T06:18:33.929062717Z" level=info msg="RemoveContainer for \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\" returns successfully" Jun 21 06:18:33.929493 kubelet[2796]: I0621 06:18:33.929476 2796 scope.go:117] "RemoveContainer" containerID="d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490" Jun 21 06:18:33.930067 containerd[1545]: time="2025-06-21T06:18:33.930037869Z" level=error msg="ContainerStatus for \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\": not found" Jun 21 06:18:33.930436 kubelet[2796]: E0621 06:18:33.930400 2796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\": not found" 
containerID="d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490" Jun 21 06:18:33.930573 kubelet[2796]: I0621 06:18:33.930544 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490"} err="failed to get container status \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\": rpc error: code = NotFound desc = an error occurred when try to find container \"d45de181530bf4f9c9c7e0eb73c21b4390006992cc382915166d96270df74490\": not found" Jun 21 06:18:33.930752 kubelet[2796]: I0621 06:18:33.930706 2796 scope.go:117] "RemoveContainer" containerID="0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd" Jun 21 06:18:33.931219 containerd[1545]: time="2025-06-21T06:18:33.931153764Z" level=error msg="ContainerStatus for \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\": not found" Jun 21 06:18:33.931355 kubelet[2796]: E0621 06:18:33.931335 2796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\": not found" containerID="0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd" Jun 21 06:18:33.931463 kubelet[2796]: I0621 06:18:33.931443 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd"} err="failed to get container status \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"0450e5bbe7347103ce3b279775ec7d48bd49c96f2a4a1877e5e4e7cdf5d1f9fd\": not found" Jun 21 06:18:33.931622 kubelet[2796]: I0621 06:18:33.931559 2796 scope.go:117] "RemoveContainer" containerID="521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca" Jun 21 06:18:33.932047 containerd[1545]: time="2025-06-21T06:18:33.931834062Z" level=error msg="ContainerStatus for \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\": not found" Jun 21 06:18:33.932110 kubelet[2796]: E0621 06:18:33.931938 2796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\": not found" containerID="521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca" Jun 21 06:18:33.932110 kubelet[2796]: I0621 06:18:33.931957 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca"} err="failed to get container status \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\": rpc error: code = NotFound desc = an error occurred when try to find container \"521e72749aba610514fdb8d51fb67894e4ff2df31bb2176b3130d89017c08dca\": not found" Jun 21 06:18:33.932110 kubelet[2796]: I0621 06:18:33.931972 2796 scope.go:117] "RemoveContainer" 
containerID="bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e" Jun 21 06:18:33.932282 containerd[1545]: time="2025-06-21T06:18:33.932200370Z" level=error msg="ContainerStatus for \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\": not found" Jun 21 06:18:33.932468 kubelet[2796]: E0621 06:18:33.932392 2796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\": not found" containerID="bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e" Jun 21 06:18:33.932646 kubelet[2796]: I0621 06:18:33.932550 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e"} err="failed to get container status \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd0645c2686071cda985bb6c6d9c8681199c742f77f632ff716e5e4851af374e\": not found" Jun 21 06:18:33.932646 kubelet[2796]: I0621 06:18:33.932570 2796 scope.go:117] "RemoveContainer" containerID="0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3" Jun 21 06:18:33.933027 kubelet[2796]: E0621 06:18:33.932938 2796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\": not found" containerID="0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3" Jun 21 06:18:33.933219 containerd[1545]: time="2025-06-21T06:18:33.932830484Z" level=error msg="ContainerStatus for \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\": not found" Jun 21 06:18:33.933370 kubelet[2796]: I0621 06:18:33.933106 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3"} err="failed to get container status \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"0fa3bad5cd6388a01767127afede169e647e8be5bc8cd9eceaeac36f1f5286a3\": not found" Jun 21 06:18:34.679319 kubelet[2796]: I0621 06:18:34.679233 2796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38a7c875-4432-45d7-b2fb-00042dfc15e8" path="/var/lib/kubelet/pods/38a7c875-4432-45d7-b2fb-00042dfc15e8/volumes" Jun 21 06:18:34.680593 kubelet[2796]: I0621 06:18:34.680512 2796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f63e500-c310-4c34-82d7-72167afde656" path="/var/lib/kubelet/pods/6f63e500-c310-4c34-82d7-72167afde656/volumes" Jun 21 06:18:34.782481 sshd[4317]: Connection closed by 172.24.4.1 port 46968 Jun 21 06:18:34.783803 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:34.796051 systemd[1]: sshd@23-172.24.4.45:22-172.24.4.1:46968.service: Deactivated successfully. 
Jun 21 06:18:34.799752 systemd[1]: session-26.scope: Deactivated successfully. Jun 21 06:18:34.800436 systemd[1]: session-26.scope: Consumed 1.039s CPU time, 23.4M memory peak. Jun 21 06:18:34.802529 systemd-logind[1528]: Session 26 logged out. Waiting for processes to exit. Jun 21 06:18:34.809844 systemd[1]: Started sshd@24-172.24.4.45:22-172.24.4.1:37432.service - OpenSSH per-connection server daemon (172.24.4.1:37432). Jun 21 06:18:34.812581 systemd-logind[1528]: Removed session 26. Jun 21 06:18:35.965547 kubelet[2796]: E0621 06:18:35.965434 2796 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 21 06:18:36.067543 sshd[4461]: Accepted publickey for core from 172.24.4.1 port 37432 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:18:36.070671 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:18:36.084250 systemd-logind[1528]: New session 27 of user core. Jun 21 06:18:36.095338 systemd[1]: Started session-27.scope - Session 27 of User core. Jun 21 06:18:37.283424 kubelet[2796]: I0621 06:18:37.283370 2796 memory_manager.go:355] "RemoveStaleState removing state" podUID="6f63e500-c310-4c34-82d7-72167afde656" containerName="cilium-operator" Jun 21 06:18:37.283424 kubelet[2796]: I0621 06:18:37.283413 2796 memory_manager.go:355] "RemoveStaleState removing state" podUID="38a7c875-4432-45d7-b2fb-00042dfc15e8" containerName="cilium-agent" Jun 21 06:18:37.295528 systemd[1]: Created slice kubepods-burstable-podde1a6469_7001_4764_9a64_4fc4b58e1432.slice - libcontainer container kubepods-burstable-podde1a6469_7001_4764_9a64_4fc4b58e1432.slice. Jun 21 06:18:37.370905 sshd[4463]: Connection closed by 172.24.4.1 port 37432 Jun 21 06:18:37.370670 sshd-session[4461]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:37.379149 systemd[1]: sshd@24-172.24.4.45:22-172.24.4.1:37432.service: Deactivated successfully. Jun 21 06:18:37.380979 systemd[1]: session-27.scope: Deactivated successfully. Jun 21 06:18:37.381976 systemd-logind[1528]: Session 27 logged out. Waiting for processes to exit. 
Jun 21 06:18:37.383944 kubelet[2796]: I0621 06:18:37.383403 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de1a6469-7001-4764-9a64-4fc4b58e1432-xtables-lock\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.383944 kubelet[2796]: I0621 06:18:37.383451 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de1a6469-7001-4764-9a64-4fc4b58e1432-host-proc-sys-net\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.383944 kubelet[2796]: I0621 06:18:37.383476 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de1a6469-7001-4764-9a64-4fc4b58e1432-cilium-run\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.383944 kubelet[2796]: I0621 06:18:37.383494 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de1a6469-7001-4764-9a64-4fc4b58e1432-lib-modules\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.383944 kubelet[2796]: I0621 06:18:37.383521 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de1a6469-7001-4764-9a64-4fc4b58e1432-hostproc\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.383944 kubelet[2796]: I0621 06:18:37.383544 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de1a6469-7001-4764-9a64-4fc4b58e1432-cilium-config-path\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.384222 kubelet[2796]: I0621 06:18:37.383578 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de1a6469-7001-4764-9a64-4fc4b58e1432-host-proc-sys-kernel\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.384222 kubelet[2796]: I0621 06:18:37.383603 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de1a6469-7001-4764-9a64-4fc4b58e1432-cilium-cgroup\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.384222 kubelet[2796]: I0621 06:18:37.383622 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de1a6469-7001-4764-9a64-4fc4b58e1432-clustermesh-secrets\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.384222 kubelet[2796]: I0621 06:18:37.383648 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/de1a6469-7001-4764-9a64-4fc4b58e1432-hubble-tls\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.384222 kubelet[2796]: I0621 06:18:37.383672 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hht5g\" (UniqueName: \"kubernetes.io/projected/de1a6469-7001-4764-9a64-4fc4b58e1432-kube-api-access-hht5g\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.384222 kubelet[2796]: I0621 06:18:37.383699 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de1a6469-7001-4764-9a64-4fc4b58e1432-bpf-maps\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.384431 kubelet[2796]: I0621 06:18:37.383722 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de1a6469-7001-4764-9a64-4fc4b58e1432-cni-path\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.384431 kubelet[2796]: I0621 06:18:37.383797 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de1a6469-7001-4764-9a64-4fc4b58e1432-etc-cni-netd\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.384431 kubelet[2796]: I0621 06:18:37.383823 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/de1a6469-7001-4764-9a64-4fc4b58e1432-cilium-ipsec-secrets\") pod \"cilium-wwtf4\" (UID: \"de1a6469-7001-4764-9a64-4fc4b58e1432\") " pod="kube-system/cilium-wwtf4" Jun 21 06:18:37.386219 systemd[1]: Started sshd@25-172.24.4.45:22-172.24.4.1:37448.service - OpenSSH per-connection server daemon (172.24.4.1:37448). Jun 21 06:18:37.388025 systemd-logind[1528]: Removed session 27. Jun 21 06:18:37.601749 containerd[1545]: time="2025-06-21T06:18:37.601544767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wwtf4,Uid:de1a6469-7001-4764-9a64-4fc4b58e1432,Namespace:kube-system,Attempt:0,}" Jun 21 06:18:37.664083 containerd[1545]: time="2025-06-21T06:18:37.663046194Z" level=info msg="connecting to shim 269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870" address="unix:///run/containerd/s/30c5691f3ceee5421401dabfe4d6a79ec40367d7b217388c2c793938c1f64256" namespace=k8s.io protocol=ttrpc version=3 Jun 21 06:18:37.707177 systemd[1]: Started cri-containerd-269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870.scope - libcontainer container 269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870. 
Jun 21 06:18:37.738657 containerd[1545]: time="2025-06-21T06:18:37.738611381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wwtf4,Uid:de1a6469-7001-4764-9a64-4fc4b58e1432,Namespace:kube-system,Attempt:0,} returns sandbox id \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\"" Jun 21 06:18:37.748651 containerd[1545]: time="2025-06-21T06:18:37.748591493Z" level=info msg="CreateContainer within sandbox \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 21 06:18:37.759263 containerd[1545]: time="2025-06-21T06:18:37.759190147Z" level=info msg="Container 21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:18:37.769966 containerd[1545]: time="2025-06-21T06:18:37.769912022Z" level=info msg="CreateContainer within sandbox \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4\"" Jun 21 06:18:37.771047 containerd[1545]: time="2025-06-21T06:18:37.770803927Z" level=info msg="StartContainer for \"21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4\"" Jun 21 06:18:37.771935 containerd[1545]: time="2025-06-21T06:18:37.771907128Z" level=info msg="connecting to shim 21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4" address="unix:///run/containerd/s/30c5691f3ceee5421401dabfe4d6a79ec40367d7b217388c2c793938c1f64256" protocol=ttrpc version=3 Jun 21 06:18:37.797269 systemd[1]: Started cri-containerd-21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4.scope - libcontainer container 21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4. Jun 21 06:18:37.836737 containerd[1545]: time="2025-06-21T06:18:37.836680229Z" level=info msg="StartContainer for \"21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4\" returns successfully" Jun 21 06:18:37.845177 systemd[1]: cri-containerd-21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4.scope: Deactivated successfully. Jun 21 06:18:37.848324 containerd[1545]: time="2025-06-21T06:18:37.848216453Z" level=info msg="received exit event container_id:\"21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4\" id:\"21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4\" pid:4539 exited_at:{seconds:1750486717 nanos:847588134}" Jun 21 06:18:37.848535 containerd[1545]: time="2025-06-21T06:18:37.848304118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4\" id:\"21f62ac8335692dc89995d64a80b0ab1a67210213aa1b42ac5b1cd4e91fcd9c4\" pid:4539 exited_at:{seconds:1750486717 nanos:847588134}" Jun 21 06:18:38.831675 sshd[4474]: Accepted publickey for core from 172.24.4.1 port 37448 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:18:38.837608 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:18:38.852579 systemd-logind[1528]: New session 28 of user core. Jun 21 06:18:38.860323 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jun 21 06:18:38.906060 containerd[1545]: time="2025-06-21T06:18:38.904481302Z" level=info msg="CreateContainer within sandbox \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 21 06:18:38.945020 containerd[1545]: time="2025-06-21T06:18:38.942773221Z" level=info msg="Container 73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:18:38.955178 containerd[1545]: time="2025-06-21T06:18:38.954860511Z" level=info msg="CreateContainer within sandbox \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f\"" Jun 21 06:18:38.957045 containerd[1545]: time="2025-06-21T06:18:38.956345368Z" level=info msg="StartContainer for \"73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f\"" Jun 21 06:18:38.959446 containerd[1545]: time="2025-06-21T06:18:38.959340484Z" level=info msg="connecting to shim 73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f" address="unix:///run/containerd/s/30c5691f3ceee5421401dabfe4d6a79ec40367d7b217388c2c793938c1f64256" protocol=ttrpc version=3 Jun 21 06:18:38.989287 systemd[1]: Started cri-containerd-73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f.scope - libcontainer container 73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f. Jun 21 06:18:39.036816 containerd[1545]: time="2025-06-21T06:18:39.036762906Z" level=info msg="StartContainer for \"73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f\" returns successfully" Jun 21 06:18:39.044880 systemd[1]: cri-containerd-73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f.scope: Deactivated successfully. Jun 21 06:18:39.047263 containerd[1545]: time="2025-06-21T06:18:39.047113814Z" level=info msg="received exit event container_id:\"73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f\" id:\"73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f\" pid:4585 exited_at:{seconds:1750486719 nanos:46598447}" Jun 21 06:18:39.047536 containerd[1545]: time="2025-06-21T06:18:39.047377510Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f\" id:\"73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f\" pid:4585 exited_at:{seconds:1750486719 nanos:46598447}" Jun 21 06:18:39.071692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73a2b3f36493c34a698a2f89ac946a027fa364dd1f8633c14eca5f872cca608f-rootfs.mount: Deactivated successfully. Jun 21 06:18:39.591050 sshd[4572]: Connection closed by 172.24.4.1 port 37448 Jun 21 06:18:39.592341 sshd-session[4474]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:39.610937 systemd[1]: sshd@25-172.24.4.45:22-172.24.4.1:37448.service: Deactivated successfully. Jun 21 06:18:39.616133 systemd[1]: session-28.scope: Deactivated successfully. Jun 21 06:18:39.619772 systemd-logind[1528]: Session 28 logged out. Waiting for processes to exit. Jun 21 06:18:39.626899 systemd[1]: Started sshd@26-172.24.4.45:22-172.24.4.1:37450.service - OpenSSH per-connection server daemon (172.24.4.1:37450). Jun 21 06:18:39.630749 systemd-logind[1528]: Removed session 28. 
Jun 21 06:18:39.905600 containerd[1545]: time="2025-06-21T06:18:39.904414223Z" level=info msg="CreateContainer within sandbox \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 21 06:18:39.939551 containerd[1545]: time="2025-06-21T06:18:39.939482317Z" level=info msg="Container f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:18:39.958877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount51378517.mount: Deactivated successfully. Jun 21 06:18:39.976875 containerd[1545]: time="2025-06-21T06:18:39.976817679Z" level=info msg="CreateContainer within sandbox \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825\"" Jun 21 06:18:39.978503 containerd[1545]: time="2025-06-21T06:18:39.978386235Z" level=info msg="StartContainer for \"f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825\"" Jun 21 06:18:39.981099 containerd[1545]: time="2025-06-21T06:18:39.980897531Z" level=info msg="connecting to shim f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825" address="unix:///run/containerd/s/30c5691f3ceee5421401dabfe4d6a79ec40367d7b217388c2c793938c1f64256" protocol=ttrpc version=3 Jun 21 06:18:40.008207 systemd[1]: Started cri-containerd-f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825.scope - libcontainer container f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825. Jun 21 06:18:40.069297 containerd[1545]: time="2025-06-21T06:18:40.069189082Z" level=info msg="StartContainer for \"f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825\" returns successfully" Jun 21 06:18:40.074711 systemd[1]: cri-containerd-f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825.scope: Deactivated successfully. Jun 21 06:18:40.077962 containerd[1545]: time="2025-06-21T06:18:40.077823988Z" level=info msg="received exit event container_id:\"f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825\" id:\"f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825\" pid:4638 exited_at:{seconds:1750486720 nanos:77431511}" Jun 21 06:18:40.078928 containerd[1545]: time="2025-06-21T06:18:40.078896252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825\" id:\"f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825\" pid:4638 exited_at:{seconds:1750486720 nanos:77431511}" Jun 21 06:18:40.103741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2328cda4e58473adb0e48530c618e503d0991da796afa8574f4d9c4455fe825-rootfs.mount: Deactivated successfully. 
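The mount-bpf-fs init container created and run above ensures a BPF filesystem is available so the agent can pin its maps. A small check-and-mount sketch using the Linux mount syscall is shown below, assuming the conventional /sys/fs/bpf mount point; it needs CAP_SYS_ADMIN to succeed, which the real init container runs with.

    package main

    import (
        "bufio"
        "log"
        "os"
        "strings"
        "syscall"
    )

    const bpfMountPoint = "/sys/fs/bpf" // conventional pin path; an assumption here

    // bpfMounted scans /proc/mounts for an existing bpf filesystem at the target path.
    func bpfMounted() (bool, error) {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            return false, err
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[1] == bpfMountPoint && fields[2] == "bpf" {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        mounted, err := bpfMounted()
        if err != nil {
            log.Fatal(err)
        }
        if !mounted {
            // Mount a fresh bpffs instance; requires CAP_SYS_ADMIN.
            if err := syscall.Mount("bpffs", bpfMountPoint, "bpf", 0, ""); err != nil {
                log.Fatal(err)
            }
        }
    }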
Jun 21 06:18:40.715405 containerd[1545]: time="2025-06-21T06:18:40.715178056Z" level=info msg="StopPodSandbox for \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\"" Jun 21 06:18:40.716153 containerd[1545]: time="2025-06-21T06:18:40.715494751Z" level=info msg="TearDown network for sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" successfully" Jun 21 06:18:40.716153 containerd[1545]: time="2025-06-21T06:18:40.715530758Z" level=info msg="StopPodSandbox for \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" returns successfully" Jun 21 06:18:40.718230 containerd[1545]: time="2025-06-21T06:18:40.718099412Z" level=info msg="RemovePodSandbox for \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\"" Jun 21 06:18:40.718671 containerd[1545]: time="2025-06-21T06:18:40.718626341Z" level=info msg="Forcibly stopping sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\"" Jun 21 06:18:40.719322 containerd[1545]: time="2025-06-21T06:18:40.719245494Z" level=info msg="TearDown network for sandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" successfully" Jun 21 06:18:40.723867 containerd[1545]: time="2025-06-21T06:18:40.723498070Z" level=info msg="Ensure that sandbox 66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a in task-service has been cleanup successfully" Jun 21 06:18:40.775785 containerd[1545]: time="2025-06-21T06:18:40.775663501Z" level=info msg="RemovePodSandbox \"66fd947767c5d10353f1262f7b8683ee7152a6bf7d95ae96a236d1df4020bc7a\" returns successfully" Jun 21 06:18:40.777046 containerd[1545]: time="2025-06-21T06:18:40.776855579Z" level=info msg="StopPodSandbox for \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\"" Jun 21 06:18:40.777328 containerd[1545]: time="2025-06-21T06:18:40.777289845Z" level=info msg="TearDown network for sandbox \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" successfully" Jun 21 06:18:40.777328 containerd[1545]: time="2025-06-21T06:18:40.777317397Z" level=info msg="StopPodSandbox for \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" returns successfully" Jun 21 06:18:40.778409 containerd[1545]: time="2025-06-21T06:18:40.778204062Z" level=info msg="RemovePodSandbox for \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\"" Jun 21 06:18:40.778409 containerd[1545]: time="2025-06-21T06:18:40.778271318Z" level=info msg="Forcibly stopping sandbox \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\"" Jun 21 06:18:40.778409 containerd[1545]: time="2025-06-21T06:18:40.778385302Z" level=info msg="TearDown network for sandbox \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" successfully" Jun 21 06:18:40.781032 containerd[1545]: time="2025-06-21T06:18:40.780863906Z" level=info msg="Ensure that sandbox 8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8 in task-service has been cleanup successfully" Jun 21 06:18:40.838724 containerd[1545]: time="2025-06-21T06:18:40.838612511Z" level=info msg="RemovePodSandbox \"8ea69c5c9bfaa81c3e781ca3bec49ecbd43bc51fc950fa11ffbce43038ae5cb8\" returns successfully" Jun 21 06:18:40.923837 containerd[1545]: time="2025-06-21T06:18:40.923755319Z" level=info msg="CreateContainer within sandbox \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 21 06:18:40.948061 containerd[1545]: 
time="2025-06-21T06:18:40.945535281Z" level=info msg="Container 38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:18:40.957855 sshd[4623]: Accepted publickey for core from 172.24.4.1 port 37450 ssh2: RSA SHA256:a2Dit7vQ2pMRQe3ls1XK1VdD7eByExY7Cxv0KLk9C9Y Jun 21 06:18:40.967779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564111640.mount: Deactivated successfully. Jun 21 06:18:40.968916 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 21 06:18:40.974434 kubelet[2796]: E0621 06:18:40.973316 2796 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 21 06:18:40.982268 systemd-logind[1528]: New session 29 of user core. Jun 21 06:18:40.989173 systemd[1]: Started session-29.scope - Session 29 of User core. Jun 21 06:18:40.997797 containerd[1545]: time="2025-06-21T06:18:40.996601828Z" level=info msg="CreateContainer within sandbox \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a\"" Jun 21 06:18:40.999018 containerd[1545]: time="2025-06-21T06:18:40.998938585Z" level=info msg="StartContainer for \"38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a\"" Jun 21 06:18:41.001973 containerd[1545]: time="2025-06-21T06:18:41.001920615Z" level=info msg="connecting to shim 38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a" address="unix:///run/containerd/s/30c5691f3ceee5421401dabfe4d6a79ec40367d7b217388c2c793938c1f64256" protocol=ttrpc version=3 Jun 21 06:18:41.029166 systemd[1]: Started cri-containerd-38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a.scope - libcontainer container 38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a. Jun 21 06:18:41.059474 systemd[1]: cri-containerd-38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a.scope: Deactivated successfully. Jun 21 06:18:41.066427 containerd[1545]: time="2025-06-21T06:18:41.066106557Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a\" id:\"38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a\" pid:4683 exited_at:{seconds:1750486721 nanos:62024893}" Jun 21 06:18:41.076504 containerd[1545]: time="2025-06-21T06:18:41.076298997Z" level=info msg="received exit event container_id:\"38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a\" id:\"38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a\" pid:4683 exited_at:{seconds:1750486721 nanos:62024893}" Jun 21 06:18:41.079806 containerd[1545]: time="2025-06-21T06:18:41.079776398Z" level=info msg="StartContainer for \"38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a\" returns successfully" Jun 21 06:18:41.104216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38348bee84b2f898086d95586d167336b05b1d975a64cb07553f2ac8a16b966a-rootfs.mount: Deactivated successfully. 
Jun 21 06:18:41.964067 containerd[1545]: time="2025-06-21T06:18:41.962113858Z" level=info msg="CreateContainer within sandbox \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 21 06:18:42.051239 containerd[1545]: time="2025-06-21T06:18:42.051183929Z" level=info msg="Container 3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3: CDI devices from CRI Config.CDIDevices: []" Jun 21 06:18:42.081848 containerd[1545]: time="2025-06-21T06:18:42.081660430Z" level=info msg="CreateContainer within sandbox \"269f8baf3f632016f52b4e09c711797d306c87f47f73f4f698a79b500727f870\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3\"" Jun 21 06:18:42.083306 containerd[1545]: time="2025-06-21T06:18:42.083209278Z" level=info msg="StartContainer for \"3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3\"" Jun 21 06:18:42.086111 containerd[1545]: time="2025-06-21T06:18:42.086064069Z" level=info msg="connecting to shim 3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3" address="unix:///run/containerd/s/30c5691f3ceee5421401dabfe4d6a79ec40367d7b217388c2c793938c1f64256" protocol=ttrpc version=3 Jun 21 06:18:42.121215 systemd[1]: Started cri-containerd-3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3.scope - libcontainer container 3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3. Jun 21 06:18:42.183394 containerd[1545]: time="2025-06-21T06:18:42.183347372Z" level=info msg="StartContainer for \"3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3\" returns successfully" Jun 21 06:18:42.305342 containerd[1545]: time="2025-06-21T06:18:42.304926540Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3\" id:\"7c764694cafbdf78c4644efffc0b2e4bc82606f0e6702652b8f9c5e0ecd0600d\" pid:4759 exited_at:{seconds:1750486722 nanos:304450556}" Jun 21 06:18:42.708070 kernel: cryptd: max_cpu_qlen set to 1000 Jun 21 06:18:42.770256 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jun 21 06:18:43.006497 kubelet[2796]: I0621 06:18:43.006227 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wwtf4" podStartSLOduration=6.00618259 podStartE2EDuration="6.00618259s" podCreationTimestamp="2025-06-21 06:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-21 06:18:43.003260564 +0000 UTC m=+182.462521789" watchObservedRunningTime="2025-06-21 06:18:43.00618259 +0000 UTC m=+182.465443815" Jun 21 06:18:43.760784 containerd[1545]: time="2025-06-21T06:18:43.760374823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3\" id:\"8881a6415d2a409c9a9ffada73bf025edef6ed7c83fda065ca6d0a88558c1cf8\" pid:4856 exit_status:1 exited_at:{seconds:1750486723 nanos:757121214}" Jun 21 06:18:44.770713 kubelet[2796]: I0621 06:18:44.770606 2796 setters.go:602] "Node became not ready" node="ci-4372-0-0-b-cad5e61be6.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-21T06:18:44Z","lastTransitionTime":"2025-06-21T06:18:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 21 06:18:45.990392 containerd[1545]: time="2025-06-21T06:18:45.990182304Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3\" id:\"badf8ed3671d635e4d39cbcd0109b95fa8627b6c85aba932ae3b72ec9145141d\" pid:5201 exit_status:1 exited_at:{seconds:1750486725 nanos:987790742}" Jun 21 06:18:46.253126 systemd-networkd[1434]: lxc_health: Link UP Jun 21 06:18:46.268092 systemd-networkd[1434]: lxc_health: Gained carrier Jun 21 06:18:47.754218 systemd-networkd[1434]: lxc_health: Gained IPv6LL Jun 21 06:18:48.196385 containerd[1545]: time="2025-06-21T06:18:48.196321839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3\" id:\"b6889814e3db3a4719b6d47d56f82f62abf0e2ededda7e07335c7876aebb5166\" pid:5331 exited_at:{seconds:1750486728 nanos:193191180}" Jun 21 06:18:50.399725 containerd[1545]: time="2025-06-21T06:18:50.399525809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3\" id:\"f4ff78a47c550a7be08719b1545c6fb6e0483494e0bdda792f50e0e36368ef64\" pid:5358 exited_at:{seconds:1750486730 nanos:399016833}" Jun 21 06:18:52.685538 containerd[1545]: time="2025-06-21T06:18:52.685478653Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3df304ec29a7e44e2511e3952080bd69841df1aca7529f912ee66f553915d0c3\" id:\"2c6bc011e1a8ee03d00b072279c28df7aae5f2859b483c1f9251d840838f92e6\" pid:5389 exited_at:{seconds:1750486732 nanos:684684884}" Jun 21 06:18:52.689665 kubelet[2796]: E0621 06:18:52.689349 2796 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46532->127.0.0.1:36529: write tcp 127.0.0.1:46532->127.0.0.1:36529: write: broken pipe Jun 21 06:18:52.948922 sshd[4667]: Connection closed by 172.24.4.1 port 37450 Jun 21 06:18:52.953463 sshd-session[4623]: pam_unix(sshd:session): session closed for user core Jun 21 06:18:52.967253 systemd-logind[1528]: Session 29 logged out. Waiting for processes to exit. Jun 21 06:18:52.967977 systemd[1]: sshd@26-172.24.4.45:22-172.24.4.1:37450.service: Deactivated successfully. Jun 21 06:18:52.979519 systemd[1]: session-29.scope: Deactivated successfully. Jun 21 06:18:52.986851 systemd-logind[1528]: Removed session 29.
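The closing entries show systemd-networkd observing the agent's lxc_health interface come up and the kubelet recording the pod's startup latency once the datapath is healthy. As a final illustration, a tiny standard-library check that the health interface exists and carries the UP flag; the interface name is simply the one observed in this log and would differ on another node.

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        // "lxc_health" is the veth name reported by systemd-networkd above.
        iface, err := net.InterfaceByName("lxc_health")
        if err != nil {
            log.Fatalf("interface not found: %v", err)
        }
        up := iface.Flags&net.FlagUp != 0
        fmt.Printf("lxc_health: index=%d mtu=%d up=%v\n", iface.Index, iface.MTU, up)
    }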