Jul 6 23:47:34.099075 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:53:45 -00 2025
Jul 6 23:47:34.099106 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:47:34.099118 kernel: BIOS-provided physical RAM map:
Jul 6 23:47:34.099143 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 6 23:47:34.099150 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 6 23:47:34.099180 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 6 23:47:34.099189 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jul 6 23:47:34.099197 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jul 6 23:47:34.099204 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 6 23:47:34.099212 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 6 23:47:34.099220 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jul 6 23:47:34.099227 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 6 23:47:34.099235 kernel: NX (Execute Disable) protection: active
Jul 6 23:47:34.099243 kernel: APIC: Static calls initialized
Jul 6 23:47:34.099254 kernel: SMBIOS 3.0.0 present.
Jul 6 23:47:34.099262 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jul 6 23:47:34.099270 kernel: Hypervisor detected: KVM
Jul 6 23:47:34.099278 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:47:34.099286 kernel: kvm-clock: using sched offset of 3488146313 cycles
Jul 6 23:47:34.099296 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:47:34.099305 kernel: tsc: Detected 1996.249 MHz processor
Jul 6 23:47:34.099314 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:47:34.099322 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:47:34.099331 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jul 6 23:47:34.099339 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 6 23:47:34.099348 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:47:34.099356 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jul 6 23:47:34.099364 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:47:34.099374 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jul 6 23:47:34.099383 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:47:34.099391 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:47:34.099399 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:47:34.099407 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jul 6 23:47:34.099415 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:47:34.099424 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:47:34.099432 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jul 6 23:47:34.099440 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jul 6 23:47:34.099450 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jul 6 23:47:34.099458 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jul 6 23:47:34.099466 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jul 6 23:47:34.099477 kernel: No NUMA configuration found
Jul 6 23:47:34.099486 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jul 6 23:47:34.099494 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Jul 6 23:47:34.099505 kernel: Zone ranges:
Jul 6 23:47:34.099514 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:47:34.099522 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 6 23:47:34.099531 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jul 6 23:47:34.099539 kernel: Movable zone start for each node
Jul 6 23:47:34.099548 kernel: Early memory node ranges
Jul 6 23:47:34.099556 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 6 23:47:34.099565 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jul 6 23:47:34.099575 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jul 6 23:47:34.099584 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jul 6 23:47:34.099592 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:47:34.099601 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 6 23:47:34.099610 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 6 23:47:34.099618 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:47:34.099627 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:47:34.099635 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:47:34.099644 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:47:34.099654 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:47:34.099663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:47:34.099671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:47:34.099680 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:47:34.099688 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:47:34.099697 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 6 23:47:34.099705 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:47:34.099714 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jul 6 23:47:34.099722 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:47:34.099733 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:47:34.099742 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 6 23:47:34.099750 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576
Jul 6 23:47:34.099759 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152
Jul 6 23:47:34.099767 kernel: pcpu-alloc: [0] 0 1
Jul 6 23:47:34.099775 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 6 23:47:34.099785 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:47:34.099794 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:47:34.099805 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:47:34.099813 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:47:34.099822 kernel: Fallback order for Node 0: 0
Jul 6 23:47:34.099831 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Jul 6 23:47:34.099839 kernel: Policy zone: Normal
Jul 6 23:47:34.099848 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:47:34.099856 kernel: software IO TLB: area num 2.
Jul 6 23:47:34.099866 kernel: Memory: 3964156K/4193772K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43492K init, 1584K bss, 229356K reserved, 0K cma-reserved)
Jul 6 23:47:34.099875 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 6 23:47:34.099885 kernel: ftrace: allocating 37940 entries in 149 pages
Jul 6 23:47:34.099894 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:47:34.099902 kernel: Dynamic Preempt: voluntary
Jul 6 23:47:34.099911 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:47:34.099944 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:47:34.099953 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 6 23:47:34.099962 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:47:34.099971 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:47:34.099979 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:47:34.099990 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:47:34.100001 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 6 23:47:34.100010 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 6 23:47:34.100019 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:47:34.100029 kernel: Console: colour VGA+ 80x25
Jul 6 23:47:34.100038 kernel: printk: console [tty0] enabled
Jul 6 23:47:34.100047 kernel: printk: console [ttyS0] enabled
Jul 6 23:47:34.100056 kernel: ACPI: Core revision 20230628
Jul 6 23:47:34.100065 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:47:34.100074 kernel: x2apic enabled
Jul 6 23:47:34.100086 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:47:34.100095 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:47:34.100104 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 6 23:47:34.100114 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 6 23:47:34.100146 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 6 23:47:34.100155 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 6 23:47:34.100165 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:47:34.100174 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:47:34.100183 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:47:34.100196 kernel: Speculative Store Bypass: Vulnerable
Jul 6 23:47:34.100205 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 6 23:47:34.100214 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:47:34.100223 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:47:34.100239 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:47:34.100250 kernel: landlock: Up and running.
Jul 6 23:47:34.100259 kernel: SELinux: Initializing.
Jul 6 23:47:34.100269 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:47:34.100279 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:47:34.100289 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 6 23:47:34.100298 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:47:34.100310 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:47:34.100320 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 6 23:47:34.100330 kernel: Performance Events: AMD PMU driver.
Jul 6 23:47:34.100340 kernel: ... version: 0
Jul 6 23:47:34.100349 kernel: ... bit width: 48
Jul 6 23:47:34.100361 kernel: ... generic registers: 4
Jul 6 23:47:34.100370 kernel: ... value mask: 0000ffffffffffff
Jul 6 23:47:34.100380 kernel: ... max period: 00007fffffffffff
Jul 6 23:47:34.100391 kernel: ... fixed-purpose events: 0
Jul 6 23:47:34.100400 kernel: ... event mask: 000000000000000f
Jul 6 23:47:34.100410 kernel: signal: max sigframe size: 1440
Jul 6 23:47:34.100419 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:47:34.100429 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:47:34.100439 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:47:34.100450 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:47:34.100460 kernel: .... node #0, CPUs: #1
Jul 6 23:47:34.100469 kernel: smp: Brought up 1 node, 2 CPUs
Jul 6 23:47:34.100479 kernel: smpboot: Max logical packages: 2
Jul 6 23:47:34.100488 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 6 23:47:34.100498 kernel: devtmpfs: initialized
Jul 6 23:47:34.100508 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:47:34.100518 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:47:34.100527 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 6 23:47:34.100539 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:47:34.100549 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:47:34.100559 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:47:34.100568 kernel: audit: type=2000 audit(1751845652.646:1): state=initialized audit_enabled=0 res=1
Jul 6 23:47:34.100578 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:47:34.100588 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:47:34.100597 kernel: cpuidle: using governor menu
Jul 6 23:47:34.100607 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:47:34.100616 kernel: dca service started, version 1.12.1
Jul 6 23:47:34.100628 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:47:34.100638 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:47:34.100648 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:47:34.100658 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:47:34.100667 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:47:34.100677 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:47:34.100686 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:47:34.100696 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:47:34.100706 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:47:34.100718 kernel: ACPI: Interpreter enabled
Jul 6 23:47:34.100727 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 6 23:47:34.100737 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:47:34.100747 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:47:34.100756 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:47:34.100766 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 6 23:47:34.100776 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:47:34.100959 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:47:34.101069 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 6 23:47:34.101197 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 6 23:47:34.101212 kernel: acpiphp: Slot [3] registered
Jul 6 23:47:34.101221 kernel: acpiphp: Slot [4] registered
Jul 6 23:47:34.101230 kernel: acpiphp: Slot [5] registered
Jul 6 23:47:34.101239 kernel: acpiphp: Slot [6] registered
Jul 6 23:47:34.101248 kernel: acpiphp: Slot [7] registered
Jul 6 23:47:34.101257 kernel: acpiphp: Slot [8] registered
Jul 6 23:47:34.101266 kernel: acpiphp: Slot [9] registered
Jul 6 23:47:34.101279 kernel: acpiphp: Slot [10] registered
Jul 6 23:47:34.101287 kernel: acpiphp: Slot [11] registered
Jul 6 23:47:34.101296 kernel: acpiphp: Slot [12] registered
Jul 6 23:47:34.101305 kernel: acpiphp: Slot [13] registered
Jul 6 23:47:34.101314 kernel: acpiphp: Slot [14] registered
Jul 6 23:47:34.101323 kernel: acpiphp: Slot [15] registered
Jul 6 23:47:34.101332 kernel: acpiphp: Slot [16] registered
Jul 6 23:47:34.101341 kernel: acpiphp: Slot [17] registered
Jul 6 23:47:34.101350 kernel: acpiphp: Slot [18] registered
Jul 6 23:47:34.101360 kernel: acpiphp: Slot [19] registered
Jul 6 23:47:34.101369 kernel: acpiphp: Slot [20] registered
Jul 6 23:47:34.101378 kernel: acpiphp: Slot [21] registered
Jul 6 23:47:34.101387 kernel: acpiphp: Slot [22] registered
Jul 6 23:47:34.101396 kernel: acpiphp: Slot [23] registered
Jul 6 23:47:34.101404 kernel: acpiphp: Slot [24] registered
Jul 6 23:47:34.101413 kernel: acpiphp: Slot [25] registered
Jul 6 23:47:34.101422 kernel: acpiphp: Slot [26] registered
Jul 6 23:47:34.101431 kernel: acpiphp: Slot [27] registered
Jul 6 23:47:34.101440 kernel: acpiphp: Slot [28] registered
Jul 6 23:47:34.101451 kernel: acpiphp: Slot [29] registered
Jul 6 23:47:34.101460 kernel: acpiphp: Slot [30] registered
Jul 6 23:47:34.101468 kernel: acpiphp: Slot [31] registered
Jul 6 23:47:34.101477 kernel: PCI host bridge to bus 0000:00
Jul 6 23:47:34.101582 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:47:34.101685 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:47:34.101771 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:47:34.101855 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 6 23:47:34.101942 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jul 6 23:47:34.102025 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:47:34.102162 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 6 23:47:34.102270 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 6 23:47:34.102371 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Jul 6 23:47:34.102466 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Jul 6 23:47:34.102565 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jul 6 23:47:34.102660 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jul 6 23:47:34.102755 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jul 6 23:47:34.102847 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jul 6 23:47:34.102949 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 6 23:47:34.103042 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 6 23:47:34.103172 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 6 23:47:34.103279 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Jul 6 23:47:34.103375 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Jul 6 23:47:34.103470 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Jul 6 23:47:34.103564 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Jul 6 23:47:34.103657 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Jul 6 23:47:34.103752 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:47:34.103861 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:47:34.104019 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Jul 6 23:47:34.104144 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Jul 6 23:47:34.104255 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Jul 6 23:47:34.104362 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Jul 6 23:47:34.104473 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:47:34.104577 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jul 6 23:47:34.104684 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Jul 6 23:47:34.104787 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Jul 6 23:47:34.104903 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Jul 6 23:47:34.105007 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Jul 6 23:47:34.105109 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jul 6 23:47:34.106305 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Jul 6 23:47:34.106412 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Jul 6 23:47:34.106520 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Jul 6 23:47:34.106620 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Jul 6 23:47:34.106635 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:47:34.106646 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:47:34.106656 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:47:34.106666 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:47:34.106677 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 6 23:47:34.106686 kernel: iommu: Default domain type: Translated
Jul 6 23:47:34.106700 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:47:34.106709 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:47:34.106719 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:47:34.106730 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 6 23:47:34.106740 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jul 6 23:47:34.106838 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 6 23:47:34.106938 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 6 23:47:34.107040 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:47:34.107055 kernel: vgaarb: loaded
Jul 6 23:47:34.107069 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:47:34.107079 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:47:34.107089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:47:34.107099 kernel: pnp: PnP ACPI init
Jul 6 23:47:34.107220 kernel: pnp 00:03: [dma 2]
Jul 6 23:47:34.107236 kernel: pnp: PnP ACPI: found 5 devices
Jul 6 23:47:34.107246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:47:34.107255 kernel: NET: Registered PF_INET protocol family
Jul 6 23:47:34.107264 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:47:34.107277 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:47:34.107286 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:47:34.107296 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:47:34.107305 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:47:34.107314 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:47:34.107323 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:47:34.107332 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:47:34.107341 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:47:34.107352 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:47:34.107436 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:47:34.107520 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:47:34.107603 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:47:34.107686 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jul 6 23:47:34.109197 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jul 6 23:47:34.109303 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 6 23:47:34.109407 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 6 23:47:34.109425 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:47:34.109435 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 6 23:47:34.109445 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jul 6 23:47:34.109454 kernel: Initialise system trusted keyrings
Jul 6 23:47:34.109463 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:47:34.109473 kernel: Key type asymmetric registered
Jul 6 23:47:34.109482 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:47:34.109492 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:47:34.109501 kernel: io scheduler mq-deadline registered
Jul 6 23:47:34.109512 kernel: io scheduler kyber registered
Jul 6 23:47:34.109521 kernel: io scheduler bfq registered
Jul 6 23:47:34.109530 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:47:34.109540 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 6 23:47:34.109550 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 6 23:47:34.109560 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 6 23:47:34.109569 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 6 23:47:34.109578 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:47:34.109588 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:47:34.109599 kernel: random: crng init done
Jul 6 23:47:34.109608 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:47:34.109618 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:47:34.109627 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:47:34.109722 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 6 23:47:34.109737 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:47:34.109824 kernel: rtc_cmos 00:04: registered as rtc0
Jul 6 23:47:34.109912 kernel: rtc_cmos 00:04: setting system clock to 2025-07-06T23:47:33 UTC (1751845653)
Jul 6 23:47:34.110003 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 6 23:47:34.110017 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 6 23:47:34.110027 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:47:34.110036 kernel: Segment Routing with IPv6
Jul 6 23:47:34.110045 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:47:34.110054 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:47:34.110063 kernel: Key type dns_resolver registered
Jul 6 23:47:34.110073 kernel: IPI shorthand broadcast: enabled
Jul 6 23:47:34.110082 kernel: sched_clock: Marking stable (973007225, 170468683)->(1186122863, -42646955)
Jul 6 23:47:34.110094 kernel: registered taskstats version 1
Jul 6 23:47:34.110103 kernel: Loading compiled-in X.509 certificates
Jul 6 23:47:34.110113 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: f74b958d282931d4f0d8d911dd18abd0ec707734'
Jul 6 23:47:34.110164 kernel: Key type .fscrypt registered
Jul 6 23:47:34.110174 kernel: Key type fscrypt-provisioning registered
Jul 6 23:47:34.110184 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:47:34.110203 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:47:34.110212 kernel: ima: No architecture policies found
Jul 6 23:47:34.110221 kernel: clk: Disabling unused clocks
Jul 6 23:47:34.110233 kernel: Freeing unused kernel image (initmem) memory: 43492K
Jul 6 23:47:34.110243 kernel: Write protecting the kernel read-only data: 38912k
Jul 6 23:47:34.110252 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jul 6 23:47:34.110261 kernel: Run /init as init process
Jul 6 23:47:34.110270 kernel: with arguments:
Jul 6 23:47:34.110279 kernel: /init
Jul 6 23:47:34.110288 kernel: with environment:
Jul 6 23:47:34.110297 kernel: HOME=/
Jul 6 23:47:34.110306 kernel: TERM=linux
Jul 6 23:47:34.110317 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:47:34.110328 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:47:34.110341 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:47:34.110352 systemd[1]: Detected virtualization kvm.
Jul 6 23:47:34.110362 systemd[1]: Detected architecture x86-64.
Jul 6 23:47:34.110372 systemd[1]: Running in initrd.
Jul 6 23:47:34.110382 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:47:34.110394 systemd[1]: Hostname set to .
Jul 6 23:47:34.110404 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:47:34.110413 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:47:34.110423 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:47:34.110433 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:47:34.110443 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:47:34.110454 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:47:34.110473 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:47:34.110487 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:47:34.110498 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:47:34.110508 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:47:34.110519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:47:34.110531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:47:34.110542 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:47:34.110552 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:47:34.110562 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:47:34.110572 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:47:34.110582 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:47:34.110592 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:47:34.110603 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:47:34.110613 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:47:34.110625 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:47:34.110635 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:47:34.110645 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:47:34.110655 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:47:34.110665 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:47:34.110675 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:47:34.110685 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:47:34.110695 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:47:34.110705 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:47:34.110718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:47:34.110750 systemd-journald[184]: Collecting audit messages is disabled.
Jul 6 23:47:34.110776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:47:34.110790 systemd-journald[184]: Journal started
Jul 6 23:47:34.110813 systemd-journald[184]: Runtime Journal (/run/log/journal/6c27cf44336a44389e961b18e782ac33) is 8M, max 78.3M, 70.3M free.
Jul 6 23:47:34.119163 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:47:34.119407 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:47:34.121041 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:47:34.122464 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:47:34.131373 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:47:34.138281 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:47:34.142170 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:47:34.147377 systemd-modules-load[186]: Inserted module 'overlay'
Jul 6 23:47:34.198940 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:47:34.198970 kernel: Bridge firewalling registered
Jul 6 23:47:34.160292 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:47:34.182947 systemd-modules-load[186]: Inserted module 'br_netfilter'
Jul 6 23:47:34.206630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:47:34.208401 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:47:34.209777 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:47:34.211007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:47:34.219267 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:47:34.222258 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:47:34.233328 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:47:34.239276 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:47:34.240664 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:47:34.254411 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:47:34.265150 dracut-cmdline[221]: dracut-dracut-053
Jul 6 23:47:34.266069 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7c120d8449636ab812a1f5387d02879f5beb6138a028d7566d1b80b47231d762
Jul 6 23:47:34.274643 systemd-resolved[219]: Positive Trust Anchors:
Jul 6 23:47:34.275308 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:47:34.275350 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:47:34.281191 systemd-resolved[219]: Defaulting to hostname 'linux'.
Jul 6 23:47:34.282092 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:47:34.282890 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:47:34.338178 kernel: SCSI subsystem initialized
Jul 6 23:47:34.348233 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:47:34.360185 kernel: iscsi: registered transport (tcp)
Jul 6 23:47:34.383340 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:47:34.383411 kernel: QLogic iSCSI HBA Driver
Jul 6 23:47:34.445726 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:47:34.452517 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:47:34.506582 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:47:34.506706 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:47:34.509034 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:47:34.558204 kernel: raid6: sse2x4 gen() 13019 MB/s
Jul 6 23:47:34.576190 kernel: raid6: sse2x2 gen() 14661 MB/s
Jul 6 23:47:34.594648 kernel: raid6: sse2x1 gen() 9771 MB/s
Jul 6 23:47:34.594718 kernel: raid6: using algorithm sse2x2 gen() 14661 MB/s
Jul 6 23:47:34.613670 kernel: raid6: .... xor() 9144 MB/s, rmw enabled
Jul 6 23:47:34.613726 kernel: raid6: using ssse3x2 recovery algorithm
Jul 6 23:47:34.636177 kernel: xor: measuring software checksum speed
Jul 6 23:47:34.638798 kernel: prefetch64-sse : 15742 MB/sec
Jul 6 23:47:34.638846 kernel: generic_sse : 15649 MB/sec
Jul 6 23:47:34.638873 kernel: xor: using function: prefetch64-sse (15742 MB/sec)
Jul 6 23:47:34.812206 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:47:34.829491 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:47:34.837271 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:47:34.886021 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Jul 6 23:47:34.898310 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:47:34.910390 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:47:34.931799 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Jul 6 23:47:34.978726 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:47:34.987400 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:47:35.032586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:47:35.043728 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:47:35.088729 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:47:35.093020 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:47:35.094189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:47:35.095683 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:47:35.101304 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:47:35.118408 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:47:35.151143 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jul 6 23:47:35.160167 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jul 6 23:47:35.168464 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:47:35.169728 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:47:35.173748 kernel: libata version 3.00 loaded.
Jul 6 23:47:35.173774 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 6 23:47:35.171812 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:47:35.189523 kernel: scsi host0: ata_piix
Jul 6 23:47:35.189723 kernel: scsi host1: ata_piix
Jul 6 23:47:35.189839 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:47:35.189854 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Jul 6 23:47:35.189866 kernel: GPT:17805311 != 20971519
Jul 6 23:47:35.189882 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:47:35.189894 kernel: GPT:17805311 != 20971519
Jul 6 23:47:35.189905 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Jul 6 23:47:35.189916 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:47:35.189927 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:47:35.172379 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:47:35.172526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:47:35.186848 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:47:35.194364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:47:35.195275 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:47:35.246895 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:47:35.253283 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:47:35.266641 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:47:35.369143 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (459)
Jul 6 23:47:35.377177 kernel: BTRFS: device fsid 25bdfe43-d649-4808-8940-e1722efc7a2e devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (469)
Jul 6 23:47:35.405056 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:47:35.420437 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:47:35.433096 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:47:35.434902 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:47:35.447064 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:47:35.452331 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:47:35.466731 disk-uuid[513]: Primary Header is updated.
Jul 6 23:47:35.466731 disk-uuid[513]: Secondary Entries is updated.
Jul 6 23:47:35.466731 disk-uuid[513]: Secondary Header is updated.
Jul 6 23:47:35.474159 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:47:36.494365 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:47:36.494473 disk-uuid[514]: The operation has completed successfully.
Jul 6 23:47:36.582066 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:47:36.582232 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:47:36.645292 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:47:36.648226 sh[525]: Success
Jul 6 23:47:36.671165 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Jul 6 23:47:36.765824 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:47:36.777350 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:47:36.780643 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:47:36.805358 kernel: BTRFS info (device dm-0): first mount of filesystem 25bdfe43-d649-4808-8940-e1722efc7a2e
Jul 6 23:47:36.805441 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:47:36.807228 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:47:36.809186 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:47:36.811737 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:47:36.824717 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:47:36.825839 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:47:36.835274 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:47:36.838251 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:47:36.865456 kernel: BTRFS info (device vda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:47:36.865575 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:47:36.867184 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:47:36.876166 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:47:36.883227 kernel: BTRFS info (device vda6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:47:36.892353 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:47:36.898260 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:47:37.007021 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:47:37.017289 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:47:37.044224 systemd-networkd[707]: lo: Link UP
Jul 6 23:47:37.044881 systemd-networkd[707]: lo: Gained carrier
Jul 6 23:47:37.046760 systemd-networkd[707]: Enumeration completed
Jul 6 23:47:37.047723 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:47:37.047727 systemd-networkd[707]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:47:37.049009 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:47:37.050500 systemd-networkd[707]: eth0: Link UP
Jul 6 23:47:37.050504 systemd-networkd[707]: eth0: Gained carrier
Jul 6 23:47:37.050512 systemd-networkd[707]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:47:37.051230 systemd[1]: Reached target network.target - Network.
Jul 6 23:47:37.062173 systemd-networkd[707]: eth0: DHCPv4 address 172.24.4.123/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 6 23:47:37.063101 ignition[616]: Ignition 2.20.0
Jul 6 23:47:37.063108 ignition[616]: Stage: fetch-offline
Jul 6 23:47:37.065762 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:47:37.063163 ignition[616]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:37.063173 ignition[616]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 6 23:47:37.063266 ignition[616]: parsed url from cmdline: ""
Jul 6 23:47:37.063269 ignition[616]: no config URL provided
Jul 6 23:47:37.063275 ignition[616]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:47:37.063285 ignition[616]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:47:37.063290 ignition[616]: failed to fetch config: resource requires networking
Jul 6 23:47:37.063469 ignition[616]: Ignition finished successfully
Jul 6 23:47:37.073294 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 6 23:47:37.085829 ignition[717]: Ignition 2.20.0
Jul 6 23:47:37.085842 ignition[717]: Stage: fetch
Jul 6 23:47:37.086028 ignition[717]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:37.086040 ignition[717]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 6 23:47:37.086152 ignition[717]: parsed url from cmdline: ""
Jul 6 23:47:37.086156 ignition[717]: no config URL provided
Jul 6 23:47:37.086161 ignition[717]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:47:37.086170 ignition[717]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:47:37.086260 ignition[717]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 6 23:47:37.086336 ignition[717]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jul 6 23:47:37.086369 ignition[717]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jul 6 23:47:37.348256 ignition[717]: GET result: OK
Jul 6 23:47:37.348435 ignition[717]: parsing config with SHA512: c97be10712ca95285e39fb83716f2cd8e524d67ac4e19df31d2f730096515a60a5b19395471d7b36226f2a12d4c8a8fcccd5c6d0fd2f51ee247833a7bd4b7906
Jul 6 23:47:37.359542 unknown[717]: fetched base config from "system"
Jul 6 23:47:37.359563 unknown[717]: fetched base config from "system"
Jul 6 23:47:37.360638 ignition[717]: fetch: fetch complete
Jul 6 23:47:37.359577 unknown[717]: fetched user config from "openstack"
Jul 6 23:47:37.360653 ignition[717]: fetch: fetch passed
Jul 6 23:47:37.364215 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 6 23:47:37.360798 ignition[717]: Ignition finished successfully
Jul 6 23:47:37.376604 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:47:37.411945 ignition[724]: Ignition 2.20.0
Jul 6 23:47:37.411974 ignition[724]: Stage: kargs
Jul 6 23:47:37.412420 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:37.412447 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 6 23:47:37.417414 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:47:37.414877 ignition[724]: kargs: kargs passed
Jul 6 23:47:37.414978 ignition[724]: Ignition finished successfully
Jul 6 23:47:37.428477 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:47:37.465270 ignition[730]: Ignition 2.20.0
Jul 6 23:47:37.465308 ignition[730]: Stage: disks
Jul 6 23:47:37.465882 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:37.465923 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 6 23:47:37.474078 ignition[730]: disks: disks passed
Jul 6 23:47:37.474214 ignition[730]: Ignition finished successfully
Jul 6 23:47:37.476162 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:47:37.479083 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:47:37.480573 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:47:37.482656 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:47:37.484651 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:47:37.486433 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:47:37.494416 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:47:37.516073 systemd-fsck[738]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 6 23:47:37.527471 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:47:37.535339 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:47:37.650189 kernel: EXT4-fs (vda9): mounted filesystem daab0c95-3783-44c0-bef8-9d61a5c53c14 r/w with ordered data mode. Quota mode: none.
Jul 6 23:47:37.652729 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:47:37.655387 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:47:37.661221 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:47:37.669243 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:47:37.672103 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:47:37.674286 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jul 6 23:47:37.675707 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:47:37.676558 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:47:37.679521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:47:37.688279 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:47:37.693580 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (746)
Jul 6 23:47:37.695165 kernel: BTRFS info (device vda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:47:37.695192 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:47:37.695205 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:47:37.717458 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:47:37.724200 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:47:37.835101 initrd-setup-root[773]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:47:37.842582 initrd-setup-root[781]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:47:37.848198 initrd-setup-root[788]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:47:37.854465 initrd-setup-root[795]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:47:38.024887 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:47:38.032263 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:47:38.042439 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:47:38.056402 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:47:38.063243 kernel: BTRFS info (device vda6): last unmount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:47:38.108105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:47:38.112849 ignition[862]: INFO : Ignition 2.20.0
Jul 6 23:47:38.112849 ignition[862]: INFO : Stage: mount
Jul 6 23:47:38.115092 ignition[862]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:38.115092 ignition[862]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 6 23:47:38.115092 ignition[862]: INFO : mount: mount passed
Jul 6 23:47:38.115092 ignition[862]: INFO : Ignition finished successfully
Jul 6 23:47:38.115451 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:47:38.987721 systemd-networkd[707]: eth0: Gained IPv6LL
Jul 6 23:47:44.935766 coreos-metadata[748]: Jul 06 23:47:44.934 WARN failed to locate config-drive, using the metadata service API instead
Jul 6 23:47:44.983294 coreos-metadata[748]: Jul 06 23:47:44.983 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 6 23:47:45.001866 coreos-metadata[748]: Jul 06 23:47:45.001 INFO Fetch successful
Jul 6 23:47:45.003499 coreos-metadata[748]: Jul 06 23:47:45.002 INFO wrote hostname ci-4230-2-1-3-a5860ac047.novalocal to /sysroot/etc/hostname
Jul 6 23:47:45.010565 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jul 6 23:47:45.011099 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jul 6 23:47:45.028449 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:47:45.072495 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:47:45.112261 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (880)
Jul 6 23:47:45.120786 kernel: BTRFS info (device vda6): first mount of filesystem 520cc21d-4438-4aef-a59e-8797d7bc85f5
Jul 6 23:47:45.120877 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:47:45.125043 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:47:45.137252 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:47:45.143954 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:47:45.218371 ignition[898]: INFO : Ignition 2.20.0
Jul 6 23:47:45.218371 ignition[898]: INFO : Stage: files
Jul 6 23:47:45.221505 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:45.221505 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 6 23:47:45.221505 ignition[898]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:47:45.221505 ignition[898]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:47:45.221505 ignition[898]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:47:45.231842 ignition[898]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:47:45.231842 ignition[898]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:47:45.231842 ignition[898]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:47:45.231842 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:47:45.231842 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 6 23:47:45.226405 unknown[898]: wrote ssh authorized keys file for user: core
Jul 6 23:47:45.313253 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:47:45.794212 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:47:45.795422 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:47:45.795422 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 6 23:47:46.506559 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 6 23:47:46.917980 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 6 23:47:46.919593 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:47:46.919593 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:47:46.919593 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:47:46.919593 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:47:46.919593 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:47:46.919593 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:47:46.932323 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:47:46.932323 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:47:46.932323 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:47:46.932323 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:47:46.932323 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:47:46.932323 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:47:46.932323 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:47:46.932323 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 6 23:47:47.561375 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 6 23:47:49.541463 ignition[898]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:47:49.541463 ignition[898]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 6 23:47:49.551025 ignition[898]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:47:49.551025 ignition[898]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:47:49.551025 ignition[898]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 6 23:47:49.551025 ignition[898]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:47:49.551025 ignition[898]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:47:49.551025 ignition[898]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:47:49.551025 ignition[898]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:47:49.551025 ignition[898]: INFO : files: files passed
Jul 6 23:47:49.551025 ignition[898]: INFO : Ignition finished successfully
Jul 6 23:47:49.549652 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:47:49.562354 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:47:49.568850 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:47:49.585003 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:47:49.586645 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:47:49.599636 initrd-setup-root-after-ignition[926]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:47:49.599636 initrd-setup-root-after-ignition[926]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:47:49.604599 initrd-setup-root-after-ignition[930]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:47:49.607052 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:47:49.609707 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:47:49.626444 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:47:49.657338 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:47:49.657582 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:47:49.660436 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:47:49.662444 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:47:49.664607 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:47:49.671452 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:47:49.692676 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:47:49.701398 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:47:49.724426 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:47:49.726738 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:47:49.729095 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:47:49.731067 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:47:49.731456 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:47:49.733671 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:47:49.734852 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:47:49.735810 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:47:49.736876 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:47:49.738087 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:47:49.739592 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:47:49.740843 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:47:49.742061 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:47:49.743221 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:47:49.744343 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:47:49.745323 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:47:49.745521 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:47:49.746611 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:47:49.747355 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:47:49.748605 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:47:49.748742 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:47:49.749912 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:47:49.750116 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:47:49.751372 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:47:49.751554 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:47:49.753092 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:47:49.753351 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:47:49.760338 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:47:49.765387 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:47:49.766513 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:47:49.766739 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:47:49.769402 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:47:49.769609 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:47:49.779467 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:47:49.779581 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:47:49.784261 ignition[950]: INFO : Ignition 2.20.0
Jul 6 23:47:49.785494 ignition[950]: INFO : Stage: umount
Jul 6 23:47:49.786504 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:47:49.786504 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 6 23:47:49.789203 ignition[950]: INFO : umount: umount passed
Jul 6 23:47:49.789203 ignition[950]: INFO : Ignition finished successfully
Jul 6 23:47:49.791144 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:47:49.791874 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:47:49.793507 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:47:49.793655 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:47:49.794206 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:47:49.794265 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:47:49.794766 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 6 23:47:49.794820 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 6 23:47:49.796294 systemd[1]: Stopped target network.target - Network.
Jul 6 23:47:49.796972 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:47:49.797027 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:47:49.797565 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:47:49.797996 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:47:49.804238 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:47:49.805529 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:47:49.806284 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:47:49.807085 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:47:49.807160 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:47:49.807639 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:47:49.807684 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:47:49.809285 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:47:49.809401 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:47:49.810436 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:47:49.810514 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:47:49.811759 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:47:49.813364 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:47:49.817806 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:47:49.818552 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:47:49.818687 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:47:49.820627 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:47:49.820760 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:47:49.827271 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:47:49.827452 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:47:49.831681 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 6 23:47:49.831955 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:47:49.832344 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:47:49.834400 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 6 23:47:49.835106 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:47:49.835347 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:47:49.844298 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:47:49.844835 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:47:49.844908 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:47:49.845541 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:47:49.845593 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:47:49.848821 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:47:49.848888 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:47:49.849588 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:47:49.849647 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:47:49.851334 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:47:49.853243 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:47:49.853313 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:47:49.860600 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:47:49.860789 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:47:49.862496 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:47:49.862591 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:47:49.863690 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:47:49.863726 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:47:49.864919 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:47:49.865010 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:47:49.866627 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:47:49.866676 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:47:49.867823 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:47:49.867921 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:47:49.874300 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:47:49.875217 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:47:49.875279 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:47:49.875927 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:47:49.875973 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:47:49.876528 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:47:49.876576 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:47:49.877101 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:47:49.877190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:47:49.879916 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 6 23:47:49.879981 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 6 23:47:49.880308 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:47:49.880434 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:47:49.885305 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:47:49.885410 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:47:49.886809 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:47:49.893520 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:47:49.902586 systemd[1]: Switching root.
Jul 6 23:47:49.934942 systemd-journald[184]: Journal stopped
Jul 6 23:47:52.355522 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:47:52.355624 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:47:52.355648 kernel: SELinux: policy capability open_perms=1
Jul 6 23:47:52.355661 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:47:52.355685 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:47:52.355698 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:47:52.355715 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:47:52.355728 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:47:52.355740 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:47:52.355759 kernel: audit: type=1403 audit(1751845670.985:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:47:52.355777 systemd[1]: Successfully loaded SELinux policy in 93.522ms.
Jul 6 23:47:52.355804 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.165ms.
Jul 6 23:47:52.355826 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:47:52.355841 systemd[1]: Detected virtualization kvm.
Jul 6 23:47:52.355863 systemd[1]: Detected architecture x86-64.
Jul 6 23:47:52.355895 systemd[1]: Detected first boot.
Jul 6 23:47:52.355909 systemd[1]: Hostname set to .
Jul 6 23:47:52.355928 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:47:52.355942 zram_generator::config[994]: No configuration found.
Jul 6 23:47:52.355964 kernel: Guest personality initialized and is inactive
Jul 6 23:47:52.355977 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 6 23:47:52.355989 kernel: Initialized host personality
Jul 6 23:47:52.356002 kernel: NET: Registered PF_VSOCK protocol family
Jul 6 23:47:52.356014 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:47:52.356029 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 6 23:47:52.356043 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:47:52.356056 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:47:52.356070 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:47:52.356091 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:47:52.356105 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:47:52.356135 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:47:52.356151 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:47:52.358578 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:47:52.358608 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:47:52.358623 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:47:52.358637 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:47:52.358651 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:47:52.358678 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:47:52.358692 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:47:52.358706 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:47:52.358721 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:47:52.358735 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:47:52.358750 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:47:52.358796 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:47:52.358812 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:47:52.358825 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:47:52.358839 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:47:52.358853 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:47:52.358867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:47:52.358881 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:47:52.358894 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:47:52.358907 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:47:52.358929 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:47:52.358943 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:47:52.358956 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 6 23:47:52.358970 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:47:52.358983 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:47:52.358997 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:47:52.359010 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:47:52.359024 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:47:52.359038 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:47:52.359058 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:47:52.359073 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:47:52.359086 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:47:52.359100 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:47:52.359114 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:47:52.359147 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:47:52.359178 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:47:52.359194 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:47:52.359217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:47:52.359231 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:47:52.359246 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:47:52.359260 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:47:52.359274 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:47:52.359287 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:47:52.359305 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:47:52.359318 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:47:52.359332 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:47:52.359353 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:47:52.359374 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:47:52.359388 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:47:52.359402 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:47:52.359416 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:47:52.359430 kernel: fuse: init (API version 7.39)
Jul 6 23:47:52.359448 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:47:52.359462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:47:52.359505 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:47:52.359544 kernel: loop: module loaded
Jul 6 23:47:52.359559 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:47:52.359573 kernel: ACPI: bus type drm_connector registered
Jul 6 23:47:52.359595 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 6 23:47:52.359611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:47:52.359624 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:47:52.359637 systemd[1]: Stopped verity-setup.service.
Jul 6 23:47:52.359650 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:47:52.359669 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:47:52.359682 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:47:52.359695 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:47:52.359713 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:47:52.359765 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:47:52.359780 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:47:52.359793 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:47:52.359805 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:47:52.359848 systemd-journald[1095]: Collecting audit messages is disabled.
Jul 6 23:47:52.359896 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:47:52.359911 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:47:52.359925 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:47:52.359938 systemd-journald[1095]: Journal started
Jul 6 23:47:52.359966 systemd-journald[1095]: Runtime Journal (/run/log/journal/6c27cf44336a44389e961b18e782ac33) is 8M, max 78.3M, 70.3M free.
Jul 6 23:47:51.914062 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:47:52.362285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:47:51.926778 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:47:51.927583 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:47:52.367178 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:47:52.368096 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:47:52.368440 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:47:52.369527 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:47:52.369894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:47:52.370952 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:47:52.371281 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:47:52.372302 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:47:52.372618 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:47:52.373593 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:47:52.374544 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:47:52.375502 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:47:52.376543 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 6 23:47:52.391828 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:47:52.401356 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:47:52.406307 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:47:52.407165 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:47:52.407313 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:47:52.413820 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 6 23:47:52.422955 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:47:52.426305 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:47:52.428648 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:47:52.433537 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:47:52.435639 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:47:52.437998 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:47:52.441441 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:47:52.442715 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:47:52.450390 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:47:52.465388 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:47:52.467039 systemd-journald[1095]: Time spent on flushing to /var/log/journal/6c27cf44336a44389e961b18e782ac33 is 112.272ms for 958 entries.
Jul 6 23:47:52.467039 systemd-journald[1095]: System Journal (/var/log/journal/6c27cf44336a44389e961b18e782ac33) is 8M, max 584.8M, 576.8M free.
Jul 6 23:47:52.649430 systemd-journald[1095]: Received client request to flush runtime journal.
Jul 6 23:47:52.649512 kernel: loop0: detected capacity change from 0 to 221472
Jul 6 23:47:52.649550 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:47:52.474526 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:47:52.479227 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:47:52.484389 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:47:52.485216 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:47:52.486622 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:47:52.506449 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:47:52.521567 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:47:52.522680 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:47:52.526684 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 6 23:47:52.539240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:47:52.576958 udevadm[1140]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 6 23:47:52.626185 systemd-tmpfiles[1135]: ACLs are not supported, ignoring.
Jul 6 23:47:52.626200 systemd-tmpfiles[1135]: ACLs are not supported, ignoring.
Jul 6 23:47:52.647014 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:47:52.653348 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:47:52.654459 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:47:52.664088 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 6 23:47:52.671162 kernel: loop1: detected capacity change from 0 to 8
Jul 6 23:47:52.711217 kernel: loop2: detected capacity change from 0 to 147912
Jul 6 23:47:52.730536 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:47:52.738410 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:47:52.759766 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Jul 6 23:47:52.759792 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Jul 6 23:47:52.767835 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:47:52.792159 kernel: loop3: detected capacity change from 0 to 138176
Jul 6 23:47:52.912163 kernel: loop4: detected capacity change from 0 to 221472
Jul 6 23:47:52.932814 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:47:53.005160 kernel: loop5: detected capacity change from 0 to 8
Jul 6 23:47:53.014300 kernel: loop6: detected capacity change from 0 to 147912
Jul 6 23:47:53.101143 kernel: loop7: detected capacity change from 0 to 138176
Jul 6 23:47:53.177532 (sd-merge)[1162]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jul 6 23:47:53.178634 (sd-merge)[1162]: Merged extensions into '/usr'.
Jul 6 23:47:53.190382 systemd[1]: Reload requested from client PID 1133 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:47:53.190459 systemd[1]: Reloading...
Jul 6 23:47:53.308183 zram_generator::config[1189]: No configuration found.
Jul 6 23:47:53.549513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:47:53.633648 systemd[1]: Reloading finished in 442 ms.
Jul 6 23:47:53.642166 ldconfig[1128]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:47:53.647356 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:47:53.648356 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:47:53.649202 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:47:53.659379 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:47:53.661244 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:47:53.663283 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:47:53.682321 systemd[1]: Reload requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:47:53.682344 systemd[1]: Reloading...
Jul 6 23:47:53.701395 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:47:53.702645 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:47:53.704047 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:47:53.706456 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jul 6 23:47:53.706531 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Jul 6 23:47:53.715724 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:47:53.715738 systemd-tmpfiles[1248]: Skipping /boot
Jul 6 23:47:53.731952 systemd-udevd[1249]: Using default interface naming scheme 'v255'.
Jul 6 23:47:53.737318 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:47:53.737335 systemd-tmpfiles[1248]: Skipping /boot
Jul 6 23:47:53.797159 zram_generator::config[1279]: No configuration found.
Jul 6 23:47:53.936151 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1312)
Jul 6 23:47:54.027342 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 6 23:47:54.035160 kernel: ACPI: button: Power Button [PWRF]
Jul 6 23:47:54.039160 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jul 6 23:47:54.057658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:47:54.082324 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 6 23:47:54.132071 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jul 6 23:47:54.132193 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jul 6 23:47:54.144157 kernel: Console: switching to colour dummy device 80x25
Jul 6 23:47:54.156440 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jul 6 23:47:54.156607 kernel: [drm] features: -context_init
Jul 6 23:47:54.171163 kernel: [drm] number of scanouts: 1
Jul 6 23:47:54.171318 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:47:54.180222 kernel: [drm] number of cap sets: 0
Jul 6 23:47:54.189814 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Jul 6 23:47:54.222696 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jul 6 23:47:54.222772 kernel: Console: switching to colour frame buffer device 160x50
Jul 6 23:47:54.227625 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jul 6 23:47:54.226005 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:47:54.226204 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:47:54.226800 systemd[1]: Reloading finished in 544 ms.
Jul 6 23:47:54.241849 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:47:54.255486 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:47:54.296987 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:47:54.299211 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:47:54.350366 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:47:54.355267 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 6 23:47:54.362300 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:47:54.362557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:47:54.365361 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:47:54.368293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:47:54.372983 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:47:54.377093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:47:54.383298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:47:54.384425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:47:54.386306 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:47:54.387220 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 6 23:47:54.391823 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:47:54.396210 lvm[1370]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:47:54.402366 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:47:54.412299 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:47:54.415494 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:47:54.423301 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:47:54.432761 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:47:54.432866 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:47:54.439442 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:47:54.441624 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:47:54.442819 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:47:54.452356 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:47:54.468455 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:47:54.469449 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:47:54.487919 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:47:54.499728 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:47:54.500569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:47:54.506231 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:47:54.506494 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 6 23:47:54.513627 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:47:54.522190 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:47:54.529093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:47:54.530013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:47:54.536289 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:47:54.541923 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:47:54.547248 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:47:54.565614 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:47:54.578338 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:47:54.585018 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:47:54.607428 augenrules[1421]: No rules Jul 6 23:47:54.625548 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:47:54.627092 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:47:54.631776 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:47:54.636987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:47:54.683561 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:47:54.684882 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 6 23:47:54.725783 systemd-networkd[1382]: lo: Link UP Jul 6 23:47:54.725794 systemd-networkd[1382]: lo: Gained carrier Jul 6 23:47:54.727417 systemd-networkd[1382]: Enumeration completed Jul 6 23:47:54.727522 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:47:54.731980 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:47:54.731993 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:47:54.732717 systemd-networkd[1382]: eth0: Link UP Jul 6 23:47:54.732728 systemd-networkd[1382]: eth0: Gained carrier Jul 6 23:47:54.732743 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:47:54.738451 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 6 23:47:54.749210 systemd-networkd[1382]: eth0: DHCPv4 address 172.24.4.123/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 6 23:47:54.750610 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:47:54.751106 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Jul 6 23:47:54.751446 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:47:54.756216 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:47:54.768405 systemd-resolved[1383]: Positive Trust Anchors: Jul 6 23:47:54.768801 systemd-resolved[1383]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:47:54.768907 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:47:54.770157 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 6 23:47:54.776936 systemd-resolved[1383]: Using system hostname 'ci-4230-2-1-3-a5860ac047.novalocal'. Jul 6 23:47:54.778966 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:47:54.780949 systemd[1]: Reached target network.target - Network. Jul 6 23:47:54.784962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:47:54.785593 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:47:54.788755 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:47:54.789436 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:47:54.792389 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:47:54.793086 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:47:54.793636 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
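Editor's note: systemd-networkd warns above that `eth0` was matched by the catch-all `zz-default.network` "based on potentially unpredictable interface name". A more robust approach is to match on a stable attribute such as the MAC address. A hypothetical pinned configuration (the MAC below is a placeholder, not taken from this log; DHCPv4 mirrors the lease shown in the log):

```ini
# /etc/systemd/network/10-eth0.network — hypothetical; replace the
# placeholder MACAddress with the NIC's real address before use.
[Match]
MACAddress=52:54:00:00:00:01

[Network]
DHCP=ipv4
```

Matching by MAC (or by `Path=`/`Property=`) keeps the configuration attached to the same NIC even if the kernel's enumeration order changes between boots.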
Jul 6 23:47:54.794311 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:47:54.794372 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:47:54.796059 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:47:54.801232 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:47:54.807599 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:47:54.813356 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 6 23:47:54.817789 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 6 23:47:54.820191 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 6 23:47:54.825239 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:47:54.828001 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 6 23:47:54.831029 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:47:54.833365 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:47:54.835607 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:47:54.838025 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:47:54.838165 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:47:54.848285 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:47:54.855907 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 6 23:47:54.862354 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:47:54.868774 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jul 6 23:47:54.878304 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:47:54.879028 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:47:54.880794 jq[1444]: false Jul 6 23:47:54.888403 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:47:54.894352 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:47:54.905601 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:47:54.917334 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:47:54.928872 extend-filesystems[1446]: Found loop4 Jul 6 23:47:54.930928 extend-filesystems[1446]: Found loop5 Jul 6 23:47:54.930928 extend-filesystems[1446]: Found loop6 Jul 6 23:47:54.930928 extend-filesystems[1446]: Found loop7 Jul 6 23:47:54.930928 extend-filesystems[1446]: Found vda Jul 6 23:47:54.930928 extend-filesystems[1446]: Found vda1 Jul 6 23:47:54.930928 extend-filesystems[1446]: Found vda2 Jul 6 23:47:54.930928 extend-filesystems[1446]: Found vda3 Jul 6 23:47:54.930928 extend-filesystems[1446]: Found usr Jul 6 23:47:54.969811 extend-filesystems[1446]: Found vda4 Jul 6 23:47:54.969811 extend-filesystems[1446]: Found vda6 Jul 6 23:47:54.969811 extend-filesystems[1446]: Found vda7 Jul 6 23:47:54.969811 extend-filesystems[1446]: Found vda9 Jul 6 23:47:54.969811 extend-filesystems[1446]: Checking size of /dev/vda9 Jul 6 23:47:54.969811 extend-filesystems[1446]: Resized partition /dev/vda9 Jul 6 23:47:55.045242 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jul 6 23:47:55.045294 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jul 6 23:47:54.938442 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jul 6 23:47:54.972830 dbus-daemon[1442]: [system] SELinux support is enabled Jul 6 23:47:55.045687 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:47:55.045687 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:47:55.045687 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:47:55.045687 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jul 6 23:47:54.953306 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:47:55.075343 extend-filesystems[1446]: Resized filesystem in /dev/vda9 Jul 6 23:47:55.075729 update_engine[1463]: I20250706 23:47:55.042683 1463 main.cc:92] Flatcar Update Engine starting Jul 6 23:47:55.075729 update_engine[1463]: I20250706 23:47:55.062565 1463 update_check_scheduler.cc:74] Next update check in 11m7s Jul 6 23:47:54.954057 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:47:55.093113 jq[1468]: true Jul 6 23:47:54.962516 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:47:54.985397 systemd-timesyncd[1384]: Contacted time server 69.89.207.199:123 (0.flatcar.pool.ntp.org). Jul 6 23:47:55.093573 jq[1472]: true Jul 6 23:47:54.985456 systemd-timesyncd[1384]: Initial clock synchronization to Sun 2025-07-06 23:47:55.028125 UTC. Jul 6 23:47:54.998271 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:47:55.012948 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 6 23:47:55.033538 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:47:55.034175 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jul 6 23:47:55.034490 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:47:55.034729 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:47:55.046716 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:47:55.047211 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:47:55.070960 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:47:55.071198 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:47:55.119936 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:47:55.127038 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:47:55.134896 tar[1471]: linux-amd64/helm Jul 6 23:47:55.128074 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:47:55.128100 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:47:55.128623 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:47:55.128642 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:47:55.142942 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:47:55.153763 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:47:55.168421 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1299) Jul 6 23:47:55.219685 systemd-logind[1457]: New seat seat0. 
Jul 6 23:47:55.231875 bash[1501]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:47:55.232830 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:47:55.249022 systemd-logind[1457]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:47:55.249051 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:47:55.260929 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:47:55.282495 systemd[1]: Starting sshkeys.service... Jul 6 23:47:55.320309 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 6 23:47:55.339778 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 6 23:47:55.474042 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:47:55.616997 containerd[1473]: time="2025-07-06T23:47:55.616837008Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 6 23:47:55.676243 containerd[1473]: time="2025-07-06T23:47:55.675295835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.677345481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.677394704Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.677422953Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.677627441Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.677651842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.677738074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.677757803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.677994055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.678013604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.678031325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:47:55.678170 containerd[1473]: time="2025-07-06T23:47:55.678044645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:47:55.678627 containerd[1473]: time="2025-07-06T23:47:55.678146598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:47:55.679681 containerd[1473]: time="2025-07-06T23:47:55.679148085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:47:55.679681 containerd[1473]: time="2025-07-06T23:47:55.679367009Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:47:55.679681 containerd[1473]: time="2025-07-06T23:47:55.679385915Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:47:55.679681 containerd[1473]: time="2025-07-06T23:47:55.679481137Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:47:55.679681 containerd[1473]: time="2025-07-06T23:47:55.679537884Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:47:55.689019 containerd[1473]: time="2025-07-06T23:47:55.688992015Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:47:55.689162 containerd[1473]: time="2025-07-06T23:47:55.689144689Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:47:55.689362 containerd[1473]: time="2025-07-06T23:47:55.689336660Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:47:55.689528 containerd[1473]: time="2025-07-06T23:47:55.689510147Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:47:55.689637 containerd[1473]: time="2025-07-06T23:47:55.689612050Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 6 23:47:55.689874 containerd[1473]: time="2025-07-06T23:47:55.689847218Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:47:55.690325 containerd[1473]: time="2025-07-06T23:47:55.690303289Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690470919Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690494506Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690511353Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690526873Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690543831Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690558286Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690575294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690591989Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690605902Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690620238Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690633789Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690657577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690672866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.690976 containerd[1473]: time="2025-07-06T23:47:55.690686639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690702270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690724350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690743126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690759117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690774156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690789185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690807608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690821813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690836589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690849880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690869589Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690892503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690909159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.691345 containerd[1473]: time="2025-07-06T23:47:55.690922590Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:47:55.692733 containerd[1473]: time="2025-07-06T23:47:55.691683625Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:47:55.692733 containerd[1473]: time="2025-07-06T23:47:55.691712064Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:47:55.692733 containerd[1473]: time="2025-07-06T23:47:55.691786442Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:47:55.692733 containerd[1473]: time="2025-07-06T23:47:55.691807708Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:47:55.692733 containerd[1473]: time="2025-07-06T23:47:55.691821270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:47:55.692733 containerd[1473]: time="2025-07-06T23:47:55.691835103Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:47:55.692733 containerd[1473]: time="2025-07-06T23:47:55.691846665Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:47:55.692733 containerd[1473]: time="2025-07-06T23:47:55.691863934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 6 23:47:55.692942 containerd[1473]: time="2025-07-06T23:47:55.692198513Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:47:55.692942 containerd[1473]: time="2025-07-06T23:47:55.692258535Z" level=info msg="Connect containerd service" Jul 6 23:47:55.692942 containerd[1473]: time="2025-07-06T23:47:55.692285688Z" level=info msg="using legacy CRI server" Jul 6 23:47:55.692942 containerd[1473]: time="2025-07-06T23:47:55.692293002Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:47:55.692942 containerd[1473]: time="2025-07-06T23:47:55.692423022Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:47:55.693596 containerd[1473]: time="2025-07-06T23:47:55.693571968Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:47:55.694731 containerd[1473]: time="2025-07-06T23:47:55.693904578Z" level=info msg="Start subscribing containerd event" Jul 6 23:47:55.694827 containerd[1473]: time="2025-07-06T23:47:55.694811545Z" level=info msg="Start recovering state" Jul 6 23:47:55.694939 containerd[1473]: time="2025-07-06T23:47:55.694923242Z" level=info msg="Start event monitor" Jul 6 23:47:55.695004 containerd[1473]: time="2025-07-06T23:47:55.694991120Z" level=info msg="Start snapshots 
syncer" Jul 6 23:47:55.695059 containerd[1473]: time="2025-07-06T23:47:55.695046422Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:47:55.695111 containerd[1473]: time="2025-07-06T23:47:55.695098990Z" level=info msg="Start streaming server" Jul 6 23:47:55.695332 containerd[1473]: time="2025-07-06T23:47:55.694652916Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:47:55.696147 containerd[1473]: time="2025-07-06T23:47:55.695509132Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:47:55.696147 containerd[1473]: time="2025-07-06T23:47:55.695585639Z" level=info msg="containerd successfully booted in 0.079901s" Jul 6 23:47:55.695678 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:47:55.774201 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:47:55.820399 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:47:55.834482 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:47:55.838994 systemd[1]: Started sshd@0-172.24.4.123:22-172.24.4.1:34966.service - OpenSSH per-connection server daemon (172.24.4.1:34966). Jul 6 23:47:55.854843 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:47:55.856210 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:47:55.867564 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:47:55.895644 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:47:55.907877 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:47:55.921809 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:47:55.925461 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 6 23:47:55.937639 tar[1471]: linux-amd64/LICENSE Jul 6 23:47:55.937750 tar[1471]: linux-amd64/README.md Jul 6 23:47:55.948100 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:47:56.267756 systemd-networkd[1382]: eth0: Gained IPv6LL Jul 6 23:47:56.272170 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:47:56.278994 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:47:56.292730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:47:56.311013 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:47:56.377787 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:47:57.189322 sshd[1529]: Accepted publickey for core from 172.24.4.1 port 34966 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:47:57.193254 sshd-session[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:47:57.214809 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:47:57.229805 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:47:57.258251 systemd-logind[1457]: New session 1 of user core. Jul 6 23:47:57.274732 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:47:57.286587 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:47:57.302161 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:47:57.305138 systemd-logind[1457]: New session c1 of user core. Jul 6 23:47:57.474036 systemd[1557]: Queued start job for default target default.target. Jul 6 23:47:57.482195 systemd[1557]: Created slice app.slice - User Application Slice. Jul 6 23:47:57.482223 systemd[1557]: Reached target paths.target - Paths. 
Jul 6 23:47:57.482263 systemd[1557]: Reached target timers.target - Timers. Jul 6 23:47:57.486296 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:47:57.501628 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:47:57.501753 systemd[1557]: Reached target sockets.target - Sockets. Jul 6 23:47:57.501803 systemd[1557]: Reached target basic.target - Basic System. Jul 6 23:47:57.501843 systemd[1557]: Reached target default.target - Main User Target. Jul 6 23:47:57.501875 systemd[1557]: Startup finished in 187ms. Jul 6 23:47:57.502440 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:47:57.512413 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:47:57.872629 systemd[1]: Started sshd@1-172.24.4.123:22-172.24.4.1:54070.service - OpenSSH per-connection server daemon (172.24.4.1:54070). Jul 6 23:47:58.273450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:47:58.285940 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:47:59.075792 sshd[1568]: Accepted publickey for core from 172.24.4.1 port 54070 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:47:59.077881 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:47:59.090823 systemd-logind[1457]: New session 2 of user core. Jul 6 23:47:59.102310 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 6 23:47:59.685076 kubelet[1576]: E0706 23:47:59.684965 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:47:59.689038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:47:59.689457 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:47:59.690440 systemd[1]: kubelet.service: Consumed 2.218s CPU time, 266.2M memory peak. Jul 6 23:47:59.718160 sshd[1582]: Connection closed by 172.24.4.1 port 54070 Jul 6 23:47:59.718855 sshd-session[1568]: pam_unix(sshd:session): session closed for user core Jul 6 23:47:59.736251 systemd[1]: sshd@1-172.24.4.123:22-172.24.4.1:54070.service: Deactivated successfully. Jul 6 23:47:59.740017 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:47:59.742261 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:47:59.751999 systemd[1]: Started sshd@2-172.24.4.123:22-172.24.4.1:54072.service - OpenSSH per-connection server daemon (172.24.4.1:54072). Jul 6 23:47:59.760007 systemd-logind[1457]: Removed session 2. Jul 6 23:48:00.962259 sshd[1589]: Accepted publickey for core from 172.24.4.1 port 54072 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:48:00.969071 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:00.995660 login[1537]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 6 23:48:00.999791 login[1538]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 6 23:48:01.003918 systemd-logind[1457]: New session 3 of user core. Jul 6 23:48:01.015330 systemd[1]: Started session-3.scope - Session 3 of User core. 
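The kubelet exits here because `/var/lib/kubelet/config.yaml` does not exist yet; on kubeadm-provisioned nodes that file is written during `kubeadm init`/`kubeadm join`, so these failures are expected until the node is bootstrapped. For illustration, a minimal sketch of what such a file looks like (field values hypothetical, not read from this host):

```yaml
# /var/lib/kubelet/config.yaml -- normally generated by kubeadm, not hand-written.
# Values below are illustrative assumptions.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10
```

Once this file exists, the same `kubelet.service` start that fails above would proceed past config loading.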
Jul 6 23:48:01.020887 systemd-logind[1457]: New session 5 of user core. Jul 6 23:48:01.024345 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:48:01.028720 systemd-logind[1457]: New session 4 of user core. Jul 6 23:48:01.035596 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:48:01.589318 sshd[1598]: Connection closed by 172.24.4.1 port 54072 Jul 6 23:48:01.589739 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:01.597438 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:48:01.599404 systemd[1]: sshd@2-172.24.4.123:22-172.24.4.1:54072.service: Deactivated successfully. Jul 6 23:48:01.604555 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:48:01.607397 systemd-logind[1457]: Removed session 3. Jul 6 23:48:01.977646 coreos-metadata[1441]: Jul 06 23:48:01.977 WARN failed to locate config-drive, using the metadata service API instead Jul 6 23:48:02.068386 coreos-metadata[1441]: Jul 06 23:48:02.068 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 6 23:48:02.265561 coreos-metadata[1441]: Jul 06 23:48:02.265 INFO Fetch successful Jul 6 23:48:02.266218 coreos-metadata[1441]: Jul 06 23:48:02.266 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 6 23:48:02.282328 coreos-metadata[1441]: Jul 06 23:48:02.282 INFO Fetch successful Jul 6 23:48:02.282328 coreos-metadata[1441]: Jul 06 23:48:02.282 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 6 23:48:02.295884 coreos-metadata[1441]: Jul 06 23:48:02.295 INFO Fetch successful Jul 6 23:48:02.295884 coreos-metadata[1441]: Jul 06 23:48:02.295 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 6 23:48:02.310310 coreos-metadata[1441]: Jul 06 23:48:02.310 INFO Fetch successful Jul 6 23:48:02.310310 coreos-metadata[1441]: Jul 06 23:48:02.310 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jul 6 23:48:02.326267 coreos-metadata[1441]: Jul 06 23:48:02.326 INFO Fetch successful Jul 6 23:48:02.326267 coreos-metadata[1441]: Jul 06 23:48:02.326 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 6 23:48:02.340723 coreos-metadata[1441]: Jul 06 23:48:02.340 INFO Fetch successful Jul 6 23:48:02.408461 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 6 23:48:02.410100 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:48:02.470211 coreos-metadata[1504]: Jul 06 23:48:02.469 WARN failed to locate config-drive, using the metadata service API instead Jul 6 23:48:02.532044 coreos-metadata[1504]: Jul 06 23:48:02.531 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 6 23:48:02.547534 coreos-metadata[1504]: Jul 06 23:48:02.547 INFO Fetch successful Jul 6 23:48:02.547534 coreos-metadata[1504]: Jul 06 23:48:02.547 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 6 23:48:02.561076 coreos-metadata[1504]: Jul 06 23:48:02.560 INFO Fetch successful Jul 6 23:48:02.567835 unknown[1504]: wrote ssh authorized keys file for user: core Jul 6 23:48:02.620753 update-ssh-keys[1634]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:48:02.622588 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 6 23:48:02.626926 systemd[1]: Finished sshkeys.service. Jul 6 23:48:02.633688 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:48:02.634381 systemd[1]: Startup finished in 1.198s (kernel) + 17.140s (initrd) + 11.742s (userspace) = 30.081s. Jul 6 23:48:09.745591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:48:09.758504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
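The `Startup finished` entry breaks boot time into kernel, initrd, and userspace phases. The printed total (30.081s) can differ from the sum of the rounded per-phase figures by a millisecond, since systemd sums the unrounded microsecond values before formatting. A quick check of the rounded figures:

```python
# Phase durations exactly as printed in the log, in seconds (already rounded to ms).
kernel = 1.198
initrd = 17.140
userspace = 11.742

total = kernel + initrd + userspace
print(f"{total:.3f}s")  # 30.080s from the rounded parts; the log prints 30.081s
```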
Jul 6 23:48:10.104387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:48:10.117604 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:48:10.248521 kubelet[1645]: E0706 23:48:10.248297 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:48:10.256713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:48:10.257042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:48:10.258041 systemd[1]: kubelet.service: Consumed 255ms CPU time, 110.9M memory peak. Jul 6 23:48:11.631759 systemd[1]: Started sshd@3-172.24.4.123:22-172.24.4.1:42104.service - OpenSSH per-connection server daemon (172.24.4.1:42104). Jul 6 23:48:13.215527 sshd[1653]: Accepted publickey for core from 172.24.4.1 port 42104 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:48:13.219420 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:13.236287 systemd-logind[1457]: New session 6 of user core. Jul 6 23:48:13.244469 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:48:13.796177 sshd[1655]: Connection closed by 172.24.4.1 port 42104 Jul 6 23:48:13.796574 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:13.815639 systemd[1]: sshd@3-172.24.4.123:22-172.24.4.1:42104.service: Deactivated successfully. Jul 6 23:48:13.819719 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:48:13.821855 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit. 
Jul 6 23:48:13.831762 systemd[1]: Started sshd@4-172.24.4.123:22-172.24.4.1:39262.service - OpenSSH per-connection server daemon (172.24.4.1:39262). Jul 6 23:48:13.835787 systemd-logind[1457]: Removed session 6. Jul 6 23:48:15.147767 sshd[1660]: Accepted publickey for core from 172.24.4.1 port 39262 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:48:15.151923 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:15.166257 systemd-logind[1457]: New session 7 of user core. Jul 6 23:48:15.178562 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:48:15.874837 sshd[1663]: Connection closed by 172.24.4.1 port 39262 Jul 6 23:48:15.876018 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:15.893309 systemd[1]: sshd@4-172.24.4.123:22-172.24.4.1:39262.service: Deactivated successfully. Jul 6 23:48:15.897598 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:48:15.901343 systemd-logind[1457]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:48:15.914821 systemd[1]: Started sshd@5-172.24.4.123:22-172.24.4.1:39272.service - OpenSSH per-connection server daemon (172.24.4.1:39272). Jul 6 23:48:15.918717 systemd-logind[1457]: Removed session 7. Jul 6 23:48:17.044260 sshd[1668]: Accepted publickey for core from 172.24.4.1 port 39272 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:48:17.047299 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:17.060236 systemd-logind[1457]: New session 8 of user core. Jul 6 23:48:17.068474 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 6 23:48:17.821235 sshd[1671]: Connection closed by 172.24.4.1 port 39272 Jul 6 23:48:17.822069 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:17.838585 systemd[1]: sshd@5-172.24.4.123:22-172.24.4.1:39272.service: Deactivated successfully. Jul 6 23:48:17.842581 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:48:17.847559 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:48:17.854899 systemd[1]: Started sshd@6-172.24.4.123:22-172.24.4.1:39282.service - OpenSSH per-connection server daemon (172.24.4.1:39282). Jul 6 23:48:17.859119 systemd-logind[1457]: Removed session 8. Jul 6 23:48:18.990929 sshd[1676]: Accepted publickey for core from 172.24.4.1 port 39282 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:48:18.994285 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:19.008568 systemd-logind[1457]: New session 9 of user core. Jul 6 23:48:19.019622 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:48:19.424240 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:48:19.425851 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:48:19.482191 sudo[1680]: pam_unix(sudo:session): session closed for user root Jul 6 23:48:19.644186 sshd[1679]: Connection closed by 172.24.4.1 port 39282 Jul 6 23:48:19.647981 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:19.677431 systemd[1]: sshd@6-172.24.4.123:22-172.24.4.1:39282.service: Deactivated successfully. Jul 6 23:48:19.684110 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:48:19.692855 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:48:19.701990 systemd[1]: Started sshd@7-172.24.4.123:22-172.24.4.1:39284.service - OpenSSH per-connection server daemon (172.24.4.1:39284). 
Jul 6 23:48:19.706291 systemd-logind[1457]: Removed session 9. Jul 6 23:48:20.494026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:48:20.511588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:48:20.857593 sshd[1685]: Accepted publickey for core from 172.24.4.1 port 39284 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:48:20.859118 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:20.867388 systemd-logind[1457]: New session 10 of user core. Jul 6 23:48:20.881380 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:48:20.919306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:48:20.931767 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:48:21.155899 kubelet[1696]: E0706 23:48:21.155547 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:48:21.166599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:48:21.167189 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:48:21.168554 systemd[1]: kubelet.service: Consumed 476ms CPU time, 108.7M memory peak. 
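Each kubelet crash above is followed by a `Scheduled restart job` entry roughly ten seconds later (failure at 23:47:59 → restart at 23:48:09, failure at 23:48:10 → restart at 23:48:20), consistent with the usual kubeadm-style unit settings of `Restart=always` and `RestartSec=10`. A sketch of such a drop-in (path and values assumed, not read from this host):

```ini
# /etc/systemd/system/kubelet.service.d/10-restart.conf (hypothetical path)
[Service]
Restart=always
RestartSec=10
```

With these settings systemd keeps retrying indefinitely, which is why the restart counter climbs (1, 2, 3, ...) until the missing config appears.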
Jul 6 23:48:21.301753 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:48:21.303709 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:48:21.316653 sudo[1706]: pam_unix(sudo:session): session closed for user root Jul 6 23:48:21.335679 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:48:21.336591 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:48:21.381890 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:48:21.480874 augenrules[1728]: No rules Jul 6 23:48:21.480984 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:48:21.481356 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:48:21.483607 sudo[1705]: pam_unix(sudo:session): session closed for user root Jul 6 23:48:21.736298 sshd[1694]: Connection closed by 172.24.4.1 port 39284 Jul 6 23:48:21.738397 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:21.756230 systemd[1]: sshd@7-172.24.4.123:22-172.24.4.1:39284.service: Deactivated successfully. Jul 6 23:48:21.760938 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:48:21.763590 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:48:21.773819 systemd[1]: Started sshd@8-172.24.4.123:22-172.24.4.1:39290.service - OpenSSH per-connection server daemon (172.24.4.1:39290). Jul 6 23:48:21.778046 systemd-logind[1457]: Removed session 10. Jul 6 23:48:22.775449 sshd[1736]: Accepted publickey for core from 172.24.4.1 port 39290 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:48:22.779426 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:22.799068 systemd-logind[1457]: New session 11 of user core. 
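The sudo session above deletes `/etc/audit/rules.d/80-selinux.rules` and `99-default.rules` and restarts `audit-rules`, after which augenrules correctly reports `No rules`. Files in that directory use auditctl rule syntax; purely as a hypothetical example of what one such rule looks like:

```
# /etc/audit/rules.d/70-example.rules (hypothetical)
# Watch a directory for writes and attribute changes, tagged with a search key.
-w /etc/kubernetes/ -p wa -k kube-config
```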
Jul 6 23:48:22.810559 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:48:23.252518 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:48:23.253322 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:48:24.314789 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:48:24.315061 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:48:25.085021 dockerd[1757]: time="2025-07-06T23:48:25.084543386Z" level=info msg="Starting up" Jul 6 23:48:25.366830 dockerd[1757]: time="2025-07-06T23:48:25.366529794Z" level=info msg="Loading containers: start." Jul 6 23:48:25.693388 kernel: Initializing XFRM netlink socket Jul 6 23:48:25.843007 systemd-networkd[1382]: docker0: Link UP Jul 6 23:48:25.876535 dockerd[1757]: time="2025-07-06T23:48:25.875406953Z" level=info msg="Loading containers: done." Jul 6 23:48:25.895570 dockerd[1757]: time="2025-07-06T23:48:25.895492862Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:48:25.895778 dockerd[1757]: time="2025-07-06T23:48:25.895626224Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 6 23:48:25.895778 dockerd[1757]: time="2025-07-06T23:48:25.895765057Z" level=info msg="Daemon has completed initialization" Jul 6 23:48:25.954078 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jul 6 23:48:25.958070 dockerd[1757]: time="2025-07-06T23:48:25.954444797Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:48:27.779877 containerd[1473]: time="2025-07-06T23:48:27.779222649Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:48:28.614953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910148518.mount: Deactivated successfully. Jul 6 23:48:30.439557 containerd[1473]: time="2025-07-06T23:48:30.438927461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:30.445166 containerd[1473]: time="2025-07-06T23:48:30.441903934Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077752" Jul 6 23:48:30.445166 containerd[1473]: time="2025-07-06T23:48:30.443580175Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:30.450364 containerd[1473]: time="2025-07-06T23:48:30.450302107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:30.453711 containerd[1473]: time="2025-07-06T23:48:30.453623123Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 2.671667866s" Jul 6 23:48:30.453886 containerd[1473]: time="2025-07-06T23:48:30.453725233Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jul 6 23:48:30.460524 containerd[1473]: time="2025-07-06T23:48:30.458303286Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:48:31.251325 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 6 23:48:31.285262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:48:31.602494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:48:31.611757 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:48:32.060766 kubelet[2009]: E0706 23:48:32.060630 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:48:32.064905 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:48:32.065076 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:48:32.065877 systemd[1]: kubelet.service: Consumed 441ms CPU time, 108.6M memory peak.
Jul 6 23:48:32.789050 containerd[1473]: time="2025-07-06T23:48:32.788997882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:32.795903 containerd[1473]: time="2025-07-06T23:48:32.795440175Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713302" Jul 6 23:48:32.797162 containerd[1473]: time="2025-07-06T23:48:32.796885049Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:32.803151 containerd[1473]: time="2025-07-06T23:48:32.802264548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:32.804440 containerd[1473]: time="2025-07-06T23:48:32.804337063Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 2.345890068s" Jul 6 23:48:32.804440 containerd[1473]: time="2025-07-06T23:48:32.804435101Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 6 23:48:32.807331 containerd[1473]: time="2025-07-06T23:48:32.807286969Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:48:34.834355 containerd[1473]: time="2025-07-06T23:48:34.834284229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:34.836158 containerd[1473]: time="2025-07-06T23:48:34.836068620Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783679" Jul 6 23:48:34.837734 containerd[1473]: time="2025-07-06T23:48:34.837686453Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:34.841388 containerd[1473]: time="2025-07-06T23:48:34.841320552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:34.843630 containerd[1473]: time="2025-07-06T23:48:34.842676988Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 2.03535754s" Jul 6 23:48:34.843630 containerd[1473]: time="2025-07-06T23:48:34.842732264Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 6 23:48:34.844086 containerd[1473]: time="2025-07-06T23:48:34.844059450Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:48:36.255733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324399632.mount: Deactivated successfully.
Jul 6 23:48:36.797171 containerd[1473]: time="2025-07-06T23:48:36.795776946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:36.797171 containerd[1473]: time="2025-07-06T23:48:36.796982345Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383951" Jul 6 23:48:36.798939 containerd[1473]: time="2025-07-06T23:48:36.798866756Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:36.803427 containerd[1473]: time="2025-07-06T23:48:36.803099887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:36.804031 containerd[1473]: time="2025-07-06T23:48:36.803993512Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 1.959743677s" Jul 6 23:48:36.804137 containerd[1473]: time="2025-07-06T23:48:36.804099592Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 6 23:48:36.806115 containerd[1473]: time="2025-07-06T23:48:36.805697091Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:48:37.463110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3953915227.mount: Deactivated successfully. 
Jul 6 23:48:39.258164 containerd[1473]: time="2025-07-06T23:48:39.256902067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:39.259553 containerd[1473]: time="2025-07-06T23:48:39.259485025Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 6 23:48:39.261290 containerd[1473]: time="2025-07-06T23:48:39.261217815Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:39.264562 containerd[1473]: time="2025-07-06T23:48:39.264515575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:39.266046 containerd[1473]: time="2025-07-06T23:48:39.265892743Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.45962757s" Jul 6 23:48:39.266046 containerd[1473]: time="2025-07-06T23:48:39.265933576Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:48:39.266872 containerd[1473]: time="2025-07-06T23:48:39.266697428Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:48:39.831645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536142369.mount: Deactivated successfully. 
Jul 6 23:48:39.844608 containerd[1473]: time="2025-07-06T23:48:39.844518865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:39.846919 containerd[1473]: time="2025-07-06T23:48:39.846637159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 6 23:48:39.848302 containerd[1473]: time="2025-07-06T23:48:39.848085612Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:39.856777 containerd[1473]: time="2025-07-06T23:48:39.856618938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:39.859920 containerd[1473]: time="2025-07-06T23:48:39.858870843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.132022ms" Jul 6 23:48:39.859920 containerd[1473]: time="2025-07-06T23:48:39.858943691Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:48:39.860657 containerd[1473]: time="2025-07-06T23:48:39.860494602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:48:40.525108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2916253956.mount: Deactivated successfully. Jul 6 23:48:40.567615 update_engine[1463]: I20250706 23:48:40.567158 1463 update_attempter.cc:509] Updating boot flags... 
Jul 6 23:48:40.673690 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2108)
Jul 6 23:48:40.790175 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2102)
Jul 6 23:48:42.243418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 6 23:48:42.252433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:48:42.694233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:48:42.699075 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:48:42.770769 kubelet[2163]: E0706 23:48:42.769392 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:48:42.771595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:48:42.771751 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:48:42.772105 systemd[1]: kubelet.service: Consumed 462ms CPU time, 112.2M memory peak.
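Entries in this journal come from several sources (kernel, systemd, containerd, kubelet, update_engine), most in the shape "Jul 6 HH:MM:SS.ffffff unit[pid]: message". A rough way to split such lines for inspection, assuming only the layout visible in this excerpt ("kernel:" lines carry no [pid] and are skipped by this pattern):

```python
import re

# Pattern inferred from the log excerpt above, not journald's formal grammar.
LINE = re.compile(
    r"^(?P<month>\w+) +(?P<day>\d+) (?P<time>[\d:.]+) "
    r"(?P<unit>[^\[ ]+)\[(?P<pid>\d+)\]: (?P<msg>.*)$"
)

def parse(line):
    """Return a dict of fields, or None for lines without a [pid] tag."""
    m = LINE.match(line)
    return m.groupdict() if m else None

rec = parse("Jul 6 23:48:42.771595 systemd[1]: kubelet.service: Failed with result 'exit-code'.")
print(rec["unit"], rec["pid"])  # systemd 1
```

The `(kubelet)[2163]` prefix also matches, since the unit field simply runs up to the first bracket.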
Jul 6 23:48:43.596732 containerd[1473]: time="2025-07-06T23:48:43.596514439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:43.602694 containerd[1473]: time="2025-07-06T23:48:43.602535376Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021"
Jul 6 23:48:43.606828 containerd[1473]: time="2025-07-06T23:48:43.606691783Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:43.619925 containerd[1473]: time="2025-07-06T23:48:43.619384439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:48:43.623813 containerd[1473]: time="2025-07-06T23:48:43.622702412Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.762125915s"
Jul 6 23:48:43.623813 containerd[1473]: time="2025-07-06T23:48:43.622776078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jul 6 23:48:49.561672 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:48:49.562168 systemd[1]: kubelet.service: Consumed 462ms CPU time, 112.2M memory peak.
Jul 6 23:48:49.576401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:48:49.610110 systemd[1]: Reload requested from client PID 2200 ('systemctl') (unit session-11.scope)...
Jul 6 23:48:49.610208 systemd[1]: Reloading...
Jul 6 23:48:49.759331 zram_generator::config[2246]: No configuration found.
Jul 6 23:48:49.928368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:48:50.064683 systemd[1]: Reloading finished in 454 ms.
Jul 6 23:48:50.507029 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:48:50.507304 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:48:50.509909 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:48:50.510023 systemd[1]: kubelet.service: Consumed 232ms CPU time, 92.7M memory peak.
Jul 6 23:48:50.523845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:48:51.101207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:48:51.114852 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:48:51.217830 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:48:51.217830 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:48:51.217830 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:48:51.219609 kubelet[2309]: I0706 23:48:51.217943 2309 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:48:51.720161 kubelet[2309]: I0706 23:48:51.719973 2309 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 6 23:48:51.720161 kubelet[2309]: I0706 23:48:51.720076 2309 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:48:51.723145 kubelet[2309]: I0706 23:48:51.721573 2309 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 6 23:48:51.775773 kubelet[2309]: I0706 23:48:51.775695 2309 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:48:51.776704 kubelet[2309]: E0706 23:48:51.776637 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:48:51.796927 kubelet[2309]: E0706 23:48:51.796831 2309 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 6 23:48:51.796927 kubelet[2309]: I0706 23:48:51.796911 2309 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 6 23:48:51.812162 kubelet[2309]: I0706 23:48:51.812096 2309 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:48:51.812976 kubelet[2309]: I0706 23:48:51.812910 2309 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 6 23:48:51.813708 kubelet[2309]: I0706 23:48:51.813607 2309 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:48:51.814511 kubelet[2309]: I0706 23:48:51.813694 2309 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-1-3-a5860ac047.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:48:51.815114 kubelet[2309]: I0706 23:48:51.814699 2309 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:48:51.815114 kubelet[2309]: I0706 23:48:51.814745 2309 container_manager_linux.go:300] "Creating device plugin manager"
Jul 6 23:48:51.815545 kubelet[2309]: I0706 23:48:51.815478 2309 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:48:51.822557 kubelet[2309]: I0706 23:48:51.821784 2309 kubelet.go:408] "Attempting to sync node with API server"
Jul 6 23:48:51.822557 kubelet[2309]: I0706 23:48:51.821929 2309 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:48:51.822557 kubelet[2309]: I0706 23:48:51.822244 2309 kubelet.go:314] "Adding apiserver pod source"
Jul 6 23:48:51.822557 kubelet[2309]: I0706 23:48:51.822495 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:48:51.827869 kubelet[2309]: W0706 23:48:51.827270 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-3-a5860ac047.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.123:6443: connect: connection refused
Jul 6 23:48:51.827869 kubelet[2309]: E0706 23:48:51.827495 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-3-a5860ac047.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:48:51.830883 kubelet[2309]: W0706 23:48:51.830795 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.123:6443: connect: connection refused
Jul 6 23:48:51.832189 kubelet[2309]: E0706 23:48:51.831081 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:48:51.832189 kubelet[2309]: I0706 23:48:51.831409 2309 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 6 23:48:51.833218 kubelet[2309]: I0706 23:48:51.833161 2309 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:48:51.835023 kubelet[2309]: W0706 23:48:51.834980 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:48:51.842210 kubelet[2309]: I0706 23:48:51.842176 2309 server.go:1274] "Started kubelet"
Jul 6 23:48:51.848774 kubelet[2309]: I0706 23:48:51.848427 2309 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:48:51.849570 kubelet[2309]: I0706 23:48:51.849502 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:48:51.851688 kubelet[2309]: I0706 23:48:51.851650 2309 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:48:51.858561 kubelet[2309]: E0706 23:48:51.853516 2309 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.123:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.123:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-1-3-a5860ac047.novalocal.184fce6953bb288a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-1-3-a5860ac047.novalocal,UID:ci-4230-2-1-3-a5860ac047.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-1-3-a5860ac047.novalocal,},FirstTimestamp:2025-07-06 23:48:51.842074762 +0000 UTC m=+0.708773372,LastTimestamp:2025-07-06 23:48:51.842074762 +0000 UTC m=+0.708773372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-1-3-a5860ac047.novalocal,}"
Jul 6 23:48:51.861095 kubelet[2309]: I0706 23:48:51.861057 2309 server.go:449] "Adding debug handlers to kubelet server"
Jul 6 23:48:51.864949 kubelet[2309]: I0706 23:48:51.864894 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:48:51.868375 kubelet[2309]: I0706 23:48:51.868331 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:48:51.876174 kubelet[2309]: E0706 23:48:51.876093 2309 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:48:51.876613 kubelet[2309]: I0706 23:48:51.876578 2309 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 6 23:48:51.876872 kubelet[2309]: I0706 23:48:51.876841 2309 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 6 23:48:51.877021 kubelet[2309]: I0706 23:48:51.876999 2309 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:48:51.878306 kubelet[2309]: W0706 23:48:51.878216 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.123:6443: connect: connection refused
Jul 6 23:48:51.878491 kubelet[2309]: E0706 23:48:51.878469 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:48:51.878588 kubelet[2309]: E0706 23:48:51.878286 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-1-3-a5860ac047.novalocal\" not found"
Jul 6 23:48:51.879256 kubelet[2309]: E0706 23:48:51.879223 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-3-a5860ac047.novalocal?timeout=10s\": dial tcp 172.24.4.123:6443: connect: connection refused" interval="200ms"
Jul 6 23:48:51.881782 kubelet[2309]: I0706 23:48:51.881762 2309 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:48:51.882065 kubelet[2309]: I0706 23:48:51.882051 2309 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:48:51.882265 kubelet[2309]: I0706 23:48:51.882246 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:48:51.902105 kubelet[2309]: I0706 23:48:51.902017 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 6 23:48:51.904540 kubelet[2309]: I0706 23:48:51.904500 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 6 23:48:51.904655 kubelet[2309]: I0706 23:48:51.904633 2309 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 6 23:48:51.908848 kubelet[2309]: I0706 23:48:51.908816 2309 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 6 23:48:51.908957 kubelet[2309]: I0706 23:48:51.908673 2309 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 6 23:48:51.908957 kubelet[2309]: I0706 23:48:51.908877 2309 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 6 23:48:51.908957 kubelet[2309]: I0706 23:48:51.908914 2309 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:48:51.909704 kubelet[2309]: W0706 23:48:51.909655 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.123:6443: connect: connection refused
Jul 6 23:48:51.909784 kubelet[2309]: E0706 23:48:51.909701 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:48:51.911142 kubelet[2309]: E0706 23:48:51.910467 2309 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 6 23:48:51.915583 kubelet[2309]: I0706 23:48:51.915547 2309 policy_none.go:49] "None policy: Start"
Jul 6 23:48:51.916543 kubelet[2309]: I0706 23:48:51.916526 2309 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 6 23:48:51.916634 kubelet[2309]: I0706 23:48:51.916570 2309 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:48:51.928153 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 6 23:48:51.939646 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 6 23:48:51.942933 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 6 23:48:51.950099 kubelet[2309]: I0706 23:48:51.950080 2309 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 6 23:48:51.950808 kubelet[2309]: I0706 23:48:51.950672 2309 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 6 23:48:51.950935 kubelet[2309]: I0706 23:48:51.950846 2309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 6 23:48:51.951433 kubelet[2309]: I0706 23:48:51.951224 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 6 23:48:51.953809 kubelet[2309]: E0706 23:48:51.953773 2309 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-1-3-a5860ac047.novalocal\" not found"
Jul 6 23:48:52.042921 systemd[1]: Created slice kubepods-burstable-pod1a07604c3637df5f29d8ef9e289d2030.slice - libcontainer container kubepods-burstable-pod1a07604c3637df5f29d8ef9e289d2030.slice.
Jul 6 23:48:52.060960 kubelet[2309]: I0706 23:48:52.060687 2309 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.065361 kubelet[2309]: E0706 23:48:52.065295 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.123:6443/api/v1/nodes\": dial tcp 172.24.4.123:6443: connect: connection refused" node="ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.073827 systemd[1]: Created slice kubepods-burstable-pod42207947b7c654d8b0160003d3d3faa9.slice - libcontainer container kubepods-burstable-pod42207947b7c654d8b0160003d3d3faa9.slice.
Jul 6 23:48:52.080099 kubelet[2309]: E0706 23:48:52.079971 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-3-a5860ac047.novalocal?timeout=10s\": dial tcp 172.24.4.123:6443: connect: connection refused" interval="400ms"
Jul 6 23:48:52.090818 systemd[1]: Created slice kubepods-burstable-pod487e7832aca3eb64940e7ac6cad29811.slice - libcontainer container kubepods-burstable-pod487e7832aca3eb64940e7ac6cad29811.slice.
Jul 6 23:48:52.179940 kubelet[2309]: I0706 23:48:52.179843 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a07604c3637df5f29d8ef9e289d2030-ca-certs\") pod \"kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"1a07604c3637df5f29d8ef9e289d2030\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.179940 kubelet[2309]: I0706 23:48:52.179952 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42207947b7c654d8b0160003d3d3faa9-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"42207947b7c654d8b0160003d3d3faa9\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.180327 kubelet[2309]: I0706 23:48:52.180004 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a07604c3637df5f29d8ef9e289d2030-k8s-certs\") pod \"kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"1a07604c3637df5f29d8ef9e289d2030\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.180327 kubelet[2309]: I0706 23:48:52.180053 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a07604c3637df5f29d8ef9e289d2030-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"1a07604c3637df5f29d8ef9e289d2030\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.180327 kubelet[2309]: I0706 23:48:52.180099 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42207947b7c654d8b0160003d3d3faa9-ca-certs\") pod \"kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"42207947b7c654d8b0160003d3d3faa9\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.180327 kubelet[2309]: I0706 23:48:52.180187 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42207947b7c654d8b0160003d3d3faa9-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"42207947b7c654d8b0160003d3d3faa9\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.180556 kubelet[2309]: I0706 23:48:52.180237 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42207947b7c654d8b0160003d3d3faa9-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"42207947b7c654d8b0160003d3d3faa9\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.180556 kubelet[2309]: I0706 23:48:52.180289 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42207947b7c654d8b0160003d3d3faa9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"42207947b7c654d8b0160003d3d3faa9\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.180556 kubelet[2309]: I0706 23:48:52.180352 2309 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/487e7832aca3eb64940e7ac6cad29811-kubeconfig\") pod \"kube-scheduler-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"487e7832aca3eb64940e7ac6cad29811\") " pod="kube-system/kube-scheduler-ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.269402 kubelet[2309]: I0706 23:48:52.269301 2309 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.270711 kubelet[2309]: E0706 23:48:52.270255 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.123:6443/api/v1/nodes\": dial tcp 172.24.4.123:6443: connect: connection refused" node="ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.364296 containerd[1473]: time="2025-07-06T23:48:52.362827719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal,Uid:1a07604c3637df5f29d8ef9e289d2030,Namespace:kube-system,Attempt:0,}"
Jul 6 23:48:52.385786 containerd[1473]: time="2025-07-06T23:48:52.385679739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal,Uid:42207947b7c654d8b0160003d3d3faa9,Namespace:kube-system,Attempt:0,}"
Jul 6 23:48:52.396501 containerd[1473]: time="2025-07-06T23:48:52.396413145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-1-3-a5860ac047.novalocal,Uid:487e7832aca3eb64940e7ac6cad29811,Namespace:kube-system,Attempt:0,}"
Jul 6 23:48:52.482214 kubelet[2309]: E0706 23:48:52.481985 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-3-a5860ac047.novalocal?timeout=10s\": dial tcp 172.24.4.123:6443: connect: connection refused" interval="800ms"
Jul 6 23:48:52.675117 kubelet[2309]: I0706 23:48:52.674905 2309 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:52.676405 kubelet[2309]: E0706 23:48:52.676326 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.123:6443/api/v1/nodes\": dial tcp 172.24.4.123:6443: connect: connection refused" node="ci-4230-2-1-3-a5860ac047.novalocal"
Jul 6 23:48:53.000520 kubelet[2309]: W0706 23:48:52.999987 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-3-a5860ac047.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.123:6443: connect: connection refused
Jul 6 23:48:53.000520 kubelet[2309]: E0706 23:48:53.000176 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.123:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-1-3-a5860ac047.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:48:53.014659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4229454407.mount: Deactivated successfully.
Jul 6 23:48:53.034225 containerd[1473]: time="2025-07-06T23:48:53.034037457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:48:53.042493 containerd[1473]: time="2025-07-06T23:48:53.042174243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jul 6 23:48:53.044514 containerd[1473]: time="2025-07-06T23:48:53.044305405Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:48:53.047253 containerd[1473]: time="2025-07-06T23:48:53.047080365Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:48:53.054187 containerd[1473]: time="2025-07-06T23:48:53.053945645Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:48:53.054187 containerd[1473]: time="2025-07-06T23:48:53.054060567Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 6 23:48:53.056283 containerd[1473]: time="2025-07-06T23:48:53.055958457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 6 23:48:53.063501 containerd[1473]: time="2025-07-06T23:48:53.063387339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 6 23:48:53.071300 containerd[1473]: time="2025-07-06T23:48:53.070618819Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 707.217716ms"
Jul 6 23:48:53.077846 containerd[1473]: time="2025-07-06T23:48:53.077502204Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 680.598829ms"
Jul 6 23:48:53.088040 containerd[1473]: time="2025-07-06T23:48:53.087533393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 701.554442ms"
Jul 6 23:48:53.285192 kubelet[2309]: E0706 23:48:53.283510 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-1-3-a5860ac047.novalocal?timeout=10s\": dial tcp 172.24.4.123:6443: connect: connection refused" interval="1.6s"
Jul 6 23:48:53.308823 kubelet[2309]: W0706 23:48:53.308717 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.123:6443: connect: connection refused
Jul 6 23:48:53.308970 kubelet[2309]: E0706 23:48:53.308842 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.123:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError"
Jul 6 23:48:53.320580 containerd[1473]: time="2025-07-06T23:48:53.317784843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:48:53.320580 containerd[1473]: time="2025-07-06T23:48:53.320475289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:48:53.320580 containerd[1473]: time="2025-07-06T23:48:53.320519234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:48:53.320892 containerd[1473]: time="2025-07-06T23:48:53.320682250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:48:53.324322 containerd[1473]: time="2025-07-06T23:48:53.323234097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:48:53.324322 containerd[1473]: time="2025-07-06T23:48:53.323346005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:48:53.324322 containerd[1473]: time="2025-07-06T23:48:53.323401933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:48:53.324322 containerd[1473]: time="2025-07-06T23:48:53.323530622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:48:53.329281 containerd[1473]: time="2025-07-06T23:48:53.328946021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:48:53.330185 containerd[1473]: time="2025-07-06T23:48:53.329385634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:48:53.330185 containerd[1473]: time="2025-07-06T23:48:53.329501278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:48:53.330941 containerd[1473]: time="2025-07-06T23:48:53.330526465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:48:53.361308 systemd[1]: Started cri-containerd-090cdebe18cb742745afce2aeedf4b824b2710fae4b9ec45f001a4a1dd741bed.scope - libcontainer container 090cdebe18cb742745afce2aeedf4b824b2710fae4b9ec45f001a4a1dd741bed.
Jul 6 23:48:53.367632 systemd[1]: Started cri-containerd-509b77503f829791ff102e35b6933222722ef2ddad01b96511ea354de66bd85b.scope - libcontainer container 509b77503f829791ff102e35b6933222722ef2ddad01b96511ea354de66bd85b.
Jul 6 23:48:53.369753 systemd[1]: Started cri-containerd-8f0053c754b167f0432191d206c6b253e7853d40d53af9074ab2d6ce14e90690.scope - libcontainer container 8f0053c754b167f0432191d206c6b253e7853d40d53af9074ab2d6ce14e90690.
Jul 6 23:48:53.373939 kubelet[2309]: W0706 23:48:53.373878 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.123:6443: connect: connection refused Jul 6 23:48:53.374326 kubelet[2309]: E0706 23:48:53.374305 2309 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.123:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:48:53.448719 containerd[1473]: time="2025-07-06T23:48:53.448374062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal,Uid:42207947b7c654d8b0160003d3d3faa9,Namespace:kube-system,Attempt:0,} returns sandbox id \"509b77503f829791ff102e35b6933222722ef2ddad01b96511ea354de66bd85b\"" Jul 6 23:48:53.453479 containerd[1473]: time="2025-07-06T23:48:53.453291526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal,Uid:1a07604c3637df5f29d8ef9e289d2030,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f0053c754b167f0432191d206c6b253e7853d40d53af9074ab2d6ce14e90690\"" Jul 6 23:48:53.463430 containerd[1473]: time="2025-07-06T23:48:53.463381007Z" level=info msg="CreateContainer within sandbox \"509b77503f829791ff102e35b6933222722ef2ddad01b96511ea354de66bd85b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:48:53.464172 containerd[1473]: time="2025-07-06T23:48:53.464096825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-1-3-a5860ac047.novalocal,Uid:487e7832aca3eb64940e7ac6cad29811,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"090cdebe18cb742745afce2aeedf4b824b2710fae4b9ec45f001a4a1dd741bed\"" Jul 6 23:48:53.464260 containerd[1473]: time="2025-07-06T23:48:53.464229923Z" level=info msg="CreateContainer within sandbox \"8f0053c754b167f0432191d206c6b253e7853d40d53af9074ab2d6ce14e90690\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:48:53.478277 containerd[1473]: time="2025-07-06T23:48:53.477891249Z" level=info msg="CreateContainer within sandbox \"090cdebe18cb742745afce2aeedf4b824b2710fae4b9ec45f001a4a1dd741bed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:48:53.479604 kubelet[2309]: I0706 23:48:53.478573 2309 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:53.479860 kubelet[2309]: E0706 23:48:53.479817 2309 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.123:6443/api/v1/nodes\": dial tcp 172.24.4.123:6443: connect: connection refused" node="ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:53.498438 containerd[1473]: time="2025-07-06T23:48:53.498393347Z" level=info msg="CreateContainer within sandbox \"509b77503f829791ff102e35b6933222722ef2ddad01b96511ea354de66bd85b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"623ea38ae090440907a46939eb6ab75f236ab3aadd44509eaf7da16a015e7c7d\"" Jul 6 23:48:53.499957 containerd[1473]: time="2025-07-06T23:48:53.499930929Z" level=info msg="StartContainer for \"623ea38ae090440907a46939eb6ab75f236ab3aadd44509eaf7da16a015e7c7d\"" Jul 6 23:48:53.505800 kubelet[2309]: W0706 23:48:53.505733 2309 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.123:6443: connect: connection refused Jul 6 23:48:53.505916 kubelet[2309]: E0706 23:48:53.505816 2309 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.123:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.123:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:48:53.512754 containerd[1473]: time="2025-07-06T23:48:53.512703521Z" level=info msg="CreateContainer within sandbox \"8f0053c754b167f0432191d206c6b253e7853d40d53af9074ab2d6ce14e90690\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a31b70f5cbbfd4926b57c5e1399946d2f8d0f4dffc8c15f4c80aa407e3430da6\"" Jul 6 23:48:53.515150 containerd[1473]: time="2025-07-06T23:48:53.514350865Z" level=info msg="StartContainer for \"a31b70f5cbbfd4926b57c5e1399946d2f8d0f4dffc8c15f4c80aa407e3430da6\"" Jul 6 23:48:53.524847 containerd[1473]: time="2025-07-06T23:48:53.524793000Z" level=info msg="CreateContainer within sandbox \"090cdebe18cb742745afce2aeedf4b824b2710fae4b9ec45f001a4a1dd741bed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5413c99ed23af1896d85c194e13731af161ae00cd62a092a668231c0ac577b03\"" Jul 6 23:48:53.525963 containerd[1473]: time="2025-07-06T23:48:53.525927229Z" level=info msg="StartContainer for \"5413c99ed23af1896d85c194e13731af161ae00cd62a092a668231c0ac577b03\"" Jul 6 23:48:53.538334 systemd[1]: Started cri-containerd-623ea38ae090440907a46939eb6ab75f236ab3aadd44509eaf7da16a015e7c7d.scope - libcontainer container 623ea38ae090440907a46939eb6ab75f236ab3aadd44509eaf7da16a015e7c7d. Jul 6 23:48:53.556319 systemd[1]: Started cri-containerd-a31b70f5cbbfd4926b57c5e1399946d2f8d0f4dffc8c15f4c80aa407e3430da6.scope - libcontainer container a31b70f5cbbfd4926b57c5e1399946d2f8d0f4dffc8c15f4c80aa407e3430da6. Jul 6 23:48:53.585424 systemd[1]: Started cri-containerd-5413c99ed23af1896d85c194e13731af161ae00cd62a092a668231c0ac577b03.scope - libcontainer container 5413c99ed23af1896d85c194e13731af161ae00cd62a092a668231c0ac577b03. 
Jul 6 23:48:53.623403 containerd[1473]: time="2025-07-06T23:48:53.623275824Z" level=info msg="StartContainer for \"623ea38ae090440907a46939eb6ab75f236ab3aadd44509eaf7da16a015e7c7d\" returns successfully" Jul 6 23:48:53.640929 containerd[1473]: time="2025-07-06T23:48:53.640796211Z" level=info msg="StartContainer for \"a31b70f5cbbfd4926b57c5e1399946d2f8d0f4dffc8c15f4c80aa407e3430da6\" returns successfully" Jul 6 23:48:53.661294 containerd[1473]: time="2025-07-06T23:48:53.661169299Z" level=info msg="StartContainer for \"5413c99ed23af1896d85c194e13731af161ae00cd62a092a668231c0ac577b03\" returns successfully" Jul 6 23:48:55.082193 kubelet[2309]: I0706 23:48:55.082153 2309 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:55.493956 kubelet[2309]: E0706 23:48:55.493915 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-1-3-a5860ac047.novalocal\" not found" node="ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:55.565569 kubelet[2309]: I0706 23:48:55.565532 2309 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:55.565569 kubelet[2309]: E0706 23:48:55.565566 2309 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4230-2-1-3-a5860ac047.novalocal\": node \"ci-4230-2-1-3-a5860ac047.novalocal\" not found" Jul 6 23:48:55.599819 kubelet[2309]: E0706 23:48:55.599764 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-1-3-a5860ac047.novalocal\" not found" Jul 6 23:48:55.700779 kubelet[2309]: E0706 23:48:55.700736 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-1-3-a5860ac047.novalocal\" not found" Jul 6 23:48:55.801916 kubelet[2309]: E0706 23:48:55.801585 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4230-2-1-3-a5860ac047.novalocal\" not found" Jul 6 23:48:55.902355 kubelet[2309]: E0706 23:48:55.902320 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-1-3-a5860ac047.novalocal\" not found" Jul 6 23:48:56.003173 kubelet[2309]: E0706 23:48:56.003138 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-1-3-a5860ac047.novalocal\" not found" Jul 6 23:48:56.103733 kubelet[2309]: E0706 23:48:56.103649 2309 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-1-3-a5860ac047.novalocal\" not found" Jul 6 23:48:56.833516 kubelet[2309]: I0706 23:48:56.833476 2309 apiserver.go:52] "Watching apiserver" Jul 6 23:48:56.877431 kubelet[2309]: I0706 23:48:56.877386 2309 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:48:58.137439 systemd[1]: Reload requested from client PID 2584 ('systemctl') (unit session-11.scope)... Jul 6 23:48:58.138254 systemd[1]: Reloading... Jul 6 23:48:58.263172 zram_generator::config[2630]: No configuration found. Jul 6 23:48:58.441153 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:48:58.584934 systemd[1]: Reloading finished in 445 ms. Jul 6 23:48:58.615013 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:48:58.622537 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:48:58.622973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:48:58.623230 systemd[1]: kubelet.service: Consumed 1.308s CPU time, 132M memory peak. Jul 6 23:48:58.629455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:48:58.885655 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:48:58.893412 (kubelet)[2693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:48:58.947588 kubelet[2693]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:48:58.947588 kubelet[2693]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:48:58.947588 kubelet[2693]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:48:58.948393 kubelet[2693]: I0706 23:48:58.947593 2693 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:48:58.961098 kubelet[2693]: I0706 23:48:58.960270 2693 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:48:58.961098 kubelet[2693]: I0706 23:48:58.960329 2693 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:48:58.961098 kubelet[2693]: I0706 23:48:58.960849 2693 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:48:58.968322 kubelet[2693]: I0706 23:48:58.968272 2693 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 6 23:48:58.973642 kubelet[2693]: I0706 23:48:58.973583 2693 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:48:58.979467 kubelet[2693]: E0706 23:48:58.979350 2693 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:48:58.979467 kubelet[2693]: I0706 23:48:58.979435 2693 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:48:58.982693 kubelet[2693]: I0706 23:48:58.982642 2693 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:48:58.982976 kubelet[2693]: I0706 23:48:58.982755 2693 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:48:58.982976 kubelet[2693]: I0706 23:48:58.982878 2693 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:48:58.984501 kubelet[2693]: I0706 23:48:58.982906 2693 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4230-2-1-3-a5860ac047.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:48:58.984501 kubelet[2693]: I0706 23:48:58.983101 2693 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:48:58.984501 kubelet[2693]: I0706 23:48:58.983113 2693 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:48:58.984501 kubelet[2693]: I0706 23:48:58.983165 2693 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:48:58.984501 kubelet[2693]: I0706 23:48:58.983261 2693 
kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:48:58.985070 kubelet[2693]: I0706 23:48:58.983276 2693 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:48:58.985070 kubelet[2693]: I0706 23:48:58.983301 2693 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:48:58.985070 kubelet[2693]: I0706 23:48:58.983311 2693 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:48:58.987192 kubelet[2693]: I0706 23:48:58.986095 2693 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 6 23:48:58.987475 kubelet[2693]: I0706 23:48:58.987442 2693 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:48:58.991182 kubelet[2693]: I0706 23:48:58.989261 2693 server.go:1274] "Started kubelet" Jul 6 23:48:58.993046 kubelet[2693]: I0706 23:48:58.992832 2693 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:48:58.996824 kubelet[2693]: I0706 23:48:58.994197 2693 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:48:58.996824 kubelet[2693]: I0706 23:48:58.994560 2693 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:48:58.996824 kubelet[2693]: I0706 23:48:58.995333 2693 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:48:59.004791 kubelet[2693]: I0706 23:48:59.003606 2693 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:48:59.014363 kubelet[2693]: I0706 23:48:59.003651 2693 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:48:59.017716 kubelet[2693]: I0706 23:48:59.005240 2693 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:48:59.018172 kubelet[2693]: I0706 
23:48:59.005255 2693 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:48:59.018370 kubelet[2693]: I0706 23:48:59.018280 2693 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:48:59.018370 kubelet[2693]: E0706 23:48:59.005394 2693 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-2-1-3-a5860ac047.novalocal\" not found" Jul 6 23:48:59.028469 kubelet[2693]: I0706 23:48:59.028431 2693 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:48:59.029415 kubelet[2693]: I0706 23:48:59.029388 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:48:59.033942 kubelet[2693]: E0706 23:48:59.033500 2693 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:48:59.035165 kubelet[2693]: I0706 23:48:59.034623 2693 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:48:59.035496 kubelet[2693]: I0706 23:48:59.034640 2693 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:48:59.038113 kubelet[2693]: I0706 23:48:59.036552 2693 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:48:59.038113 kubelet[2693]: I0706 23:48:59.036586 2693 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:48:59.038113 kubelet[2693]: I0706 23:48:59.036616 2693 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:48:59.038113 kubelet[2693]: E0706 23:48:59.036667 2693 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:48:59.109972 kubelet[2693]: I0706 23:48:59.109772 2693 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:48:59.109972 kubelet[2693]: I0706 23:48:59.109792 2693 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:48:59.109972 kubelet[2693]: I0706 23:48:59.109820 2693 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:48:59.110428 kubelet[2693]: I0706 23:48:59.110032 2693 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:48:59.110428 kubelet[2693]: I0706 23:48:59.110045 2693 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:48:59.110428 kubelet[2693]: I0706 23:48:59.110069 2693 policy_none.go:49] "None policy: Start" Jul 6 23:48:59.114238 kubelet[2693]: I0706 23:48:59.113684 2693 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:48:59.114238 kubelet[2693]: I0706 23:48:59.113719 2693 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:48:59.114238 kubelet[2693]: I0706 23:48:59.113999 2693 state_mem.go:75] "Updated machine memory state" Jul 6 23:48:59.124788 kubelet[2693]: I0706 23:48:59.124743 2693 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:48:59.124953 kubelet[2693]: I0706 23:48:59.124930 2693 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:48:59.125054 kubelet[2693]: I0706 23:48:59.124948 2693 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:48:59.125545 kubelet[2693]: I0706 23:48:59.125467 2693 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:48:59.177365 kubelet[2693]: W0706 23:48:59.175890 2693 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:48:59.182099 kubelet[2693]: W0706 23:48:59.180765 2693 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:48:59.184737 kubelet[2693]: W0706 23:48:59.184612 2693 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:48:59.220015 sudo[2728]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:48:59.220517 sudo[2728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:48:59.222272 kubelet[2693]: I0706 23:48:59.222009 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42207947b7c654d8b0160003d3d3faa9-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"42207947b7c654d8b0160003d3d3faa9\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.222272 kubelet[2693]: I0706 23:48:59.222136 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1a07604c3637df5f29d8ef9e289d2030-ca-certs\") pod \"kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"1a07604c3637df5f29d8ef9e289d2030\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.222272 
kubelet[2693]: I0706 23:48:59.222169 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42207947b7c654d8b0160003d3d3faa9-ca-certs\") pod \"kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"42207947b7c654d8b0160003d3d3faa9\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.222272 kubelet[2693]: I0706 23:48:59.222189 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42207947b7c654d8b0160003d3d3faa9-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"42207947b7c654d8b0160003d3d3faa9\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.222459 kubelet[2693]: I0706 23:48:59.222208 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42207947b7c654d8b0160003d3d3faa9-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"42207947b7c654d8b0160003d3d3faa9\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.222459 kubelet[2693]: I0706 23:48:59.222233 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42207947b7c654d8b0160003d3d3faa9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"42207947b7c654d8b0160003d3d3faa9\") " pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.222459 kubelet[2693]: I0706 23:48:59.222256 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/487e7832aca3eb64940e7ac6cad29811-kubeconfig\") pod \"kube-scheduler-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"487e7832aca3eb64940e7ac6cad29811\") " pod="kube-system/kube-scheduler-ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.222459 kubelet[2693]: I0706 23:48:59.222274 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1a07604c3637df5f29d8ef9e289d2030-k8s-certs\") pod \"kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"1a07604c3637df5f29d8ef9e289d2030\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.222582 kubelet[2693]: I0706 23:48:59.222297 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1a07604c3637df5f29d8ef9e289d2030-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal\" (UID: \"1a07604c3637df5f29d8ef9e289d2030\") " pod="kube-system/kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.241354 kubelet[2693]: I0706 23:48:59.240973 2693 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.263505 kubelet[2693]: I0706 23:48:59.263139 2693 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.263505 kubelet[2693]: I0706 23:48:59.263263 2693 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:48:59.855946 sudo[2728]: pam_unix(sudo:session): session closed for user root Jul 6 23:48:59.984968 kubelet[2693]: I0706 23:48:59.984658 2693 apiserver.go:52] "Watching apiserver" Jul 6 23:49:00.020172 kubelet[2693]: I0706 23:49:00.018903 2693 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:49:00.093154 
kubelet[2693]: W0706 23:49:00.092784 2693 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 6 23:49:00.093154 kubelet[2693]: E0706 23:49:00.092889 2693 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal" Jul 6 23:49:00.122139 kubelet[2693]: I0706 23:49:00.121947 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-1-3-a5860ac047.novalocal" podStartSLOduration=1.121852735 podStartE2EDuration="1.121852735s" podCreationTimestamp="2025-07-06 23:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:00.109159525 +0000 UTC m=+1.210713374" watchObservedRunningTime="2025-07-06 23:49:00.121852735 +0000 UTC m=+1.223406574" Jul 6 23:49:00.141683 kubelet[2693]: I0706 23:49:00.141613 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-1-3-a5860ac047.novalocal" podStartSLOduration=1.141588327 podStartE2EDuration="1.141588327s" podCreationTimestamp="2025-07-06 23:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:00.122086691 +0000 UTC m=+1.223640530" watchObservedRunningTime="2025-07-06 23:49:00.141588327 +0000 UTC m=+1.243142166" Jul 6 23:49:00.141918 kubelet[2693]: I0706 23:49:00.141748 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-1-3-a5860ac047.novalocal" podStartSLOduration=1.141740829 podStartE2EDuration="1.141740829s" podCreationTimestamp="2025-07-06 23:48:59 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:00.141346525 +0000 UTC m=+1.242900394" watchObservedRunningTime="2025-07-06 23:49:00.141740829 +0000 UTC m=+1.243294699" Jul 6 23:49:02.221880 sudo[1740]: pam_unix(sudo:session): session closed for user root Jul 6 23:49:02.498645 sshd[1739]: Connection closed by 172.24.4.1 port 39290 Jul 6 23:49:02.510314 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:02.532691 systemd[1]: sshd@8-172.24.4.123:22-172.24.4.1:39290.service: Deactivated successfully. Jul 6 23:49:02.554436 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:49:02.554974 systemd[1]: session-11.scope: Consumed 9.737s CPU time, 262.4M memory peak. Jul 6 23:49:02.560382 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:49:02.566053 systemd-logind[1457]: Removed session 11. Jul 6 23:49:03.759412 kubelet[2693]: I0706 23:49:03.759334 2693 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:49:03.761144 containerd[1473]: time="2025-07-06T23:49:03.761045085Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:49:03.763239 kubelet[2693]: I0706 23:49:03.762274 2693 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:49:04.438390 systemd[1]: Created slice kubepods-besteffort-pod4f04a56c_8f43_44d9_918d_eac389aa7bbe.slice - libcontainer container kubepods-besteffort-pod4f04a56c_8f43_44d9_918d_eac389aa7bbe.slice. 
Jul 6 23:49:04.455956 kubelet[2693]: I0706 23:49:04.455917 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f04a56c-8f43-44d9-918d-eac389aa7bbe-kube-proxy\") pod \"kube-proxy-lmpz4\" (UID: \"4f04a56c-8f43-44d9-918d-eac389aa7bbe\") " pod="kube-system/kube-proxy-lmpz4"
Jul 6 23:49:04.456757 kubelet[2693]: I0706 23:49:04.456262 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37df6520-3416-48d1-8b5c-7d00ab869118-clustermesh-secrets\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.456757 kubelet[2693]: I0706 23:49:04.456299 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rc5\" (UniqueName: \"kubernetes.io/projected/4f04a56c-8f43-44d9-918d-eac389aa7bbe-kube-api-access-x2rc5\") pod \"kube-proxy-lmpz4\" (UID: \"4f04a56c-8f43-44d9-918d-eac389aa7bbe\") " pod="kube-system/kube-proxy-lmpz4"
Jul 6 23:49:04.456757 kubelet[2693]: I0706 23:49:04.456340 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cni-path\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.456757 kubelet[2693]: I0706 23:49:04.456365 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-lib-modules\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.456757 kubelet[2693]: I0706 23:49:04.456387 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-host-proc-sys-kernel\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.456936 kubelet[2693]: I0706 23:49:04.456405 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bmmg\" (UniqueName: \"kubernetes.io/projected/37df6520-3416-48d1-8b5c-7d00ab869118-kube-api-access-6bmmg\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.456936 kubelet[2693]: I0706 23:49:04.456422 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37df6520-3416-48d1-8b5c-7d00ab869118-hubble-tls\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.456936 kubelet[2693]: I0706 23:49:04.456439 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-etc-cni-netd\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.456936 kubelet[2693]: I0706 23:49:04.456456 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-run\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.456936 kubelet[2693]: I0706 23:49:04.456481 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-host-proc-sys-net\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.456936 kubelet[2693]: I0706 23:49:04.456499 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f04a56c-8f43-44d9-918d-eac389aa7bbe-xtables-lock\") pod \"kube-proxy-lmpz4\" (UID: \"4f04a56c-8f43-44d9-918d-eac389aa7bbe\") " pod="kube-system/kube-proxy-lmpz4"
Jul 6 23:49:04.457150 kubelet[2693]: I0706 23:49:04.456528 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f04a56c-8f43-44d9-918d-eac389aa7bbe-lib-modules\") pod \"kube-proxy-lmpz4\" (UID: \"4f04a56c-8f43-44d9-918d-eac389aa7bbe\") " pod="kube-system/kube-proxy-lmpz4"
Jul 6 23:49:04.457150 kubelet[2693]: I0706 23:49:04.456547 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-hostproc\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.457150 kubelet[2693]: I0706 23:49:04.456569 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-config-path\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.457150 kubelet[2693]: I0706 23:49:04.456590 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-bpf-maps\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.457150 kubelet[2693]: I0706 23:49:04.456607 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-cgroup\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.457150 kubelet[2693]: I0706 23:49:04.456625 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-xtables-lock\") pod \"cilium-l2bw5\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") " pod="kube-system/cilium-l2bw5"
Jul 6 23:49:04.458400 systemd[1]: Created slice kubepods-burstable-pod37df6520_3416_48d1_8b5c_7d00ab869118.slice - libcontainer container kubepods-burstable-pod37df6520_3416_48d1_8b5c_7d00ab869118.slice.
Jul 6 23:49:04.603140 kubelet[2693]: E0706 23:49:04.601096 2693 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 6 23:49:04.603627 kubelet[2693]: E0706 23:49:04.603500 2693 projected.go:194] Error preparing data for projected volume kube-api-access-x2rc5 for pod kube-system/kube-proxy-lmpz4: configmap "kube-root-ca.crt" not found
Jul 6 23:49:04.603865 kubelet[2693]: E0706 23:49:04.603782 2693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f04a56c-8f43-44d9-918d-eac389aa7bbe-kube-api-access-x2rc5 podName:4f04a56c-8f43-44d9-918d-eac389aa7bbe nodeName:}" failed. No retries permitted until 2025-07-06 23:49:05.103698678 +0000 UTC m=+6.205252527 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x2rc5" (UniqueName: "kubernetes.io/projected/4f04a56c-8f43-44d9-918d-eac389aa7bbe-kube-api-access-x2rc5") pod "kube-proxy-lmpz4" (UID: "4f04a56c-8f43-44d9-918d-eac389aa7bbe") : configmap "kube-root-ca.crt" not found
Jul 6 23:49:04.604416 kubelet[2693]: E0706 23:49:04.603428 2693 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jul 6 23:49:04.604416 kubelet[2693]: E0706 23:49:04.604340 2693 projected.go:194] Error preparing data for projected volume kube-api-access-6bmmg for pod kube-system/cilium-l2bw5: configmap "kube-root-ca.crt" not found
Jul 6 23:49:04.604416 kubelet[2693]: E0706 23:49:04.604375 2693 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37df6520-3416-48d1-8b5c-7d00ab869118-kube-api-access-6bmmg podName:37df6520-3416-48d1-8b5c-7d00ab869118 nodeName:}" failed. No retries permitted until 2025-07-06 23:49:05.104363697 +0000 UTC m=+6.205917536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6bmmg" (UniqueName: "kubernetes.io/projected/37df6520-3416-48d1-8b5c-7d00ab869118-kube-api-access-6bmmg") pod "cilium-l2bw5" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118") : configmap "kube-root-ca.crt" not found
Jul 6 23:49:04.821819 systemd[1]: Created slice kubepods-besteffort-pod456ff60a_41cf_482e_a5ff_2d3b34eec312.slice - libcontainer container kubepods-besteffort-pod456ff60a_41cf_482e_a5ff_2d3b34eec312.slice.
Jul 6 23:49:04.860567 kubelet[2693]: I0706 23:49:04.860435 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqb5s\" (UniqueName: \"kubernetes.io/projected/456ff60a-41cf-482e-a5ff-2d3b34eec312-kube-api-access-cqb5s\") pod \"cilium-operator-5d85765b45-b7wft\" (UID: \"456ff60a-41cf-482e-a5ff-2d3b34eec312\") " pod="kube-system/cilium-operator-5d85765b45-b7wft"
Jul 6 23:49:04.860567 kubelet[2693]: I0706 23:49:04.860509 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/456ff60a-41cf-482e-a5ff-2d3b34eec312-cilium-config-path\") pod \"cilium-operator-5d85765b45-b7wft\" (UID: \"456ff60a-41cf-482e-a5ff-2d3b34eec312\") " pod="kube-system/cilium-operator-5d85765b45-b7wft"
Jul 6 23:49:05.149621 containerd[1473]: time="2025-07-06T23:49:05.148438203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-b7wft,Uid:456ff60a-41cf-482e-a5ff-2d3b34eec312,Namespace:kube-system,Attempt:0,}"
Jul 6 23:49:05.244694 containerd[1473]: time="2025-07-06T23:49:05.244570616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:49:05.244972 containerd[1473]: time="2025-07-06T23:49:05.244878202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:49:05.244972 containerd[1473]: time="2025-07-06T23:49:05.244924641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:49:05.245334 containerd[1473]: time="2025-07-06T23:49:05.245247216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:49:05.281336 systemd[1]: Started cri-containerd-ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3.scope - libcontainer container ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3.
Jul 6 23:49:05.333872 containerd[1473]: time="2025-07-06T23:49:05.333779864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-b7wft,Uid:456ff60a-41cf-482e-a5ff-2d3b34eec312,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3\""
Jul 6 23:49:05.339040 containerd[1473]: time="2025-07-06T23:49:05.338908476Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 6 23:49:05.352354 containerd[1473]: time="2025-07-06T23:49:05.352300877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lmpz4,Uid:4f04a56c-8f43-44d9-918d-eac389aa7bbe,Namespace:kube-system,Attempt:0,}"
Jul 6 23:49:05.364562 containerd[1473]: time="2025-07-06T23:49:05.364487129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2bw5,Uid:37df6520-3416-48d1-8b5c-7d00ab869118,Namespace:kube-system,Attempt:0,}"
Jul 6 23:49:05.423333 containerd[1473]: time="2025-07-06T23:49:05.422983222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:49:05.425249 containerd[1473]: time="2025-07-06T23:49:05.424375096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:49:05.425249 containerd[1473]: time="2025-07-06T23:49:05.424410424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:49:05.425249 containerd[1473]: time="2025-07-06T23:49:05.424725285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:49:05.433937 containerd[1473]: time="2025-07-06T23:49:05.433726481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:49:05.433937 containerd[1473]: time="2025-07-06T23:49:05.433888862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:49:05.434380 containerd[1473]: time="2025-07-06T23:49:05.433910272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:49:05.434380 containerd[1473]: time="2025-07-06T23:49:05.434065177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:49:05.450368 systemd[1]: Started cri-containerd-61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb.scope - libcontainer container 61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb.
Jul 6 23:49:05.467353 systemd[1]: Started cri-containerd-c1bc3b2ec832f0be9ff9e05fa4dbc731a6c646b5b8472707164610d08cfc0514.scope - libcontainer container c1bc3b2ec832f0be9ff9e05fa4dbc731a6c646b5b8472707164610d08cfc0514.
Jul 6 23:49:05.491884 containerd[1473]: time="2025-07-06T23:49:05.491460771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l2bw5,Uid:37df6520-3416-48d1-8b5c-7d00ab869118,Namespace:kube-system,Attempt:0,} returns sandbox id \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\""
Jul 6 23:49:05.517840 containerd[1473]: time="2025-07-06T23:49:05.517779607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lmpz4,Uid:4f04a56c-8f43-44d9-918d-eac389aa7bbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1bc3b2ec832f0be9ff9e05fa4dbc731a6c646b5b8472707164610d08cfc0514\""
Jul 6 23:49:05.526178 containerd[1473]: time="2025-07-06T23:49:05.526077933Z" level=info msg="CreateContainer within sandbox \"c1bc3b2ec832f0be9ff9e05fa4dbc731a6c646b5b8472707164610d08cfc0514\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 6 23:49:05.562284 containerd[1473]: time="2025-07-06T23:49:05.562202148Z" level=info msg="CreateContainer within sandbox \"c1bc3b2ec832f0be9ff9e05fa4dbc731a6c646b5b8472707164610d08cfc0514\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4686fc6ab980268357f3db061d73c65ae298eaa433feef3fd5417fdcc6fcdb98\""
Jul 6 23:49:05.563485 containerd[1473]: time="2025-07-06T23:49:05.563432595Z" level=info msg="StartContainer for \"4686fc6ab980268357f3db061d73c65ae298eaa433feef3fd5417fdcc6fcdb98\""
Jul 6 23:49:05.613321 systemd[1]: run-containerd-runc-k8s.io-4686fc6ab980268357f3db061d73c65ae298eaa433feef3fd5417fdcc6fcdb98-runc.XwXPbI.mount: Deactivated successfully.
Jul 6 23:49:05.621302 systemd[1]: Started cri-containerd-4686fc6ab980268357f3db061d73c65ae298eaa433feef3fd5417fdcc6fcdb98.scope - libcontainer container 4686fc6ab980268357f3db061d73c65ae298eaa433feef3fd5417fdcc6fcdb98.
Jul 6 23:49:05.660333 containerd[1473]: time="2025-07-06T23:49:05.660269771Z" level=info msg="StartContainer for \"4686fc6ab980268357f3db061d73c65ae298eaa433feef3fd5417fdcc6fcdb98\" returns successfully"
Jul 6 23:49:06.128819 kubelet[2693]: I0706 23:49:06.127621 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lmpz4" podStartSLOduration=2.127539393 podStartE2EDuration="2.127539393s" podCreationTimestamp="2025-07-06 23:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:06.126884324 +0000 UTC m=+7.228438183" watchObservedRunningTime="2025-07-06 23:49:06.127539393 +0000 UTC m=+7.229093232"
Jul 6 23:49:06.783668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2960878615.mount: Deactivated successfully.
Jul 6 23:49:07.492288 containerd[1473]: time="2025-07-06T23:49:07.492210547Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:07.493452 containerd[1473]: time="2025-07-06T23:49:07.493381077Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 6 23:49:07.495001 containerd[1473]: time="2025-07-06T23:49:07.494955998Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:07.497426 containerd[1473]: time="2025-07-06T23:49:07.496808308Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.157752391s"
Jul 6 23:49:07.497426 containerd[1473]: time="2025-07-06T23:49:07.496856780Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 6 23:49:07.498773 containerd[1473]: time="2025-07-06T23:49:07.498749066Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 6 23:49:07.501300 containerd[1473]: time="2025-07-06T23:49:07.501259159Z" level=info msg="CreateContainer within sandbox \"ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 6 23:49:07.527818 containerd[1473]: time="2025-07-06T23:49:07.527765126Z" level=info msg="CreateContainer within sandbox \"ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\""
Jul 6 23:49:07.528639 containerd[1473]: time="2025-07-06T23:49:07.528613832Z" level=info msg="StartContainer for \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\""
Jul 6 23:49:07.567299 systemd[1]: Started cri-containerd-cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164.scope - libcontainer container cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164.
Jul 6 23:49:07.609071 containerd[1473]: time="2025-07-06T23:49:07.608309873Z" level=info msg="StartContainer for \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\" returns successfully"
Jul 6 23:49:10.562141 kubelet[2693]: I0706 23:49:10.561485 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-b7wft" podStartSLOduration=4.400597134 podStartE2EDuration="6.56145732s" podCreationTimestamp="2025-07-06 23:49:04 +0000 UTC" firstStartedPulling="2025-07-06 23:49:05.337164249 +0000 UTC m=+6.438718088" lastFinishedPulling="2025-07-06 23:49:07.498024435 +0000 UTC m=+8.599578274" observedRunningTime="2025-07-06 23:49:08.233773306 +0000 UTC m=+9.335327145" watchObservedRunningTime="2025-07-06 23:49:10.56145732 +0000 UTC m=+11.663011179"
Jul 6 23:49:12.406675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515118361.mount: Deactivated successfully.
Jul 6 23:49:15.188180 containerd[1473]: time="2025-07-06T23:49:15.186620484Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:15.188180 containerd[1473]: time="2025-07-06T23:49:15.188147195Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Jul 6 23:49:15.189905 containerd[1473]: time="2025-07-06T23:49:15.189858495Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:15.192486 containerd[1473]: time="2025-07-06T23:49:15.192422035Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.693535728s"
Jul 6 23:49:15.192766 containerd[1473]: time="2025-07-06T23:49:15.192701256Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 6 23:49:15.199443 containerd[1473]: time="2025-07-06T23:49:15.198578201Z" level=info msg="CreateContainer within sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 6 23:49:15.228950 containerd[1473]: time="2025-07-06T23:49:15.228895664Z" level=info msg="CreateContainer within sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\""
Jul 6 23:49:15.231688 containerd[1473]: time="2025-07-06T23:49:15.229807275Z" level=info msg="StartContainer for \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\""
Jul 6 23:49:15.232239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469977501.mount: Deactivated successfully.
Jul 6 23:49:15.279597 systemd[1]: run-containerd-runc-k8s.io-40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2-runc.fFZLq0.mount: Deactivated successfully.
Jul 6 23:49:15.291297 systemd[1]: Started cri-containerd-40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2.scope - libcontainer container 40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2.
Jul 6 23:49:15.327370 containerd[1473]: time="2025-07-06T23:49:15.327312643Z" level=info msg="StartContainer for \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\" returns successfully"
Jul 6 23:49:15.336070 systemd[1]: cri-containerd-40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2.scope: Deactivated successfully.
Jul 6 23:49:16.220994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2-rootfs.mount: Deactivated successfully.
Jul 6 23:49:16.413608 containerd[1473]: time="2025-07-06T23:49:16.412971386Z" level=info msg="shim disconnected" id=40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2 namespace=k8s.io
Jul 6 23:49:16.413608 containerd[1473]: time="2025-07-06T23:49:16.413254905Z" level=warning msg="cleaning up after shim disconnected" id=40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2 namespace=k8s.io
Jul 6 23:49:16.413608 containerd[1473]: time="2025-07-06T23:49:16.413339606Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:49:17.162565 containerd[1473]: time="2025-07-06T23:49:17.162085417Z" level=info msg="CreateContainer within sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 6 23:49:17.209112 containerd[1473]: time="2025-07-06T23:49:17.209026692Z" level=info msg="CreateContainer within sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\""
Jul 6 23:49:17.213871 containerd[1473]: time="2025-07-06T23:49:17.212625714Z" level=info msg="StartContainer for \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\""
Jul 6 23:49:17.272279 systemd[1]: Started cri-containerd-c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3.scope - libcontainer container c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3.
Jul 6 23:49:17.307785 containerd[1473]: time="2025-07-06T23:49:17.307530765Z" level=info msg="StartContainer for \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\" returns successfully"
Jul 6 23:49:17.323651 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:49:17.323990 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:49:17.325341 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:49:17.331573 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:49:17.334828 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 6 23:49:17.335835 systemd[1]: cri-containerd-c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3.scope: Deactivated successfully.
Jul 6 23:49:17.360992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3-rootfs.mount: Deactivated successfully.
Jul 6 23:49:17.371366 containerd[1473]: time="2025-07-06T23:49:17.371201479Z" level=info msg="shim disconnected" id=c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3 namespace=k8s.io
Jul 6 23:49:17.371366 containerd[1473]: time="2025-07-06T23:49:17.371277013Z" level=warning msg="cleaning up after shim disconnected" id=c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3 namespace=k8s.io
Jul 6 23:49:17.371366 containerd[1473]: time="2025-07-06T23:49:17.371287783Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:49:17.376233 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:49:18.178711 containerd[1473]: time="2025-07-06T23:49:18.178094726Z" level=info msg="CreateContainer within sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 6 23:49:18.234051 containerd[1473]: time="2025-07-06T23:49:18.233716407Z" level=info msg="CreateContainer within sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\""
Jul 6 23:49:18.235032 containerd[1473]: time="2025-07-06T23:49:18.234934780Z" level=info msg="StartContainer for \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\""
Jul 6 23:49:18.294518 systemd[1]: Started cri-containerd-e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65.scope - libcontainer container e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65.
Jul 6 23:49:18.334520 systemd[1]: cri-containerd-e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65.scope: Deactivated successfully.
Jul 6 23:49:18.335981 containerd[1473]: time="2025-07-06T23:49:18.335630574Z" level=info msg="StartContainer for \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\" returns successfully"
Jul 6 23:49:18.361359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65-rootfs.mount: Deactivated successfully.
Jul 6 23:49:18.367109 containerd[1473]: time="2025-07-06T23:49:18.367045645Z" level=info msg="shim disconnected" id=e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65 namespace=k8s.io
Jul 6 23:49:18.367109 containerd[1473]: time="2025-07-06T23:49:18.367108104Z" level=warning msg="cleaning up after shim disconnected" id=e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65 namespace=k8s.io
Jul 6 23:49:18.367447 containerd[1473]: time="2025-07-06T23:49:18.367132710Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:49:19.181778 containerd[1473]: time="2025-07-06T23:49:19.181241399Z" level=info msg="CreateContainer within sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:49:19.230551 containerd[1473]: time="2025-07-06T23:49:19.230454640Z" level=info msg="CreateContainer within sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\""
Jul 6 23:49:19.237165 containerd[1473]: time="2025-07-06T23:49:19.236349294Z" level=info msg="StartContainer for \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\""
Jul 6 23:49:19.290527 systemd[1]: run-containerd-runc-k8s.io-509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210-runc.l7lsO0.mount: Deactivated successfully.
Jul 6 23:49:19.299297 systemd[1]: Started cri-containerd-509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210.scope - libcontainer container 509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210.
Jul 6 23:49:19.335385 systemd[1]: cri-containerd-509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210.scope: Deactivated successfully.
Jul 6 23:49:19.340040 containerd[1473]: time="2025-07-06T23:49:19.338686806Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37df6520_3416_48d1_8b5c_7d00ab869118.slice/cri-containerd-509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210.scope/memory.events\": no such file or directory"
Jul 6 23:49:19.342717 containerd[1473]: time="2025-07-06T23:49:19.342683669Z" level=info msg="StartContainer for \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\" returns successfully"
Jul 6 23:49:19.367103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210-rootfs.mount: Deactivated successfully.
Jul 6 23:49:19.378686 containerd[1473]: time="2025-07-06T23:49:19.378607876Z" level=info msg="shim disconnected" id=509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210 namespace=k8s.io
Jul 6 23:49:19.378686 containerd[1473]: time="2025-07-06T23:49:19.378679923Z" level=warning msg="cleaning up after shim disconnected" id=509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210 namespace=k8s.io
Jul 6 23:49:19.378686 containerd[1473]: time="2025-07-06T23:49:19.378690804Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:49:20.189118 containerd[1473]: time="2025-07-06T23:49:20.188409947Z" level=info msg="CreateContainer within sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:49:20.253152 containerd[1473]: time="2025-07-06T23:49:20.253010385Z" level=info msg="CreateContainer within sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\""
Jul 6 23:49:20.257958 containerd[1473]: time="2025-07-06T23:49:20.257871685Z" level=info msg="StartContainer for \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\""
Jul 6 23:49:20.306282 systemd[1]: Started cri-containerd-32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263.scope - libcontainer container 32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263.
Jul 6 23:49:20.347494 containerd[1473]: time="2025-07-06T23:49:20.347440372Z" level=info msg="StartContainer for \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\" returns successfully"
Jul 6 23:49:20.430763 kubelet[2693]: I0706 23:49:20.430708 2693 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 6 23:49:20.485071 kubelet[2693]: I0706 23:49:20.484856 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30dbba5a-b1c6-4c44-9aa9-841a0ce719b0-config-volume\") pod \"coredns-7c65d6cfc9-vrvtw\" (UID: \"30dbba5a-b1c6-4c44-9aa9-841a0ce719b0\") " pod="kube-system/coredns-7c65d6cfc9-vrvtw"
Jul 6 23:49:20.492447 systemd[1]: Created slice kubepods-burstable-pod30dbba5a_b1c6_4c44_9aa9_841a0ce719b0.slice - libcontainer container kubepods-burstable-pod30dbba5a_b1c6_4c44_9aa9_841a0ce719b0.slice.
Jul 6 23:49:20.502439 systemd[1]: Created slice kubepods-burstable-pod8fc1fa6c_68fa_420a_9e09_2cef99c2412d.slice - libcontainer container kubepods-burstable-pod8fc1fa6c_68fa_420a_9e09_2cef99c2412d.slice.
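The Created slice entries show how a pod UID such as 30dbba5a-b1c6-4c44-9aa9-841a0ce719b0 becomes the cgroup slice kubepods-burstable-pod30dbba5a_b1c6_4c44_9aa9_841a0ce719b0.slice: with the systemd cgroup driver, dashes in the UID are rewritten to underscores, since systemd reserves '-' to express slice hierarchy. A simplified reconstruction (the helper name is hypothetical, and it ignores escaping cases beyond plain UIDs):

```python
def pod_slice_name(pod_uid, qos="burstable"):
    """Reconstruct the systemd slice name kubelet's systemd cgroup driver
    derives for a pod: dashes in the UID become underscores, prefixed by
    the QoS-class parent slice."""
    escaped = pod_uid.replace("-", "_")
    return f"kubepods-{qos}-pod{escaped}.slice"

name = pod_slice_name("30dbba5a-b1c6-4c44-9aa9-841a0ce719b0")
```

The same naming appears in the cri-containerd scope paths later in the log, which is what lets a slice or scope name be mapped back to a pod UID.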
Jul 6 23:49:20.586567 kubelet[2693]: I0706 23:49:20.585737 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fc1fa6c-68fa-420a-9e09-2cef99c2412d-config-volume\") pod \"coredns-7c65d6cfc9-pvgbt\" (UID: \"8fc1fa6c-68fa-420a-9e09-2cef99c2412d\") " pod="kube-system/coredns-7c65d6cfc9-pvgbt"
Jul 6 23:49:20.586567 kubelet[2693]: I0706 23:49:20.585809 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4knvl\" (UniqueName: \"kubernetes.io/projected/8fc1fa6c-68fa-420a-9e09-2cef99c2412d-kube-api-access-4knvl\") pod \"coredns-7c65d6cfc9-pvgbt\" (UID: \"8fc1fa6c-68fa-420a-9e09-2cef99c2412d\") " pod="kube-system/coredns-7c65d6cfc9-pvgbt"
Jul 6 23:49:20.586567 kubelet[2693]: I0706 23:49:20.585839 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csmrs\" (UniqueName: \"kubernetes.io/projected/30dbba5a-b1c6-4c44-9aa9-841a0ce719b0-kube-api-access-csmrs\") pod \"coredns-7c65d6cfc9-vrvtw\" (UID: \"30dbba5a-b1c6-4c44-9aa9-841a0ce719b0\") " pod="kube-system/coredns-7c65d6cfc9-vrvtw"
Jul 6 23:49:20.799041 containerd[1473]: time="2025-07-06T23:49:20.798958783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vrvtw,Uid:30dbba5a-b1c6-4c44-9aa9-841a0ce719b0,Namespace:kube-system,Attempt:0,}"
Jul 6 23:49:20.810328 containerd[1473]: time="2025-07-06T23:49:20.809768316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pvgbt,Uid:8fc1fa6c-68fa-420a-9e09-2cef99c2412d,Namespace:kube-system,Attempt:0,}"
Jul 6 23:49:22.680875 systemd-networkd[1382]: cilium_host: Link UP
Jul 6 23:49:22.681065 systemd-networkd[1382]: cilium_net: Link UP
Jul 6 23:49:22.681326 systemd-networkd[1382]: cilium_net: Gained carrier
Jul 6 23:49:22.681513 systemd-networkd[1382]: cilium_host: Gained carrier
Jul 6 23:49:22.806952 systemd-networkd[1382]: cilium_vxlan: Link UP
Jul 6 23:49:22.806963 systemd-networkd[1382]: cilium_vxlan: Gained carrier
Jul 6 23:49:23.153210 kernel: NET: Registered PF_ALG protocol family
Jul 6 23:49:23.563749 systemd-networkd[1382]: cilium_host: Gained IPv6LL
Jul 6 23:49:23.627401 systemd-networkd[1382]: cilium_net: Gained IPv6LL
Jul 6 23:49:24.050300 systemd-networkd[1382]: lxc_health: Link UP
Jul 6 23:49:24.060708 systemd-networkd[1382]: lxc_health: Gained carrier
Jul 6 23:49:24.204450 systemd-networkd[1382]: cilium_vxlan: Gained IPv6LL
Jul 6 23:49:24.428201 kernel: eth0: renamed from tmp28b32
Jul 6 23:49:24.451741 kernel: eth0: renamed from tmp3e329
Jul 6 23:49:24.458531 systemd-networkd[1382]: lxcee6897243af0: Link UP
Jul 6 23:49:24.468811 systemd-networkd[1382]: lxc5b898e29344d: Link UP
Jul 6 23:49:24.474344 systemd-networkd[1382]: lxcee6897243af0: Gained carrier
Jul 6 23:49:24.474947 systemd-networkd[1382]: lxc5b898e29344d: Gained carrier
Jul 6 23:49:25.355790 systemd-networkd[1382]: lxc_health: Gained IPv6LL
Jul 6 23:49:25.419759 kubelet[2693]: I0706 23:49:25.419565 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l2bw5" podStartSLOduration=11.719834031 podStartE2EDuration="21.419496819s" podCreationTimestamp="2025-07-06 23:49:04 +0000 UTC" firstStartedPulling="2025-07-06 23:49:05.495493412 +0000 UTC m=+6.597047251" lastFinishedPulling="2025-07-06 23:49:15.19515619 +0000 UTC m=+16.296710039" observedRunningTime="2025-07-06 23:49:21.233727011 +0000 UTC m=+22.335280970" watchObservedRunningTime="2025-07-06 23:49:25.419496819 +0000 UTC m=+26.521050668"
Jul 6 23:49:25.741461 systemd-networkd[1382]: lxcee6897243af0: Gained IPv6LL
Jul 6 23:49:25.867418 systemd-networkd[1382]: lxc5b898e29344d: Gained IPv6LL
Jul 6 23:49:29.113984 containerd[1473]: time="2025-07-06T23:49:29.113828029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:49:29.113984 containerd[1473]: time="2025-07-06T23:49:29.113936885Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:49:29.115082 containerd[1473]: time="2025-07-06T23:49:29.113951683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:49:29.115082 containerd[1473]: time="2025-07-06T23:49:29.114067131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:49:29.167726 systemd[1]: run-containerd-runc-k8s.io-28b32bcc9ea4ebee48e1e1ff9dab638f8e5a85f057735dd5bf6c8f3a93b492f5-runc.RXQU8W.mount: Deactivated successfully.
Jul 6 23:49:29.180764 systemd[1]: Started cri-containerd-28b32bcc9ea4ebee48e1e1ff9dab638f8e5a85f057735dd5bf6c8f3a93b492f5.scope - libcontainer container 28b32bcc9ea4ebee48e1e1ff9dab638f8e5a85f057735dd5bf6c8f3a93b492f5.
Jul 6 23:49:29.200243 containerd[1473]: time="2025-07-06T23:49:29.199947697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:49:29.200243 containerd[1473]: time="2025-07-06T23:49:29.200019713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:49:29.200243 containerd[1473]: time="2025-07-06T23:49:29.200040261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:49:29.200534 containerd[1473]: time="2025-07-06T23:49:29.200293100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:49:29.244304 systemd[1]: Started cri-containerd-3e3294bf723c91284877a22a5091fe3a633945e1617cb7454242efb0e82ce741.scope - libcontainer container 3e3294bf723c91284877a22a5091fe3a633945e1617cb7454242efb0e82ce741.
Jul 6 23:49:29.301259 containerd[1473]: time="2025-07-06T23:49:29.301080415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vrvtw,Uid:30dbba5a-b1c6-4c44-9aa9-841a0ce719b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"28b32bcc9ea4ebee48e1e1ff9dab638f8e5a85f057735dd5bf6c8f3a93b492f5\""
Jul 6 23:49:29.308367 containerd[1473]: time="2025-07-06T23:49:29.308248310Z" level=info msg="CreateContainer within sandbox \"28b32bcc9ea4ebee48e1e1ff9dab638f8e5a85f057735dd5bf6c8f3a93b492f5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:49:29.338203 containerd[1473]: time="2025-07-06T23:49:29.338031483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pvgbt,Uid:8fc1fa6c-68fa-420a-9e09-2cef99c2412d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e3294bf723c91284877a22a5091fe3a633945e1617cb7454242efb0e82ce741\""
Jul 6 23:49:29.343853 containerd[1473]: time="2025-07-06T23:49:29.343634306Z" level=info msg="CreateContainer within sandbox \"28b32bcc9ea4ebee48e1e1ff9dab638f8e5a85f057735dd5bf6c8f3a93b492f5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"656b793fff0bd93ab45dcabc05e05adc0a036ad79282537d6181d4fc17cbcd51\""
Jul 6 23:49:29.344649 containerd[1473]: time="2025-07-06T23:49:29.344565879Z" level=info msg="StartContainer for \"656b793fff0bd93ab45dcabc05e05adc0a036ad79282537d6181d4fc17cbcd51\""
Jul 6 23:49:29.346497 containerd[1473]: time="2025-07-06T23:49:29.346374632Z" level=info msg="CreateContainer within sandbox \"3e3294bf723c91284877a22a5091fe3a633945e1617cb7454242efb0e82ce741\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:49:29.402763 containerd[1473]: time="2025-07-06T23:49:29.400016366Z" level=info msg="CreateContainer within sandbox \"3e3294bf723c91284877a22a5091fe3a633945e1617cb7454242efb0e82ce741\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad1bdcb2ac89d84d952d28113102763343b581c65dc82c4c062499d3a5a851de\""
Jul 6 23:49:29.402763 containerd[1473]: time="2025-07-06T23:49:29.401628136Z" level=info msg="StartContainer for \"ad1bdcb2ac89d84d952d28113102763343b581c65dc82c4c062499d3a5a851de\""
Jul 6 23:49:29.430413 systemd[1]: Started cri-containerd-656b793fff0bd93ab45dcabc05e05adc0a036ad79282537d6181d4fc17cbcd51.scope - libcontainer container 656b793fff0bd93ab45dcabc05e05adc0a036ad79282537d6181d4fc17cbcd51.
Jul 6 23:49:29.461440 systemd[1]: Started cri-containerd-ad1bdcb2ac89d84d952d28113102763343b581c65dc82c4c062499d3a5a851de.scope - libcontainer container ad1bdcb2ac89d84d952d28113102763343b581c65dc82c4c062499d3a5a851de.
Jul 6 23:49:29.480035 containerd[1473]: time="2025-07-06T23:49:29.479769164Z" level=info msg="StartContainer for \"656b793fff0bd93ab45dcabc05e05adc0a036ad79282537d6181d4fc17cbcd51\" returns successfully"
Jul 6 23:49:29.519535 containerd[1473]: time="2025-07-06T23:49:29.517640384Z" level=info msg="StartContainer for \"ad1bdcb2ac89d84d952d28113102763343b581c65dc82c4c062499d3a5a851de\" returns successfully"
Jul 6 23:49:30.279893 kubelet[2693]: I0706 23:49:30.279788 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pvgbt" podStartSLOduration=26.279760399 podStartE2EDuration="26.279760399s" podCreationTimestamp="2025-07-06 23:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:30.277236855 +0000 UTC m=+31.378790724" watchObservedRunningTime="2025-07-06 23:49:30.279760399 +0000 UTC m=+31.381314248"
Jul 6 23:49:30.309518 kubelet[2693]: I0706 23:49:30.308529 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vrvtw" podStartSLOduration=26.30848123 podStartE2EDuration="26.30848123s" podCreationTimestamp="2025-07-06 23:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:30.308052839 +0000 UTC m=+31.409606718" watchObservedRunningTime="2025-07-06 23:49:30.30848123 +0000 UTC m=+31.410035129"
Jul 6 23:50:53.441201 systemd[1]: Started sshd@9-172.24.4.123:22-172.24.4.1:48790.service - OpenSSH per-connection server daemon (172.24.4.1:48790).
Jul 6 23:50:54.915976 sshd[4081]: Accepted publickey for core from 172.24.4.1 port 48790 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:50:54.921306 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:50:54.943322 systemd-logind[1457]: New session 12 of user core.
Jul 6 23:50:54.956546 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:50:55.704735 sshd[4083]: Connection closed by 172.24.4.1 port 48790
Jul 6 23:50:55.706717 sshd-session[4081]: pam_unix(sshd:session): session closed for user core
Jul 6 23:50:55.723553 systemd[1]: sshd@9-172.24.4.123:22-172.24.4.1:48790.service: Deactivated successfully.
Jul 6 23:50:55.732881 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:50:55.736092 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:50:55.739710 systemd-logind[1457]: Removed session 12.
Jul 6 23:51:00.736846 systemd[1]: Started sshd@10-172.24.4.123:22-172.24.4.1:41974.service - OpenSSH per-connection server daemon (172.24.4.1:41974).
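The pod_startup_latency_tracker entries report podStartSLOduration, essentially the time from podCreationTimestamp to the first observation of the pod running (for coredns-7c65d6cfc9-vrvtw, 23:49:04 to 23:49:30.308, about 26.31 s). A rough recomputation from the printed timestamps; the parsing helper is an assumption about this journal's timestamp format, and kubelet's own figure can differ slightly since it is computed against its internal clock:

```python
from datetime import datetime

def _parse(ts):
    """Parse a kubelet journal timestamp like
    '2025-07-06 23:49:30.308052839 +0000 UTC' (fractional part optional,
    truncated to microseconds because strptime's %f takes at most 6 digits)."""
    ts = ts.replace(" UTC", "")
    date, clock, tz = ts.split(" ")
    if "." in clock:
        hms, frac = clock.split(".")
        clock = hms + "." + frac[:6].ljust(6, "0")
    else:
        clock += ".000000"
    return datetime.strptime(f"{date} {clock} {tz}", "%Y-%m-%d %H:%M:%S.%f %z")

def startup_duration(created, observed_running):
    """Approximate podStartSLOduration in seconds."""
    return (_parse(observed_running) - _parse(created)).total_seconds()

secs = startup_duration("2025-07-06 23:49:04 +0000 UTC",
                        "2025-07-06 23:49:30.308052839 +0000 UTC")
```

For the cilium-l2bw5 pod above, SLO duration (11.72 s) is shorter than E2E duration (21.42 s) because image-pull time between firstStartedPulling and lastFinishedPulling is excluded from the SLO figure.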
Jul 6 23:51:02.011342 sshd[4101]: Accepted publickey for core from 172.24.4.1 port 41974 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:02.015916 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:02.029181 systemd-logind[1457]: New session 13 of user core.
Jul 6 23:51:02.037009 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:51:02.663788 sshd[4103]: Connection closed by 172.24.4.1 port 41974
Jul 6 23:51:02.664489 sshd-session[4101]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:02.672962 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:51:02.673946 systemd[1]: sshd@10-172.24.4.123:22-172.24.4.1:41974.service: Deactivated successfully.
Jul 6 23:51:02.681588 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:51:02.687403 systemd-logind[1457]: Removed session 13.
Jul 6 23:51:07.700221 systemd[1]: Started sshd@11-172.24.4.123:22-172.24.4.1:58090.service - OpenSSH per-connection server daemon (172.24.4.1:58090).
Jul 6 23:51:09.039836 sshd[4119]: Accepted publickey for core from 172.24.4.1 port 58090 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:09.045293 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:09.065273 systemd-logind[1457]: New session 14 of user core.
Jul 6 23:51:09.073674 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:51:09.642605 sshd[4121]: Connection closed by 172.24.4.1 port 58090
Jul 6 23:51:09.644646 sshd-session[4119]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:09.651682 systemd[1]: sshd@11-172.24.4.123:22-172.24.4.1:58090.service: Deactivated successfully.
Jul 6 23:51:09.656734 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:51:09.660847 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:51:09.663720 systemd-logind[1457]: Removed session 14.
Jul 6 23:51:14.669763 systemd[1]: Started sshd@12-172.24.4.123:22-172.24.4.1:54306.service - OpenSSH per-connection server daemon (172.24.4.1:54306).
Jul 6 23:51:15.986209 sshd[4134]: Accepted publickey for core from 172.24.4.1 port 54306 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:15.990988 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:16.004204 systemd-logind[1457]: New session 15 of user core.
Jul 6 23:51:16.015468 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:51:16.762820 sshd[4136]: Connection closed by 172.24.4.1 port 54306
Jul 6 23:51:16.762477 sshd-session[4134]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:16.783437 systemd[1]: sshd@12-172.24.4.123:22-172.24.4.1:54306.service: Deactivated successfully.
Jul 6 23:51:16.789871 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:51:16.794531 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:51:16.802813 systemd[1]: Started sshd@13-172.24.4.123:22-172.24.4.1:54322.service - OpenSSH per-connection server daemon (172.24.4.1:54322).
Jul 6 23:51:16.806936 systemd-logind[1457]: Removed session 15.
Jul 6 23:51:17.975228 sshd[4148]: Accepted publickey for core from 172.24.4.1 port 54322 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:17.976510 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:17.989686 systemd-logind[1457]: New session 16 of user core.
Jul 6 23:51:17.999543 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:51:18.750176 sshd[4151]: Connection closed by 172.24.4.1 port 54322
Jul 6 23:51:18.752013 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:18.766558 systemd[1]: sshd@13-172.24.4.123:22-172.24.4.1:54322.service: Deactivated successfully.
Jul 6 23:51:18.773688 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:51:18.780315 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:51:18.792849 systemd[1]: Started sshd@14-172.24.4.123:22-172.24.4.1:54332.service - OpenSSH per-connection server daemon (172.24.4.1:54332).
Jul 6 23:51:18.797634 systemd-logind[1457]: Removed session 16.
Jul 6 23:51:19.998336 sshd[4160]: Accepted publickey for core from 172.24.4.1 port 54332 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:20.002265 sshd-session[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:20.026322 systemd-logind[1457]: New session 17 of user core.
Jul 6 23:51:20.032652 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:51:20.631611 sshd[4163]: Connection closed by 172.24.4.1 port 54332
Jul 6 23:51:20.633118 sshd-session[4160]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:20.640708 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:51:20.641333 systemd[1]: sshd@14-172.24.4.123:22-172.24.4.1:54332.service: Deactivated successfully.
Jul 6 23:51:20.645692 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:51:20.652354 systemd-logind[1457]: Removed session 17.
Jul 6 23:51:25.670900 systemd[1]: Started sshd@15-172.24.4.123:22-172.24.4.1:43488.service - OpenSSH per-connection server daemon (172.24.4.1:43488).
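Each SSH block in this stretch follows the same pattern: Accepted publickey → pam_unix session opened → session-N.scope started, then the mirror image on close. The Accepted lines can be pulled apart with a small regex; the pattern below is a sketch tailored to exactly this log shape, not a general sshd grammar:

```python
import re

# Matches e.g.:
#   Accepted publickey for core from 172.24.4.1 port 54332 ssh2: RSA SHA256:...
ACCEPTED = re.compile(
    r"Accepted (?P<method>\S+) for (?P<user>\S+) "
    r"from (?P<ip>\S+) port (?P<port>\d+) ssh2(?:: (?P<key>.*))?"
)

def parse_accept(line):
    """Return auth method, user, source address/port and key fingerprint
    from an sshd 'Accepted' entry, or None if the line doesn't match."""
    m = ACCEPTED.search(line)
    return m.groupdict() if m else None

info = parse_accept(
    "sshd[4160]: Accepted publickey for core from 172.24.4.1 port 54332 "
    "ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ"
)
```

Grouping matches by the key fingerprint would confirm what the log already suggests: every session here comes from the same key and the same source address.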
Jul 6 23:51:27.123571 sshd[4176]: Accepted publickey for core from 172.24.4.1 port 43488 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:27.127315 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:27.142048 systemd-logind[1457]: New session 18 of user core.
Jul 6 23:51:27.149548 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:51:27.922571 sshd[4178]: Connection closed by 172.24.4.1 port 43488
Jul 6 23:51:27.924603 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:27.935196 systemd[1]: sshd@15-172.24.4.123:22-172.24.4.1:43488.service: Deactivated successfully.
Jul 6 23:51:27.941443 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:51:27.945882 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:51:27.948727 systemd-logind[1457]: Removed session 18.
Jul 6 23:51:32.965965 systemd[1]: Started sshd@16-172.24.4.123:22-172.24.4.1:43494.service - OpenSSH per-connection server daemon (172.24.4.1:43494).
Jul 6 23:51:34.095974 sshd[4190]: Accepted publickey for core from 172.24.4.1 port 43494 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:34.099876 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:34.112270 systemd-logind[1457]: New session 19 of user core.
Jul 6 23:51:34.121473 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:51:34.932185 sshd[4192]: Connection closed by 172.24.4.1 port 43494
Jul 6 23:51:34.934582 sshd-session[4190]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:34.959385 systemd[1]: sshd@16-172.24.4.123:22-172.24.4.1:43494.service: Deactivated successfully.
Jul 6 23:51:34.967847 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:51:34.973201 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:51:34.985565 systemd[1]: Started sshd@17-172.24.4.123:22-172.24.4.1:40996.service - OpenSSH per-connection server daemon (172.24.4.1:40996).
Jul 6 23:51:34.988548 systemd-logind[1457]: Removed session 19.
Jul 6 23:51:36.112218 sshd[4203]: Accepted publickey for core from 172.24.4.1 port 40996 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:36.114729 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:36.128326 systemd-logind[1457]: New session 20 of user core.
Jul 6 23:51:36.141669 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:51:36.997626 sshd[4209]: Connection closed by 172.24.4.1 port 40996
Jul 6 23:51:36.999174 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:37.021997 systemd[1]: sshd@17-172.24.4.123:22-172.24.4.1:40996.service: Deactivated successfully.
Jul 6 23:51:37.027211 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:51:37.031755 systemd-logind[1457]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:51:37.040875 systemd[1]: Started sshd@18-172.24.4.123:22-172.24.4.1:41010.service - OpenSSH per-connection server daemon (172.24.4.1:41010).
Jul 6 23:51:37.044042 systemd-logind[1457]: Removed session 20.
Jul 6 23:51:38.630788 sshd[4218]: Accepted publickey for core from 172.24.4.1 port 41010 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:38.633809 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:38.650216 systemd-logind[1457]: New session 21 of user core.
Jul 6 23:51:38.657976 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:51:41.869201 sshd[4221]: Connection closed by 172.24.4.1 port 41010
Jul 6 23:51:41.870892 sshd-session[4218]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:41.896258 systemd[1]: sshd@18-172.24.4.123:22-172.24.4.1:41010.service: Deactivated successfully.
Jul 6 23:51:41.903360 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:51:41.908049 systemd-logind[1457]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:51:41.916034 systemd[1]: Started sshd@19-172.24.4.123:22-172.24.4.1:41012.service - OpenSSH per-connection server daemon (172.24.4.1:41012).
Jul 6 23:51:41.921361 systemd-logind[1457]: Removed session 21.
Jul 6 23:51:43.307281 sshd[4238]: Accepted publickey for core from 172.24.4.1 port 41012 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:43.311422 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:43.323255 systemd-logind[1457]: New session 22 of user core.
Jul 6 23:51:43.332493 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:51:44.321995 sshd[4241]: Connection closed by 172.24.4.1 port 41012
Jul 6 23:51:44.325640 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:44.344962 systemd[1]: sshd@19-172.24.4.123:22-172.24.4.1:41012.service: Deactivated successfully.
Jul 6 23:51:44.352371 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:51:44.357453 systemd-logind[1457]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:51:44.367874 systemd[1]: Started sshd@20-172.24.4.123:22-172.24.4.1:52856.service - OpenSSH per-connection server daemon (172.24.4.1:52856).
Jul 6 23:51:44.392057 systemd-logind[1457]: Removed session 22.
Jul 6 23:51:45.594276 sshd[4250]: Accepted publickey for core from 172.24.4.1 port 52856 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:45.599109 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:45.619839 systemd-logind[1457]: New session 23 of user core.
Jul 6 23:51:45.630673 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:51:46.367464 sshd[4253]: Connection closed by 172.24.4.1 port 52856
Jul 6 23:51:46.371488 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:46.380087 systemd[1]: sshd@20-172.24.4.123:22-172.24.4.1:52856.service: Deactivated successfully.
Jul 6 23:51:46.390220 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:51:46.396208 systemd-logind[1457]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:51:46.399310 systemd-logind[1457]: Removed session 23.
Jul 6 23:51:51.415227 systemd[1]: Started sshd@21-172.24.4.123:22-172.24.4.1:52866.service - OpenSSH per-connection server daemon (172.24.4.1:52866).
Jul 6 23:51:52.804998 sshd[4269]: Accepted publickey for core from 172.24.4.1 port 52866 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:52.809369 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:52.829712 systemd-logind[1457]: New session 24 of user core.
Jul 6 23:51:52.840177 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:51:53.713776 sshd[4271]: Connection closed by 172.24.4.1 port 52866
Jul 6 23:51:53.715801 sshd-session[4269]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:53.724757 systemd[1]: sshd@21-172.24.4.123:22-172.24.4.1:52866.service: Deactivated successfully.
Jul 6 23:51:53.730253 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:51:53.735979 systemd-logind[1457]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:51:53.739758 systemd-logind[1457]: Removed session 24.
Jul 6 23:51:58.747738 systemd[1]: Started sshd@22-172.24.4.123:22-172.24.4.1:54000.service - OpenSSH per-connection server daemon (172.24.4.1:54000).
Jul 6 23:51:59.944236 sshd[4283]: Accepted publickey for core from 172.24.4.1 port 54000 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:51:59.949204 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:59.968115 systemd-logind[1457]: New session 25 of user core.
Jul 6 23:51:59.979480 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:52:00.758719 sshd[4287]: Connection closed by 172.24.4.1 port 54000
Jul 6 23:52:00.761757 sshd-session[4283]: pam_unix(sshd:session): session closed for user core
Jul 6 23:52:00.772295 systemd[1]: sshd@22-172.24.4.123:22-172.24.4.1:54000.service: Deactivated successfully.
Jul 6 23:52:00.780646 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:52:00.785680 systemd-logind[1457]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:52:00.790702 systemd-logind[1457]: Removed session 25.
Jul 6 23:52:05.792825 systemd[1]: Started sshd@23-172.24.4.123:22-172.24.4.1:48376.service - OpenSSH per-connection server daemon (172.24.4.1:48376).
Jul 6 23:52:07.097515 sshd[4299]: Accepted publickey for core from 172.24.4.1 port 48376 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:52:07.101699 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:52:07.115422 systemd-logind[1457]: New session 26 of user core.
Jul 6 23:52:07.126482 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 6 23:52:08.015554 sshd[4303]: Connection closed by 172.24.4.1 port 48376
Jul 6 23:52:08.018093 sshd-session[4299]: pam_unix(sshd:session): session closed for user core
Jul 6 23:52:08.038219 systemd[1]: sshd@23-172.24.4.123:22-172.24.4.1:48376.service: Deactivated successfully.
Jul 6 23:52:08.043028 systemd[1]: session-26.scope: Deactivated successfully.
Jul 6 23:52:08.046341 systemd-logind[1457]: Session 26 logged out. Waiting for processes to exit.
Jul 6 23:52:08.057803 systemd[1]: Started sshd@24-172.24.4.123:22-172.24.4.1:48386.service - OpenSSH per-connection server daemon (172.24.4.1:48386).
Jul 6 23:52:08.060968 systemd-logind[1457]: Removed session 26.
Jul 6 23:52:09.316446 sshd[4314]: Accepted publickey for core from 172.24.4.1 port 48386 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ
Jul 6 23:52:09.319698 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:52:09.335250 systemd-logind[1457]: New session 27 of user core.
Jul 6 23:52:09.346531 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 6 23:52:11.353154 containerd[1473]: time="2025-07-06T23:52:11.352917561Z" level=info msg="StopContainer for \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\" with timeout 30 (s)"
Jul 6 23:52:11.355961 containerd[1473]: time="2025-07-06T23:52:11.355002373Z" level=info msg="Stop container \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\" with signal terminated"
Jul 6 23:52:11.388753 systemd[1]: run-containerd-runc-k8s.io-32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263-runc.vczI3R.mount: Deactivated successfully.
Jul 6 23:52:11.394065 systemd[1]: cri-containerd-cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164.scope: Deactivated successfully.
Jul 6 23:52:11.398593 systemd[1]: cri-containerd-cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164.scope: Consumed 1.028s CPU time, 26.3M memory peak, 4K written to disk.
Jul 6 23:52:11.420801 containerd[1473]: time="2025-07-06T23:52:11.420724655Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:52:11.432429 containerd[1473]: time="2025-07-06T23:52:11.432198384Z" level=info msg="StopContainer for \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\" with timeout 2 (s)"
Jul 6 23:52:11.433183 containerd[1473]: time="2025-07-06T23:52:11.433045684Z" level=info msg="Stop container \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\" with signal terminated"
Jul 6 23:52:11.453437 systemd-networkd[1382]: lxc_health: Link DOWN
Jul 6 23:52:11.453455 systemd-networkd[1382]: lxc_health: Lost carrier
Jul 6 23:52:11.484967 systemd[1]: cri-containerd-32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263.scope: Deactivated successfully.
Jul 6 23:52:11.486195 systemd[1]: cri-containerd-32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263.scope: Consumed 9.827s CPU time, 125M memory peak, 120K read from disk, 13.3M written to disk.
Jul 6 23:52:11.491326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164-rootfs.mount: Deactivated successfully.
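The "Consumed ..." lines are systemd's per-unit resource accounting, emitted when a scope with accounting enabled deactivates. A sketch that turns such a message back into numbers; treating K/M/G as binary prefixes (KiB/MiB/GiB) is an assumption about how systemd abbreviates the values, and the helper name is illustrative:

```python
import re

def parse_consumed(msg):
    """Extract CPU seconds and peak memory from a systemd scope summary like
    'Consumed 9.827s CPU time, 125M memory peak, 120K read from disk'."""
    out = {}
    cpu = re.search(r"([\d.]+)s CPU time", msg)
    if cpu:
        out["cpu_seconds"] = float(cpu.group(1))
    mem = re.search(r"([\d.]+)([KMGT]) memory peak", msg)
    if mem:
        scale = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}
        out["memory_peak_bytes"] = int(float(mem.group(1)) * scale[mem.group(2)])
    return out

stats = parse_consumed("Consumed 9.827s CPU time, 125M memory peak, "
                       "120K read from disk, 13.3M written to disk.")
```

Applied across the log, this makes the contrast explicit: the long-running cilium-agent scope consumed 9.827 s of CPU, while the one-shot clean-up container's scope consumed about 1 s.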
Jul 6 23:52:11.505749 containerd[1473]: time="2025-07-06T23:52:11.505412354Z" level=info msg="shim disconnected" id=cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164 namespace=k8s.io
Jul 6 23:52:11.505749 containerd[1473]: time="2025-07-06T23:52:11.505538811Z" level=warning msg="cleaning up after shim disconnected" id=cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164 namespace=k8s.io
Jul 6 23:52:11.505749 containerd[1473]: time="2025-07-06T23:52:11.505559189Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:52:11.523362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263-rootfs.mount: Deactivated successfully.
Jul 6 23:52:11.541898 containerd[1473]: time="2025-07-06T23:52:11.541427714Z" level=info msg="shim disconnected" id=32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263 namespace=k8s.io
Jul 6 23:52:11.541898 containerd[1473]: time="2025-07-06T23:52:11.541488608Z" level=warning msg="cleaning up after shim disconnected" id=32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263 namespace=k8s.io
Jul 6 23:52:11.541898 containerd[1473]: time="2025-07-06T23:52:11.541510699Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:52:11.577113 containerd[1473]: time="2025-07-06T23:52:11.577037411Z" level=info msg="StopContainer for \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\" returns successfully"
Jul 6 23:52:11.578150 containerd[1473]: time="2025-07-06T23:52:11.578071161Z" level=info msg="StopPodSandbox for \"ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3\""
Jul 6 23:52:11.578261 containerd[1473]: time="2025-07-06T23:52:11.578177471Z" level=info msg="Container to stop \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:52:11.581967 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3-shm.mount: Deactivated successfully.
Jul 6 23:52:11.590734 containerd[1473]: time="2025-07-06T23:52:11.590503609Z" level=info msg="StopContainer for \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\" returns successfully"
Jul 6 23:52:11.592044 containerd[1473]: time="2025-07-06T23:52:11.591765678Z" level=info msg="StopPodSandbox for \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\""
Jul 6 23:52:11.592044 containerd[1473]: time="2025-07-06T23:52:11.591893428Z" level=info msg="Container to stop \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:52:11.592044 containerd[1473]: time="2025-07-06T23:52:11.591943882Z" level=info msg="Container to stop \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:52:11.592044 containerd[1473]: time="2025-07-06T23:52:11.591956386Z" level=info msg="Container to stop \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:52:11.592044 containerd[1473]: time="2025-07-06T23:52:11.591970663Z" level=info msg="Container to stop \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:52:11.592044 containerd[1473]: time="2025-07-06T23:52:11.592009536Z" level=info msg="Container to stop \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:52:11.598278 systemd[1]: cri-containerd-ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3.scope: Deactivated successfully.
Jul 6 23:52:11.606720 systemd[1]: cri-containerd-61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb.scope: Deactivated successfully.
Jul 6 23:52:11.652036 containerd[1473]: time="2025-07-06T23:52:11.651943325Z" level=info msg="shim disconnected" id=ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3 namespace=k8s.io
Jul 6 23:52:11.652315 containerd[1473]: time="2025-07-06T23:52:11.652097555Z" level=warning msg="cleaning up after shim disconnected" id=ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3 namespace=k8s.io
Jul 6 23:52:11.652544 containerd[1473]: time="2025-07-06T23:52:11.652385155Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:52:11.652875 containerd[1473]: time="2025-07-06T23:52:11.652745050Z" level=info msg="shim disconnected" id=61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb namespace=k8s.io
Jul 6 23:52:11.652875 containerd[1473]: time="2025-07-06T23:52:11.652828236Z" level=warning msg="cleaning up after shim disconnected" id=61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb namespace=k8s.io
Jul 6 23:52:11.652875 containerd[1473]: time="2025-07-06T23:52:11.652855197Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:52:11.675308 containerd[1473]: time="2025-07-06T23:52:11.675048690Z" level=warning msg="cleanup warnings time=\"2025-07-06T23:52:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 6 23:52:11.677752 containerd[1473]: time="2025-07-06T23:52:11.677683465Z" level=info msg="TearDown network for sandbox \"ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3\" successfully"
Jul 6 23:52:11.677752 containerd[1473]: time="2025-07-06T23:52:11.677731635Z" level=info msg="StopPodSandbox for \"ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3\" returns successfully"
Jul 6 23:52:11.699060 containerd[1473]: time="2025-07-06T23:52:11.698981458Z" level=info msg="TearDown network for sandbox \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" successfully"
Jul 6 23:52:11.699060 containerd[1473]: time="2025-07-06T23:52:11.699049576Z" level=info msg="StopPodSandbox for \"61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb\" returns successfully"
Jul 6 23:52:11.845329 kubelet[2693]: I0706 23:52:11.844899 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37df6520-3416-48d1-8b5c-7d00ab869118-clustermesh-secrets\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.846865 kubelet[2693]: I0706 23:52:11.845505 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bmmg\" (UniqueName: \"kubernetes.io/projected/37df6520-3416-48d1-8b5c-7d00ab869118-kube-api-access-6bmmg\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.846865 kubelet[2693]: I0706 23:52:11.845799 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-host-proc-sys-net\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.846865 kubelet[2693]: I0706 23:52:11.845982 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-config-path\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.846865 kubelet[2693]: I0706 23:52:11.846083 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/456ff60a-41cf-482e-a5ff-2d3b34eec312-cilium-config-path\") pod \"456ff60a-41cf-482e-a5ff-2d3b34eec312\" (UID: \"456ff60a-41cf-482e-a5ff-2d3b34eec312\") "
Jul 6 23:52:11.846865 kubelet[2693]: I0706 23:52:11.846232 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-host-proc-sys-kernel\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.846865 kubelet[2693]: I0706 23:52:11.846332 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-hostproc\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.849655 kubelet[2693]: I0706 23:52:11.846479 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-xtables-lock\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.849655 kubelet[2693]: I0706 23:52:11.846570 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-lib-modules\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.849655 kubelet[2693]: I0706 23:52:11.846689 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-etc-cni-netd\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.849655 kubelet[2693]: I0706 23:52:11.846756 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-run\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.849655 kubelet[2693]: I0706 23:52:11.846822 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37df6520-3416-48d1-8b5c-7d00ab869118-hubble-tls\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.849655 kubelet[2693]: I0706 23:52:11.846878 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cni-path\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.850394 kubelet[2693]: I0706 23:52:11.847048 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-cgroup\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.850394 kubelet[2693]: I0706 23:52:11.847287 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-bpf-maps\") pod \"37df6520-3416-48d1-8b5c-7d00ab869118\" (UID: \"37df6520-3416-48d1-8b5c-7d00ab869118\") "
Jul 6 23:52:11.850394 kubelet[2693]: I0706 23:52:11.847487 2693 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqb5s\" (UniqueName: \"kubernetes.io/projected/456ff60a-41cf-482e-a5ff-2d3b34eec312-kube-api-access-cqb5s\") pod \"456ff60a-41cf-482e-a5ff-2d3b34eec312\" (UID: \"456ff60a-41cf-482e-a5ff-2d3b34eec312\") "
Jul 6 23:52:11.852190 kubelet[2693]: I0706 23:52:11.850687 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:52:11.853784 kubelet[2693]: I0706 23:52:11.853695 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:52:11.854799 kubelet[2693]: I0706 23:52:11.854619 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:52:11.860871 kubelet[2693]: I0706 23:52:11.860698 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cni-path" (OuterVolumeSpecName: "cni-path") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:52:11.860871 kubelet[2693]: I0706 23:52:11.860794 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:52:11.860871 kubelet[2693]: I0706 23:52:11.860840 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:52:11.862611 kubelet[2693]: I0706 23:52:11.860730 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:52:11.867611 kubelet[2693]: I0706 23:52:11.867502 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:52:11.867831 kubelet[2693]: I0706 23:52:11.867686 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:52:11.869311 kubelet[2693]: I0706 23:52:11.869232 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-hostproc" (OuterVolumeSpecName: "hostproc") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:52:11.878091 kubelet[2693]: I0706 23:52:11.877738 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 6 23:52:11.880244 kubelet[2693]: I0706 23:52:11.879465 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37df6520-3416-48d1-8b5c-7d00ab869118-kube-api-access-6bmmg" (OuterVolumeSpecName: "kube-api-access-6bmmg") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "kube-api-access-6bmmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 6 23:52:11.880244 kubelet[2693]: I0706 23:52:11.879692 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456ff60a-41cf-482e-a5ff-2d3b34eec312-kube-api-access-cqb5s" (OuterVolumeSpecName: "kube-api-access-cqb5s") pod "456ff60a-41cf-482e-a5ff-2d3b34eec312" (UID: "456ff60a-41cf-482e-a5ff-2d3b34eec312"). InnerVolumeSpecName "kube-api-access-cqb5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 6 23:52:11.880244 kubelet[2693]: I0706 23:52:11.879836 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37df6520-3416-48d1-8b5c-7d00ab869118-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 6 23:52:11.880244 kubelet[2693]: I0706 23:52:11.880159 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37df6520-3416-48d1-8b5c-7d00ab869118-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "37df6520-3416-48d1-8b5c-7d00ab869118" (UID: "37df6520-3416-48d1-8b5c-7d00ab869118"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 6 23:52:11.882512 kubelet[2693]: I0706 23:52:11.882371 2693 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/456ff60a-41cf-482e-a5ff-2d3b34eec312-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "456ff60a-41cf-482e-a5ff-2d3b34eec312" (UID: "456ff60a-41cf-482e-a5ff-2d3b34eec312"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 6 23:52:11.948927 kubelet[2693]: I0706 23:52:11.948734 2693 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-xtables-lock\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.948927 kubelet[2693]: I0706 23:52:11.948867 2693 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-lib-modules\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.949925 kubelet[2693]: I0706 23:52:11.949451 2693 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-etc-cni-netd\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.949925 kubelet[2693]: I0706 23:52:11.949539 2693 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-run\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.949925 kubelet[2693]: I0706 23:52:11.949576 2693 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/37df6520-3416-48d1-8b5c-7d00ab869118-hubble-tls\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.949925 kubelet[2693]: I0706 23:52:11.949643 2693 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cni-path\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.949925 kubelet[2693]: I0706 23:52:11.949726 2693 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-cgroup\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.949925 kubelet[2693]: I0706 23:52:11.949757 2693 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-bpf-maps\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.951087 kubelet[2693]: I0706 23:52:11.950576 2693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqb5s\" (UniqueName: \"kubernetes.io/projected/456ff60a-41cf-482e-a5ff-2d3b34eec312-kube-api-access-cqb5s\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.951087 kubelet[2693]: I0706 23:52:11.950827 2693 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/37df6520-3416-48d1-8b5c-7d00ab869118-clustermesh-secrets\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.951087 kubelet[2693]: I0706 23:52:11.950879 2693 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bmmg\" (UniqueName: \"kubernetes.io/projected/37df6520-3416-48d1-8b5c-7d00ab869118-kube-api-access-6bmmg\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.951087 kubelet[2693]: I0706 23:52:11.950941 2693 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-host-proc-sys-net\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.951087 kubelet[2693]: I0706 23:52:11.950969 2693 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37df6520-3416-48d1-8b5c-7d00ab869118-cilium-config-path\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.951087 kubelet[2693]: I0706 23:52:11.950994 2693 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/456ff60a-41cf-482e-a5ff-2d3b34eec312-cilium-config-path\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.951087 kubelet[2693]: I0706 23:52:11.951018 2693 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-host-proc-sys-kernel\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:11.951801 kubelet[2693]: I0706 23:52:11.951042 2693 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/37df6520-3416-48d1-8b5c-7d00ab869118-hostproc\") on node \"ci-4230-2-1-3-a5860ac047.novalocal\" DevicePath \"\""
Jul 6 23:52:12.064916 kubelet[2693]: I0706 23:52:12.061762 2693 scope.go:117] "RemoveContainer" containerID="32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263"
Jul 6 23:52:12.078698 containerd[1473]: time="2025-07-06T23:52:12.078444468Z" level=info msg="RemoveContainer for \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\""
Jul 6 23:52:12.095370 systemd[1]: Removed slice kubepods-burstable-pod37df6520_3416_48d1_8b5c_7d00ab869118.slice - libcontainer container kubepods-burstable-pod37df6520_3416_48d1_8b5c_7d00ab869118.slice.
Jul 6 23:52:12.095678 systemd[1]: kubepods-burstable-pod37df6520_3416_48d1_8b5c_7d00ab869118.slice: Consumed 9.924s CPU time, 125.4M memory peak, 120K read from disk, 13.3M written to disk.
Jul 6 23:52:12.120179 systemd[1]: Removed slice kubepods-besteffort-pod456ff60a_41cf_482e_a5ff_2d3b34eec312.slice - libcontainer container kubepods-besteffort-pod456ff60a_41cf_482e_a5ff_2d3b34eec312.slice.
Jul 6 23:52:12.120310 systemd[1]: kubepods-besteffort-pod456ff60a_41cf_482e_a5ff_2d3b34eec312.slice: Consumed 1.061s CPU time, 26.5M memory peak, 4K written to disk.
Jul 6 23:52:12.126819 containerd[1473]: time="2025-07-06T23:52:12.126545828Z" level=info msg="RemoveContainer for \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\" returns successfully"
Jul 6 23:52:12.129371 kubelet[2693]: I0706 23:52:12.129074 2693 scope.go:117] "RemoveContainer" containerID="509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210"
Jul 6 23:52:12.132037 containerd[1473]: time="2025-07-06T23:52:12.131964306Z" level=info msg="RemoveContainer for \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\""
Jul 6 23:52:12.142748 containerd[1473]: time="2025-07-06T23:52:12.142537115Z" level=info msg="RemoveContainer for \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\" returns successfully"
Jul 6 23:52:12.143282 kubelet[2693]: I0706 23:52:12.143159 2693 scope.go:117] "RemoveContainer" containerID="e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65"
Jul 6 23:52:12.145666 containerd[1473]: time="2025-07-06T23:52:12.145592659Z" level=info msg="RemoveContainer for \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\""
Jul 6 23:52:12.153455 containerd[1473]: time="2025-07-06T23:52:12.153373489Z" level=info msg="RemoveContainer for \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\" returns successfully"
Jul 6 23:52:12.154050 kubelet[2693]: I0706 23:52:12.153648 2693 scope.go:117] "RemoveContainer" containerID="c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3"
Jul 6 23:52:12.155987 containerd[1473]: time="2025-07-06T23:52:12.155883268Z" level=info msg="RemoveContainer for \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\""
Jul 6 23:52:12.165663 containerd[1473]: time="2025-07-06T23:52:12.163776970Z" level=info msg="RemoveContainer for \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\" returns successfully"
Jul 6 23:52:12.166078 kubelet[2693]: I0706 23:52:12.166053 2693 scope.go:117] "RemoveContainer" containerID="40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2"
Jul 6 23:52:12.168650 containerd[1473]: time="2025-07-06T23:52:12.168600130Z" level=info msg="RemoveContainer for \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\""
Jul 6 23:52:12.180391 containerd[1473]: time="2025-07-06T23:52:12.180334710Z" level=info msg="RemoveContainer for \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\" returns successfully"
Jul 6 23:52:12.180867 kubelet[2693]: I0706 23:52:12.180618 2693 scope.go:117] "RemoveContainer" containerID="32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263"
Jul 6 23:52:12.181165 containerd[1473]: time="2025-07-06T23:52:12.180955976Z" level=error msg="ContainerStatus for \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\": not found"
Jul 6 23:52:12.181390 kubelet[2693]: E0706 23:52:12.181258 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\": not found" containerID="32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263"
Jul 6 23:52:12.181602 kubelet[2693]: I0706 23:52:12.181330 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263"} err="failed to get container status \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\": rpc error: code = NotFound desc = an error occurred when try to find container \"32c2aea25edb25dfd49003274048ea1fa0d6e6cb62138c3be46fff1ae215d263\": not found"
Jul 6 23:52:12.181602 kubelet[2693]: I0706 23:52:12.181514 2693 scope.go:117] "RemoveContainer" containerID="509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210"
Jul 6 23:52:12.181825 containerd[1473]: time="2025-07-06T23:52:12.181787797Z" level=error msg="ContainerStatus for \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\": not found"
Jul 6 23:52:12.182335 kubelet[2693]: E0706 23:52:12.181955 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\": not found" containerID="509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210"
Jul 6 23:52:12.182335 kubelet[2693]: I0706 23:52:12.182051 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210"} err="failed to get container status \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\": rpc error: code = NotFound desc = an error occurred when try to find container \"509f95421fd92d1ab397e02dac5a05ffd7a4ba03164b8449845ac9eea8887210\": not found"
Jul 6 23:52:12.182335 kubelet[2693]: I0706 23:52:12.182142 2693 scope.go:117] "RemoveContainer" containerID="e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65"
Jul 6 23:52:12.183009 containerd[1473]: time="2025-07-06T23:52:12.182790829Z" level=error msg="ContainerStatus for \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\": not found"
Jul 6 23:52:12.183510 kubelet[2693]: E0706 23:52:12.183285 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\": not found" containerID="e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65"
Jul 6 23:52:12.183510 kubelet[2693]: I0706 23:52:12.183321 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65"} err="failed to get container status \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0872a3e0c3a3440f51edb7e453411cad305ebec6710cb4a63ee6ae4c39a2a65\": not found"
Jul 6 23:52:12.183510 kubelet[2693]: I0706 23:52:12.183340 2693 scope.go:117] "RemoveContainer" containerID="c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3"
Jul 6 23:52:12.184487 containerd[1473]: time="2025-07-06T23:52:12.184424193Z" level=error msg="ContainerStatus for \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\": not found"
Jul 6 23:52:12.184878 kubelet[2693]: E0706 23:52:12.184736 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\": not found" containerID="c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3"
Jul 6 23:52:12.184878 kubelet[2693]: I0706 23:52:12.184764 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3"} err="failed to get container status \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3cabaff7ee4d45e109ba2103a1860cdb3b475c46c6101c47424d668cf9fa0b3\": not found"
Jul 6 23:52:12.184878 kubelet[2693]: I0706 23:52:12.184803 2693 scope.go:117] "RemoveContainer" containerID="40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2"
Jul 6 23:52:12.185569 containerd[1473]: time="2025-07-06T23:52:12.185456831Z" level=error msg="ContainerStatus for \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\": not found"
Jul 6 23:52:12.185955 kubelet[2693]: E0706 23:52:12.185898 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\": not found" containerID="40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2"
Jul 6 23:52:12.186018 kubelet[2693]: I0706 23:52:12.185961 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2"} err="failed to get container status \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\": rpc error: code = NotFound desc = an error occurred when try to find container \"40add816dde0d148fc77d920721bb79617dca72de79b014fd142dd0f59b80de2\": not found"
Jul 6 23:52:12.186018 kubelet[2693]: I0706 23:52:12.185997 2693 scope.go:117] "RemoveContainer" containerID="cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164"
Jul 6 23:52:12.189919 containerd[1473]: time="2025-07-06T23:52:12.188694105Z" level=info msg="RemoveContainer for \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\""
Jul 6 23:52:12.193260 containerd[1473]: time="2025-07-06T23:52:12.193230789Z" level=info msg="RemoveContainer for \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\" returns successfully"
Jul 6 23:52:12.193647 kubelet[2693]: I0706 23:52:12.193624 2693 scope.go:117] "RemoveContainer" containerID="cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164"
Jul 6 23:52:12.194222 containerd[1473]: time="2025-07-06T23:52:12.194179980Z" level=error msg="ContainerStatus for \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\": not found"
Jul 6 23:52:12.194539 kubelet[2693]: E0706 23:52:12.194418 2693 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\": not found" containerID="cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164"
Jul 6 23:52:12.194539 kubelet[2693]: I0706 23:52:12.194496 2693 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164"} err="failed to get container status \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\": rpc error: code = NotFound desc = an error occurred when try to find container \"cbc6541e41db1bfdb9b4c3e4a26fdafc8160ce9c2da05fcd2278b516d79fa164\": not found"
Jul 6 23:52:12.384064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb-rootfs.mount: Deactivated successfully.
Jul 6 23:52:12.386109 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61e409788b4d6a1366762d54946d264b84064ae4452ecb45629f419c0b16f5bb-shm.mount: Deactivated successfully.
Jul 6 23:52:12.387880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff2bda6e2937feb61a9f08ef69536353c03edf2912c78c639a19e6bde78827b3-rootfs.mount: Deactivated successfully. Jul 6 23:52:12.388030 systemd[1]: var-lib-kubelet-pods-37df6520\x2d3416\x2d48d1\x2d8b5c\x2d7d00ab869118-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6bmmg.mount: Deactivated successfully. Jul 6 23:52:12.388176 systemd[1]: var-lib-kubelet-pods-456ff60a\x2d41cf\x2d482e\x2da5ff\x2d2d3b34eec312-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqb5s.mount: Deactivated successfully. Jul 6 23:52:12.388340 systemd[1]: var-lib-kubelet-pods-37df6520\x2d3416\x2d48d1\x2d8b5c\x2d7d00ab869118-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:52:12.388515 systemd[1]: var-lib-kubelet-pods-37df6520\x2d3416\x2d48d1\x2d8b5c\x2d7d00ab869118-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:52:13.051185 kubelet[2693]: I0706 23:52:13.050716 2693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37df6520-3416-48d1-8b5c-7d00ab869118" path="/var/lib/kubelet/pods/37df6520-3416-48d1-8b5c-7d00ab869118/volumes" Jul 6 23:52:13.056767 kubelet[2693]: I0706 23:52:13.055935 2693 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456ff60a-41cf-482e-a5ff-2d3b34eec312" path="/var/lib/kubelet/pods/456ff60a-41cf-482e-a5ff-2d3b34eec312/volumes" Jul 6 23:52:13.473389 sshd[4317]: Connection closed by 172.24.4.1 port 48386 Jul 6 23:52:13.473496 sshd-session[4314]: pam_unix(sshd:session): session closed for user core Jul 6 23:52:13.511516 systemd[1]: sshd@24-172.24.4.123:22-172.24.4.1:48386.service: Deactivated successfully. Jul 6 23:52:13.519907 systemd[1]: session-27.scope: Deactivated successfully. Jul 6 23:52:13.523595 systemd-logind[1457]: Session 27 logged out. Waiting for processes to exit. 
Jul 6 23:52:13.538347 systemd[1]: Started sshd@25-172.24.4.123:22-172.24.4.1:48390.service - OpenSSH per-connection server daemon (172.24.4.1:48390). Jul 6 23:52:13.546004 systemd-logind[1457]: Removed session 27. Jul 6 23:52:14.238644 kubelet[2693]: E0706 23:52:14.238426 2693 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:52:14.499205 sshd[4481]: Accepted publickey for core from 172.24.4.1 port 48390 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:52:14.501889 sshd-session[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:52:14.528191 systemd-logind[1457]: New session 28 of user core. Jul 6 23:52:14.538572 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 6 23:52:15.689755 kubelet[2693]: E0706 23:52:15.689656 2693 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="456ff60a-41cf-482e-a5ff-2d3b34eec312" containerName="cilium-operator" Jul 6 23:52:15.689755 kubelet[2693]: E0706 23:52:15.689737 2693 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37df6520-3416-48d1-8b5c-7d00ab869118" containerName="mount-cgroup" Jul 6 23:52:15.689755 kubelet[2693]: E0706 23:52:15.689746 2693 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37df6520-3416-48d1-8b5c-7d00ab869118" containerName="clean-cilium-state" Jul 6 23:52:15.690918 kubelet[2693]: E0706 23:52:15.689822 2693 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37df6520-3416-48d1-8b5c-7d00ab869118" containerName="cilium-agent" Jul 6 23:52:15.690918 kubelet[2693]: E0706 23:52:15.689835 2693 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37df6520-3416-48d1-8b5c-7d00ab869118" containerName="apply-sysctl-overwrites" Jul 6 23:52:15.690918 kubelet[2693]: E0706 23:52:15.689842 2693 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="37df6520-3416-48d1-8b5c-7d00ab869118" containerName="mount-bpf-fs" Jul 6 23:52:15.690918 kubelet[2693]: I0706 23:52:15.689997 2693 memory_manager.go:354] "RemoveStaleState removing state" podUID="456ff60a-41cf-482e-a5ff-2d3b34eec312" containerName="cilium-operator" Jul 6 23:52:15.690918 kubelet[2693]: I0706 23:52:15.690017 2693 memory_manager.go:354] "RemoveStaleState removing state" podUID="37df6520-3416-48d1-8b5c-7d00ab869118" containerName="cilium-agent" Jul 6 23:52:15.718641 systemd[1]: Created slice kubepods-burstable-pod0046895a_5171_420a_8921_e74b999b797e.slice - libcontainer container kubepods-burstable-pod0046895a_5171_420a_8921_e74b999b797e.slice. Jul 6 23:52:15.786822 kubelet[2693]: I0706 23:52:15.786750 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0046895a-5171-420a-8921-e74b999b797e-hostproc\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787348 kubelet[2693]: I0706 23:52:15.787185 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0046895a-5171-420a-8921-e74b999b797e-cilium-config-path\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787475 kubelet[2693]: I0706 23:52:15.787367 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0046895a-5171-420a-8921-e74b999b797e-host-proc-sys-kernel\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787475 kubelet[2693]: I0706 23:52:15.787396 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0046895a-5171-420a-8921-e74b999b797e-lib-modules\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787475 kubelet[2693]: I0706 23:52:15.787416 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0046895a-5171-420a-8921-e74b999b797e-host-proc-sys-net\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787475 kubelet[2693]: I0706 23:52:15.787434 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgl7r\" (UniqueName: \"kubernetes.io/projected/0046895a-5171-420a-8921-e74b999b797e-kube-api-access-dgl7r\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787475 kubelet[2693]: I0706 23:52:15.787456 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0046895a-5171-420a-8921-e74b999b797e-cilium-ipsec-secrets\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787714 kubelet[2693]: I0706 23:52:15.787484 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0046895a-5171-420a-8921-e74b999b797e-xtables-lock\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787714 kubelet[2693]: I0706 23:52:15.787530 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/0046895a-5171-420a-8921-e74b999b797e-hubble-tls\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787714 kubelet[2693]: I0706 23:52:15.787559 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0046895a-5171-420a-8921-e74b999b797e-cilium-run\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787714 kubelet[2693]: I0706 23:52:15.787578 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0046895a-5171-420a-8921-e74b999b797e-bpf-maps\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787714 kubelet[2693]: I0706 23:52:15.787604 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0046895a-5171-420a-8921-e74b999b797e-cilium-cgroup\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787714 kubelet[2693]: I0706 23:52:15.787622 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0046895a-5171-420a-8921-e74b999b797e-clustermesh-secrets\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787967 kubelet[2693]: I0706 23:52:15.787639 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0046895a-5171-420a-8921-e74b999b797e-cni-path\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " 
pod="kube-system/cilium-p77r8" Jul 6 23:52:15.787967 kubelet[2693]: I0706 23:52:15.787660 2693 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0046895a-5171-420a-8921-e74b999b797e-etc-cni-netd\") pod \"cilium-p77r8\" (UID: \"0046895a-5171-420a-8921-e74b999b797e\") " pod="kube-system/cilium-p77r8" Jul 6 23:52:15.849474 sshd[4484]: Connection closed by 172.24.4.1 port 48390 Jul 6 23:52:15.849104 sshd-session[4481]: pam_unix(sshd:session): session closed for user core Jul 6 23:52:15.870439 systemd[1]: sshd@25-172.24.4.123:22-172.24.4.1:48390.service: Deactivated successfully. Jul 6 23:52:15.878814 systemd[1]: session-28.scope: Deactivated successfully. Jul 6 23:52:15.886195 systemd-logind[1457]: Session 28 logged out. Waiting for processes to exit. Jul 6 23:52:15.901551 systemd[1]: Started sshd@26-172.24.4.123:22-172.24.4.1:36822.service - OpenSSH per-connection server daemon (172.24.4.1:36822). Jul 6 23:52:15.907881 systemd-logind[1457]: Removed session 28. Jul 6 23:52:16.030632 containerd[1473]: time="2025-07-06T23:52:16.030343667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p77r8,Uid:0046895a-5171-420a-8921-e74b999b797e,Namespace:kube-system,Attempt:0,}" Jul 6 23:52:16.106662 containerd[1473]: time="2025-07-06T23:52:16.106358655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:52:16.106662 containerd[1473]: time="2025-07-06T23:52:16.106533364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:52:16.106662 containerd[1473]: time="2025-07-06T23:52:16.106603956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:16.107477 containerd[1473]: time="2025-07-06T23:52:16.106828818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:52:16.148405 systemd[1]: Started cri-containerd-c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3.scope - libcontainer container c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3. Jul 6 23:52:16.182479 containerd[1473]: time="2025-07-06T23:52:16.182246395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p77r8,Uid:0046895a-5171-420a-8921-e74b999b797e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\"" Jul 6 23:52:16.188073 containerd[1473]: time="2025-07-06T23:52:16.187980195Z" level=info msg="CreateContainer within sandbox \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:52:16.208642 containerd[1473]: time="2025-07-06T23:52:16.208575584Z" level=info msg="CreateContainer within sandbox \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5c30aeb1032f7609ec4f74546aedcb7463f9b319388662a0fcd2a935acbf755f\"" Jul 6 23:52:16.211884 containerd[1473]: time="2025-07-06T23:52:16.210455091Z" level=info msg="StartContainer for \"5c30aeb1032f7609ec4f74546aedcb7463f9b319388662a0fcd2a935acbf755f\"" Jul 6 23:52:16.243326 systemd[1]: Started cri-containerd-5c30aeb1032f7609ec4f74546aedcb7463f9b319388662a0fcd2a935acbf755f.scope - libcontainer container 5c30aeb1032f7609ec4f74546aedcb7463f9b319388662a0fcd2a935acbf755f. 
Jul 6 23:52:16.287974 containerd[1473]: time="2025-07-06T23:52:16.287845581Z" level=info msg="StartContainer for \"5c30aeb1032f7609ec4f74546aedcb7463f9b319388662a0fcd2a935acbf755f\" returns successfully" Jul 6 23:52:16.300202 systemd[1]: cri-containerd-5c30aeb1032f7609ec4f74546aedcb7463f9b319388662a0fcd2a935acbf755f.scope: Deactivated successfully. Jul 6 23:52:16.355985 containerd[1473]: time="2025-07-06T23:52:16.355526711Z" level=info msg="shim disconnected" id=5c30aeb1032f7609ec4f74546aedcb7463f9b319388662a0fcd2a935acbf755f namespace=k8s.io Jul 6 23:52:16.355985 containerd[1473]: time="2025-07-06T23:52:16.355699555Z" level=warning msg="cleaning up after shim disconnected" id=5c30aeb1032f7609ec4f74546aedcb7463f9b319388662a0fcd2a935acbf755f namespace=k8s.io Jul 6 23:52:16.355985 containerd[1473]: time="2025-07-06T23:52:16.355735693Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:52:17.134008 containerd[1473]: time="2025-07-06T23:52:17.133663735Z" level=info msg="CreateContainer within sandbox \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:52:17.174980 containerd[1473]: time="2025-07-06T23:52:17.172577161Z" level=info msg="CreateContainer within sandbox \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a471b0d8edd3d6c0638cacdf9ab855f2601db4af52960ebf50a321ad8ae3ddf5\"" Jul 6 23:52:17.177064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount560288299.mount: Deactivated successfully. 
Jul 6 23:52:17.182378 containerd[1473]: time="2025-07-06T23:52:17.178256940Z" level=info msg="StartContainer for \"a471b0d8edd3d6c0638cacdf9ab855f2601db4af52960ebf50a321ad8ae3ddf5\"" Jul 6 23:52:17.230306 systemd[1]: Started cri-containerd-a471b0d8edd3d6c0638cacdf9ab855f2601db4af52960ebf50a321ad8ae3ddf5.scope - libcontainer container a471b0d8edd3d6c0638cacdf9ab855f2601db4af52960ebf50a321ad8ae3ddf5. Jul 6 23:52:17.234826 sshd[4494]: Accepted publickey for core from 172.24.4.1 port 36822 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:52:17.237048 sshd-session[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:52:17.245282 systemd-logind[1457]: New session 29 of user core. Jul 6 23:52:17.249293 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 6 23:52:17.278415 containerd[1473]: time="2025-07-06T23:52:17.278320170Z" level=info msg="StartContainer for \"a471b0d8edd3d6c0638cacdf9ab855f2601db4af52960ebf50a321ad8ae3ddf5\" returns successfully" Jul 6 23:52:17.289230 systemd[1]: cri-containerd-a471b0d8edd3d6c0638cacdf9ab855f2601db4af52960ebf50a321ad8ae3ddf5.scope: Deactivated successfully. 
Jul 6 23:52:17.326908 containerd[1473]: time="2025-07-06T23:52:17.326790461Z" level=info msg="shim disconnected" id=a471b0d8edd3d6c0638cacdf9ab855f2601db4af52960ebf50a321ad8ae3ddf5 namespace=k8s.io Jul 6 23:52:17.326908 containerd[1473]: time="2025-07-06T23:52:17.326888375Z" level=warning msg="cleaning up after shim disconnected" id=a471b0d8edd3d6c0638cacdf9ab855f2601db4af52960ebf50a321ad8ae3ddf5 namespace=k8s.io Jul 6 23:52:17.327215 containerd[1473]: time="2025-07-06T23:52:17.326913702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:52:17.792940 sshd[4629]: Connection closed by 172.24.4.1 port 36822 Jul 6 23:52:17.794660 sshd-session[4494]: pam_unix(sshd:session): session closed for user core Jul 6 23:52:17.814643 systemd[1]: sshd@26-172.24.4.123:22-172.24.4.1:36822.service: Deactivated successfully. Jul 6 23:52:17.819819 systemd[1]: session-29.scope: Deactivated successfully. Jul 6 23:52:17.824725 systemd-logind[1457]: Session 29 logged out. Waiting for processes to exit. Jul 6 23:52:17.831836 systemd[1]: Started sshd@27-172.24.4.123:22-172.24.4.1:36828.service - OpenSSH per-connection server daemon (172.24.4.1:36828). Jul 6 23:52:17.836699 systemd-logind[1457]: Removed session 29. Jul 6 23:52:17.927584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a471b0d8edd3d6c0638cacdf9ab855f2601db4af52960ebf50a321ad8ae3ddf5-rootfs.mount: Deactivated successfully. Jul 6 23:52:18.144209 containerd[1473]: time="2025-07-06T23:52:18.141288757Z" level=info msg="CreateContainer within sandbox \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:52:18.261012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206182029.mount: Deactivated successfully. 
Jul 6 23:52:18.271838 containerd[1473]: time="2025-07-06T23:52:18.271644070Z" level=info msg="CreateContainer within sandbox \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"360b5f45b64891a94c896f10587e9721438c3bb0ee769f05525ce1c5e31ea351\"" Jul 6 23:52:18.272469 containerd[1473]: time="2025-07-06T23:52:18.272423001Z" level=info msg="StartContainer for \"360b5f45b64891a94c896f10587e9721438c3bb0ee769f05525ce1c5e31ea351\"" Jul 6 23:52:18.324380 systemd[1]: Started cri-containerd-360b5f45b64891a94c896f10587e9721438c3bb0ee769f05525ce1c5e31ea351.scope - libcontainer container 360b5f45b64891a94c896f10587e9721438c3bb0ee769f05525ce1c5e31ea351. Jul 6 23:52:18.363785 systemd[1]: cri-containerd-360b5f45b64891a94c896f10587e9721438c3bb0ee769f05525ce1c5e31ea351.scope: Deactivated successfully. Jul 6 23:52:18.366322 containerd[1473]: time="2025-07-06T23:52:18.365711512Z" level=info msg="StartContainer for \"360b5f45b64891a94c896f10587e9721438c3bb0ee769f05525ce1c5e31ea351\" returns successfully" Jul 6 23:52:18.399056 containerd[1473]: time="2025-07-06T23:52:18.398889996Z" level=info msg="shim disconnected" id=360b5f45b64891a94c896f10587e9721438c3bb0ee769f05525ce1c5e31ea351 namespace=k8s.io Jul 6 23:52:18.399461 containerd[1473]: time="2025-07-06T23:52:18.399281792Z" level=warning msg="cleaning up after shim disconnected" id=360b5f45b64891a94c896f10587e9721438c3bb0ee769f05525ce1c5e31ea351 namespace=k8s.io Jul 6 23:52:18.399461 containerd[1473]: time="2025-07-06T23:52:18.399302611Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:52:18.928478 systemd[1]: run-containerd-runc-k8s.io-360b5f45b64891a94c896f10587e9721438c3bb0ee769f05525ce1c5e31ea351-runc.blebYM.mount: Deactivated successfully. 
Jul 6 23:52:18.928807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-360b5f45b64891a94c896f10587e9721438c3bb0ee769f05525ce1c5e31ea351-rootfs.mount: Deactivated successfully. Jul 6 23:52:19.159177 containerd[1473]: time="2025-07-06T23:52:19.158988191Z" level=info msg="CreateContainer within sandbox \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:52:19.236792 containerd[1473]: time="2025-07-06T23:52:19.235409079Z" level=info msg="CreateContainer within sandbox \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8453d89e34abfdf056106db0ba3752a2371d07abc845fc566c3d98f4e05de0b4\"" Jul 6 23:52:19.250618 containerd[1473]: time="2025-07-06T23:52:19.248521999Z" level=info msg="StartContainer for \"8453d89e34abfdf056106db0ba3752a2371d07abc845fc566c3d98f4e05de0b4\"" Jul 6 23:52:19.256199 kubelet[2693]: E0706 23:52:19.256117 2693 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:52:19.319480 systemd[1]: Started cri-containerd-8453d89e34abfdf056106db0ba3752a2371d07abc845fc566c3d98f4e05de0b4.scope - libcontainer container 8453d89e34abfdf056106db0ba3752a2371d07abc845fc566c3d98f4e05de0b4. Jul 6 23:52:19.353845 systemd[1]: cri-containerd-8453d89e34abfdf056106db0ba3752a2371d07abc845fc566c3d98f4e05de0b4.scope: Deactivated successfully. 
Jul 6 23:52:19.360472 containerd[1473]: time="2025-07-06T23:52:19.360416189Z" level=info msg="StartContainer for \"8453d89e34abfdf056106db0ba3752a2371d07abc845fc566c3d98f4e05de0b4\" returns successfully" Jul 6 23:52:19.387623 sshd[4671]: Accepted publickey for core from 172.24.4.1 port 36828 ssh2: RSA SHA256:HYu4eTSY3glt1T8rESuLvG7rbxOiSO4BDdYolY/LIkQ Jul 6 23:52:19.389562 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:52:19.400953 systemd-logind[1457]: New session 30 of user core. Jul 6 23:52:19.407647 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 6 23:52:19.414560 containerd[1473]: time="2025-07-06T23:52:19.413317468Z" level=info msg="shim disconnected" id=8453d89e34abfdf056106db0ba3752a2371d07abc845fc566c3d98f4e05de0b4 namespace=k8s.io Jul 6 23:52:19.414560 containerd[1473]: time="2025-07-06T23:52:19.413386949Z" level=warning msg="cleaning up after shim disconnected" id=8453d89e34abfdf056106db0ba3752a2371d07abc845fc566c3d98f4e05de0b4 namespace=k8s.io Jul 6 23:52:19.414560 containerd[1473]: time="2025-07-06T23:52:19.413398610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:52:19.931881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8453d89e34abfdf056106db0ba3752a2371d07abc845fc566c3d98f4e05de0b4-rootfs.mount: Deactivated successfully. 
Jul 6 23:52:20.162632 containerd[1473]: time="2025-07-06T23:52:20.162376241Z" level=info msg="CreateContainer within sandbox \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:52:20.200850 containerd[1473]: time="2025-07-06T23:52:20.200636044Z" level=info msg="CreateContainer within sandbox \"c5111b8b4a6457a6619cbcef886ff3701ae475b6e5149afe6327aa4cc4c4c8b3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c3e22e8b42c1340a1d5690319b0ea7686646ce3a5d7060f9511247396a3514f0\"" Jul 6 23:52:20.203220 containerd[1473]: time="2025-07-06T23:52:20.202500303Z" level=info msg="StartContainer for \"c3e22e8b42c1340a1d5690319b0ea7686646ce3a5d7060f9511247396a3514f0\"" Jul 6 23:52:20.263319 systemd[1]: Started cri-containerd-c3e22e8b42c1340a1d5690319b0ea7686646ce3a5d7060f9511247396a3514f0.scope - libcontainer container c3e22e8b42c1340a1d5690319b0ea7686646ce3a5d7060f9511247396a3514f0. Jul 6 23:52:20.302248 containerd[1473]: time="2025-07-06T23:52:20.301871204Z" level=info msg="StartContainer for \"c3e22e8b42c1340a1d5690319b0ea7686646ce3a5d7060f9511247396a3514f0\" returns successfully" Jul 6 23:52:20.787341 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:52:20.841267 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Jul 6 23:52:21.221561 kubelet[2693]: I0706 23:52:21.221150 2693 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p77r8" podStartSLOduration=6.221019944 podStartE2EDuration="6.221019944s" podCreationTimestamp="2025-07-06 23:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:52:21.219757243 +0000 UTC m=+202.321311183" watchObservedRunningTime="2025-07-06 23:52:21.221019944 +0000 UTC m=+202.322573833" Jul 6 23:52:22.200301 systemd[1]: 
run-containerd-runc-k8s.io-c3e22e8b42c1340a1d5690319b0ea7686646ce3a5d7060f9511247396a3514f0-runc.MDnRM0.mount: Deactivated successfully. Jul 6 23:52:23.644500 kubelet[2693]: I0706 23:52:23.644399 2693 setters.go:600] "Node became not ready" node="ci-4230-2-1-3-a5860ac047.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-06T23:52:23Z","lastTransitionTime":"2025-07-06T23:52:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 6 23:52:24.332234 systemd-networkd[1382]: lxc_health: Link UP Jul 6 23:52:24.342314 systemd-networkd[1382]: lxc_health: Gained carrier Jul 6 23:52:24.510974 systemd[1]: run-containerd-runc-k8s.io-c3e22e8b42c1340a1d5690319b0ea7686646ce3a5d7060f9511247396a3514f0-runc.MigUpJ.mount: Deactivated successfully. Jul 6 23:52:25.899613 systemd-networkd[1382]: lxc_health: Gained IPv6LL Jul 6 23:52:29.189037 systemd[1]: run-containerd-runc-k8s.io-c3e22e8b42c1340a1d5690319b0ea7686646ce3a5d7060f9511247396a3514f0-runc.jBvdAU.mount: Deactivated successfully. Jul 6 23:52:31.896311 sshd[4773]: Connection closed by 172.24.4.1 port 36828 Jul 6 23:52:31.899739 sshd-session[4671]: pam_unix(sshd:session): session closed for user core Jul 6 23:52:31.916215 systemd[1]: sshd@27-172.24.4.123:22-172.24.4.1:36828.service: Deactivated successfully. Jul 6 23:52:31.924925 systemd[1]: session-30.scope: Deactivated successfully. Jul 6 23:52:31.928946 systemd-logind[1457]: Session 30 logged out. Waiting for processes to exit. Jul 6 23:52:31.934735 systemd-logind[1457]: Removed session 30.