Jul 9 14:42:06.001918 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 08:38:39 -00 2025
Jul 9 14:42:06.001947 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 14:42:06.001958 kernel: BIOS-provided physical RAM map:
Jul 9 14:42:06.001969 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 9 14:42:06.001976 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 9 14:42:06.001983 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 9 14:42:06.001992 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jul 9 14:42:06.002000 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jul 9 14:42:06.002008 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 9 14:42:06.002015 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 9 14:42:06.002023 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jul 9 14:42:06.002031 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 9 14:42:06.002041 kernel: NX (Execute Disable) protection: active
Jul 9 14:42:06.002049 kernel: APIC: Static calls initialized
Jul 9 14:42:06.002058 kernel: SMBIOS 3.0.0 present.
Jul 9 14:42:06.002066 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jul 9 14:42:06.002074 kernel: DMI: Memory slots populated: 1/1
Jul 9 14:42:06.002084 kernel: Hypervisor detected: KVM
Jul 9 14:42:06.002092 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 9 14:42:06.002100 kernel: kvm-clock: using sched offset of 4841305495 cycles
Jul 9 14:42:06.002108 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 9 14:42:06.002117 kernel: tsc: Detected 1996.249 MHz processor
Jul 9 14:42:06.002126 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 9 14:42:06.002135 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 9 14:42:06.002143 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jul 9 14:42:06.002152 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 9 14:42:06.002162 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 9 14:42:06.002171 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jul 9 14:42:06.002179 kernel: ACPI: Early table checksum verification disabled
Jul 9 14:42:06.002187 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jul 9 14:42:06.002196 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 14:42:06.002204 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 14:42:06.002213 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 14:42:06.002221 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jul 9 14:42:06.002230 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 14:42:06.002240 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 14:42:06.002248 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jul 9 14:42:06.002256 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jul 9 14:42:06.002265 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jul 9 14:42:06.002273 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jul 9 14:42:06.002285 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jul 9 14:42:06.002293 kernel: No NUMA configuration found
Jul 9 14:42:06.002304 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jul 9 14:42:06.002313 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff]
Jul 9 14:42:06.002321 kernel: Zone ranges:
Jul 9 14:42:06.002330 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 9 14:42:06.002339 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 9 14:42:06.002347 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jul 9 14:42:06.002356 kernel: Device empty
Jul 9 14:42:06.002365 kernel: Movable zone start for each node
Jul 9 14:42:06.002375 kernel: Early memory node ranges
Jul 9 14:42:06.002384 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 9 14:42:06.002392 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jul 9 14:42:06.002401 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jul 9 14:42:06.002410 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jul 9 14:42:06.002419 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 9 14:42:06.002427 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 9 14:42:06.002436 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jul 9 14:42:06.002445 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 9 14:42:06.002455 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 9 14:42:06.002464 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 9 14:42:06.002473 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 9 14:42:06.002482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 9 14:42:06.002490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 9 14:42:06.002499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 9 14:42:06.002508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 9 14:42:06.002516 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 9 14:42:06.002525 kernel: CPU topo: Max. logical packages: 2
Jul 9 14:42:06.002536 kernel: CPU topo: Max. logical dies: 2
Jul 9 14:42:06.002544 kernel: CPU topo: Max. dies per package: 1
Jul 9 14:42:06.002553 kernel: CPU topo: Max. threads per core: 1
Jul 9 14:42:06.002562 kernel: CPU topo: Num. cores per package: 1
Jul 9 14:42:06.002570 kernel: CPU topo: Num. threads per package: 1
Jul 9 14:42:06.002579 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jul 9 14:42:06.002587 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 9 14:42:06.002596 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jul 9 14:42:06.002605 kernel: Booting paravirtualized kernel on KVM
Jul 9 14:42:06.002615 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 9 14:42:06.002624 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 9 14:42:06.002633 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jul 9 14:42:06.002642 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jul 9 14:42:06.002651 kernel: pcpu-alloc: [0] 0 1
Jul 9 14:42:06.002660 kernel: kvm-guest: PV spinlocks disabled, no host support
Jul 9 14:42:06.002670 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 14:42:06.002679 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 14:42:06.002690 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 14:42:06.002699 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 14:42:06.002708 kernel: Fallback order for Node 0: 0
Jul 9 14:42:06.002717 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jul 9 14:42:06.002726 kernel: Policy zone: Normal
Jul 9 14:42:06.002734 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 14:42:06.002743 kernel: software IO TLB: area num 2.
Jul 9 14:42:06.002751 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 9 14:42:06.002760 kernel: ftrace: allocating 40097 entries in 157 pages
Jul 9 14:42:06.002770 kernel: ftrace: allocated 157 pages with 5 groups
Jul 9 14:42:06.002779 kernel: Dynamic Preempt: voluntary
Jul 9 14:42:06.002787 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 14:42:06.002797 kernel: rcu: RCU event tracing is enabled.
Jul 9 14:42:06.002806 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 9 14:42:06.002815 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 14:42:06.002824 kernel: Rude variant of Tasks RCU enabled.
Jul 9 14:42:06.002848 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 14:42:06.002857 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 14:42:06.002868 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 9 14:42:06.002877 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 14:42:06.002886 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 14:42:06.002895 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 9 14:42:06.002904 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 9 14:42:06.002913 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 14:42:06.002921 kernel: Console: colour VGA+ 80x25
Jul 9 14:42:06.002930 kernel: printk: legacy console [tty0] enabled
Jul 9 14:42:06.002939 kernel: printk: legacy console [ttyS0] enabled
Jul 9 14:42:06.002949 kernel: ACPI: Core revision 20240827
Jul 9 14:42:06.002957 kernel: APIC: Switch to symmetric I/O mode setup
Jul 9 14:42:06.002966 kernel: x2apic enabled
Jul 9 14:42:06.002975 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 9 14:42:06.002983 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 9 14:42:06.002992 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 9 14:42:06.003007 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jul 9 14:42:06.003018 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 9 14:42:06.003026 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 9 14:42:06.003036 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 9 14:42:06.003045 kernel: Spectre V2 : Mitigation: Retpolines
Jul 9 14:42:06.003054 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 9 14:42:06.003065 kernel: Speculative Store Bypass: Vulnerable
Jul 9 14:42:06.003074 kernel: x86/fpu: x87 FPU will use FXSAVE
Jul 9 14:42:06.003083 kernel: Freeing SMP alternatives memory: 32K
Jul 9 14:42:06.003092 kernel: pid_max: default: 32768 minimum: 301
Jul 9 14:42:06.003101 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 14:42:06.003112 kernel: landlock: Up and running.
Jul 9 14:42:06.003121 kernel: SELinux: Initializing.
Jul 9 14:42:06.003130 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 14:42:06.003140 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 14:42:06.003149 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jul 9 14:42:06.003158 kernel: Performance Events: AMD PMU driver.
Jul 9 14:42:06.003167 kernel: ... version: 0
Jul 9 14:42:06.003176 kernel: ... bit width: 48
Jul 9 14:42:06.003185 kernel: ... generic registers: 4
Jul 9 14:42:06.003196 kernel: ... value mask: 0000ffffffffffff
Jul 9 14:42:06.003205 kernel: ... max period: 00007fffffffffff
Jul 9 14:42:06.003214 kernel: ... fixed-purpose events: 0
Jul 9 14:42:06.003223 kernel: ... event mask: 000000000000000f
Jul 9 14:42:06.003232 kernel: signal: max sigframe size: 1440
Jul 9 14:42:06.003241 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 14:42:06.003251 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 14:42:06.003260 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 14:42:06.003269 kernel: smp: Bringing up secondary CPUs ...
Jul 9 14:42:06.003280 kernel: smpboot: x86: Booting SMP configuration:
Jul 9 14:42:06.003289 kernel: .... node #0, CPUs: #1
Jul 9 14:42:06.003298 kernel: smp: Brought up 1 node, 2 CPUs
Jul 9 14:42:06.003307 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jul 9 14:42:06.003316 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54568K init, 2400K bss, 227296K reserved, 0K cma-reserved)
Jul 9 14:42:06.003325 kernel: devtmpfs: initialized
Jul 9 14:42:06.003334 kernel: x86/mm: Memory block size: 128MB
Jul 9 14:42:06.003343 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 14:42:06.003353 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 9 14:42:06.003363 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 14:42:06.003373 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 14:42:06.003382 kernel: audit: initializing netlink subsys (disabled)
Jul 9 14:42:06.003391 kernel: audit: type=2000 audit(1752072122.453:1): state=initialized audit_enabled=0 res=1
Jul 9 14:42:06.003400 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 14:42:06.003409 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 9 14:42:06.003418 kernel: cpuidle: using governor menu
Jul 9 14:42:06.003428 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 14:42:06.003437 kernel: dca service started, version 1.12.1
Jul 9 14:42:06.003448 kernel: PCI: Using configuration type 1 for base access
Jul 9 14:42:06.003457 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 9 14:42:06.003466 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 14:42:06.003475 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 14:42:06.003484 kernel: ACPI: Added _OSI(Module Device)
Jul 9 14:42:06.003493 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 14:42:06.003502 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 14:42:06.003511 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 14:42:06.003520 kernel: ACPI: Interpreter enabled
Jul 9 14:42:06.003531 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 9 14:42:06.003540 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 9 14:42:06.003550 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 9 14:42:06.003559 kernel: PCI: Using E820 reservations for host bridge windows
Jul 9 14:42:06.003568 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jul 9 14:42:06.003577 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 14:42:06.003720 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 14:42:06.004988 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 9 14:42:06.005136 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jul 9 14:42:06.005153 kernel: acpiphp: Slot [3] registered
Jul 9 14:42:06.005165 kernel: acpiphp: Slot [4] registered
Jul 9 14:42:06.005175 kernel: acpiphp: Slot [5] registered
Jul 9 14:42:06.005186 kernel: acpiphp: Slot [6] registered
Jul 9 14:42:06.005196 kernel: acpiphp: Slot [7] registered
Jul 9 14:42:06.005206 kernel: acpiphp: Slot [8] registered
Jul 9 14:42:06.005216 kernel: acpiphp: Slot [9] registered
Jul 9 14:42:06.005226 kernel: acpiphp: Slot [10] registered
Jul 9 14:42:06.005241 kernel: acpiphp: Slot [11] registered
Jul 9 14:42:06.005251 kernel: acpiphp: Slot [12] registered
Jul 9 14:42:06.005260 kernel: acpiphp: Slot [13] registered
Jul 9 14:42:06.005269 kernel: acpiphp: Slot [14] registered
Jul 9 14:42:06.005278 kernel: acpiphp: Slot [15] registered
Jul 9 14:42:06.005287 kernel: acpiphp: Slot [16] registered
Jul 9 14:42:06.005297 kernel: acpiphp: Slot [17] registered
Jul 9 14:42:06.005306 kernel: acpiphp: Slot [18] registered
Jul 9 14:42:06.005315 kernel: acpiphp: Slot [19] registered
Jul 9 14:42:06.005326 kernel: acpiphp: Slot [20] registered
Jul 9 14:42:06.005335 kernel: acpiphp: Slot [21] registered
Jul 9 14:42:06.005344 kernel: acpiphp: Slot [22] registered
Jul 9 14:42:06.005353 kernel: acpiphp: Slot [23] registered
Jul 9 14:42:06.005362 kernel: acpiphp: Slot [24] registered
Jul 9 14:42:06.005371 kernel: acpiphp: Slot [25] registered
Jul 9 14:42:06.005380 kernel: acpiphp: Slot [26] registered
Jul 9 14:42:06.005389 kernel: acpiphp: Slot [27] registered
Jul 9 14:42:06.005399 kernel: acpiphp: Slot [28] registered
Jul 9 14:42:06.005408 kernel: acpiphp: Slot [29] registered
Jul 9 14:42:06.005420 kernel: acpiphp: Slot [30] registered
Jul 9 14:42:06.005429 kernel: acpiphp: Slot [31] registered
Jul 9 14:42:06.005438 kernel: PCI host bridge to bus 0000:00
Jul 9 14:42:06.005532 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 9 14:42:06.005612 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 9 14:42:06.005690 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 9 14:42:06.005765 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 9 14:42:06.005919 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jul 9 14:42:06.006002 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 14:42:06.006109 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jul 9 14:42:06.006218 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jul 9 14:42:06.006319 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jul 9 14:42:06.006410 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f]
Jul 9 14:42:06.006507 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jul 9 14:42:06.006593 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jul 9 14:42:06.006679 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jul 9 14:42:06.006766 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jul 9 14:42:06.009892 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jul 9 14:42:06.010000 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jul 9 14:42:06.010088 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jul 9 14:42:06.010192 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jul 9 14:42:06.010283 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jul 9 14:42:06.010372 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref]
Jul 9 14:42:06.010459 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jul 9 14:42:06.010545 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jul 9 14:42:06.010632 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 9 14:42:06.010728 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 9 14:42:06.010821 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
Jul 9 14:42:06.010933 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jul 9 14:42:06.011021 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref]
Jul 9 14:42:06.011108 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jul 9 14:42:06.011202 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 9 14:42:06.011290 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jul 9 14:42:06.011382 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jul 9 14:42:06.011475 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref]
Jul 9 14:42:06.011580 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 14:42:06.011674 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
Jul 9 14:42:06.011787 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jul 9 14:42:06.011929 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 14:42:06.012025 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f]
Jul 9 14:42:06.012124 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Jul 9 14:42:06.012216 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref]
Jul 9 14:42:06.012231 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 9 14:42:06.012241 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 9 14:42:06.012251 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 9 14:42:06.012261 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 9 14:42:06.012271 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 9 14:42:06.012281 kernel: iommu: Default domain type: Translated
Jul 9 14:42:06.012291 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 9 14:42:06.012305 kernel: PCI: Using ACPI for IRQ routing
Jul 9 14:42:06.012315 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 9 14:42:06.012325 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 9 14:42:06.012335 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jul 9 14:42:06.012426 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jul 9 14:42:06.012519 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jul 9 14:42:06.012611 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 9 14:42:06.012625 kernel: vgaarb: loaded
Jul 9 14:42:06.012635 kernel: clocksource: Switched to clocksource kvm-clock
Jul 9 14:42:06.012648 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 14:42:06.012659 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 14:42:06.012669 kernel: pnp: PnP ACPI init
Jul 9 14:42:06.012761 kernel: pnp 00:03: [dma 2]
Jul 9 14:42:06.012776 kernel: pnp: PnP ACPI: found 5 devices
Jul 9 14:42:06.012787 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 9 14:42:06.012797 kernel: NET: Registered PF_INET protocol family
Jul 9 14:42:06.012807 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 14:42:06.012820 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 14:42:06.013872 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 14:42:06.013893 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 14:42:06.013903 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 14:42:06.013913 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 14:42:06.013922 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 14:42:06.013931 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 14:42:06.013941 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 14:42:06.013950 kernel: NET: Registered PF_XDP protocol family
Jul 9 14:42:06.014086 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 9 14:42:06.014224 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 9 14:42:06.014359 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 9 14:42:06.014489 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jul 9 14:42:06.014625 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jul 9 14:42:06.014765 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jul 9 14:42:06.014950 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 9 14:42:06.014973 kernel: PCI: CLS 0 bytes, default 64
Jul 9 14:42:06.014993 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 9 14:42:06.015004 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jul 9 14:42:06.015014 kernel: Initialise system trusted keyrings
Jul 9 14:42:06.015024 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 14:42:06.015034 kernel: Key type asymmetric registered
Jul 9 14:42:06.015044 kernel: Asymmetric key parser 'x509' registered
Jul 9 14:42:06.015054 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 9 14:42:06.015064 kernel: io scheduler mq-deadline registered
Jul 9 14:42:06.015076 kernel: io scheduler kyber registered
Jul 9 14:42:06.015086 kernel: io scheduler bfq registered
Jul 9 14:42:06.015095 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 9 14:42:06.015106 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jul 9 14:42:06.015116 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 9 14:42:06.015127 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jul 9 14:42:06.015137 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 9 14:42:06.015146 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 14:42:06.015156 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 9 14:42:06.015167 kernel: random: crng init done
Jul 9 14:42:06.015179 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 9 14:42:06.015189 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 9 14:42:06.015198 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 9 14:42:06.015301 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 9 14:42:06.015316 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 9 14:42:06.015398 kernel: rtc_cmos 00:04: registered as rtc0
Jul 9 14:42:06.015493 kernel: rtc_cmos 00:04: setting system clock to 2025-07-09T14:42:05 UTC (1752072125)
Jul 9 14:42:06.015598 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 9 14:42:06.015614 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 9 14:42:06.015624 kernel: NET: Registered PF_INET6 protocol family
Jul 9 14:42:06.015633 kernel: Segment Routing with IPv6
Jul 9 14:42:06.015642 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 14:42:06.015651 kernel: NET: Registered PF_PACKET protocol family
Jul 9 14:42:06.015661 kernel: Key type dns_resolver registered
Jul 9 14:42:06.015670 kernel: IPI shorthand broadcast: enabled
Jul 9 14:42:06.015679 kernel: sched_clock: Marking stable (3598006799, 181202148)->(3822874319, -43665372)
Jul 9 14:42:06.015692 kernel: registered taskstats version 1
Jul 9 14:42:06.015701 kernel: Loading compiled-in X.509 certificates
Jul 9 14:42:06.015710 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 8ba3d283fde4a005aa35ab9394afe8122b8a3878'
Jul 9 14:42:06.015719 kernel: Demotion targets for Node 0: null
Jul 9 14:42:06.015728 kernel: Key type .fscrypt registered
Jul 9 14:42:06.015760 kernel: Key type fscrypt-provisioning registered
Jul 9 14:42:06.015769 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 14:42:06.015778 kernel: ima: Allocated hash algorithm: sha1
Jul 9 14:42:06.015787 kernel: ima: No architecture policies found
Jul 9 14:42:06.015798 kernel: clk: Disabling unused clocks
Jul 9 14:42:06.015807 kernel: Warning: unable to open an initial console.
Jul 9 14:42:06.015817 kernel: Freeing unused kernel image (initmem) memory: 54568K
Jul 9 14:42:06.015827 kernel: Write protecting the kernel read-only data: 24576k
Jul 9 14:42:06.015852 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 9 14:42:06.015861 kernel: Run /init as init process
Jul 9 14:42:06.015870 kernel: with arguments:
Jul 9 14:42:06.015880 kernel: /init
Jul 9 14:42:06.015889 kernel: with environment:
Jul 9 14:42:06.015900 kernel: HOME=/
Jul 9 14:42:06.015909 kernel: TERM=linux
Jul 9 14:42:06.015918 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 14:42:06.015930 systemd[1]: Successfully made /usr/ read-only.
Jul 9 14:42:06.015944 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 14:42:06.015955 systemd[1]: Detected virtualization kvm.
Jul 9 14:42:06.015965 systemd[1]: Detected architecture x86-64.
Jul 9 14:42:06.015983 systemd[1]: Running in initrd.
Jul 9 14:42:06.015995 systemd[1]: No hostname configured, using default hostname.
Jul 9 14:42:06.016005 systemd[1]: Hostname set to .
Jul 9 14:42:06.016015 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 14:42:06.016025 systemd[1]: Queued start job for default target initrd.target.
Jul 9 14:42:06.016036 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 14:42:06.016048 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 14:42:06.016059 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 14:42:06.016069 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 14:42:06.016079 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 14:42:06.016091 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 14:42:06.016102 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 14:42:06.016113 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 14:42:06.016125 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 14:42:06.016135 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 14:42:06.016145 systemd[1]: Reached target paths.target - Path Units.
Jul 9 14:42:06.016155 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 14:42:06.016165 systemd[1]: Reached target swap.target - Swaps.
Jul 9 14:42:06.016176 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 14:42:06.016186 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 14:42:06.016944 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 14:42:06.016961 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 14:42:06.016971 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 14:42:06.016982 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 14:42:06.016992 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 14:42:06.017002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 14:42:06.017013 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 14:42:06.017023 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 14:42:06.017033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 14:42:06.017043 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 14:42:06.017056 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 14:42:06.017066 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 14:42:06.017078 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 14:42:06.017089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 14:42:06.017099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 14:42:06.017111 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 14:42:06.017122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 14:42:06.017134 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 14:42:06.017145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 14:42:06.017178 systemd-journald[212]: Collecting audit messages is disabled.
Jul 9 14:42:06.017207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 14:42:06.017220 systemd-journald[212]: Journal started
Jul 9 14:42:06.017246 systemd-journald[212]: Runtime Journal (/run/log/journal/3f181a04658c42258cacbe1c5f1f01a8) is 8M, max 78.5M, 70.5M free.
Jul 9 14:42:06.000217 systemd-modules-load[214]: Inserted module 'overlay'
Jul 9 14:42:06.022884 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 14:42:06.027859 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 14:42:06.034892 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 14:42:06.037126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 14:42:06.042672 kernel: Bridge firewalling registered
Jul 9 14:42:06.040006 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 14:42:06.043297 systemd-modules-load[214]: Inserted module 'br_netfilter'
Jul 9 14:42:06.046325 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 14:42:06.049315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 14:42:06.056932 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 14:42:06.057339 systemd-tmpfiles[231]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 14:42:06.062599 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 14:42:06.068088 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 14:42:06.070961 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 14:42:06.076141 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 14:42:06.084277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 14:42:06.089005 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 14:42:06.094867 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f85d3be94c634d7d72fbcd0e670073ce56ae2e0cc763f83b329300b7cea5203d
Jul 9 14:42:06.137191 systemd-resolved[256]: Positive Trust Anchors:
Jul 9 14:42:06.138022 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 14:42:06.138065 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 14:42:06.149328 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jul 9 14:42:06.150394 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 14:42:06.151240 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 14:42:06.164868 kernel: SCSI subsystem initialized
Jul 9 14:42:06.174875 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 14:42:06.186882 kernel: iscsi: registered transport (tcp)
Jul 9 14:42:06.231755 kernel: iscsi: registered transport (qla4xxx)
Jul 9 14:42:06.231887 kernel: QLogic iSCSI HBA Driver
Jul 9 14:42:06.256204 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 14:42:06.279569 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 14:42:06.281120 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 14:42:06.382309 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 14:42:06.387105 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 14:42:06.476948 kernel: raid6: sse2x4 gen() 5257 MB/s
Jul 9 14:42:06.494931 kernel: raid6: sse2x2 gen() 15020 MB/s
Jul 9 14:42:06.513318 kernel: raid6: sse2x1 gen() 10170 MB/s
Jul 9 14:42:06.513402 kernel: raid6: using algorithm sse2x2 gen() 15020 MB/s
Jul 9 14:42:06.532330 kernel: raid6: .... xor() 9379 MB/s, rmw enabled
Jul 9 14:42:06.532398 kernel: raid6: using ssse3x2 recovery algorithm
Jul 9 14:42:06.553989 kernel: xor: measuring software checksum speed
Jul 9 14:42:06.554051 kernel: prefetch64-sse : 18494 MB/sec
Jul 9 14:42:06.554935 kernel: generic_sse : 15531 MB/sec
Jul 9 14:42:06.557452 kernel: xor: using function: prefetch64-sse (18494 MB/sec)
Jul 9 14:42:06.759997 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 14:42:06.772359 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 14:42:06.778262 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 14:42:06.813239 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Jul 9 14:42:06.818796 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 14:42:06.828078 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 14:42:06.854364 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Jul 9 14:42:06.892701 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 14:42:06.897437 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 14:42:06.945888 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 14:42:06.951456 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 14:42:07.051205 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jul 9 14:42:07.057017 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jul 9 14:42:07.056444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 14:42:07.056592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 14:42:07.060760 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 14:42:07.062403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 14:42:07.064988 kernel: libata version 3.00 loaded.
Jul 9 14:42:07.064743 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 14:42:07.074203 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 9 14:42:07.074236 kernel: GPT:17805311 != 20971519
Jul 9 14:42:07.076081 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 9 14:42:07.076102 kernel: GPT:17805311 != 20971519
Jul 9 14:42:07.077123 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 9 14:42:07.079604 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 14:42:07.080973 kernel: ata_piix 0000:00:01.1: version 2.13
Jul 9 14:42:07.084358 kernel: scsi host0: ata_piix
Jul 9 14:42:07.084514 kernel: scsi host1: ata_piix
Jul 9 14:42:07.086935 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0
Jul 9 14:42:07.086970 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0
Jul 9 14:42:07.139926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 14:42:07.279925 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 9 14:42:07.337683 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 9 14:42:07.347899 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 9 14:42:07.367507 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 9 14:42:07.376194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 9 14:42:07.376792 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 9 14:42:07.393009 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 14:42:07.393691 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 14:42:07.395981 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 14:42:07.398215 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 14:42:07.401953 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 9 14:42:07.404102 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 9 14:42:07.424207 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 14:42:07.427602 disk-uuid[572]: Primary Header is updated.
Jul 9 14:42:07.427602 disk-uuid[572]: Secondary Entries is updated.
Jul 9 14:42:07.427602 disk-uuid[572]: Secondary Header is updated.
Jul 9 14:42:07.450712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 14:42:08.470227 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 14:42:08.470323 disk-uuid[579]: The operation has completed successfully.
Jul 9 14:42:08.549231 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 9 14:42:08.550116 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 9 14:42:08.598000 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 9 14:42:08.627424 sh[590]: Success
Jul 9 14:42:08.677877 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 9 14:42:08.677981 kernel: device-mapper: uevent: version 1.0.3
Jul 9 14:42:08.684060 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 9 14:42:08.698903 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3"
Jul 9 14:42:08.778362 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 9 14:42:08.787010 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 9 14:42:08.809546 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 9 14:42:08.818918 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 9 14:42:08.822908 kernel: BTRFS: device fsid 082bcfbc-2c86-46fe-87f4-85dea5450235 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (602)
Jul 9 14:42:08.822968 kernel: BTRFS info (device dm-0): first mount of filesystem 082bcfbc-2c86-46fe-87f4-85dea5450235
Jul 9 14:42:08.827090 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 9 14:42:08.827168 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 9 14:42:08.842563 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 9 14:42:08.844548 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 9 14:42:08.846039 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 9 14:42:08.847884 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 9 14:42:08.851062 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 9 14:42:08.880992 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (629)
Jul 9 14:42:08.883947 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 14:42:08.884042 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 14:42:08.887352 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 14:42:08.898925 kernel: BTRFS info (device vda6): last unmount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 14:42:08.900126 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 9 14:42:08.901652 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 9 14:42:09.062535 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 14:42:09.066515 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 14:42:09.119613 systemd-networkd[773]: lo: Link UP
Jul 9 14:42:09.119627 systemd-networkd[773]: lo: Gained carrier
Jul 9 14:42:09.120765 systemd-networkd[773]: Enumeration completed
Jul 9 14:42:09.120854 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 14:42:09.122217 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 14:42:09.125049 ignition[670]: Ignition 2.21.0
Jul 9 14:42:09.122222 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 14:42:09.125057 ignition[670]: Stage: fetch-offline
Jul 9 14:42:09.123270 systemd-networkd[773]: eth0: Link UP
Jul 9 14:42:09.125094 ignition[670]: no configs at "/usr/lib/ignition/base.d"
Jul 9 14:42:09.123274 systemd-networkd[773]: eth0: Gained carrier
Jul 9 14:42:09.125103 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:42:09.123283 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 14:42:09.125198 ignition[670]: parsed url from cmdline: ""
Jul 9 14:42:09.124681 systemd[1]: Reached target network.target - Network.
Jul 9 14:42:09.125202 ignition[670]: no config URL provided
Jul 9 14:42:09.126947 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 14:42:09.125208 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 14:42:09.129964 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 9 14:42:09.125215 ignition[670]: no config at "/usr/lib/ignition/user.ign"
Jul 9 14:42:09.125220 ignition[670]: failed to fetch config: resource requires networking
Jul 9 14:42:09.125378 ignition[670]: Ignition finished successfully
Jul 9 14:42:09.138894 systemd-networkd[773]: eth0: DHCPv4 address 172.24.4.161/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jul 9 14:42:09.156273 ignition[781]: Ignition 2.21.0
Jul 9 14:42:09.156285 ignition[781]: Stage: fetch
Jul 9 14:42:09.156429 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jul 9 14:42:09.156439 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:42:09.156522 ignition[781]: parsed url from cmdline: ""
Jul 9 14:42:09.156526 ignition[781]: no config URL provided
Jul 9 14:42:09.156531 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 14:42:09.156539 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jul 9 14:42:09.156643 ignition[781]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 9 14:42:09.156666 ignition[781]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jul 9 14:42:09.156709 ignition[781]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jul 9 14:42:09.552818 ignition[781]: GET result: OK
Jul 9 14:42:09.554970 ignition[781]: parsing config with SHA512: 51211774b82dc9b723991a3b55c757959f9a8b1620400af550a0033c2a3c3e6e3ff72c6538d39c2422b414d610bb5f20ccfa74077428adba300563a446894036
Jul 9 14:42:09.574288 unknown[781]: fetched base config from "system"
Jul 9 14:42:09.575700 ignition[781]: fetch: fetch complete
Jul 9 14:42:09.574324 unknown[781]: fetched base config from "system"
Jul 9 14:42:09.575718 ignition[781]: fetch: fetch passed
Jul 9 14:42:09.574349 unknown[781]: fetched user config from "openstack"
Jul 9 14:42:09.577027 ignition[781]: Ignition finished successfully
Jul 9 14:42:09.584755 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 9 14:42:09.591146 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 9 14:42:09.661270 ignition[789]: Ignition 2.21.0
Jul 9 14:42:09.661297 ignition[789]: Stage: kargs
Jul 9 14:42:09.661543 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jul 9 14:42:09.661563 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:42:09.667173 ignition[789]: kargs: kargs passed
Jul 9 14:42:09.667256 ignition[789]: Ignition finished successfully
Jul 9 14:42:09.674919 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 9 14:42:09.678599 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 9 14:42:09.748368 ignition[795]: Ignition 2.21.0
Jul 9 14:42:09.748413 ignition[795]: Stage: disks
Jul 9 14:42:09.748763 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jul 9 14:42:09.752339 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 9 14:42:09.748779 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:42:09.754079 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 9 14:42:09.750378 ignition[795]: disks: disks passed
Jul 9 14:42:09.756136 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 9 14:42:09.750450 ignition[795]: Ignition finished successfully
Jul 9 14:42:09.758211 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 14:42:09.760283 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 14:42:09.762311 systemd[1]: Reached target basic.target - Basic System.
Jul 9 14:42:09.764652 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 9 14:42:09.804237 systemd-fsck[804]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jul 9 14:42:09.822366 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 9 14:42:09.826202 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 9 14:42:10.046004 kernel: EXT4-fs (vda9): mounted filesystem b08a603c-44fa-43af-af80-90bed9b8770a r/w with ordered data mode. Quota mode: none.
Jul 9 14:42:10.047235 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 9 14:42:10.049133 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 9 14:42:10.054996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 14:42:10.060006 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 9 14:42:10.061778 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 9 14:42:10.075340 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jul 9 14:42:10.085053 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 9 14:42:10.085193 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 14:42:10.099685 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 9 14:42:10.107064 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 9 14:42:10.128655 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (812)
Jul 9 14:42:10.128688 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 14:42:10.128709 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 14:42:10.128727 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 14:42:10.146768 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 14:42:10.270904 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:42:10.273095 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Jul 9 14:42:10.281715 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Jul 9 14:42:10.288912 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Jul 9 14:42:10.300756 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 9 14:42:10.490395 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 9 14:42:10.495769 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 9 14:42:10.498684 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 9 14:42:10.531572 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 9 14:42:10.537214 kernel: BTRFS info (device vda6): last unmount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 14:42:10.568807 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 9 14:42:10.574971 ignition[931]: INFO : Ignition 2.21.0
Jul 9 14:42:10.574971 ignition[931]: INFO : Stage: mount
Jul 9 14:42:10.576191 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 14:42:10.576191 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:42:10.576191 ignition[931]: INFO : mount: mount passed
Jul 9 14:42:10.576191 ignition[931]: INFO : Ignition finished successfully
Jul 9 14:42:10.577165 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 9 14:42:10.733393 systemd-networkd[773]: eth0: Gained IPv6LL
Jul 9 14:42:11.324921 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:42:13.343664 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:42:17.367928 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:42:17.383204 coreos-metadata[814]: Jul 09 14:42:17.382 WARN failed to locate config-drive, using the metadata service API instead
Jul 9 14:42:17.433315 coreos-metadata[814]: Jul 09 14:42:17.433 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 9 14:42:17.448718 coreos-metadata[814]: Jul 09 14:42:17.448 INFO Fetch successful
Jul 9 14:42:17.450269 coreos-metadata[814]: Jul 09 14:42:17.449 INFO wrote hostname ci-9999-9-100-ea23d699c2.novalocal to /sysroot/etc/hostname
Jul 9 14:42:17.457528 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jul 9 14:42:17.458047 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jul 9 14:42:17.469217 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 9 14:42:17.510065 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 14:42:17.568901 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (947)
Jul 9 14:42:17.578916 kernel: BTRFS info (device vda6): first mount of filesystem 87056a6c-ee99-487a-9330-f1335025b841
Jul 9 14:42:17.579022 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 14:42:17.583355 kernel: BTRFS info (device vda6): using free-space-tree
Jul 9 14:42:17.599577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 14:42:17.670989 ignition[965]: INFO : Ignition 2.21.0
Jul 9 14:42:17.670989 ignition[965]: INFO : Stage: files
Jul 9 14:42:17.674904 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 14:42:17.674904 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 9 14:42:17.674904 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Jul 9 14:42:17.679407 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 9 14:42:17.679407 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 9 14:42:17.686777 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 9 14:42:17.688014 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 9 14:42:17.689278 unknown[965]: wrote ssh authorized keys file for user: core
Jul 9 14:42:17.690029 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 9 14:42:17.695236 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 9 14:42:17.696325 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 9 14:42:17.905402 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 14:42:18.324628 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 9 14:42:18.324628 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 14:42:18.329608 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 9 14:42:19.009306 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 9 14:42:19.412522 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 14:42:19.412522 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 14:42:19.417195 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 14:42:19.417195 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 14:42:19.417195 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 14:42:19.417195 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 14:42:19.417195 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 14:42:19.417195 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 14:42:19.417195 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 14:42:19.432425 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 14:42:19.432425 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 14:42:19.432425 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 14:42:19.432425 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 14:42:19.432425 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 14:42:19.432425 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 9 14:42:20.105181 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 9 14:42:22.382671 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 14:42:22.382671 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 9 14:42:22.388791 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 14:42:22.393601 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 14:42:22.393601 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 9 14:42:22.393601 ignition[965]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 9 14:42:22.402566 ignition[965]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 9 14:42:22.402566 ignition[965]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 14:42:22.402566 ignition[965]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 14:42:22.402566 ignition[965]: INFO : files: files passed
Jul 9 14:42:22.402566 ignition[965]: INFO : Ignition finished successfully
Jul 9 14:42:22.396821 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 9 14:42:22.401990 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 14:42:22.407988 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 9 14:42:22.432594 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 9 14:42:22.432722 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 14:42:22.440184 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 14:42:22.440184 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 14:42:22.445414 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 14:42:22.443573 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 14:42:22.446400 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 9 14:42:22.449312 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 9 14:42:22.567114 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 9 14:42:22.567598 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 9 14:42:22.572246 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 14:42:22.573988 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 14:42:22.576927 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 14:42:22.579575 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 14:42:22.632300 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 14:42:22.638621 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 14:42:22.681348 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 14:42:22.683097 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 14:42:22.686187 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 14:42:22.689107 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 14:42:22.689446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 14:42:22.692669 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 14:42:22.694480 systemd[1]: Stopped target basic.target - Basic System. Jul 9 14:42:22.697503 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 14:42:22.700131 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 14:42:22.702810 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 14:42:22.705792 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 9 14:42:22.708625 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 14:42:22.711818 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 14:42:22.714710 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 14:42:22.717785 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 14:42:22.720680 systemd[1]: Stopped target swap.target - Swaps. Jul 9 14:42:22.723337 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 14:42:22.723936 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jul 9 14:42:22.726675 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 14:42:22.728674 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 14:42:22.731122 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 9 14:42:22.731421 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 14:42:22.734209 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 14:42:22.734680 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 14:42:22.738319 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 14:42:22.738802 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 14:42:22.741955 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 14:42:22.742386 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 14:42:22.749328 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 14:42:22.756647 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 14:42:22.764335 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 14:42:22.766608 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 14:42:22.768000 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 14:42:22.768127 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 14:42:22.780265 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 14:42:22.785354 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 14:42:22.806515 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 14:42:22.812674 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 14:42:22.813543 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jul 9 14:42:22.822321 ignition[1018]: INFO : Ignition 2.21.0 Jul 9 14:42:22.824992 ignition[1018]: INFO : Stage: umount Jul 9 14:42:22.824992 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 14:42:22.824992 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 9 14:42:22.827012 ignition[1018]: INFO : umount: umount passed Jul 9 14:42:22.827012 ignition[1018]: INFO : Ignition finished successfully Jul 9 14:42:22.826905 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 14:42:22.827030 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 14:42:22.828551 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 14:42:22.828642 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 14:42:22.829392 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 14:42:22.829438 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 14:42:22.830435 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 9 14:42:22.830478 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 9 14:42:22.831496 systemd[1]: Stopped target network.target - Network. Jul 9 14:42:22.832494 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 14:42:22.832552 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 14:42:22.833567 systemd[1]: Stopped target paths.target - Path Units. Jul 9 14:42:22.834664 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 14:42:22.837892 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 14:42:22.839055 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 14:42:22.840106 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 14:42:22.841417 systemd[1]: iscsid.socket: Deactivated successfully. 
Jul 9 14:42:22.841470 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 14:42:22.842740 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 14:42:22.842778 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 14:42:22.843808 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 14:42:22.843894 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 9 14:42:22.844966 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 14:42:22.845030 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 14:42:22.846060 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 14:42:22.846126 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 14:42:22.851323 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 14:42:22.852707 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 14:42:22.863402 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 14:42:22.863563 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 14:42:22.866488 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 14:42:22.866699 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 14:42:22.866824 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 14:42:22.871592 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 14:42:22.872630 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 9 14:42:22.873353 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 14:42:22.873397 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 14:42:22.875516 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jul 9 14:42:22.877213 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 14:42:22.877270 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 14:42:22.880232 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 14:42:22.880282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 14:42:22.882210 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 9 14:42:22.882258 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 9 14:42:22.884078 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 14:42:22.884128 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 14:42:22.885860 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 14:42:22.889778 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 14:42:22.889940 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 14:42:22.896656 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 9 14:42:22.897942 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 14:42:22.898941 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 9 14:42:22.898987 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 9 14:42:22.900204 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 9 14:42:22.900246 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 14:42:22.901478 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 9 14:42:22.901538 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 9 14:42:22.903439 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jul 9 14:42:22.903491 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 9 14:42:22.905325 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 9 14:42:22.905403 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 9 14:42:22.908621 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 9 14:42:22.910172 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 9 14:42:22.910224 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 14:42:22.913481 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 9 14:42:22.913531 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 14:42:22.915014 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 9 14:42:22.915133 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 14:42:22.916473 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 14:42:22.916525 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 14:42:22.920280 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 14:42:22.920411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 14:42:22.924078 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 9 14:42:22.924143 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 9 14:42:22.924188 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 9 14:42:22.924236 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jul 9 14:42:22.924713 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 9 14:42:22.924876 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 9 14:42:22.928284 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 9 14:42:22.928460 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 9 14:42:22.930592 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 9 14:42:22.932738 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 9 14:42:22.951157 systemd[1]: Switching root. Jul 9 14:42:23.017162 systemd-journald[212]: Journal stopped Jul 9 14:42:25.215357 systemd-journald[212]: Received SIGTERM from PID 1 (systemd). Jul 9 14:42:25.215436 kernel: SELinux: policy capability network_peer_controls=1 Jul 9 14:42:25.215459 kernel: SELinux: policy capability open_perms=1 Jul 9 14:42:25.215472 kernel: SELinux: policy capability extended_socket_class=1 Jul 9 14:42:25.215484 kernel: SELinux: policy capability always_check_network=0 Jul 9 14:42:25.215502 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 9 14:42:25.215513 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 9 14:42:25.215525 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 9 14:42:25.215541 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 9 14:42:25.215552 kernel: SELinux: policy capability userspace_initial_context=0 Jul 9 14:42:25.215564 kernel: audit: type=1403 audit(1752072143.904:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 9 14:42:25.215586 systemd[1]: Successfully loaded SELinux policy in 76.727ms. Jul 9 14:42:25.215612 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.686ms. 
Jul 9 14:42:25.215626 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 9 14:42:25.215639 systemd[1]: Detected virtualization kvm. Jul 9 14:42:25.215652 systemd[1]: Detected architecture x86-64. Jul 9 14:42:25.215664 systemd[1]: Detected first boot. Jul 9 14:42:25.215687 systemd[1]: Hostname set to . Jul 9 14:42:25.215700 systemd[1]: Initializing machine ID from VM UUID. Jul 9 14:42:25.215718 kernel: Guest personality initialized and is inactive Jul 9 14:42:25.215730 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 9 14:42:25.215741 kernel: Initialized host personality Jul 9 14:42:25.215753 zram_generator::config[1062]: No configuration found. Jul 9 14:42:25.215766 kernel: NET: Registered PF_VSOCK protocol family Jul 9 14:42:25.215777 systemd[1]: Populated /etc with preset unit settings. Jul 9 14:42:25.215791 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 9 14:42:25.215804 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 9 14:42:25.215826 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 9 14:42:25.215855 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 9 14:42:25.215868 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 9 14:42:25.215886 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 9 14:42:25.215899 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 9 14:42:25.215912 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jul 9 14:42:25.215924 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 9 14:42:25.215936 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 9 14:42:25.215949 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 9 14:42:25.215970 systemd[1]: Created slice user.slice - User and Session Slice. Jul 9 14:42:25.215983 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 14:42:25.215995 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 14:42:25.216008 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 9 14:42:25.216021 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 9 14:42:25.216034 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 9 14:42:25.216053 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 9 14:42:25.216066 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 9 14:42:25.216078 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 14:42:25.216091 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 14:42:25.216103 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 9 14:42:25.216115 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 9 14:42:25.216128 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 9 14:42:25.216140 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 9 14:42:25.216153 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 9 14:42:25.216172 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 14:42:25.216185 systemd[1]: Reached target slices.target - Slice Units. Jul 9 14:42:25.216197 systemd[1]: Reached target swap.target - Swaps. Jul 9 14:42:25.216209 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 9 14:42:25.216222 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 9 14:42:25.216235 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 9 14:42:25.216247 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 9 14:42:25.216259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 9 14:42:25.216272 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 9 14:42:25.216290 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 9 14:42:25.216303 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 9 14:42:25.216321 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 9 14:42:25.216333 systemd[1]: Mounting media.mount - External Media Directory... Jul 9 14:42:25.216346 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 14:42:25.216362 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 9 14:42:25.216374 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 9 14:42:25.216388 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 9 14:42:25.216400 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 9 14:42:25.216419 systemd[1]: Reached target machines.target - Containers. 
Jul 9 14:42:25.216432 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 9 14:42:25.216447 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 14:42:25.216459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 9 14:42:25.216472 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 9 14:42:25.216484 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 14:42:25.216496 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 9 14:42:25.216508 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 14:42:25.216527 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 9 14:42:25.216539 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 14:42:25.216552 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 9 14:42:25.216564 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 9 14:42:25.216577 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 9 14:42:25.216589 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 9 14:42:25.216601 systemd[1]: Stopped systemd-fsck-usr.service. Jul 9 14:42:25.216614 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 14:42:25.216635 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 9 14:42:25.216656 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 9 14:42:25.216669 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 9 14:42:25.216688 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 9 14:42:25.216704 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 9 14:42:25.216717 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 9 14:42:25.216729 systemd[1]: verity-setup.service: Deactivated successfully. Jul 9 14:42:25.216743 systemd[1]: Stopped verity-setup.service. Jul 9 14:42:25.216760 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 14:42:25.216776 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 9 14:42:25.216789 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 9 14:42:25.216808 kernel: loop: module loaded Jul 9 14:42:25.216821 systemd[1]: Mounted media.mount - External Media Directory. Jul 9 14:42:25.216851 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 9 14:42:25.216865 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 9 14:42:25.216878 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 9 14:42:25.216890 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 9 14:42:25.216903 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 9 14:42:25.216914 kernel: fuse: init (API version 7.41) Jul 9 14:42:25.216926 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 9 14:42:25.216948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 14:42:25.216961 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 9 14:42:25.216974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 14:42:25.216989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 14:42:25.217002 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 9 14:42:25.217014 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 9 14:42:25.217027 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 14:42:25.217039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 9 14:42:25.217061 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 9 14:42:25.217074 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 9 14:42:25.217087 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 9 14:42:25.217100 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 9 14:42:25.217112 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 9 14:42:25.217152 systemd-journald[1145]: Collecting audit messages is disabled. Jul 9 14:42:25.217181 systemd-journald[1145]: Journal started Jul 9 14:42:25.217214 systemd-journald[1145]: Runtime Journal (/run/log/journal/3f181a04658c42258cacbe1c5f1f01a8) is 8M, max 78.5M, 70.5M free. Jul 9 14:42:25.233894 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 9 14:42:25.233957 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 9 14:42:25.233975 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 14:42:24.814660 systemd[1]: Queued start job for default target multi-user.target. Jul 9 14:42:24.840823 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 9 14:42:24.841622 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 9 14:42:25.243580 kernel: ACPI: bus type drm_connector registered Jul 9 14:42:25.243644 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 9 14:42:25.250994 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 9 14:42:25.256861 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 14:42:25.267937 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 9 14:42:25.267998 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 14:42:25.276450 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 9 14:42:25.280659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 14:42:25.283911 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 14:42:25.290898 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 9 14:42:25.296906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 9 14:42:25.301108 systemd[1]: Started systemd-journald.service - Journal Service. Jul 9 14:42:25.303152 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 9 14:42:25.304180 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 14:42:25.304910 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 14:42:25.305788 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 9 14:42:25.307199 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 14:42:25.309205 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jul 9 14:42:25.309972 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 9 14:42:25.329308 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 9 14:42:25.338814 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 9 14:42:25.341988 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 9 14:42:25.345018 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 9 14:42:25.360761 kernel: loop0: detected capacity change from 0 to 146480 Jul 9 14:42:25.369613 systemd-journald[1145]: Time spent on flushing to /var/log/journal/3f181a04658c42258cacbe1c5f1f01a8 is 25.912ms for 983 entries. Jul 9 14:42:25.369613 systemd-journald[1145]: System Journal (/var/log/journal/3f181a04658c42258cacbe1c5f1f01a8) is 8M, max 584.8M, 576.8M free. Jul 9 14:42:25.458755 systemd-journald[1145]: Received client request to flush runtime journal. Jul 9 14:42:25.375364 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 14:42:25.440140 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jul 9 14:42:25.440163 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Jul 9 14:42:25.457783 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 9 14:42:25.483127 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 9 14:42:25.486670 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 9 14:42:25.494958 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 9 14:42:25.508568 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 9 14:42:25.530882 kernel: loop1: detected capacity change from 0 to 224512 Jul 9 14:42:25.573583 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jul 9 14:42:25.579119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 9 14:42:25.609884 kernel: loop2: detected capacity change from 0 to 114008 Jul 9 14:42:25.637232 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jul 9 14:42:25.637999 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. Jul 9 14:42:25.644718 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 9 14:42:25.670874 kernel: loop3: detected capacity change from 0 to 8 Jul 9 14:42:25.688900 kernel: loop4: detected capacity change from 0 to 146480 Jul 9 14:42:25.786436 kernel: loop5: detected capacity change from 0 to 224512 Jul 9 14:42:25.848899 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 9 14:42:25.883955 kernel: loop6: detected capacity change from 0 to 114008 Jul 9 14:42:25.957000 kernel: loop7: detected capacity change from 0 to 8 Jul 9 14:42:25.974727 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 9 14:42:25.975487 (sd-merge)[1227]: Merged extensions into '/usr'. Jul 9 14:42:25.984001 systemd[1]: Reload requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Jul 9 14:42:25.985916 systemd[1]: Reloading... Jul 9 14:42:26.150897 zram_generator::config[1253]: No configuration found. Jul 9 14:42:26.395225 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 14:42:26.506664 systemd[1]: Reloading finished in 519 ms. Jul 9 14:42:26.528708 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 9 14:42:26.529801 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jul 9 14:42:26.539720 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 9 14:42:26.540322 systemd[1]: Starting ensure-sysext.service... Jul 9 14:42:26.543699 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 9 14:42:26.545981 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 9 14:42:26.557067 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 9 14:42:26.562069 systemd[1]: Reload requested from client PID 1309 ('systemctl') (unit ensure-sysext.service)... Jul 9 14:42:26.562082 systemd[1]: Reloading... Jul 9 14:42:26.589908 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 9 14:42:26.589952 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 9 14:42:26.590229 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 9 14:42:26.590493 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 9 14:42:26.592556 systemd-tmpfiles[1310]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 9 14:42:26.593357 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jul 9 14:42:26.593446 systemd-tmpfiles[1310]: ACLs are not supported, ignoring. Jul 9 14:42:26.602814 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Jul 9 14:42:26.605447 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot. Jul 9 14:42:26.605472 systemd-tmpfiles[1310]: Skipping /boot Jul 9 14:42:26.633645 systemd-tmpfiles[1310]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 9 14:42:26.633661 systemd-tmpfiles[1310]: Skipping /boot Jul 9 14:42:26.696875 zram_generator::config[1354]: No configuration found. Jul 9 14:42:26.874223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 14:42:26.997867 kernel: mousedev: PS/2 mouse device common for all mice Jul 9 14:42:27.014012 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 9 14:42:27.014127 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 14:42:27.014851 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 9 14:42:27.017318 systemd[1]: Reloading finished in 454 ms. Jul 9 14:42:27.025870 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 9 14:42:27.034910 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 9 14:42:27.038856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 9 14:42:27.053865 kernel: ACPI: button: Power Button [PWRF] Jul 9 14:42:27.054205 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 14:42:27.095947 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 14:42:27.097284 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 14:42:27.100527 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 9 14:42:27.102058 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 9 14:42:27.105613 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 9 14:42:27.108235 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jul 9 14:42:27.113896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 9 14:42:27.125417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 9 14:42:27.126204 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 9 14:42:27.130318 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 9 14:42:27.131028 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 9 14:42:27.133486 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 9 14:42:27.139218 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 14:42:27.146261 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 9 14:42:27.154406 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 9 14:42:27.155221 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 9 14:42:27.169946 systemd[1]: Finished ensure-sysext.service. Jul 9 14:42:27.178119 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 9 14:42:27.179211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 9 14:42:27.184454 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 9 14:42:27.216066 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 9 14:42:27.227869 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 9 14:42:27.228935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jul 9 14:42:27.233662 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 9 14:42:27.234182 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 9 14:42:27.234921 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 9 14:42:27.237802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 9 14:42:27.239086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 9 14:42:27.242096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 9 14:42:27.256045 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 9 14:42:27.266992 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 9 14:42:27.276934 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 9 14:42:27.277901 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 9 14:42:27.280211 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 9 14:42:27.286167 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 9 14:42:27.326371 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 9 14:42:27.343135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 14:42:27.355625 augenrules[1489]: No rules Jul 9 14:42:27.358744 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 14:42:27.360152 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jul 9 14:42:27.390866 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jul 9 14:42:27.404878 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jul 9 14:42:27.438457 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 9 14:42:27.441189 kernel: Console: switching to colour dummy device 80x25 Jul 9 14:42:27.444403 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 9 14:42:27.444448 kernel: [drm] features: -context_init Jul 9 14:42:27.452858 kernel: [drm] number of scanouts: 1 Jul 9 14:42:27.458882 kernel: [drm] number of cap sets: 0 Jul 9 14:42:27.462900 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jul 9 14:42:27.478403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 14:42:27.478908 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 14:42:27.487087 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 9 14:42:27.495879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 14:42:27.584928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 14:42:27.593637 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 9 14:42:27.593978 systemd[1]: Reached target time-set.target - System Time Set. Jul 9 14:42:27.614493 systemd-networkd[1452]: lo: Link UP Jul 9 14:42:27.614856 systemd-networkd[1452]: lo: Gained carrier Jul 9 14:42:27.616554 systemd-networkd[1452]: Enumeration completed Jul 9 14:42:27.616723 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 14:42:27.617194 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 14:42:27.617284 systemd-networkd[1452]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 9 14:42:27.619004 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 9 14:42:27.619945 systemd-networkd[1452]: eth0: Link UP Jul 9 14:42:27.620127 systemd-networkd[1452]: eth0: Gained carrier Jul 9 14:42:27.620146 systemd-networkd[1452]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 14:42:27.621184 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 9 14:42:27.632127 systemd-networkd[1452]: eth0: DHCPv4 address 172.24.4.161/24, gateway 172.24.4.1 acquired from 172.24.4.1 Jul 9 14:42:27.635282 systemd-timesyncd[1461]: Network configuration changed, trying to establish connection. Jul 9 14:42:27.638044 systemd-resolved[1453]: Positive Trust Anchors: Jul 9 14:42:27.638326 systemd-resolved[1453]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 9 14:42:27.638371 systemd-resolved[1453]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 9 14:42:27.648793 systemd-resolved[1453]: Using system hostname 'ci-9999-9-100-ea23d699c2.novalocal'. Jul 9 14:42:27.652170 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 9 14:42:27.652440 systemd[1]: Reached target network.target - Network. Jul 9 14:42:27.652522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jul 9 14:42:27.652608 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 14:42:27.652816 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 9 14:42:27.653016 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 14:42:27.653141 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 9 14:42:27.653475 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 14:42:27.653746 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 14:42:27.653887 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 14:42:27.653998 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 14:42:27.654038 systemd[1]: Reached target paths.target - Path Units. Jul 9 14:42:27.654937 systemd[1]: Reached target timers.target - Timer Units. Jul 9 14:42:27.657063 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 14:42:27.659810 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 14:42:27.663410 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 9 14:42:27.663965 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 14:42:27.664225 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 14:42:27.670817 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 14:42:27.672238 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 9 14:42:27.673512 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jul 9 14:42:27.673821 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 14:42:27.675452 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 14:42:27.675710 systemd[1]: Reached target basic.target - Basic System. Jul 9 14:42:27.675982 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 9 14:42:27.676022 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 14:42:27.677482 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 14:42:27.679977 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 9 14:42:27.682103 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 14:42:27.688915 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 9 14:42:27.691928 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 14:42:27.695475 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 14:42:27.695584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 14:42:27.695980 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 14:42:27.700042 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 9 14:42:27.703197 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 14:42:27.712203 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 9 14:42:27.715746 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 9 14:42:27.721153 jq[1524]: false Jul 9 14:42:27.721696 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 9 14:42:27.729351 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing passwd entry cache Jul 9 14:42:27.729373 oslogin_cache_refresh[1527]: Refreshing passwd entry cache Jul 9 14:42:27.730543 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 9 14:42:27.732299 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 14:42:27.733181 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 9 14:42:27.736902 systemd[1]: Starting update-engine.service - Update Engine... Jul 9 14:42:27.743517 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting users, quitting Jul 9 14:42:27.743517 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 9 14:42:27.743517 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Refreshing group entry cache Jul 9 14:42:27.743264 oslogin_cache_refresh[1527]: Failure getting users, quitting Jul 9 14:42:27.743293 oslogin_cache_refresh[1527]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 9 14:42:27.743355 oslogin_cache_refresh[1527]: Refreshing group entry cache Jul 9 14:42:27.744013 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 9 14:42:27.746577 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 9 14:42:27.746920 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 9 14:42:27.747905 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jul 9 14:42:27.762855 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Failure getting groups, quitting Jul 9 14:42:27.762855 google_oslogin_nss_cache[1527]: oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 9 14:42:27.762062 oslogin_cache_refresh[1527]: Failure getting groups, quitting Jul 9 14:42:27.762084 oslogin_cache_refresh[1527]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 9 14:42:27.768307 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 9 14:42:27.768536 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 9 14:42:27.777188 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 14:42:27.777513 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 14:42:27.786154 extend-filesystems[1525]: Found /dev/vda6 Jul 9 14:42:27.799284 extend-filesystems[1525]: Found /dev/vda9 Jul 9 14:42:27.807984 extend-filesystems[1525]: Checking size of /dev/vda9 Jul 9 14:42:27.813938 tar[1547]: linux-amd64/LICENSE Jul 9 14:42:27.814554 tar[1547]: linux-amd64/helm Jul 9 14:42:27.817724 jq[1539]: true Jul 9 14:42:27.830398 update_engine[1537]: I20250709 14:42:27.830294 1537 main.cc:92] Flatcar Update Engine starting Jul 9 14:42:27.833530 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 14:42:27.834111 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 14:42:27.834998 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 9 14:42:27.849083 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 9 14:42:27.848883 dbus-daemon[1522]: [system] SELinux support is enabled Jul 9 14:42:27.854523 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 14:42:27.860473 extend-filesystems[1525]: Resized partition /dev/vda9 Jul 9 14:42:27.854558 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 14:42:27.854702 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 14:42:27.854727 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 9 14:42:27.871901 extend-filesystems[1568]: resize2fs 1.47.2 (1-Jan-2025) Jul 9 14:42:27.872872 jq[1563]: true Jul 9 14:42:27.887855 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Jul 9 14:42:27.888366 systemd[1]: Started update-engine.service - Update Engine. Jul 9 14:42:27.893060 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 9 14:42:27.895040 update_engine[1537]: I20250709 14:42:27.894147 1537 update_check_scheduler.cc:74] Next update check in 9m55s Jul 9 14:42:27.912872 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Jul 9 14:42:27.931775 systemd-logind[1533]: New seat seat0. Jul 9 14:42:27.949418 systemd-logind[1533]: Watching system buttons on /dev/input/event2 (Power Button) Jul 9 14:42:27.949447 systemd-logind[1533]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 9 14:42:27.949792 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 9 14:42:27.959005 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 9 14:42:27.959005 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 14:42:27.959005 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Jul 9 14:42:27.959367 extend-filesystems[1525]: Resized filesystem in /dev/vda9 Jul 9 14:42:27.960519 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 14:42:27.960791 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 14:42:28.105009 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Jul 9 14:42:28.105459 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 14:42:28.116229 systemd[1]: Starting sshkeys.service... Jul 9 14:42:28.177477 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 9 14:42:28.181740 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jul 9 14:42:28.222594 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 9 14:42:28.274970 locksmithd[1571]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 14:42:28.377986 containerd[1557]: time="2025-07-09T14:42:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 9 14:42:28.381017 containerd[1557]: time="2025-07-09T14:42:28.380956217Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 9 14:42:28.398669 containerd[1557]: time="2025-07-09T14:42:28.398615648Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.912µs" Jul 9 14:42:28.398669 containerd[1557]: time="2025-07-09T14:42:28.398649982Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 9 14:42:28.398669 containerd[1557]: time="2025-07-09T14:42:28.398672023Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 9 14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402073373Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 9 14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402126402Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 9 14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402157901Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402237180Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 9 
14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402254903Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402531462Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 9 14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402552010Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402564484Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 9 14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402574322Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 9 14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402668308Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 9 14:42:28.403459 containerd[1557]: time="2025-07-09T14:42:28.402925270Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 14:42:28.403785 containerd[1557]: time="2025-07-09T14:42:28.402960466Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 9 14:42:28.403785 containerd[1557]: time="2025-07-09T14:42:28.402972158Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 9 14:42:28.403785 containerd[1557]: 
time="2025-07-09T14:42:28.403005421Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 9 14:42:28.403785 containerd[1557]: time="2025-07-09T14:42:28.403240462Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 9 14:42:28.403785 containerd[1557]: time="2025-07-09T14:42:28.403311505Z" level=info msg="metadata content store policy set" policy=shared Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420507346Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420561938Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420579932Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420594138Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420613234Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420627401Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420641227Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420653600Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420664651Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420675912Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420686241Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420699346Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420816936Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 9 14:42:28.421861 containerd[1557]: time="2025-07-09T14:42:28.420862872Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.420881577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.420892578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.420959574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.420982958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.421006502Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.421019206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.421031378Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.421041948Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.421052758Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.421114394Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.421128711Z" level=info msg="Start snapshots syncer" Jul 9 14:42:28.422219 containerd[1557]: time="2025-07-09T14:42:28.421149059Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 9 14:42:28.422475 containerd[1557]: time="2025-07-09T14:42:28.421424426Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 9 14:42:28.422475 containerd[1557]: time="2025-07-09T14:42:28.421482725Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421541024Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421632125Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421653926Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421664195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421677250Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421728776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421742572Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421761468Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421791955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421803897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 9 14:42:28.422690 containerd[1557]: time="2025-07-09T14:42:28.421815529Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 9 14:42:28.423302 containerd[1557]: time="2025-07-09T14:42:28.423098485Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 14:42:28.423302 containerd[1557]: time="2025-07-09T14:42:28.423167945Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 14:42:28.423302 containerd[1557]: time="2025-07-09T14:42:28.423181010Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 14:42:28.423302 containerd[1557]: time="2025-07-09T14:42:28.423191840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 14:42:28.423302 containerd[1557]: time="2025-07-09T14:42:28.423220003Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 9 14:42:28.423302 containerd[1557]: time="2025-07-09T14:42:28.423232507Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 9 14:42:28.423302 containerd[1557]: time="2025-07-09T14:42:28.423243167Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 9 14:42:28.423302 containerd[1557]: time="2025-07-09T14:42:28.423266861Z" level=info msg="runtime interface created"
Jul 9 14:42:28.423302 containerd[1557]: time="2025-07-09T14:42:28.423273894Z" level=info msg="created NRI interface"
Jul 9 14:42:28.423614 containerd[1557]: time="2025-07-09T14:42:28.423530896Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 9 14:42:28.423614 containerd[1557]: time="2025-07-09T14:42:28.423554150Z" level=info msg="Connect containerd service"
Jul 9 14:42:28.423614 containerd[1557]: time="2025-07-09T14:42:28.423581231Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 9 14:42:28.425643 containerd[1557]: time="2025-07-09T14:42:28.425621317Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 14:42:28.605588 sshd_keygen[1565]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 14:42:28.635254 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 9 14:42:28.638193 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 9 14:42:28.652986 systemd-networkd[1452]: eth0: Gained IPv6LL
Jul 9 14:42:28.653949 systemd-timesyncd[1461]: Network configuration changed, trying to establish connection.
Jul 9 14:42:28.656502 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 9 14:42:28.659409 systemd[1]: Reached target network-online.target - Network is Online.
Jul 9 14:42:28.674428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 14:42:28.677393 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 9 14:42:28.680095 containerd[1557]: time="2025-07-09T14:42:28.680015653Z" level=info msg="Start subscribing containerd event"
Jul 9 14:42:28.680157 containerd[1557]: time="2025-07-09T14:42:28.680104891Z" level=info msg="Start recovering state"
Jul 9 14:42:28.680694 containerd[1557]: time="2025-07-09T14:42:28.680257527Z" level=info msg="Start event monitor"
Jul 9 14:42:28.680694 containerd[1557]: time="2025-07-09T14:42:28.680282354Z" level=info msg="Start cni network conf syncer for default"
Jul 9 14:42:28.680694 containerd[1557]: time="2025-07-09T14:42:28.680293064Z" level=info msg="Start streaming server"
Jul 9 14:42:28.680694 containerd[1557]: time="2025-07-09T14:42:28.680344129Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 9 14:42:28.680694 containerd[1557]: time="2025-07-09T14:42:28.680359228Z" level=info msg="runtime interface starting up..."
Jul 9 14:42:28.680694 containerd[1557]: time="2025-07-09T14:42:28.680367163Z" level=info msg="starting plugins..."
Jul 9 14:42:28.680694 containerd[1557]: time="2025-07-09T14:42:28.680420192Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 9 14:42:28.681073 containerd[1557]: time="2025-07-09T14:42:28.681046156Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 9 14:42:28.681190 containerd[1557]: time="2025-07-09T14:42:28.681173004Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 9 14:42:28.681418 systemd[1]: Started containerd.service - containerd container runtime.
Jul 9 14:42:28.683887 containerd[1557]: time="2025-07-09T14:42:28.683863750Z" level=info msg="containerd successfully booted in 0.306797s"
Jul 9 14:42:28.695490 systemd[1]: issuegen.service: Deactivated successfully.
Jul 9 14:42:28.695973 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 9 14:42:28.700251 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 9 14:42:28.720876 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:42:28.725325 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 9 14:42:28.731331 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 9 14:42:28.735195 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 9 14:42:28.739122 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 9 14:42:28.739483 systemd[1]: Reached target getty.target - Login Prompts.
Jul 9 14:42:28.806205 tar[1547]: linux-amd64/README.md
Jul 9 14:42:28.823091 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 9 14:42:29.253937 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:42:30.628051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:42:30.649494 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 14:42:30.741548 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:42:31.279092 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:42:31.507542 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 9 14:42:31.510224 systemd[1]: Started sshd@0-172.24.4.161:22-172.24.4.1:48244.service - OpenSSH per-connection server daemon (172.24.4.1:48244).
Jul 9 14:42:31.950153 kubelet[1655]: E0709 14:42:31.950098 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 14:42:31.956163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 14:42:31.956514 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 14:42:31.957536 systemd[1]: kubelet.service: Consumed 2.141s CPU time, 264.8M memory peak.
Jul 9 14:42:32.869672 sshd[1663]: Accepted publickey for core from 172.24.4.1 port 48244 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:42:32.874420 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:42:32.896705 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 9 14:42:32.900089 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 9 14:42:32.929412 systemd-logind[1533]: New session 1 of user core.
Jul 9 14:42:32.958402 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 9 14:42:32.966291 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 9 14:42:32.988677 (systemd)[1670]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 9 14:42:32.995133 systemd-logind[1533]: New session c1 of user core.
Jul 9 14:42:33.248854 systemd[1670]: Queued start job for default target default.target.
Jul 9 14:42:33.267934 systemd[1670]: Created slice app.slice - User Application Slice.
Jul 9 14:42:33.268104 systemd[1670]: Reached target paths.target - Paths.
Jul 9 14:42:33.268232 systemd[1670]: Reached target timers.target - Timers.
Jul 9 14:42:33.270001 systemd[1670]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 9 14:42:33.292647 systemd[1670]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 9 14:42:33.292776 systemd[1670]: Reached target sockets.target - Sockets.
Jul 9 14:42:33.292960 systemd[1670]: Reached target basic.target - Basic System.
Jul 9 14:42:33.293011 systemd[1670]: Reached target default.target - Main User Target.
Jul 9 14:42:33.293051 systemd[1670]: Startup finished in 280ms.
Jul 9 14:42:33.293148 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 9 14:42:33.301033 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 9 14:42:33.770183 systemd[1]: Started sshd@1-172.24.4.161:22-172.24.4.1:33962.service - OpenSSH per-connection server daemon (172.24.4.1:33962).
Jul 9 14:42:33.946203 login[1644]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 9 14:42:33.948455 login[1645]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 9 14:42:33.962567 systemd-logind[1533]: New session 3 of user core.
Jul 9 14:42:33.972294 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 9 14:42:33.983039 systemd-logind[1533]: New session 2 of user core.
Jul 9 14:42:33.988976 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 9 14:42:34.772920 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:42:34.805966 coreos-metadata[1521]: Jul 09 14:42:34.805 WARN failed to locate config-drive, using the metadata service API instead
Jul 9 14:42:34.871329 coreos-metadata[1521]: Jul 09 14:42:34.871 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jul 9 14:42:34.932459 sshd[1681]: Accepted publickey for core from 172.24.4.1 port 33962 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:42:34.935905 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:42:34.948473 systemd-logind[1533]: New session 4 of user core.
Jul 9 14:42:34.972295 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 9 14:42:35.045503 coreos-metadata[1521]: Jul 09 14:42:35.045 INFO Fetch successful
Jul 9 14:42:35.046714 coreos-metadata[1521]: Jul 09 14:42:35.046 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 9 14:42:35.061756 coreos-metadata[1521]: Jul 09 14:42:35.061 INFO Fetch successful
Jul 9 14:42:35.062119 coreos-metadata[1521]: Jul 09 14:42:35.062 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jul 9 14:42:35.075804 coreos-metadata[1521]: Jul 09 14:42:35.075 INFO Fetch successful
Jul 9 14:42:35.077168 coreos-metadata[1521]: Jul 09 14:42:35.077 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jul 9 14:42:35.093252 coreos-metadata[1521]: Jul 09 14:42:35.093 INFO Fetch successful
Jul 9 14:42:35.093252 coreos-metadata[1521]: Jul 09 14:42:35.093 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jul 9 14:42:35.106091 coreos-metadata[1521]: Jul 09 14:42:35.106 INFO Fetch successful
Jul 9 14:42:35.106091 coreos-metadata[1521]: Jul 09 14:42:35.106 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jul 9 14:42:35.114178 coreos-metadata[1521]: Jul 09 14:42:35.114 INFO Fetch successful
Jul 9 14:42:35.166638 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 9 14:42:35.168130 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 9 14:42:35.394007 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 9 14:42:35.444185 coreos-metadata[1597]: Jul 09 14:42:35.443 WARN failed to locate config-drive, using the metadata service API instead
Jul 9 14:42:35.461546 coreos-metadata[1597]: Jul 09 14:42:35.461 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 9 14:42:35.473986 coreos-metadata[1597]: Jul 09 14:42:35.473 INFO Fetch successful
Jul 9 14:42:35.473986 coreos-metadata[1597]: Jul 09 14:42:35.473 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 9 14:42:35.487096 coreos-metadata[1597]: Jul 09 14:42:35.486 INFO Fetch successful
Jul 9 14:42:35.494509 unknown[1597]: wrote ssh authorized keys file for user: core
Jul 9 14:42:35.557358 update-ssh-keys[1725]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 14:42:35.559181 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 9 14:42:35.569496 systemd[1]: Finished sshkeys.service.
Jul 9 14:42:35.573262 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 9 14:42:35.574805 systemd[1]: Startup finished in 3.744s (kernel) + 18.168s (initrd) + 11.746s (userspace) = 33.658s.
Jul 9 14:42:35.682398 sshd[1715]: Connection closed by 172.24.4.1 port 33962
Jul 9 14:42:35.684946 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Jul 9 14:42:35.702137 systemd[1]: sshd@1-172.24.4.161:22-172.24.4.1:33962.service: Deactivated successfully.
Jul 9 14:42:35.707600 systemd[1]: session-4.scope: Deactivated successfully.
Jul 9 14:42:35.710551 systemd-logind[1533]: Session 4 logged out. Waiting for processes to exit.
Jul 9 14:42:35.717475 systemd[1]: Started sshd@2-172.24.4.161:22-172.24.4.1:33966.service - OpenSSH per-connection server daemon (172.24.4.1:33966).
Jul 9 14:42:35.720482 systemd-logind[1533]: Removed session 4.
Jul 9 14:42:36.885969 sshd[1732]: Accepted publickey for core from 172.24.4.1 port 33966 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:42:36.890083 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:42:36.911779 systemd-logind[1533]: New session 5 of user core.
Jul 9 14:42:36.934279 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 9 14:42:37.523910 sshd[1735]: Connection closed by 172.24.4.1 port 33966
Jul 9 14:42:37.524063 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
Jul 9 14:42:37.531143 systemd[1]: sshd@2-172.24.4.161:22-172.24.4.1:33966.service: Deactivated successfully.
Jul 9 14:42:37.536768 systemd[1]: session-5.scope: Deactivated successfully.
Jul 9 14:42:37.541382 systemd-logind[1533]: Session 5 logged out. Waiting for processes to exit.
Jul 9 14:42:37.544726 systemd-logind[1533]: Removed session 5.
Jul 9 14:42:42.133488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 9 14:42:42.139786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 14:42:42.599948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:42:42.615370 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 14:42:42.746087 kubelet[1748]: E0709 14:42:42.745948 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 14:42:42.753466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 14:42:42.753764 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 14:42:42.754931 systemd[1]: kubelet.service: Consumed 466ms CPU time, 108.3M memory peak.
Jul 9 14:42:47.554751 systemd[1]: Started sshd@3-172.24.4.161:22-172.24.4.1:52562.service - OpenSSH per-connection server daemon (172.24.4.1:52562).
Jul 9 14:42:48.934596 sshd[1756]: Accepted publickey for core from 172.24.4.1 port 52562 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:42:48.939973 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:42:48.958955 systemd-logind[1533]: New session 6 of user core.
Jul 9 14:42:48.968154 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 9 14:42:49.720025 sshd[1759]: Connection closed by 172.24.4.1 port 52562
Jul 9 14:42:49.722689 sshd-session[1756]: pam_unix(sshd:session): session closed for user core
Jul 9 14:42:49.738256 systemd[1]: sshd@3-172.24.4.161:22-172.24.4.1:52562.service: Deactivated successfully.
Jul 9 14:42:49.742763 systemd[1]: session-6.scope: Deactivated successfully.
Jul 9 14:42:49.745318 systemd-logind[1533]: Session 6 logged out. Waiting for processes to exit.
Jul 9 14:42:49.752593 systemd[1]: Started sshd@4-172.24.4.161:22-172.24.4.1:52568.service - OpenSSH per-connection server daemon (172.24.4.1:52568).
Jul 9 14:42:49.755887 systemd-logind[1533]: Removed session 6.
Jul 9 14:42:51.103278 sshd[1765]: Accepted publickey for core from 172.24.4.1 port 52568 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:42:51.106560 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:42:51.128995 systemd-logind[1533]: New session 7 of user core.
Jul 9 14:42:51.143152 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 9 14:42:51.741908 sshd[1768]: Connection closed by 172.24.4.1 port 52568
Jul 9 14:42:51.743438 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Jul 9 14:42:51.762957 systemd[1]: sshd@4-172.24.4.161:22-172.24.4.1:52568.service: Deactivated successfully.
Jul 9 14:42:51.767410 systemd[1]: session-7.scope: Deactivated successfully.
Jul 9 14:42:51.769642 systemd-logind[1533]: Session 7 logged out. Waiting for processes to exit.
Jul 9 14:42:51.778278 systemd[1]: Started sshd@5-172.24.4.161:22-172.24.4.1:52570.service - OpenSSH per-connection server daemon (172.24.4.1:52570).
Jul 9 14:42:51.780817 systemd-logind[1533]: Removed session 7.
Jul 9 14:42:52.882797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 9 14:42:52.887023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 14:42:53.306548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:42:53.323514 (kubelet)[1785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 14:42:53.415671 sshd[1774]: Accepted publickey for core from 172.24.4.1 port 52570 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:42:53.419641 sshd-session[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:42:53.434893 kubelet[1785]: E0709 14:42:53.434224 1785 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 14:42:53.438227 systemd-logind[1533]: New session 8 of user core.
Jul 9 14:42:53.439744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 14:42:53.440121 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 14:42:53.440732 systemd[1]: kubelet.service: Consumed 464ms CPU time, 108M memory peak.
Jul 9 14:42:53.453162 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 9 14:42:54.056265 sshd[1792]: Connection closed by 172.24.4.1 port 52570
Jul 9 14:42:54.057611 sshd-session[1774]: pam_unix(sshd:session): session closed for user core
Jul 9 14:42:54.074377 systemd[1]: sshd@5-172.24.4.161:22-172.24.4.1:52570.service: Deactivated successfully.
Jul 9 14:42:54.079088 systemd[1]: session-8.scope: Deactivated successfully.
Jul 9 14:42:54.082980 systemd-logind[1533]: Session 8 logged out. Waiting for processes to exit.
Jul 9 14:42:54.087471 systemd-logind[1533]: Removed session 8.
Jul 9 14:42:54.091321 systemd[1]: Started sshd@6-172.24.4.161:22-172.24.4.1:51672.service - OpenSSH per-connection server daemon (172.24.4.1:51672).
Jul 9 14:42:55.697177 sshd[1798]: Accepted publickey for core from 172.24.4.1 port 51672 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:42:55.701195 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:42:55.713087 systemd-logind[1533]: New session 9 of user core.
Jul 9 14:42:55.727147 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 9 14:42:56.207746 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 9 14:42:56.208470 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 14:42:56.233806 sudo[1802]: pam_unix(sudo:session): session closed for user root
Jul 9 14:42:56.442149 sshd[1801]: Connection closed by 172.24.4.1 port 51672
Jul 9 14:42:56.443515 sshd-session[1798]: pam_unix(sshd:session): session closed for user core
Jul 9 14:42:56.464967 systemd[1]: sshd@6-172.24.4.161:22-172.24.4.1:51672.service: Deactivated successfully.
Jul 9 14:42:56.469666 systemd[1]: session-9.scope: Deactivated successfully.
Jul 9 14:42:56.471819 systemd-logind[1533]: Session 9 logged out. Waiting for processes to exit.
Jul 9 14:42:56.478626 systemd[1]: Started sshd@7-172.24.4.161:22-172.24.4.1:51688.service - OpenSSH per-connection server daemon (172.24.4.1:51688).
Jul 9 14:42:56.481074 systemd-logind[1533]: Removed session 9.
Jul 9 14:42:57.610085 sshd[1808]: Accepted publickey for core from 172.24.4.1 port 51688 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:42:57.613291 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:42:57.625953 systemd-logind[1533]: New session 10 of user core.
Jul 9 14:42:57.635216 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 9 14:42:58.041418 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 9 14:42:58.042185 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 14:42:58.054822 sudo[1813]: pam_unix(sudo:session): session closed for user root
Jul 9 14:42:58.068297 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 9 14:42:58.069096 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 14:42:58.097155 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 14:42:58.197390 augenrules[1835]: No rules
Jul 9 14:42:58.200478 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 14:42:58.201092 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 14:42:58.204066 sudo[1812]: pam_unix(sudo:session): session closed for user root
Jul 9 14:42:58.354193 sshd[1811]: Connection closed by 172.24.4.1 port 51688
Jul 9 14:42:58.357250 sshd-session[1808]: pam_unix(sshd:session): session closed for user core
Jul 9 14:42:58.373300 systemd[1]: sshd@7-172.24.4.161:22-172.24.4.1:51688.service: Deactivated successfully.
Jul 9 14:42:58.377716 systemd[1]: session-10.scope: Deactivated successfully.
Jul 9 14:42:58.380126 systemd-logind[1533]: Session 10 logged out. Waiting for processes to exit.
Jul 9 14:42:58.387429 systemd[1]: Started sshd@8-172.24.4.161:22-172.24.4.1:51690.service - OpenSSH per-connection server daemon (172.24.4.1:51690).
Jul 9 14:42:58.389605 systemd-logind[1533]: Removed session 10.
Jul 9 14:42:59.700384 systemd-timesyncd[1461]: Contacted time server 23.186.168.123:123 (2.flatcar.pool.ntp.org).
Jul 9 14:42:59.700702 systemd-timesyncd[1461]: Initial clock synchronization to Wed 2025-07-09 14:42:59.699630 UTC.
Jul 9 14:42:59.702309 systemd-resolved[1453]: Clock change detected. Flushing caches.
Jul 9 14:42:59.879503 sshd[1844]: Accepted publickey for core from 172.24.4.1 port 51690 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:42:59.882883 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:42:59.895861 systemd-logind[1533]: New session 11 of user core.
Jul 9 14:42:59.909320 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 9 14:43:00.273515 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 9 14:43:00.275295 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 14:43:01.400324 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 9 14:43:01.431388 (dockerd)[1867]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 9 14:43:01.996777 dockerd[1867]: time="2025-07-09T14:43:01.996529589Z" level=info msg="Starting up"
Jul 9 14:43:01.998623 dockerd[1867]: time="2025-07-09T14:43:01.998601956Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 9 14:43:02.045930 dockerd[1867]: time="2025-07-09T14:43:02.045841394Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jul 9 14:43:02.117426 systemd[1]: var-lib-docker-metacopy\x2dcheck2156885442-merged.mount: Deactivated successfully.
Jul 9 14:43:02.160952 dockerd[1867]: time="2025-07-09T14:43:02.160721922Z" level=info msg="Loading containers: start."
Jul 9 14:43:02.195301 kernel: Initializing XFRM netlink socket
Jul 9 14:43:02.634626 systemd-networkd[1452]: docker0: Link UP
Jul 9 14:43:02.640687 dockerd[1867]: time="2025-07-09T14:43:02.640537716Z" level=info msg="Loading containers: done."
Jul 9 14:43:02.666655 dockerd[1867]: time="2025-07-09T14:43:02.666545063Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 9 14:43:02.666919 dockerd[1867]: time="2025-07-09T14:43:02.666693522Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jul 9 14:43:02.666919 dockerd[1867]: time="2025-07-09T14:43:02.666805331Z" level=info msg="Initializing buildkit"
Jul 9 14:43:02.705659 dockerd[1867]: time="2025-07-09T14:43:02.705601393Z" level=info msg="Completed buildkit initialization"
Jul 9 14:43:02.715242 dockerd[1867]: time="2025-07-09T14:43:02.715161233Z" level=info msg="Daemon has completed initialization"
Jul 9 14:43:02.715452 dockerd[1867]: time="2025-07-09T14:43:02.715300564Z" level=info msg="API listen on /run/docker.sock"
Jul 9 14:43:02.715728 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 9 14:43:04.114960 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 9 14:43:04.119258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 14:43:04.251764 containerd[1557]: time="2025-07-09T14:43:04.250689455Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 9 14:43:04.449957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:43:04.457008 (kubelet)[2087]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 14:43:04.536978 kubelet[2087]: E0709 14:43:04.536850 2087 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 14:43:04.542461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 14:43:04.543321 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 14:43:04.544716 systemd[1]: kubelet.service: Consumed 311ms CPU time, 107.5M memory peak.
Jul 9 14:43:05.192043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1948031977.mount: Deactivated successfully.
Jul 9 14:43:07.043269 containerd[1557]: time="2025-07-09T14:43:07.043164412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:07.046100 containerd[1557]: time="2025-07-09T14:43:07.045944706Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053"
Jul 9 14:43:07.048859 containerd[1557]: time="2025-07-09T14:43:07.048810671Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:07.054034 containerd[1557]: time="2025-07-09T14:43:07.053947515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:07.058510 containerd[1557]: time="2025-07-09T14:43:07.058062904Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.807197077s"
Jul 9 14:43:07.058510 containerd[1557]: time="2025-07-09T14:43:07.058162741Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jul 9 14:43:07.060803 containerd[1557]: time="2025-07-09T14:43:07.060775401Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 9 14:43:09.122424 containerd[1557]: time="2025-07-09T14:43:09.121839276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:09.125614 containerd[1557]: time="2025-07-09T14:43:09.125182245Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920"
Jul 9 14:43:09.126564 containerd[1557]: time="2025-07-09T14:43:09.126382456Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:09.130805 containerd[1557]: time="2025-07-09T14:43:09.130697239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:09.134181 containerd[1557]: time="2025-07-09T14:43:09.133655387Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 2.072666435s"
Jul 9 14:43:09.134181 containerd[1557]: time="2025-07-09T14:43:09.133817301Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\""
Jul 9 14:43:09.138161 containerd[1557]: time="2025-07-09T14:43:09.138094011Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 9 14:43:10.929808 containerd[1557]: time="2025-07-09T14:43:10.929602315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:10.932049 containerd[1557]: time="2025-07-09T14:43:10.931990865Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924"
Jul 9 14:43:10.933523 containerd[1557]: time="2025-07-09T14:43:10.933465580Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:10.938788 containerd[1557]: time="2025-07-09T14:43:10.938538384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:10.940550 containerd[1557]: time="2025-07-09T14:43:10.939915668Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.801747647s"
Jul 9 14:43:10.940550 containerd[1557]: time="2025-07-09T14:43:10.940008722Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\""
Jul 9 14:43:10.942406 containerd[1557]: time="2025-07-09T14:43:10.941993715Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 9 14:43:12.367909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968363366.mount: Deactivated successfully.
Jul 9 14:43:12.996290 containerd[1557]: time="2025-07-09T14:43:12.996194728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:12.997819 containerd[1557]: time="2025-07-09T14:43:12.997723916Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371"
Jul 9 14:43:12.999643 containerd[1557]: time="2025-07-09T14:43:12.999603962Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:13.001975 containerd[1557]: time="2025-07-09T14:43:13.001933672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:13.002861 containerd[1557]: time="2025-07-09T14:43:13.002481119Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.05993623s"
Jul 9 14:43:13.002861 containerd[1557]: time="2025-07-09T14:43:13.002518138Z"
level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 9 14:43:13.003695 containerd[1557]: time="2025-07-09T14:43:13.003673505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 14:43:13.410441 update_engine[1537]: I20250709 14:43:13.409960 1537 update_attempter.cc:509] Updating boot flags... Jul 9 14:43:13.684170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880170723.mount: Deactivated successfully. Jul 9 14:43:14.615597 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 9 14:43:14.624568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 14:43:15.360870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 14:43:15.369605 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 14:43:15.500534 kubelet[2243]: E0709 14:43:15.500461 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 14:43:15.504593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 14:43:15.504760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 14:43:15.505626 systemd[1]: kubelet.service: Consumed 740ms CPU time, 110.1M memory peak. 
Jul 9 14:43:15.729134 containerd[1557]: time="2025-07-09T14:43:15.727733649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:15.729134 containerd[1557]: time="2025-07-09T14:43:15.729099320Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 9 14:43:15.730348 containerd[1557]: time="2025-07-09T14:43:15.730319819Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:15.737113 containerd[1557]: time="2025-07-09T14:43:15.737057235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:15.739659 containerd[1557]: time="2025-07-09T14:43:15.739588352Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.735785906s"
Jul 9 14:43:15.739818 containerd[1557]: time="2025-07-09T14:43:15.739798166Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 9 14:43:15.742239 containerd[1557]: time="2025-07-09T14:43:15.742220920Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 9 14:43:16.343437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount991295967.mount: Deactivated successfully.
Jul 9 14:43:16.355432 containerd[1557]: time="2025-07-09T14:43:16.355239135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 14:43:16.357319 containerd[1557]: time="2025-07-09T14:43:16.357248263Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 9 14:43:16.359255 containerd[1557]: time="2025-07-09T14:43:16.359076813Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 14:43:16.364835 containerd[1557]: time="2025-07-09T14:43:16.364289960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 14:43:16.367013 containerd[1557]: time="2025-07-09T14:43:16.366256398Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 623.866672ms"
Jul 9 14:43:16.367013 containerd[1557]: time="2025-07-09T14:43:16.366335687Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 9 14:43:16.368028 containerd[1557]: time="2025-07-09T14:43:16.367730493Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 9 14:43:17.081853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113742117.mount: Deactivated successfully.
Jul 9 14:43:20.842733 containerd[1557]: time="2025-07-09T14:43:20.842013469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:20.846334 containerd[1557]: time="2025-07-09T14:43:20.845518964Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368"
Jul 9 14:43:20.848594 containerd[1557]: time="2025-07-09T14:43:20.848518490Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:20.859856 containerd[1557]: time="2025-07-09T14:43:20.859046755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:20.862247 containerd[1557]: time="2025-07-09T14:43:20.862182557Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.494344732s"
Jul 9 14:43:20.863669 containerd[1557]: time="2025-07-09T14:43:20.862618845Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jul 9 14:43:24.885178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:43:24.885697 systemd[1]: kubelet.service: Consumed 740ms CPU time, 110.1M memory peak.
Jul 9 14:43:24.893055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 14:43:24.935170 systemd[1]: Reload requested from client PID 2335 ('systemctl') (unit session-11.scope)...
Jul 9 14:43:24.935224 systemd[1]: Reloading...
Jul 9 14:43:25.069785 zram_generator::config[2380]: No configuration found.
Jul 9 14:43:25.210836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 14:43:25.359494 systemd[1]: Reloading finished in 423 ms.
Jul 9 14:43:25.435414 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 9 14:43:25.435499 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 9 14:43:25.435970 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:43:25.436013 systemd[1]: kubelet.service: Consumed 279ms CPU time, 98.1M memory peak.
Jul 9 14:43:25.438338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 14:43:25.669734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 14:43:25.681014 (kubelet)[2447]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 14:43:25.988669 kubelet[2447]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 14:43:25.988669 kubelet[2447]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 14:43:25.988669 kubelet[2447]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 14:43:25.988669 kubelet[2447]: I0709 14:43:25.960427 2447 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 14:43:27.078169 kubelet[2447]: I0709 14:43:27.078128 2447 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 9 14:43:27.078715 kubelet[2447]: I0709 14:43:27.078700 2447 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 14:43:27.079267 kubelet[2447]: I0709 14:43:27.079253 2447 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 9 14:43:27.119942 kubelet[2447]: E0709 14:43:27.119705 2447 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.161:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError"
Jul 9 14:43:27.122397 kubelet[2447]: I0709 14:43:27.122335 2447 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 14:43:27.151395 kubelet[2447]: I0709 14:43:27.151343 2447 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 9 14:43:27.158606 kubelet[2447]: I0709 14:43:27.158550 2447 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 14:43:27.161051 kubelet[2447]: I0709 14:43:27.160975 2447 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 14:43:27.161457 kubelet[2447]: I0709 14:43:27.161020 2447 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-9-100-ea23d699c2.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 14:43:27.162406 kubelet[2447]: I0709 14:43:27.161481 2447 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 14:43:27.162406 kubelet[2447]: I0709 14:43:27.161497 2447 container_manager_linux.go:304] "Creating device plugin manager"
Jul 9 14:43:27.162406 kubelet[2447]: I0709 14:43:27.161871 2447 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 14:43:27.167291 kubelet[2447]: I0709 14:43:27.167216 2447 kubelet.go:446] "Attempting to sync node with API server"
Jul 9 14:43:27.167291 kubelet[2447]: I0709 14:43:27.167288 2447 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 14:43:27.168804 kubelet[2447]: I0709 14:43:27.167345 2447 kubelet.go:352] "Adding apiserver pod source"
Jul 9 14:43:27.168804 kubelet[2447]: I0709 14:43:27.167392 2447 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 14:43:27.176359 kubelet[2447]: W0709 14:43:27.176250 2447 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused
Jul 9 14:43:27.176359 kubelet[2447]: E0709 14:43:27.176352 2447 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError"
Jul 9 14:43:27.176828 kubelet[2447]: W0709 14:43:27.176673 2447 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-9-100-ea23d699c2.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused
Jul 9 14:43:27.176828 kubelet[2447]: E0709 14:43:27.176702 2447 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.161:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-9-100-ea23d699c2.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError"
Jul 9 14:43:27.177060 kubelet[2447]: I0709 14:43:27.177043 2447 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 9 14:43:27.177964 kubelet[2447]: I0709 14:43:27.177792 2447 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 9 14:43:27.179843 kubelet[2447]: W0709 14:43:27.178943 2447 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 9 14:43:27.184760 kubelet[2447]: I0709 14:43:27.184182 2447 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 14:43:27.184760 kubelet[2447]: I0709 14:43:27.184294 2447 server.go:1287] "Started kubelet"
Jul 9 14:43:27.187086 kubelet[2447]: I0709 14:43:27.187068 2447 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 14:43:27.195476 kubelet[2447]: I0709 14:43:27.195374 2447 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 14:43:27.197007 kubelet[2447]: I0709 14:43:27.196988 2447 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 14:43:27.198129 kubelet[2447]: E0709 14:43:27.198109 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found"
Jul 9 14:43:27.200174 kubelet[2447]: I0709 14:43:27.200132 2447 server.go:479] "Adding debug handlers to kubelet server"
Jul 9 14:43:27.201111 kubelet[2447]: I0709 14:43:27.201094 2447 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 14:43:27.201333 kubelet[2447]: I0709 14:43:27.201319 2447 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 14:43:27.205676 kubelet[2447]: I0709 14:43:27.205046 2447 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 14:43:27.205966 kubelet[2447]: I0709 14:43:27.205929 2447 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 14:43:27.216626 kubelet[2447]: I0709 14:43:27.215684 2447 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 14:43:27.219915 kubelet[2447]: I0709 14:43:27.219863 2447 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 9 14:43:27.220809 kubelet[2447]: I0709 14:43:27.220784 2447 factory.go:221] Registration of the systemd container factory successfully
Jul 9 14:43:27.220949 kubelet[2447]: I0709 14:43:27.220919 2447 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 9 14:43:27.221843 kubelet[2447]: I0709 14:43:27.221797 2447 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 9 14:43:27.222041 kubelet[2447]: E0709 14:43:27.222001 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-100-ea23d699c2.novalocal?timeout=10s\": dial tcp 172.24.4.161:6443: connect: connection refused" interval="200ms"
Jul 9 14:43:27.222239 kubelet[2447]: I0709 14:43:27.222128 2447 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 9 14:43:27.222455 kubelet[2447]: I0709 14:43:27.222413 2447 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 9 14:43:27.222653 kubelet[2447]: I0709 14:43:27.222550 2447 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 9 14:43:27.222870 kubelet[2447]: E0709 14:43:27.222842 2447 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 9 14:43:27.224719 kubelet[2447]: W0709 14:43:27.224669 2447 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused
Jul 9 14:43:27.224792 kubelet[2447]: E0709 14:43:27.224725 2447 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError"
Jul 9 14:43:27.226551 kubelet[2447]: E0709 14:43:27.224812 2447 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.161:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.161:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-9999-9-100-ea23d699c2.novalocal.18509c63ba0cdbf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-9999-9-100-ea23d699c2.novalocal,UID:ci-9999-9-100-ea23d699c2.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-9999-9-100-ea23d699c2.novalocal,},FirstTimestamp:2025-07-09 14:43:27.184223222 +0000 UTC m=+1.488157890,LastTimestamp:2025-07-09 14:43:27.184223222 +0000 UTC m=+1.488157890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-9-100-ea23d699c2.novalocal,}"
Jul 9 14:43:27.227515 kubelet[2447]: W0709 14:43:27.227474 2447 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused
Jul 9 14:43:27.228476 kubelet[2447]: E0709 14:43:27.228103 2447 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.161:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError"
Jul 9 14:43:27.228476 kubelet[2447]: I0709 14:43:27.228244 2447 factory.go:221] Registration of the containerd container factory successfully
Jul 9 14:43:27.245100 kubelet[2447]: E0709 14:43:27.245050 2447 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 9 14:43:27.255193 kubelet[2447]: I0709 14:43:27.255167 2447 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 9 14:43:27.255193 kubelet[2447]: I0709 14:43:27.255185 2447 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 9 14:43:27.255352 kubelet[2447]: I0709 14:43:27.255228 2447 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 14:43:27.260840 kubelet[2447]: I0709 14:43:27.260820 2447 policy_none.go:49] "None policy: Start"
Jul 9 14:43:27.260923 kubelet[2447]: I0709 14:43:27.260887 2447 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 9 14:43:27.260983 kubelet[2447]: I0709 14:43:27.260945 2447 state_mem.go:35] "Initializing new in-memory state store"
Jul 9 14:43:27.272606 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 9 14:43:27.285523 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 9 14:43:27.289209 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 9 14:43:27.299390 kubelet[2447]: E0709 14:43:27.299356 2447 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found"
Jul 9 14:43:27.300034 kubelet[2447]: I0709 14:43:27.299996 2447 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 9 14:43:27.300585 kubelet[2447]: I0709 14:43:27.300285 2447 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 9 14:43:27.300585 kubelet[2447]: I0709 14:43:27.300316 2447 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 9 14:43:27.301126 kubelet[2447]: I0709 14:43:27.301099 2447 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 9 14:43:27.305686 kubelet[2447]: E0709 14:43:27.305492 2447 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 9 14:43:27.305686 kubelet[2447]: E0709 14:43:27.305641 2447 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-9999-9-100-ea23d699c2.novalocal\" not found"
Jul 9 14:43:27.353241 systemd[1]: Created slice kubepods-burstable-podf3e9088081c3f5646c8d1da27a20ccc6.slice - libcontainer container kubepods-burstable-podf3e9088081c3f5646c8d1da27a20ccc6.slice.
Jul 9 14:43:27.374818 kubelet[2447]: E0709 14:43:27.374270 2447 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found" node="ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.380639 systemd[1]: Created slice kubepods-burstable-pod0d438985711a5263cc221b2719bfe3f0.slice - libcontainer container kubepods-burstable-pod0d438985711a5263cc221b2719bfe3f0.slice.
Jul 9 14:43:27.388335 kubelet[2447]: E0709 14:43:27.388229 2447 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found" node="ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.392275 systemd[1]: Created slice kubepods-burstable-pod04b42489adde25024c7836f026c2523b.slice - libcontainer container kubepods-burstable-pod04b42489adde25024c7836f026c2523b.slice.
Jul 9 14:43:27.396905 kubelet[2447]: E0709 14:43:27.396734 2447 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found" node="ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.404770 kubelet[2447]: I0709 14:43:27.404649 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3e9088081c3f5646c8d1da27a20ccc6-ca-certs\") pod \"kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"f3e9088081c3f5646c8d1da27a20ccc6\") " pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.405564 kubelet[2447]: I0709 14:43:27.405525 2447 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.406683 kubelet[2447]: E0709 14:43:27.406599 2447 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.161:6443/api/v1/nodes\": dial tcp 172.24.4.161:6443: connect: connection refused" node="ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.423638 kubelet[2447]: E0709 14:43:27.423578 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-100-ea23d699c2.novalocal?timeout=10s\": dial tcp 172.24.4.161:6443: connect: connection refused" interval="400ms"
Jul 9 14:43:27.505768 kubelet[2447]: I0709 14:43:27.505675 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d438985711a5263cc221b2719bfe3f0-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"0d438985711a5263cc221b2719bfe3f0\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.505897 kubelet[2447]: I0709 14:43:27.505817 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04b42489adde25024c7836f026c2523b-kubeconfig\") pod \"kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"04b42489adde25024c7836f026c2523b\") " pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.505897 kubelet[2447]: I0709 14:43:27.505872 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3e9088081c3f5646c8d1da27a20ccc6-k8s-certs\") pod \"kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"f3e9088081c3f5646c8d1da27a20ccc6\") " pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.506093 kubelet[2447]: I0709 14:43:27.505925 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3e9088081c3f5646c8d1da27a20ccc6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"f3e9088081c3f5646c8d1da27a20ccc6\") " pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.506093 kubelet[2447]: I0709 14:43:27.505971 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d438985711a5263cc221b2719bfe3f0-ca-certs\") pod \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"0d438985711a5263cc221b2719bfe3f0\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.506093 kubelet[2447]: I0709 14:43:27.506014 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d438985711a5263cc221b2719bfe3f0-k8s-certs\") pod \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"0d438985711a5263cc221b2719bfe3f0\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.506093 kubelet[2447]: I0709 14:43:27.506056 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d438985711a5263cc221b2719bfe3f0-kubeconfig\") pod \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"0d438985711a5263cc221b2719bfe3f0\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.506633 kubelet[2447]: I0709 14:43:27.506104 2447 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d438985711a5263cc221b2719bfe3f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"0d438985711a5263cc221b2719bfe3f0\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.611774 kubelet[2447]: I0709 14:43:27.611504 2447 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.613104 kubelet[2447]: E0709 14:43:27.613038 2447 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.161:6443/api/v1/nodes\": dial tcp 172.24.4.161:6443: connect: connection refused" node="ci-9999-9-100-ea23d699c2.novalocal"
Jul 9 14:43:27.678483 containerd[1557]: time="2025-07-09T14:43:27.678134994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal,Uid:f3e9088081c3f5646c8d1da27a20ccc6,Namespace:kube-system,Attempt:0,}"
Jul 9 14:43:27.691337 containerd[1557]: time="2025-07-09T14:43:27.691264779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal,Uid:0d438985711a5263cc221b2719bfe3f0,Namespace:kube-system,Attempt:0,}"
Jul 9 14:43:27.701469 containerd[1557]: time="2025-07-09T14:43:27.699666977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal,Uid:04b42489adde25024c7836f026c2523b,Namespace:kube-system,Attempt:0,}"
Jul 9 14:43:27.791176 containerd[1557]: time="2025-07-09T14:43:27.791111872Z" level=info msg="connecting to shim b3419234104350f845d071bfc374434219289932a90806bc559cdc8aff786883" address="unix:///run/containerd/s/3826b2d2d20e5500b2729cb793fa96661c3b37b6ec83136c9f8a99eb39afe5b1" namespace=k8s.io protocol=ttrpc version=3
Jul 9 14:43:27.824498 kubelet[2447]: E0709 14:43:27.824432 2447 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.161:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-9-100-ea23d699c2.novalocal?timeout=10s\": dial tcp 172.24.4.161:6443: connect: connection refused" interval="800ms"
Jul 9 14:43:27.831165 containerd[1557]: time="2025-07-09T14:43:27.831112282Z" level=info msg="connecting to shim 5d7ee08d718b01949c9ed716f3780be608de6e6271e9147ce0ea0562bc676d89" address="unix:///run/containerd/s/6d663c815d8cf8c092279e6d6c0719ba72d2ae7b4527ed49842a133d812cc5c2" namespace=k8s.io protocol=ttrpc version=3
Jul 9 14:43:27.834619 containerd[1557]: time="2025-07-09T14:43:27.834578323Z" level=info msg="connecting to shim dde4e27c445ed580f1bed3a61a605b842e6223ab150f29440f85e133746c03c6" address="unix:///run/containerd/s/0f140283b9d6442f12227a4ecf503f7cc345e7418156bd70c44ab25e06acfcdd" namespace=k8s.io protocol=ttrpc version=3
Jul 9 14:43:27.861386 systemd[1]: Started cri-containerd-b3419234104350f845d071bfc374434219289932a90806bc559cdc8aff786883.scope - libcontainer container b3419234104350f845d071bfc374434219289932a90806bc559cdc8aff786883.
Jul 9 14:43:27.898923 systemd[1]: Started cri-containerd-5d7ee08d718b01949c9ed716f3780be608de6e6271e9147ce0ea0562bc676d89.scope - libcontainer container 5d7ee08d718b01949c9ed716f3780be608de6e6271e9147ce0ea0562bc676d89.
Jul 9 14:43:27.901187 systemd[1]: Started cri-containerd-dde4e27c445ed580f1bed3a61a605b842e6223ab150f29440f85e133746c03c6.scope - libcontainer container dde4e27c445ed580f1bed3a61a605b842e6223ab150f29440f85e133746c03c6.
Jul 9 14:43:27.983295 containerd[1557]: time="2025-07-09T14:43:27.983213929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal,Uid:0d438985711a5263cc221b2719bfe3f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3419234104350f845d071bfc374434219289932a90806bc559cdc8aff786883\"" Jul 9 14:43:27.988782 containerd[1557]: time="2025-07-09T14:43:27.988703104Z" level=info msg="CreateContainer within sandbox \"b3419234104350f845d071bfc374434219289932a90806bc559cdc8aff786883\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 14:43:28.001591 containerd[1557]: time="2025-07-09T14:43:28.001540481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal,Uid:f3e9088081c3f5646c8d1da27a20ccc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d7ee08d718b01949c9ed716f3780be608de6e6271e9147ce0ea0562bc676d89\"" Jul 9 14:43:28.005495 containerd[1557]: time="2025-07-09T14:43:28.005442509Z" level=info msg="CreateContainer within sandbox \"5d7ee08d718b01949c9ed716f3780be608de6e6271e9147ce0ea0562bc676d89\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 14:43:28.006714 containerd[1557]: time="2025-07-09T14:43:28.006687283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal,Uid:04b42489adde25024c7836f026c2523b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dde4e27c445ed580f1bed3a61a605b842e6223ab150f29440f85e133746c03c6\"" Jul 9 14:43:28.011960 containerd[1557]: time="2025-07-09T14:43:28.011908005Z" level=info msg="Container 9a417a6909be136a750ecafbef8c2ffeba5587ca80dcf1fc3bb5b6adcb87ffa2: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:43:28.012436 containerd[1557]: time="2025-07-09T14:43:28.012212446Z" level=info msg="CreateContainer within sandbox \"dde4e27c445ed580f1bed3a61a605b842e6223ab150f29440f85e133746c03c6\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 14:43:28.017971 kubelet[2447]: I0709 14:43:28.017523 2447 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:28.018178 kubelet[2447]: E0709 14:43:28.018150 2447 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.161:6443/api/v1/nodes\": dial tcp 172.24.4.161:6443: connect: connection refused" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:28.031649 containerd[1557]: time="2025-07-09T14:43:28.031606630Z" level=info msg="CreateContainer within sandbox \"b3419234104350f845d071bfc374434219289932a90806bc559cdc8aff786883\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9a417a6909be136a750ecafbef8c2ffeba5587ca80dcf1fc3bb5b6adcb87ffa2\"" Jul 9 14:43:28.032812 containerd[1557]: time="2025-07-09T14:43:28.032346818Z" level=info msg="StartContainer for \"9a417a6909be136a750ecafbef8c2ffeba5587ca80dcf1fc3bb5b6adcb87ffa2\"" Jul 9 14:43:28.034488 containerd[1557]: time="2025-07-09T14:43:28.034460031Z" level=info msg="connecting to shim 9a417a6909be136a750ecafbef8c2ffeba5587ca80dcf1fc3bb5b6adcb87ffa2" address="unix:///run/containerd/s/3826b2d2d20e5500b2729cb793fa96661c3b37b6ec83136c9f8a99eb39afe5b1" protocol=ttrpc version=3 Jul 9 14:43:28.039513 containerd[1557]: time="2025-07-09T14:43:28.039458586Z" level=info msg="Container e85d4bc0e2116edfaca7bcfee1c7a8417e4adec2e1186c7fe88eaf3660b0c605: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:43:28.043841 containerd[1557]: time="2025-07-09T14:43:28.043646560Z" level=info msg="Container 79776c686dca41087ada1c9ba1f762d67cef18487f6481e745a264835d1a28ec: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:43:28.053713 containerd[1557]: time="2025-07-09T14:43:28.053650483Z" level=info msg="CreateContainer within sandbox \"5d7ee08d718b01949c9ed716f3780be608de6e6271e9147ce0ea0562bc676d89\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e85d4bc0e2116edfaca7bcfee1c7a8417e4adec2e1186c7fe88eaf3660b0c605\"" Jul 9 14:43:28.055943 containerd[1557]: time="2025-07-09T14:43:28.055900252Z" level=info msg="StartContainer for \"e85d4bc0e2116edfaca7bcfee1c7a8417e4adec2e1186c7fe88eaf3660b0c605\"" Jul 9 14:43:28.059833 containerd[1557]: time="2025-07-09T14:43:28.059789406Z" level=info msg="connecting to shim e85d4bc0e2116edfaca7bcfee1c7a8417e4adec2e1186c7fe88eaf3660b0c605" address="unix:///run/containerd/s/6d663c815d8cf8c092279e6d6c0719ba72d2ae7b4527ed49842a133d812cc5c2" protocol=ttrpc version=3 Jul 9 14:43:28.071044 systemd[1]: Started cri-containerd-9a417a6909be136a750ecafbef8c2ffeba5587ca80dcf1fc3bb5b6adcb87ffa2.scope - libcontainer container 9a417a6909be136a750ecafbef8c2ffeba5587ca80dcf1fc3bb5b6adcb87ffa2. Jul 9 14:43:28.072989 containerd[1557]: time="2025-07-09T14:43:28.072306512Z" level=info msg="CreateContainer within sandbox \"dde4e27c445ed580f1bed3a61a605b842e6223ab150f29440f85e133746c03c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"79776c686dca41087ada1c9ba1f762d67cef18487f6481e745a264835d1a28ec\"" Jul 9 14:43:28.073726 containerd[1557]: time="2025-07-09T14:43:28.073681621Z" level=info msg="StartContainer for \"79776c686dca41087ada1c9ba1f762d67cef18487f6481e745a264835d1a28ec\"" Jul 9 14:43:28.079209 containerd[1557]: time="2025-07-09T14:43:28.079107868Z" level=info msg="connecting to shim 79776c686dca41087ada1c9ba1f762d67cef18487f6481e745a264835d1a28ec" address="unix:///run/containerd/s/0f140283b9d6442f12227a4ecf503f7cc345e7418156bd70c44ab25e06acfcdd" protocol=ttrpc version=3 Jul 9 14:43:28.099947 systemd[1]: Started cri-containerd-e85d4bc0e2116edfaca7bcfee1c7a8417e4adec2e1186c7fe88eaf3660b0c605.scope - libcontainer container e85d4bc0e2116edfaca7bcfee1c7a8417e4adec2e1186c7fe88eaf3660b0c605. 
Jul 9 14:43:28.134204 systemd[1]: Started cri-containerd-79776c686dca41087ada1c9ba1f762d67cef18487f6481e745a264835d1a28ec.scope - libcontainer container 79776c686dca41087ada1c9ba1f762d67cef18487f6481e745a264835d1a28ec. Jul 9 14:43:28.179695 containerd[1557]: time="2025-07-09T14:43:28.179523167Z" level=info msg="StartContainer for \"9a417a6909be136a750ecafbef8c2ffeba5587ca80dcf1fc3bb5b6adcb87ffa2\" returns successfully" Jul 9 14:43:28.197250 kubelet[2447]: W0709 14:43:28.195398 2447 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused Jul 9 14:43:28.197250 kubelet[2447]: E0709 14:43:28.195470 2447 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.161:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError" Jul 9 14:43:28.211446 kubelet[2447]: W0709 14:43:28.211293 2447 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.161:6443: connect: connection refused Jul 9 14:43:28.211446 kubelet[2447]: E0709 14:43:28.211401 2447 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.161:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.161:6443: connect: connection refused" logger="UnhandledError" Jul 9 14:43:28.227211 containerd[1557]: time="2025-07-09T14:43:28.227167404Z" level=info msg="StartContainer for 
\"e85d4bc0e2116edfaca7bcfee1c7a8417e4adec2e1186c7fe88eaf3660b0c605\" returns successfully" Jul 9 14:43:28.266064 kubelet[2447]: E0709 14:43:28.264537 2447 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:28.271601 kubelet[2447]: E0709 14:43:28.271564 2447 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:28.284903 containerd[1557]: time="2025-07-09T14:43:28.284819059Z" level=info msg="StartContainer for \"79776c686dca41087ada1c9ba1f762d67cef18487f6481e745a264835d1a28ec\" returns successfully" Jul 9 14:43:28.821398 kubelet[2447]: I0709 14:43:28.821359 2447 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:29.274665 kubelet[2447]: E0709 14:43:29.274622 2447 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:29.276110 kubelet[2447]: E0709 14:43:29.276087 2447 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:30.281648 kubelet[2447]: E0709 14:43:30.281590 2447 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:30.486163 kubelet[2447]: E0709 14:43:30.486074 2447 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-9999-9-100-ea23d699c2.novalocal\" not found" 
node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:30.633128 kubelet[2447]: I0709 14:43:30.633058 2447 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:30.698987 kubelet[2447]: I0709 14:43:30.698916 2447 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:30.710059 kubelet[2447]: E0709 14:43:30.709964 2447 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:30.710059 kubelet[2447]: I0709 14:43:30.710003 2447 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:30.713921 kubelet[2447]: E0709 14:43:30.713843 2447 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:30.713921 kubelet[2447]: I0709 14:43:30.713895 2447 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:30.715434 kubelet[2447]: E0709 14:43:30.715410 2447 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:31.173923 kubelet[2447]: I0709 14:43:31.173825 2447 apiserver.go:52] "Watching apiserver" Jul 9 14:43:31.201854 kubelet[2447]: I0709 14:43:31.201291 2447 desired_state_of_world_populator.go:158] "Finished populating initial desired state 
of world" Jul 9 14:43:31.278589 kubelet[2447]: I0709 14:43:31.278513 2447 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:31.284982 kubelet[2447]: E0709 14:43:31.284837 2447 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:32.351778 kubelet[2447]: I0709 14:43:32.349283 2447 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:32.361571 kubelet[2447]: W0709 14:43:32.361511 2447 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:43:32.603400 kubelet[2447]: I0709 14:43:32.602407 2447 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:32.616048 kubelet[2447]: W0709 14:43:32.615864 2447 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:43:33.207893 systemd[1]: Reload requested from client PID 2720 ('systemctl') (unit session-11.scope)... Jul 9 14:43:33.207954 systemd[1]: Reloading... Jul 9 14:43:33.334825 zram_generator::config[2765]: No configuration found. Jul 9 14:43:33.486403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 14:43:33.665520 systemd[1]: Reloading finished in 456 ms. Jul 9 14:43:33.714185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 9 14:43:33.734763 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 14:43:33.735222 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 14:43:33.735321 systemd[1]: kubelet.service: Consumed 2.025s CPU time, 131.7M memory peak. Jul 9 14:43:33.738631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 14:43:33.983471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 14:43:33.993528 (kubelet)[2829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 14:43:34.247923 kubelet[2829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 14:43:34.247923 kubelet[2829]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 14:43:34.247923 kubelet[2829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 14:43:34.250850 kubelet[2829]: I0709 14:43:34.249164 2829 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 14:43:34.264658 kubelet[2829]: I0709 14:43:34.264215 2829 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 14:43:34.264658 kubelet[2829]: I0709 14:43:34.264262 2829 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 14:43:34.265928 kubelet[2829]: I0709 14:43:34.265128 2829 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 14:43:34.267436 kubelet[2829]: I0709 14:43:34.267420 2829 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 9 14:43:34.279679 kubelet[2829]: I0709 14:43:34.279641 2829 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 14:43:34.288687 sudo[2843]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 14:43:34.289947 sudo[2843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 14:43:34.294540 kubelet[2829]: I0709 14:43:34.294486 2829 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 9 14:43:34.300027 kubelet[2829]: I0709 14:43:34.299995 2829 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 14:43:34.300441 kubelet[2829]: I0709 14:43:34.300398 2829 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 14:43:34.300731 kubelet[2829]: I0709 14:43:34.300435 2829 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-9-100-ea23d699c2.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 14:43:34.301018 kubelet[2829]: I0709 14:43:34.300798 2829 topology_manager.go:138] "Creating topology 
manager with none policy" Jul 9 14:43:34.301018 kubelet[2829]: I0709 14:43:34.300836 2829 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 14:43:34.301018 kubelet[2829]: I0709 14:43:34.300974 2829 state_mem.go:36] "Initialized new in-memory state store" Jul 9 14:43:34.302227 kubelet[2829]: I0709 14:43:34.302144 2829 kubelet.go:446] "Attempting to sync node with API server" Jul 9 14:43:34.302227 kubelet[2829]: I0709 14:43:34.302217 2829 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 14:43:34.302341 kubelet[2829]: I0709 14:43:34.302255 2829 kubelet.go:352] "Adding apiserver pod source" Jul 9 14:43:34.302341 kubelet[2829]: I0709 14:43:34.302338 2829 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 14:43:34.307124 kubelet[2829]: I0709 14:43:34.307077 2829 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 9 14:43:34.307783 kubelet[2829]: I0709 14:43:34.307762 2829 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 14:43:34.310890 kubelet[2829]: I0709 14:43:34.309603 2829 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 14:43:34.310890 kubelet[2829]: I0709 14:43:34.309652 2829 server.go:1287] "Started kubelet" Jul 9 14:43:34.320441 kubelet[2829]: I0709 14:43:34.320136 2829 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 14:43:34.328766 kubelet[2829]: I0709 14:43:34.328446 2829 server.go:479] "Adding debug handlers to kubelet server" Jul 9 14:43:34.334097 kubelet[2829]: I0709 14:43:34.332553 2829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 14:43:34.336532 kubelet[2829]: I0709 14:43:34.336507 2829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 14:43:34.337981 kubelet[2829]: I0709 14:43:34.337332 2829 server.go:243] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 14:43:34.346779 kubelet[2829]: I0709 14:43:34.343477 2829 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 14:43:34.346779 kubelet[2829]: I0709 14:43:34.345923 2829 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 14:43:34.346937 kubelet[2829]: E0709 14:43:34.346819 2829 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-9999-9-100-ea23d699c2.novalocal\" not found" Jul 9 14:43:34.348447 kubelet[2829]: I0709 14:43:34.348423 2829 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 14:43:34.348637 kubelet[2829]: I0709 14:43:34.348618 2829 reconciler.go:26] "Reconciler: start to sync state" Jul 9 14:43:34.362351 kubelet[2829]: I0709 14:43:34.362311 2829 factory.go:221] Registration of the systemd container factory successfully Jul 9 14:43:34.362511 kubelet[2829]: I0709 14:43:34.362477 2829 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 14:43:34.373996 kubelet[2829]: E0709 14:43:34.373950 2829 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 14:43:34.376357 kubelet[2829]: I0709 14:43:34.376328 2829 factory.go:221] Registration of the containerd container factory successfully Jul 9 14:43:34.392971 kubelet[2829]: I0709 14:43:34.392820 2829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 14:43:34.399846 kubelet[2829]: I0709 14:43:34.399108 2829 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 9 14:43:34.399846 kubelet[2829]: I0709 14:43:34.399181 2829 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 14:43:34.399846 kubelet[2829]: I0709 14:43:34.399212 2829 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 9 14:43:34.399846 kubelet[2829]: I0709 14:43:34.399358 2829 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 14:43:34.399846 kubelet[2829]: E0709 14:43:34.399410 2829 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 14:43:34.463548 kubelet[2829]: I0709 14:43:34.463513 2829 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 14:43:34.463548 kubelet[2829]: I0709 14:43:34.463533 2829 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 14:43:34.463548 kubelet[2829]: I0709 14:43:34.463563 2829 state_mem.go:36] "Initialized new in-memory state store" Jul 9 14:43:34.464332 kubelet[2829]: I0709 14:43:34.463803 2829 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 14:43:34.464332 kubelet[2829]: I0709 14:43:34.463817 2829 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 14:43:34.464332 kubelet[2829]: I0709 14:43:34.463860 2829 policy_none.go:49] "None policy: Start" Jul 9 14:43:34.464332 kubelet[2829]: I0709 14:43:34.463891 2829 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 14:43:34.464332 kubelet[2829]: I0709 14:43:34.463934 2829 state_mem.go:35] "Initializing new in-memory state store" Jul 9 14:43:34.464332 kubelet[2829]: I0709 14:43:34.464211 2829 state_mem.go:75] "Updated machine memory state" Jul 9 14:43:34.471997 kubelet[2829]: I0709 14:43:34.471968 2829 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 14:43:34.472339 kubelet[2829]: I0709 14:43:34.472147 
2829 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 14:43:34.472339 kubelet[2829]: I0709 14:43:34.472169 2829 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 14:43:34.473129 kubelet[2829]: I0709 14:43:34.472796 2829 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 14:43:34.488903 kubelet[2829]: E0709 14:43:34.488859 2829 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 14:43:34.501279 kubelet[2829]: I0709 14:43:34.500719 2829 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.501279 kubelet[2829]: I0709 14:43:34.501191 2829 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.501845 kubelet[2829]: I0709 14:43:34.501470 2829 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.518542 kubelet[2829]: W0709 14:43:34.516411 2829 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:43:34.518542 kubelet[2829]: W0709 14:43:34.518149 2829 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:43:34.518542 kubelet[2829]: E0709 14:43:34.518326 2829 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.518542 kubelet[2829]: W0709 14:43:34.518501 2829 warnings.go:70] metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:43:34.518542 kubelet[2829]: E0709 14:43:34.518536 2829 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" already exists" pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.596130 kubelet[2829]: I0709 14:43:34.596078 2829 kubelet_node_status.go:75] "Attempting to register node" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.621840 kubelet[2829]: I0709 14:43:34.621789 2829 kubelet_node_status.go:124] "Node was previously registered" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.622005 kubelet[2829]: I0709 14:43:34.621983 2829 kubelet_node_status.go:78] "Successfully registered node" node="ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.650064 kubelet[2829]: I0709 14:43:34.650023 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0d438985711a5263cc221b2719bfe3f0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"0d438985711a5263cc221b2719bfe3f0\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.650064 kubelet[2829]: I0709 14:43:34.650063 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3e9088081c3f5646c8d1da27a20ccc6-ca-certs\") pod \"kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"f3e9088081c3f5646c8d1da27a20ccc6\") " pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.650251 kubelet[2829]: I0709 14:43:34.650085 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f3e9088081c3f5646c8d1da27a20ccc6-k8s-certs\") pod \"kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"f3e9088081c3f5646c8d1da27a20ccc6\") " pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.650251 kubelet[2829]: I0709 14:43:34.650114 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0d438985711a5263cc221b2719bfe3f0-ca-certs\") pod \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"0d438985711a5263cc221b2719bfe3f0\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.650251 kubelet[2829]: I0709 14:43:34.650133 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0d438985711a5263cc221b2719bfe3f0-kubeconfig\") pod \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"0d438985711a5263cc221b2719bfe3f0\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.650251 kubelet[2829]: I0709 14:43:34.650152 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04b42489adde25024c7836f026c2523b-kubeconfig\") pod \"kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"04b42489adde25024c7836f026c2523b\") " pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.650251 kubelet[2829]: I0709 14:43:34.650170 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3e9088081c3f5646c8d1da27a20ccc6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"f3e9088081c3f5646c8d1da27a20ccc6\") " 
pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.650426 kubelet[2829]: I0709 14:43:34.650193 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0d438985711a5263cc221b2719bfe3f0-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"0d438985711a5263cc221b2719bfe3f0\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.650426 kubelet[2829]: I0709 14:43:34.650211 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0d438985711a5263cc221b2719bfe3f0-k8s-certs\") pod \"kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal\" (UID: \"0d438985711a5263cc221b2719bfe3f0\") " pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:34.872263 sudo[2843]: pam_unix(sudo:session): session closed for user root Jul 9 14:43:35.306317 kubelet[2829]: I0709 14:43:35.305503 2829 apiserver.go:52] "Watching apiserver" Jul 9 14:43:35.349479 kubelet[2829]: I0709 14:43:35.349373 2829 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 14:43:35.430857 kubelet[2829]: I0709 14:43:35.430707 2829 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:35.432169 kubelet[2829]: I0709 14:43:35.432100 2829 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:35.458846 kubelet[2829]: W0709 14:43:35.458042 2829 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:43:35.458846 kubelet[2829]: E0709 14:43:35.458237 2829 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:35.470985 kubelet[2829]: W0709 14:43:35.470840 2829 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 9 14:43:35.474785 kubelet[2829]: E0709 14:43:35.472350 2829 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal" Jul 9 14:43:35.519843 kubelet[2829]: I0709 14:43:35.519417 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-9999-9-100-ea23d699c2.novalocal" podStartSLOduration=3.5193836530000002 podStartE2EDuration="3.519383653s" podCreationTimestamp="2025-07-09 14:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:43:35.507427273 +0000 UTC m=+1.486697769" watchObservedRunningTime="2025-07-09 14:43:35.519383653 +0000 UTC m=+1.498654149" Jul 9 14:43:35.533268 kubelet[2829]: I0709 14:43:35.532661 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-9999-9-100-ea23d699c2.novalocal" podStartSLOduration=1.5326402670000001 podStartE2EDuration="1.532640267s" podCreationTimestamp="2025-07-09 14:43:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:43:35.52103835 +0000 UTC m=+1.500308846" watchObservedRunningTime="2025-07-09 14:43:35.532640267 +0000 UTC m=+1.511910753" Jul 9 14:43:35.534025 kubelet[2829]: I0709 14:43:35.533822 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-9999-9-100-ea23d699c2.novalocal" podStartSLOduration=3.533811152 podStartE2EDuration="3.533811152s" podCreationTimestamp="2025-07-09 14:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:43:35.532447816 +0000 UTC m=+1.511718302" watchObservedRunningTime="2025-07-09 14:43:35.533811152 +0000 UTC m=+1.513081798" Jul 9 14:43:37.612867 sudo[1848]: pam_unix(sudo:session): session closed for user root Jul 9 14:43:37.797586 sshd[1847]: Connection closed by 172.24.4.1 port 51690 Jul 9 14:43:37.803553 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Jul 9 14:43:37.816835 systemd-logind[1533]: Session 11 logged out. Waiting for processes to exit. Jul 9 14:43:37.819278 systemd[1]: sshd@8-172.24.4.161:22-172.24.4.1:51690.service: Deactivated successfully. Jul 9 14:43:37.829434 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 14:43:37.831160 systemd[1]: session-11.scope: Consumed 7.924s CPU time, 271.7M memory peak. Jul 9 14:43:37.841035 systemd-logind[1533]: Removed session 11. Jul 9 14:43:39.353168 kubelet[2829]: I0709 14:43:39.353024 2829 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 14:43:39.354633 containerd[1557]: time="2025-07-09T14:43:39.354479317Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 14:43:39.355968 kubelet[2829]: I0709 14:43:39.355703 2829 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 14:43:40.077173 systemd[1]: Created slice kubepods-besteffort-pod3ac62fe4_f306_4a98_a767_c7f80406ba0d.slice - libcontainer container kubepods-besteffort-pod3ac62fe4_f306_4a98_a767_c7f80406ba0d.slice. 
Jul 9 14:43:40.090188 kubelet[2829]: I0709 14:43:40.090076 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ac62fe4-f306-4a98-a767-c7f80406ba0d-lib-modules\") pod \"kube-proxy-52p2r\" (UID: \"3ac62fe4-f306-4a98-a767-c7f80406ba0d\") " pod="kube-system/kube-proxy-52p2r" Jul 9 14:43:40.090188 kubelet[2829]: I0709 14:43:40.090126 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ac62fe4-f306-4a98-a767-c7f80406ba0d-kube-proxy\") pod \"kube-proxy-52p2r\" (UID: \"3ac62fe4-f306-4a98-a767-c7f80406ba0d\") " pod="kube-system/kube-proxy-52p2r" Jul 9 14:43:40.090188 kubelet[2829]: I0709 14:43:40.090150 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ac62fe4-f306-4a98-a767-c7f80406ba0d-xtables-lock\") pod \"kube-proxy-52p2r\" (UID: \"3ac62fe4-f306-4a98-a767-c7f80406ba0d\") " pod="kube-system/kube-proxy-52p2r" Jul 9 14:43:40.090978 kubelet[2829]: I0709 14:43:40.090242 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kch86\" (UniqueName: \"kubernetes.io/projected/3ac62fe4-f306-4a98-a767-c7f80406ba0d-kube-api-access-kch86\") pod \"kube-proxy-52p2r\" (UID: \"3ac62fe4-f306-4a98-a767-c7f80406ba0d\") " pod="kube-system/kube-proxy-52p2r" Jul 9 14:43:40.107133 systemd[1]: Created slice kubepods-burstable-podfb867bd6_4a60_418e_80c9_f381f7f3bbd0.slice - libcontainer container kubepods-burstable-podfb867bd6_4a60_418e_80c9_f381f7f3bbd0.slice. 
Jul 9 14:43:40.190641 kubelet[2829]: I0709 14:43:40.190576 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-clustermesh-secrets\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.190641 kubelet[2829]: I0709 14:43:40.190653 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-lib-modules\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.190879 kubelet[2829]: I0709 14:43:40.190673 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-xtables-lock\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.190879 kubelet[2829]: I0709 14:43:40.190695 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-config-path\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.190879 kubelet[2829]: I0709 14:43:40.190713 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-host-proc-sys-net\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.190879 kubelet[2829]: I0709 14:43:40.190764 2829 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-hostproc\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.190879 kubelet[2829]: I0709 14:43:40.190785 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-cgroup\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.190879 kubelet[2829]: I0709 14:43:40.190805 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-hubble-tls\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.191145 kubelet[2829]: I0709 14:43:40.190840 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-etc-cni-netd\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.191145 kubelet[2829]: I0709 14:43:40.190861 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-host-proc-sys-kernel\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.191145 kubelet[2829]: I0709 14:43:40.190878 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-bpf-maps\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.191145 kubelet[2829]: I0709 14:43:40.190905 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-run\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.191145 kubelet[2829]: I0709 14:43:40.190926 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfwrk\" (UniqueName: \"kubernetes.io/projected/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-kube-api-access-jfwrk\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.191145 kubelet[2829]: I0709 14:43:40.190957 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cni-path\") pod \"cilium-gz68d\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") " pod="kube-system/cilium-gz68d" Jul 9 14:43:40.389399 containerd[1557]: time="2025-07-09T14:43:40.389324049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52p2r,Uid:3ac62fe4-f306-4a98-a767-c7f80406ba0d,Namespace:kube-system,Attempt:0,}" Jul 9 14:43:40.418135 containerd[1557]: time="2025-07-09T14:43:40.417925250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gz68d,Uid:fb867bd6-4a60-418e-80c9-f381f7f3bbd0,Namespace:kube-system,Attempt:0,}" Jul 9 14:43:40.486514 systemd[1]: Created slice kubepods-besteffort-pod493ebc7e_37bd_428d_874c_60437b167fa8.slice - libcontainer container kubepods-besteffort-pod493ebc7e_37bd_428d_874c_60437b167fa8.slice. 
Jul 9 14:43:40.496680 kubelet[2829]: I0709 14:43:40.496627 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjmcs\" (UniqueName: \"kubernetes.io/projected/493ebc7e-37bd-428d-874c-60437b167fa8-kube-api-access-bjmcs\") pod \"cilium-operator-6c4d7847fc-dkzk5\" (UID: \"493ebc7e-37bd-428d-874c-60437b167fa8\") " pod="kube-system/cilium-operator-6c4d7847fc-dkzk5" Jul 9 14:43:40.496680 kubelet[2829]: I0709 14:43:40.496685 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/493ebc7e-37bd-428d-874c-60437b167fa8-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dkzk5\" (UID: \"493ebc7e-37bd-428d-874c-60437b167fa8\") " pod="kube-system/cilium-operator-6c4d7847fc-dkzk5" Jul 9 14:43:40.515861 containerd[1557]: time="2025-07-09T14:43:40.515774806Z" level=info msg="connecting to shim 858ad603d065369a024bdb4fdd92fd5ad017d20c9f81c5505ac35a0418c1839c" address="unix:///run/containerd/s/ff907f32f2a9cd4991c906a5bde7c30ddd0f3b2dec673ad57cb6a0671d820922" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:43:40.539442 containerd[1557]: time="2025-07-09T14:43:40.536093179Z" level=info msg="connecting to shim f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b" address="unix:///run/containerd/s/7728d6221a7b817ee246051b0625808501a8b71f00d8817369c76d9cac9fa37d" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:43:40.649329 systemd[1]: Started cri-containerd-f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b.scope - libcontainer container f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b. Jul 9 14:43:40.661702 systemd[1]: Started cri-containerd-858ad603d065369a024bdb4fdd92fd5ad017d20c9f81c5505ac35a0418c1839c.scope - libcontainer container 858ad603d065369a024bdb4fdd92fd5ad017d20c9f81c5505ac35a0418c1839c. 
Jul 9 14:43:40.715910 containerd[1557]: time="2025-07-09T14:43:40.715851920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gz68d,Uid:fb867bd6-4a60-418e-80c9-f381f7f3bbd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\"" Jul 9 14:43:40.717217 containerd[1557]: time="2025-07-09T14:43:40.717189958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-52p2r,Uid:3ac62fe4-f306-4a98-a767-c7f80406ba0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"858ad603d065369a024bdb4fdd92fd5ad017d20c9f81c5505ac35a0418c1839c\"" Jul 9 14:43:40.718927 containerd[1557]: time="2025-07-09T14:43:40.718892153Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 14:43:40.723604 containerd[1557]: time="2025-07-09T14:43:40.723557654Z" level=info msg="CreateContainer within sandbox \"858ad603d065369a024bdb4fdd92fd5ad017d20c9f81c5505ac35a0418c1839c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 14:43:40.745816 containerd[1557]: time="2025-07-09T14:43:40.745765020Z" level=info msg="Container 8d6aa40a7da71c944cb758f34fc3c151c6b8b25cdc95b6410221825f0ba8194e: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:43:40.760817 containerd[1557]: time="2025-07-09T14:43:40.760771855Z" level=info msg="CreateContainer within sandbox \"858ad603d065369a024bdb4fdd92fd5ad017d20c9f81c5505ac35a0418c1839c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8d6aa40a7da71c944cb758f34fc3c151c6b8b25cdc95b6410221825f0ba8194e\"" Jul 9 14:43:40.761903 containerd[1557]: time="2025-07-09T14:43:40.761671716Z" level=info msg="StartContainer for \"8d6aa40a7da71c944cb758f34fc3c151c6b8b25cdc95b6410221825f0ba8194e\"" Jul 9 14:43:40.765274 containerd[1557]: time="2025-07-09T14:43:40.765224510Z" level=info msg="connecting to shim 
8d6aa40a7da71c944cb758f34fc3c151c6b8b25cdc95b6410221825f0ba8194e" address="unix:///run/containerd/s/ff907f32f2a9cd4991c906a5bde7c30ddd0f3b2dec673ad57cb6a0671d820922" protocol=ttrpc version=3 Jul 9 14:43:40.787197 systemd[1]: Started cri-containerd-8d6aa40a7da71c944cb758f34fc3c151c6b8b25cdc95b6410221825f0ba8194e.scope - libcontainer container 8d6aa40a7da71c944cb758f34fc3c151c6b8b25cdc95b6410221825f0ba8194e. Jul 9 14:43:40.803860 containerd[1557]: time="2025-07-09T14:43:40.803786848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dkzk5,Uid:493ebc7e-37bd-428d-874c-60437b167fa8,Namespace:kube-system,Attempt:0,}" Jul 9 14:43:40.846908 containerd[1557]: time="2025-07-09T14:43:40.846805237Z" level=info msg="connecting to shim 3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed" address="unix:///run/containerd/s/d76fd673181c4dd37e0e749af3e8e3103b9e77affc9689b10afdf280d76be12f" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:43:40.858522 containerd[1557]: time="2025-07-09T14:43:40.858020843Z" level=info msg="StartContainer for \"8d6aa40a7da71c944cb758f34fc3c151c6b8b25cdc95b6410221825f0ba8194e\" returns successfully" Jul 9 14:43:40.889654 systemd[1]: Started cri-containerd-3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed.scope - libcontainer container 3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed. 
Jul 9 14:43:40.963677 containerd[1557]: time="2025-07-09T14:43:40.963478317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dkzk5,Uid:493ebc7e-37bd-428d-874c-60437b167fa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\"" Jul 9 14:43:44.813006 kubelet[2829]: I0709 14:43:44.812912 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-52p2r" podStartSLOduration=4.812864584 podStartE2EDuration="4.812864584s" podCreationTimestamp="2025-07-09 14:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:43:41.49888021 +0000 UTC m=+7.478150816" watchObservedRunningTime="2025-07-09 14:43:44.812864584 +0000 UTC m=+10.792135070" Jul 9 14:43:47.393147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1818074978.mount: Deactivated successfully. 
Jul 9 14:43:50.772324 containerd[1557]: time="2025-07-09T14:43:50.772154546Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:43:50.775928 containerd[1557]: time="2025-07-09T14:43:50.775835891Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 9 14:43:50.776795 containerd[1557]: time="2025-07-09T14:43:50.776678627Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 14:43:50.803087 containerd[1557]: time="2025-07-09T14:43:50.802423829Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.083463576s" Jul 9 14:43:50.803087 containerd[1557]: time="2025-07-09T14:43:50.802839987Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 9 14:43:50.814429 containerd[1557]: time="2025-07-09T14:43:50.813187563Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 9 14:43:50.819959 containerd[1557]: time="2025-07-09T14:43:50.817067735Z" level=info msg="CreateContainer within sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 14:43:50.874811 containerd[1557]: time="2025-07-09T14:43:50.874200436Z" level=info msg="Container 5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:43:50.884544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958254479.mount: Deactivated successfully. Jul 9 14:43:50.897234 containerd[1557]: time="2025-07-09T14:43:50.897177692Z" level=info msg="CreateContainer within sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\"" Jul 9 14:43:50.898384 containerd[1557]: time="2025-07-09T14:43:50.898353350Z" level=info msg="StartContainer for \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\"" Jul 9 14:43:50.900110 containerd[1557]: time="2025-07-09T14:43:50.899800252Z" level=info msg="connecting to shim 5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7" address="unix:///run/containerd/s/7728d6221a7b817ee246051b0625808501a8b71f00d8817369c76d9cac9fa37d" protocol=ttrpc version=3 Jul 9 14:43:50.947965 systemd[1]: Started cri-containerd-5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7.scope - libcontainer container 5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7. Jul 9 14:43:51.007110 containerd[1557]: time="2025-07-09T14:43:51.007050965Z" level=info msg="StartContainer for \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\" returns successfully" Jul 9 14:43:51.053894 systemd[1]: cri-containerd-5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7.scope: Deactivated successfully. 
Jul 9 14:43:51.056763 containerd[1557]: time="2025-07-09T14:43:51.056593677Z" level=info msg="received exit event container_id:\"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\" id:\"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\" pid:3244 exited_at:{seconds:1752072231 nanos:55543318}" Jul 9 14:43:51.057897 containerd[1557]: time="2025-07-09T14:43:51.057852780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\" id:\"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\" pid:3244 exited_at:{seconds:1752072231 nanos:55543318}" Jul 9 14:43:51.851246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7-rootfs.mount: Deactivated successfully. Jul 9 14:43:52.549300 containerd[1557]: time="2025-07-09T14:43:52.549109571Z" level=info msg="CreateContainer within sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 14:43:52.589801 containerd[1557]: time="2025-07-09T14:43:52.587902242Z" level=info msg="Container 491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:43:52.623379 containerd[1557]: time="2025-07-09T14:43:52.623317953Z" level=info msg="CreateContainer within sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\"" Jul 9 14:43:52.624278 containerd[1557]: time="2025-07-09T14:43:52.624140670Z" level=info msg="StartContainer for \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\"" Jul 9 14:43:52.625709 containerd[1557]: time="2025-07-09T14:43:52.625685313Z" level=info msg="connecting to shim 
491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f" address="unix:///run/containerd/s/7728d6221a7b817ee246051b0625808501a8b71f00d8817369c76d9cac9fa37d" protocol=ttrpc version=3 Jul 9 14:43:52.659042 systemd[1]: Started cri-containerd-491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f.scope - libcontainer container 491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f. Jul 9 14:43:52.727789 containerd[1557]: time="2025-07-09T14:43:52.727541327Z" level=info msg="StartContainer for \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\" returns successfully" Jul 9 14:43:52.747675 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 14:43:52.748033 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 14:43:52.750216 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 14:43:52.752934 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 14:43:52.756234 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 9 14:43:52.756596 containerd[1557]: time="2025-07-09T14:43:52.756250797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\" id:\"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\" pid:3293 exited_at:{seconds:1752072232 nanos:755506508}" Jul 9 14:43:52.756596 containerd[1557]: time="2025-07-09T14:43:52.756466394Z" level=info msg="received exit event container_id:\"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\" id:\"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\" pid:3293 exited_at:{seconds:1752072232 nanos:755506508}" Jul 9 14:43:52.758112 systemd[1]: cri-containerd-491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f.scope: Deactivated successfully. Jul 9 14:43:52.790118 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 9 14:43:52.847934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f-rootfs.mount: Deactivated successfully. Jul 9 14:43:53.554852 containerd[1557]: time="2025-07-09T14:43:53.553576739Z" level=info msg="CreateContainer within sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 14:43:53.659873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount271909673.mount: Deactivated successfully. Jul 9 14:43:53.671866 containerd[1557]: time="2025-07-09T14:43:53.671356833Z" level=info msg="Container a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:43:53.703921 containerd[1557]: time="2025-07-09T14:43:53.703577212Z" level=info msg="CreateContainer within sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\"" Jul 9 14:43:53.706789 containerd[1557]: time="2025-07-09T14:43:53.705262430Z" level=info msg="StartContainer for \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\"" Jul 9 14:43:53.709749 containerd[1557]: time="2025-07-09T14:43:53.709601026Z" level=info msg="connecting to shim a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82" address="unix:///run/containerd/s/7728d6221a7b817ee246051b0625808501a8b71f00d8817369c76d9cac9fa37d" protocol=ttrpc version=3 Jul 9 14:43:53.744969 systemd[1]: Started cri-containerd-a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82.scope - libcontainer container a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82. Jul 9 14:43:53.811801 systemd[1]: cri-containerd-a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82.scope: Deactivated successfully. 
Jul 9 14:43:53.816014 containerd[1557]: time="2025-07-09T14:43:53.815978243Z" level=info msg="received exit event container_id:\"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\" id:\"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\" pid:3350 exited_at:{seconds:1752072233 nanos:815481243}" Jul 9 14:43:53.817858 containerd[1557]: time="2025-07-09T14:43:53.817833512Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\" id:\"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\" pid:3350 exited_at:{seconds:1752072233 nanos:815481243}" Jul 9 14:43:53.822363 containerd[1557]: time="2025-07-09T14:43:53.822329815Z" level=info msg="StartContainer for \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\" returns successfully" Jul 9 14:43:53.866128 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82-rootfs.mount: Deactivated successfully. 
Jul 9 14:43:54.510874 containerd[1557]: time="2025-07-09T14:43:54.510734125Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:54.512344 containerd[1557]: time="2025-07-09T14:43:54.512307369Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jul 9 14:43:54.519356 containerd[1557]: time="2025-07-09T14:43:54.517394235Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 14:43:54.526314 containerd[1557]: time="2025-07-09T14:43:54.526272909Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.71299749s"
Jul 9 14:43:54.526497 containerd[1557]: time="2025-07-09T14:43:54.526446317Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 9 14:43:54.534044 containerd[1557]: time="2025-07-09T14:43:54.533837629Z" level=info msg="CreateContainer within sandbox \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 9 14:43:54.568207 containerd[1557]: time="2025-07-09T14:43:54.568126357Z" level=info msg="Container b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:43:54.588320 containerd[1557]: time="2025-07-09T14:43:54.588238827Z" level=info msg="CreateContainer within sandbox \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\""
Jul 9 14:43:54.590795 containerd[1557]: time="2025-07-09T14:43:54.590448504Z" level=info msg="StartContainer for \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\""
Jul 9 14:43:54.595045 containerd[1557]: time="2025-07-09T14:43:54.594975131Z" level=info msg="connecting to shim b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b" address="unix:///run/containerd/s/d76fd673181c4dd37e0e749af3e8e3103b9e77affc9689b10afdf280d76be12f" protocol=ttrpc version=3
Jul 9 14:43:54.595342 containerd[1557]: time="2025-07-09T14:43:54.595059791Z" level=info msg="CreateContainer within sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 14:43:54.638948 systemd[1]: Started cri-containerd-b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b.scope - libcontainer container b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b.
Jul 9 14:43:54.662797 containerd[1557]: time="2025-07-09T14:43:54.661564880Z" level=info msg="Container 4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:43:54.678208 containerd[1557]: time="2025-07-09T14:43:54.678148569Z" level=info msg="CreateContainer within sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\""
Jul 9 14:43:54.680631 containerd[1557]: time="2025-07-09T14:43:54.680593812Z" level=info msg="StartContainer for \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\""
Jul 9 14:43:54.682869 containerd[1557]: time="2025-07-09T14:43:54.682784342Z" level=info msg="connecting to shim 4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3" address="unix:///run/containerd/s/7728d6221a7b817ee246051b0625808501a8b71f00d8817369c76d9cac9fa37d" protocol=ttrpc version=3
Jul 9 14:43:54.718131 systemd[1]: Started cri-containerd-4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3.scope - libcontainer container 4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3.
Jul 9 14:43:54.728996 containerd[1557]: time="2025-07-09T14:43:54.728917250Z" level=info msg="StartContainer for \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" returns successfully"
Jul 9 14:43:54.764821 containerd[1557]: time="2025-07-09T14:43:54.763467582Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\" id:\"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\" pid:3425 exited_at:{seconds:1752072234 nanos:763098675}"
Jul 9 14:43:54.763727 systemd[1]: cri-containerd-4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3.scope: Deactivated successfully.
Jul 9 14:43:54.802125 containerd[1557]: time="2025-07-09T14:43:54.801935501Z" level=info msg="received exit event container_id:\"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\" id:\"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\" pid:3425 exited_at:{seconds:1752072234 nanos:763098675}"
Jul 9 14:43:54.821275 containerd[1557]: time="2025-07-09T14:43:54.821187955Z" level=info msg="StartContainer for \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\" returns successfully"
Jul 9 14:43:54.871861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3-rootfs.mount: Deactivated successfully.
Jul 9 14:43:55.620773 containerd[1557]: time="2025-07-09T14:43:55.617985237Z" level=info msg="CreateContainer within sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 14:43:55.653465 containerd[1557]: time="2025-07-09T14:43:55.653396634Z" level=info msg="Container fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:43:55.663184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1090608716.mount: Deactivated successfully.
Jul 9 14:43:55.682233 containerd[1557]: time="2025-07-09T14:43:55.682068247Z" level=info msg="CreateContainer within sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\""
Jul 9 14:43:55.683449 containerd[1557]: time="2025-07-09T14:43:55.683392470Z" level=info msg="StartContainer for \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\""
Jul 9 14:43:55.687387 containerd[1557]: time="2025-07-09T14:43:55.687350668Z" level=info msg="connecting to shim fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e" address="unix:///run/containerd/s/7728d6221a7b817ee246051b0625808501a8b71f00d8817369c76d9cac9fa37d" protocol=ttrpc version=3
Jul 9 14:43:55.732836 kubelet[2829]: I0709 14:43:55.731421 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dkzk5" podStartSLOduration=2.170835764 podStartE2EDuration="15.731209075s" podCreationTimestamp="2025-07-09 14:43:40 +0000 UTC" firstStartedPulling="2025-07-09 14:43:40.96734472 +0000 UTC m=+6.946615206" lastFinishedPulling="2025-07-09 14:43:54.527718001 +0000 UTC m=+20.506988517" observedRunningTime="2025-07-09 14:43:55.72966119 +0000 UTC m=+21.708931686" watchObservedRunningTime="2025-07-09 14:43:55.731209075 +0000 UTC m=+21.710479561"
Jul 9 14:43:55.749000 systemd[1]: Started cri-containerd-fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e.scope - libcontainer container fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e.
Jul 9 14:43:55.850910 containerd[1557]: time="2025-07-09T14:43:55.850854988Z" level=info msg="StartContainer for \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" returns successfully"
Jul 9 14:43:56.178533 containerd[1557]: time="2025-07-09T14:43:56.178476021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" id:\"dcd9059f07070cf64a8fb1b48a4fdb71f5772aefbe7bec85995dc515a618423a\" pid:3496 exited_at:{seconds:1752072236 nanos:177731365}"
Jul 9 14:43:56.250771 kubelet[2829]: I0709 14:43:56.250248 2829 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 9 14:43:56.297589 systemd[1]: Created slice kubepods-burstable-pod152ea29f_4d07_412e_9b26_8e7ff843d140.slice - libcontainer container kubepods-burstable-pod152ea29f_4d07_412e_9b26_8e7ff843d140.slice.
Jul 9 14:43:56.309119 systemd[1]: Created slice kubepods-burstable-pod3255eb24_38a1_4c68_adfc_0f86715b1115.slice - libcontainer container kubepods-burstable-pod3255eb24_38a1_4c68_adfc_0f86715b1115.slice.
Jul 9 14:43:56.372853 kubelet[2829]: I0709 14:43:56.372703 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/152ea29f-4d07-412e-9b26-8e7ff843d140-config-volume\") pod \"coredns-668d6bf9bc-krx7d\" (UID: \"152ea29f-4d07-412e-9b26-8e7ff843d140\") " pod="kube-system/coredns-668d6bf9bc-krx7d"
Jul 9 14:43:56.372853 kubelet[2829]: I0709 14:43:56.372795 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc445\" (UniqueName: \"kubernetes.io/projected/152ea29f-4d07-412e-9b26-8e7ff843d140-kube-api-access-gc445\") pod \"coredns-668d6bf9bc-krx7d\" (UID: \"152ea29f-4d07-412e-9b26-8e7ff843d140\") " pod="kube-system/coredns-668d6bf9bc-krx7d"
Jul 9 14:43:56.372853 kubelet[2829]: I0709 14:43:56.372818 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j5fk\" (UniqueName: \"kubernetes.io/projected/3255eb24-38a1-4c68-adfc-0f86715b1115-kube-api-access-8j5fk\") pod \"coredns-668d6bf9bc-p9dhj\" (UID: \"3255eb24-38a1-4c68-adfc-0f86715b1115\") " pod="kube-system/coredns-668d6bf9bc-p9dhj"
Jul 9 14:43:56.373119 kubelet[2829]: I0709 14:43:56.372876 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3255eb24-38a1-4c68-adfc-0f86715b1115-config-volume\") pod \"coredns-668d6bf9bc-p9dhj\" (UID: \"3255eb24-38a1-4c68-adfc-0f86715b1115\") " pod="kube-system/coredns-668d6bf9bc-p9dhj"
Jul 9 14:43:56.604268 containerd[1557]: time="2025-07-09T14:43:56.604101150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-krx7d,Uid:152ea29f-4d07-412e-9b26-8e7ff843d140,Namespace:kube-system,Attempt:0,}"
Jul 9 14:43:56.615278 containerd[1557]: time="2025-07-09T14:43:56.615223396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p9dhj,Uid:3255eb24-38a1-4c68-adfc-0f86715b1115,Namespace:kube-system,Attempt:0,}"
Jul 9 14:43:58.821422 systemd-networkd[1452]: cilium_host: Link UP
Jul 9 14:43:58.825823 systemd-networkd[1452]: cilium_net: Link UP
Jul 9 14:43:58.826943 systemd-networkd[1452]: cilium_host: Gained carrier
Jul 9 14:43:58.827148 systemd-networkd[1452]: cilium_net: Gained carrier
Jul 9 14:43:58.949868 systemd-networkd[1452]: cilium_vxlan: Link UP
Jul 9 14:43:58.950053 systemd-networkd[1452]: cilium_vxlan: Gained carrier
Jul 9 14:43:59.280844 kernel: NET: Registered PF_ALG protocol family
Jul 9 14:43:59.503260 systemd-networkd[1452]: cilium_net: Gained IPv6LL
Jul 9 14:43:59.503652 systemd-networkd[1452]: cilium_host: Gained IPv6LL
Jul 9 14:44:00.078065 systemd-networkd[1452]: cilium_vxlan: Gained IPv6LL
Jul 9 14:44:00.275907 systemd-networkd[1452]: lxc_health: Link UP
Jul 9 14:44:00.286282 systemd-networkd[1452]: lxc_health: Gained carrier
Jul 9 14:44:00.445781 kubelet[2829]: I0709 14:44:00.444717 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gz68d" podStartSLOduration=10.353972399 podStartE2EDuration="20.444687598s" podCreationTimestamp="2025-07-09 14:43:40 +0000 UTC" firstStartedPulling="2025-07-09 14:43:40.718119466 +0000 UTC m=+6.697389952" lastFinishedPulling="2025-07-09 14:43:50.808834615 +0000 UTC m=+16.788105151" observedRunningTime="2025-07-09 14:43:56.760430408 +0000 UTC m=+22.739700904" watchObservedRunningTime="2025-07-09 14:44:00.444687598 +0000 UTC m=+26.423958084"
Jul 9 14:44:00.849232 systemd-networkd[1452]: lxce8d1f7a970a9: Link UP
Jul 9 14:44:00.851876 kernel: eth0: renamed from tmpc7e16
Jul 9 14:44:00.857007 systemd-networkd[1452]: lxc03b624aaa2b4: Link UP
Jul 9 14:44:00.870482 kernel: eth0: renamed from tmp2dc21
Jul 9 14:44:00.872484 systemd-networkd[1452]: lxce8d1f7a970a9: Gained carrier
Jul 9 14:44:00.874230 systemd-networkd[1452]: lxc03b624aaa2b4: Gained carrier
Jul 9 14:44:02.510038 systemd-networkd[1452]: lxc_health: Gained IPv6LL
Jul 9 14:44:02.638090 systemd-networkd[1452]: lxce8d1f7a970a9: Gained IPv6LL
Jul 9 14:44:02.639434 systemd-networkd[1452]: lxc03b624aaa2b4: Gained IPv6LL
Jul 9 14:44:06.020801 containerd[1557]: time="2025-07-09T14:44:06.019969103Z" level=info msg="connecting to shim 2dc219edd866310d29b2f5efe93629ecb7445e5ac947bf3a19245e6cb6015e5f" address="unix:///run/containerd/s/3f2e57ba7ac8239b58f216ed9bf643140f84385513127f076b57efe6e11f1cf6" namespace=k8s.io protocol=ttrpc version=3
Jul 9 14:44:06.036963 containerd[1557]: time="2025-07-09T14:44:06.036896025Z" level=info msg="connecting to shim c7e163bce3929e6b118af290143bc2d809149510b1a3ebeef16b0ffb00c69074" address="unix:///run/containerd/s/e79d8685eb5187960d2e0984e4697737bb4198be914835a32d780a2cb8f271df" namespace=k8s.io protocol=ttrpc version=3
Jul 9 14:44:06.083995 systemd[1]: Started cri-containerd-2dc219edd866310d29b2f5efe93629ecb7445e5ac947bf3a19245e6cb6015e5f.scope - libcontainer container 2dc219edd866310d29b2f5efe93629ecb7445e5ac947bf3a19245e6cb6015e5f.
Jul 9 14:44:06.094003 systemd[1]: Started cri-containerd-c7e163bce3929e6b118af290143bc2d809149510b1a3ebeef16b0ffb00c69074.scope - libcontainer container c7e163bce3929e6b118af290143bc2d809149510b1a3ebeef16b0ffb00c69074.
Jul 9 14:44:06.179611 containerd[1557]: time="2025-07-09T14:44:06.179545251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p9dhj,Uid:3255eb24-38a1-4c68-adfc-0f86715b1115,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dc219edd866310d29b2f5efe93629ecb7445e5ac947bf3a19245e6cb6015e5f\""
Jul 9 14:44:06.184278 containerd[1557]: time="2025-07-09T14:44:06.184144158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-krx7d,Uid:152ea29f-4d07-412e-9b26-8e7ff843d140,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7e163bce3929e6b118af290143bc2d809149510b1a3ebeef16b0ffb00c69074\""
Jul 9 14:44:06.185837 containerd[1557]: time="2025-07-09T14:44:06.185610248Z" level=info msg="CreateContainer within sandbox \"2dc219edd866310d29b2f5efe93629ecb7445e5ac947bf3a19245e6cb6015e5f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 14:44:06.190035 containerd[1557]: time="2025-07-09T14:44:06.190003037Z" level=info msg="CreateContainer within sandbox \"c7e163bce3929e6b118af290143bc2d809149510b1a3ebeef16b0ffb00c69074\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 14:44:06.213670 containerd[1557]: time="2025-07-09T14:44:06.212905357Z" level=info msg="Container 6bff7ea1c21059a6e8d72f4e0f2280fbf73ae83f0fb10bd421303dc12f5d94af: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:44:06.216370 containerd[1557]: time="2025-07-09T14:44:06.216341896Z" level=info msg="Container 76fdc6d783c3df9acf9fa33f700341de9c978788b60285890d8b4b1859ce75ee: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:44:06.221727 containerd[1557]: time="2025-07-09T14:44:06.221696706Z" level=info msg="CreateContainer within sandbox \"2dc219edd866310d29b2f5efe93629ecb7445e5ac947bf3a19245e6cb6015e5f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6bff7ea1c21059a6e8d72f4e0f2280fbf73ae83f0fb10bd421303dc12f5d94af\""
Jul 9 14:44:06.224325 containerd[1557]: time="2025-07-09T14:44:06.223168205Z" level=info msg="StartContainer for \"6bff7ea1c21059a6e8d72f4e0f2280fbf73ae83f0fb10bd421303dc12f5d94af\""
Jul 9 14:44:06.225050 containerd[1557]: time="2025-07-09T14:44:06.225002167Z" level=info msg="connecting to shim 6bff7ea1c21059a6e8d72f4e0f2280fbf73ae83f0fb10bd421303dc12f5d94af" address="unix:///run/containerd/s/3f2e57ba7ac8239b58f216ed9bf643140f84385513127f076b57efe6e11f1cf6" protocol=ttrpc version=3
Jul 9 14:44:06.227700 containerd[1557]: time="2025-07-09T14:44:06.227668927Z" level=info msg="CreateContainer within sandbox \"c7e163bce3929e6b118af290143bc2d809149510b1a3ebeef16b0ffb00c69074\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76fdc6d783c3df9acf9fa33f700341de9c978788b60285890d8b4b1859ce75ee\""
Jul 9 14:44:06.229770 containerd[1557]: time="2025-07-09T14:44:06.229693989Z" level=info msg="StartContainer for \"76fdc6d783c3df9acf9fa33f700341de9c978788b60285890d8b4b1859ce75ee\""
Jul 9 14:44:06.234169 containerd[1557]: time="2025-07-09T14:44:06.234091317Z" level=info msg="connecting to shim 76fdc6d783c3df9acf9fa33f700341de9c978788b60285890d8b4b1859ce75ee" address="unix:///run/containerd/s/e79d8685eb5187960d2e0984e4697737bb4198be914835a32d780a2cb8f271df" protocol=ttrpc version=3
Jul 9 14:44:06.260953 systemd[1]: Started cri-containerd-6bff7ea1c21059a6e8d72f4e0f2280fbf73ae83f0fb10bd421303dc12f5d94af.scope - libcontainer container 6bff7ea1c21059a6e8d72f4e0f2280fbf73ae83f0fb10bd421303dc12f5d94af.
Jul 9 14:44:06.268921 systemd[1]: Started cri-containerd-76fdc6d783c3df9acf9fa33f700341de9c978788b60285890d8b4b1859ce75ee.scope - libcontainer container 76fdc6d783c3df9acf9fa33f700341de9c978788b60285890d8b4b1859ce75ee.
Jul 9 14:44:06.317859 containerd[1557]: time="2025-07-09T14:44:06.317521521Z" level=info msg="StartContainer for \"6bff7ea1c21059a6e8d72f4e0f2280fbf73ae83f0fb10bd421303dc12f5d94af\" returns successfully"
Jul 9 14:44:06.334040 containerd[1557]: time="2025-07-09T14:44:06.333992525Z" level=info msg="StartContainer for \"76fdc6d783c3df9acf9fa33f700341de9c978788b60285890d8b4b1859ce75ee\" returns successfully"
Jul 9 14:44:06.890811 kubelet[2829]: I0709 14:44:06.888394 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-krx7d" podStartSLOduration=26.88256418 podStartE2EDuration="26.88256418s" podCreationTimestamp="2025-07-09 14:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:44:06.880363157 +0000 UTC m=+32.859633743" watchObservedRunningTime="2025-07-09 14:44:06.88256418 +0000 UTC m=+32.861834746"
Jul 9 14:44:07.001490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321856138.mount: Deactivated successfully.
Jul 9 14:45:21.838408 systemd[1]: Started sshd@9-172.24.4.161:22-172.24.4.1:50898.service - OpenSSH per-connection server daemon (172.24.4.1:50898).
Jul 9 14:45:23.244763 sshd[4156]: Accepted publickey for core from 172.24.4.1 port 50898 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:45:23.247962 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:45:23.265868 systemd-logind[1533]: New session 12 of user core.
Jul 9 14:45:23.274010 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 9 14:45:23.996277 sshd[4159]: Connection closed by 172.24.4.1 port 50898
Jul 9 14:45:23.996938 sshd-session[4156]: pam_unix(sshd:session): session closed for user core
Jul 9 14:45:24.007302 systemd[1]: sshd@9-172.24.4.161:22-172.24.4.1:50898.service: Deactivated successfully.
Jul 9 14:45:24.016538 systemd[1]: session-12.scope: Deactivated successfully.
Jul 9 14:45:24.022356 systemd-logind[1533]: Session 12 logged out. Waiting for processes to exit.
Jul 9 14:45:24.025476 systemd-logind[1533]: Removed session 12.
Jul 9 14:45:29.010693 systemd[1]: Started sshd@10-172.24.4.161:22-172.24.4.1:33208.service - OpenSSH per-connection server daemon (172.24.4.1:33208).
Jul 9 14:45:30.353173 sshd[4173]: Accepted publickey for core from 172.24.4.1 port 33208 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:45:30.357538 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:45:30.388028 systemd-logind[1533]: New session 13 of user core.
Jul 9 14:45:30.403004 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 9 14:45:31.239930 sshd[4176]: Connection closed by 172.24.4.1 port 33208
Jul 9 14:45:31.242377 sshd-session[4173]: pam_unix(sshd:session): session closed for user core
Jul 9 14:45:31.254974 systemd-logind[1533]: Session 13 logged out. Waiting for processes to exit.
Jul 9 14:45:31.256458 systemd[1]: sshd@10-172.24.4.161:22-172.24.4.1:33208.service: Deactivated successfully.
Jul 9 14:45:31.267115 systemd[1]: session-13.scope: Deactivated successfully.
Jul 9 14:45:31.274555 systemd-logind[1533]: Removed session 13.
Jul 9 14:45:36.253513 systemd[1]: Started sshd@11-172.24.4.161:22-172.24.4.1:40234.service - OpenSSH per-connection server daemon (172.24.4.1:40234).
Jul 9 14:45:37.531868 sshd[4191]: Accepted publickey for core from 172.24.4.1 port 40234 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:45:37.535050 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:45:37.552856 systemd-logind[1533]: New session 14 of user core.
Jul 9 14:45:37.560171 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 9 14:45:38.359327 sshd[4194]: Connection closed by 172.24.4.1 port 40234
Jul 9 14:45:38.361196 sshd-session[4191]: pam_unix(sshd:session): session closed for user core
Jul 9 14:45:38.371053 systemd-logind[1533]: Session 14 logged out. Waiting for processes to exit.
Jul 9 14:45:38.374466 systemd[1]: sshd@11-172.24.4.161:22-172.24.4.1:40234.service: Deactivated successfully.
Jul 9 14:45:38.383384 systemd[1]: session-14.scope: Deactivated successfully.
Jul 9 14:45:38.394613 systemd-logind[1533]: Removed session 14.
Jul 9 14:45:43.380535 systemd[1]: Started sshd@12-172.24.4.161:22-172.24.4.1:40250.service - OpenSSH per-connection server daemon (172.24.4.1:40250).
Jul 9 14:45:44.546297 sshd[4208]: Accepted publickey for core from 172.24.4.1 port 40250 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:45:44.554918 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:45:44.579929 systemd-logind[1533]: New session 15 of user core.
Jul 9 14:45:44.586088 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 9 14:45:45.428943 sshd[4211]: Connection closed by 172.24.4.1 port 40250
Jul 9 14:45:45.427896 sshd-session[4208]: pam_unix(sshd:session): session closed for user core
Jul 9 14:45:45.467719 systemd[1]: sshd@12-172.24.4.161:22-172.24.4.1:40250.service: Deactivated successfully.
Jul 9 14:45:45.477324 systemd[1]: session-15.scope: Deactivated successfully.
Jul 9 14:45:45.480472 systemd-logind[1533]: Session 15 logged out. Waiting for processes to exit.
Jul 9 14:45:45.492183 systemd[1]: Started sshd@13-172.24.4.161:22-172.24.4.1:41676.service - OpenSSH per-connection server daemon (172.24.4.1:41676).
Jul 9 14:45:45.497258 systemd-logind[1533]: Removed session 15.
Jul 9 14:45:46.847987 sshd[4224]: Accepted publickey for core from 172.24.4.1 port 41676 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:45:46.856180 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:45:46.883946 systemd-logind[1533]: New session 16 of user core.
Jul 9 14:45:46.893145 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 9 14:45:47.660817 sshd[4227]: Connection closed by 172.24.4.1 port 41676
Jul 9 14:45:47.661189 sshd-session[4224]: pam_unix(sshd:session): session closed for user core
Jul 9 14:45:47.683578 systemd[1]: sshd@13-172.24.4.161:22-172.24.4.1:41676.service: Deactivated successfully.
Jul 9 14:45:47.692281 systemd[1]: session-16.scope: Deactivated successfully.
Jul 9 14:45:47.698133 systemd-logind[1533]: Session 16 logged out. Waiting for processes to exit.
Jul 9 14:45:47.706163 systemd[1]: Started sshd@14-172.24.4.161:22-172.24.4.1:41690.service - OpenSSH per-connection server daemon (172.24.4.1:41690).
Jul 9 14:45:47.713216 systemd-logind[1533]: Removed session 16.
Jul 9 14:45:49.078376 sshd[4237]: Accepted publickey for core from 172.24.4.1 port 41690 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:45:49.081661 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:45:49.095264 systemd-logind[1533]: New session 17 of user core.
Jul 9 14:45:49.105106 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 9 14:45:49.839795 sshd[4240]: Connection closed by 172.24.4.1 port 41690
Jul 9 14:45:49.841543 sshd-session[4237]: pam_unix(sshd:session): session closed for user core
Jul 9 14:45:49.852320 systemd[1]: sshd@14-172.24.4.161:22-172.24.4.1:41690.service: Deactivated successfully.
Jul 9 14:45:49.858924 systemd[1]: session-17.scope: Deactivated successfully.
Jul 9 14:45:49.862241 systemd-logind[1533]: Session 17 logged out. Waiting for processes to exit.
Jul 9 14:45:49.866706 systemd-logind[1533]: Removed session 17.
Jul 9 14:45:54.871224 systemd[1]: Started sshd@15-172.24.4.161:22-172.24.4.1:46158.service - OpenSSH per-connection server daemon (172.24.4.1:46158).
Jul 9 14:45:56.180892 sshd[4251]: Accepted publickey for core from 172.24.4.1 port 46158 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:45:56.183366 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:45:56.196595 systemd-logind[1533]: New session 18 of user core.
Jul 9 14:45:56.208173 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 9 14:45:57.124814 sshd[4254]: Connection closed by 172.24.4.1 port 46158
Jul 9 14:45:57.123942 sshd-session[4251]: pam_unix(sshd:session): session closed for user core
Jul 9 14:45:57.133680 systemd[1]: sshd@15-172.24.4.161:22-172.24.4.1:46158.service: Deactivated successfully.
Jul 9 14:45:57.142323 systemd[1]: session-18.scope: Deactivated successfully.
Jul 9 14:45:57.148308 systemd-logind[1533]: Session 18 logged out. Waiting for processes to exit.
Jul 9 14:45:57.151170 systemd-logind[1533]: Removed session 18.
Jul 9 14:46:02.170918 systemd[1]: Started sshd@16-172.24.4.161:22-172.24.4.1:46168.service - OpenSSH per-connection server daemon (172.24.4.1:46168).
Jul 9 14:46:03.322036 sshd[4268]: Accepted publickey for core from 172.24.4.1 port 46168 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:03.325727 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:03.340210 systemd-logind[1533]: New session 19 of user core.
Jul 9 14:46:03.352174 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 9 14:46:04.300603 sshd[4271]: Connection closed by 172.24.4.1 port 46168
Jul 9 14:46:04.301411 sshd-session[4268]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:04.316907 systemd[1]: sshd@16-172.24.4.161:22-172.24.4.1:46168.service: Deactivated successfully.
Jul 9 14:46:04.325413 systemd[1]: session-19.scope: Deactivated successfully.
Jul 9 14:46:04.332877 systemd-logind[1533]: Session 19 logged out. Waiting for processes to exit.
Jul 9 14:46:04.337835 systemd-logind[1533]: Removed session 19.
Jul 9 14:46:09.348102 systemd[1]: Started sshd@17-172.24.4.161:22-172.24.4.1:52598.service - OpenSSH per-connection server daemon (172.24.4.1:52598).
Jul 9 14:46:10.459859 sshd[4282]: Accepted publickey for core from 172.24.4.1 port 52598 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:10.463260 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:10.487942 systemd-logind[1533]: New session 20 of user core.
Jul 9 14:46:10.498145 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 9 14:46:11.283097 sshd[4285]: Connection closed by 172.24.4.1 port 52598
Jul 9 14:46:11.285314 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:11.301951 systemd[1]: sshd@17-172.24.4.161:22-172.24.4.1:52598.service: Deactivated successfully.
Jul 9 14:46:11.311903 systemd[1]: session-20.scope: Deactivated successfully.
Jul 9 14:46:11.314647 systemd-logind[1533]: Session 20 logged out. Waiting for processes to exit.
Jul 9 14:46:11.319030 systemd-logind[1533]: Removed session 20.
Jul 9 14:46:16.316651 systemd[1]: Started sshd@18-172.24.4.161:22-172.24.4.1:39842.service - OpenSSH per-connection server daemon (172.24.4.1:39842).
Jul 9 14:46:17.427159 sshd[4299]: Accepted publickey for core from 172.24.4.1 port 39842 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:17.437048 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:17.450436 systemd-logind[1533]: New session 21 of user core.
Jul 9 14:46:17.469207 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 9 14:46:18.082212 sshd[4302]: Connection closed by 172.24.4.1 port 39842
Jul 9 14:46:18.081859 sshd-session[4299]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:18.093559 systemd[1]: sshd@18-172.24.4.161:22-172.24.4.1:39842.service: Deactivated successfully.
Jul 9 14:46:18.101635 systemd[1]: session-21.scope: Deactivated successfully.
Jul 9 14:46:18.107006 systemd-logind[1533]: Session 21 logged out. Waiting for processes to exit.
Jul 9 14:46:18.111543 systemd-logind[1533]: Removed session 21.
Jul 9 14:46:23.115404 systemd[1]: Started sshd@19-172.24.4.161:22-172.24.4.1:39854.service - OpenSSH per-connection server daemon (172.24.4.1:39854).
Jul 9 14:46:24.319249 sshd[4314]: Accepted publickey for core from 172.24.4.1 port 39854 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:24.322928 sshd-session[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:24.343903 systemd-logind[1533]: New session 22 of user core.
Jul 9 14:46:24.351181 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 9 14:46:25.107013 sshd[4317]: Connection closed by 172.24.4.1 port 39854
Jul 9 14:46:25.109922 sshd-session[4314]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:25.135704 systemd[1]: sshd@19-172.24.4.161:22-172.24.4.1:39854.service: Deactivated successfully.
Jul 9 14:46:25.147301 systemd[1]: session-22.scope: Deactivated successfully.
Jul 9 14:46:25.151229 systemd-logind[1533]: Session 22 logged out. Waiting for processes to exit.
Jul 9 14:46:25.156245 systemd-logind[1533]: Removed session 22.
Jul 9 14:46:30.149645 systemd[1]: Started sshd@20-172.24.4.161:22-172.24.4.1:43306.service - OpenSSH per-connection server daemon (172.24.4.1:43306).
Jul 9 14:46:31.286181 sshd[4329]: Accepted publickey for core from 172.24.4.1 port 43306 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:31.290627 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:31.311914 systemd-logind[1533]: New session 23 of user core.
Jul 9 14:46:31.325128 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 9 14:46:32.029976 sshd[4332]: Connection closed by 172.24.4.1 port 43306
Jul 9 14:46:32.031384 sshd-session[4329]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:32.065599 systemd[1]: sshd@20-172.24.4.161:22-172.24.4.1:43306.service: Deactivated successfully.
Jul 9 14:46:32.078277 systemd[1]: session-23.scope: Deactivated successfully.
Jul 9 14:46:32.088183 systemd-logind[1533]: Session 23 logged out. Waiting for processes to exit.
Jul 9 14:46:32.093456 systemd-logind[1533]: Removed session 23.
Jul 9 14:46:37.058327 systemd[1]: Started sshd@21-172.24.4.161:22-172.24.4.1:54024.service - OpenSSH per-connection server daemon (172.24.4.1:54024).
Jul 9 14:46:38.310123 sshd[4346]: Accepted publickey for core from 172.24.4.1 port 54024 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:38.324702 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:38.342876 systemd-logind[1533]: New session 24 of user core.
Jul 9 14:46:38.350239 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 9 14:46:39.118539 sshd[4349]: Connection closed by 172.24.4.1 port 54024
Jul 9 14:46:39.120182 sshd-session[4346]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:39.130727 systemd[1]: sshd@21-172.24.4.161:22-172.24.4.1:54024.service: Deactivated successfully.
Jul 9 14:46:39.137576 systemd[1]: session-24.scope: Deactivated successfully.
Jul 9 14:46:39.142920 systemd-logind[1533]: Session 24 logged out. Waiting for processes to exit.
Jul 9 14:46:39.147126 systemd-logind[1533]: Removed session 24.
Jul 9 14:46:44.167634 systemd[1]: Started sshd@22-172.24.4.161:22-172.24.4.1:34522.service - OpenSSH per-connection server daemon (172.24.4.1:34522).
Jul 9 14:46:45.287859 sshd[4363]: Accepted publickey for core from 172.24.4.1 port 34522 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:45.292008 sshd-session[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:45.317031 systemd-logind[1533]: New session 25 of user core.
Jul 9 14:46:45.344273 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 9 14:46:46.171508 sshd[4366]: Connection closed by 172.24.4.1 port 34522
Jul 9 14:46:46.173918 sshd-session[4363]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:46.208001 systemd[1]: sshd@22-172.24.4.161:22-172.24.4.1:34522.service: Deactivated successfully.
Jul 9 14:46:46.211191 systemd[1]: session-25.scope: Deactivated successfully.
Jul 9 14:46:46.213630 systemd-logind[1533]: Session 25 logged out. Waiting for processes to exit.
Jul 9 14:46:46.217189 systemd[1]: Started sshd@23-172.24.4.161:22-172.24.4.1:34530.service - OpenSSH per-connection server daemon (172.24.4.1:34530).
Jul 9 14:46:46.220818 systemd-logind[1533]: Removed session 25.
Jul 9 14:46:47.466030 sshd[4378]: Accepted publickey for core from 172.24.4.1 port 34530 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:47.469929 sshd-session[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:47.485315 systemd-logind[1533]: New session 26 of user core.
Jul 9 14:46:47.495020 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 9 14:46:48.292519 sshd[4381]: Connection closed by 172.24.4.1 port 34530
Jul 9 14:46:48.295543 sshd-session[4378]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:48.315705 systemd[1]: sshd@23-172.24.4.161:22-172.24.4.1:34530.service: Deactivated successfully.
Jul 9 14:46:48.321244 systemd[1]: session-26.scope: Deactivated successfully.
Jul 9 14:46:48.323486 systemd-logind[1533]: Session 26 logged out. Waiting for processes to exit.
Jul 9 14:46:48.330523 systemd[1]: Started sshd@24-172.24.4.161:22-172.24.4.1:34540.service - OpenSSH per-connection server daemon (172.24.4.1:34540).
Jul 9 14:46:48.335322 systemd-logind[1533]: Removed session 26.
Jul 9 14:46:49.520838 sshd[4391]: Accepted publickey for core from 172.24.4.1 port 34540 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:49.523065 sshd-session[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:49.532141 systemd-logind[1533]: New session 27 of user core.
Jul 9 14:46:49.539890 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 9 14:46:51.754806 sshd[4394]: Connection closed by 172.24.4.1 port 34540
Jul 9 14:46:51.757891 sshd-session[4391]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:51.787783 systemd[1]: sshd@24-172.24.4.161:22-172.24.4.1:34540.service: Deactivated successfully.
Jul 9 14:46:51.795185 systemd[1]: session-27.scope: Deactivated successfully.
Jul 9 14:46:51.805360 systemd-logind[1533]: Session 27 logged out. Waiting for processes to exit.
Jul 9 14:46:51.819849 systemd[1]: Started sshd@25-172.24.4.161:22-172.24.4.1:34548.service - OpenSSH per-connection server daemon (172.24.4.1:34548).
Jul 9 14:46:51.825370 systemd-logind[1533]: Removed session 27.
Jul 9 14:46:53.094441 sshd[4412]: Accepted publickey for core from 172.24.4.1 port 34548 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:53.099101 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:53.114867 systemd-logind[1533]: New session 28 of user core.
Jul 9 14:46:53.128055 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 9 14:46:54.119672 sshd[4415]: Connection closed by 172.24.4.1 port 34548
Jul 9 14:46:54.121221 sshd-session[4412]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:54.141854 systemd[1]: sshd@25-172.24.4.161:22-172.24.4.1:34548.service: Deactivated successfully.
Jul 9 14:46:54.149184 systemd[1]: session-28.scope: Deactivated successfully.
Jul 9 14:46:54.153442 systemd-logind[1533]: Session 28 logged out. Waiting for processes to exit.
Jul 9 14:46:54.162007 systemd[1]: Started sshd@26-172.24.4.161:22-172.24.4.1:34610.service - OpenSSH per-connection server daemon (172.24.4.1:34610).
Jul 9 14:46:54.164366 systemd-logind[1533]: Removed session 28.
Jul 9 14:46:55.472050 sshd[4425]: Accepted publickey for core from 172.24.4.1 port 34610 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:46:55.475597 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:46:55.488222 systemd-logind[1533]: New session 29 of user core.
Jul 9 14:46:55.499438 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 9 14:46:56.277803 sshd[4428]: Connection closed by 172.24.4.1 port 34610
Jul 9 14:46:56.279228 sshd-session[4425]: pam_unix(sshd:session): session closed for user core
Jul 9 14:46:56.296946 systemd[1]: sshd@26-172.24.4.161:22-172.24.4.1:34610.service: Deactivated successfully.
Jul 9 14:46:56.308239 systemd[1]: session-29.scope: Deactivated successfully.
Jul 9 14:46:56.313318 systemd-logind[1533]: Session 29 logged out. Waiting for processes to exit.
Jul 9 14:46:56.322656 systemd-logind[1533]: Removed session 29.
Jul 9 14:47:01.325333 systemd[1]: Started sshd@27-172.24.4.161:22-172.24.4.1:34620.service - OpenSSH per-connection server daemon (172.24.4.1:34620).
Jul 9 14:47:02.497955 sshd[4441]: Accepted publickey for core from 172.24.4.1 port 34620 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:47:02.502639 sshd-session[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:47:02.540535 systemd-logind[1533]: New session 30 of user core.
Jul 9 14:47:02.556303 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 9 14:47:03.141858 sshd[4447]: Connection closed by 172.24.4.1 port 34620
Jul 9 14:47:03.143456 sshd-session[4441]: pam_unix(sshd:session): session closed for user core
Jul 9 14:47:03.155933 systemd[1]: sshd@27-172.24.4.161:22-172.24.4.1:34620.service: Deactivated successfully.
Jul 9 14:47:03.164167 systemd[1]: session-30.scope: Deactivated successfully.
Jul 9 14:47:03.166595 systemd-logind[1533]: Session 30 logged out. Waiting for processes to exit.
Jul 9 14:47:03.170630 systemd-logind[1533]: Removed session 30.
Jul 9 14:47:08.176598 systemd[1]: Started sshd@28-172.24.4.161:22-172.24.4.1:51680.service - OpenSSH per-connection server daemon (172.24.4.1:51680).
Jul 9 14:47:09.309193 sshd[4461]: Accepted publickey for core from 172.24.4.1 port 51680 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:47:09.311424 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:47:09.324676 systemd-logind[1533]: New session 31 of user core.
Jul 9 14:47:09.333986 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 9 14:47:10.289061 sshd[4464]: Connection closed by 172.24.4.1 port 51680
Jul 9 14:47:10.290613 sshd-session[4461]: pam_unix(sshd:session): session closed for user core
Jul 9 14:47:10.300206 systemd[1]: sshd@28-172.24.4.161:22-172.24.4.1:51680.service: Deactivated successfully.
Jul 9 14:47:10.308682 systemd[1]: session-31.scope: Deactivated successfully.
Jul 9 14:47:10.311847 systemd-logind[1533]: Session 31 logged out. Waiting for processes to exit.
Jul 9 14:47:10.317272 systemd-logind[1533]: Removed session 31.
Jul 9 14:47:15.323369 systemd[1]: Started sshd@29-172.24.4.161:22-172.24.4.1:44664.service - OpenSSH per-connection server daemon (172.24.4.1:44664).
Jul 9 14:47:16.851866 sshd[4478]: Accepted publickey for core from 172.24.4.1 port 44664 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:47:16.855079 sshd-session[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:47:16.874833 systemd-logind[1533]: New session 32 of user core.
Jul 9 14:47:16.891073 systemd[1]: Started session-32.scope - Session 32 of User core.
Jul 9 14:47:17.674490 sshd[4481]: Connection closed by 172.24.4.1 port 44664
Jul 9 14:47:17.673519 sshd-session[4478]: pam_unix(sshd:session): session closed for user core
Jul 9 14:47:17.691588 systemd[1]: sshd@29-172.24.4.161:22-172.24.4.1:44664.service: Deactivated successfully.
Jul 9 14:47:17.698513 systemd[1]: session-32.scope: Deactivated successfully.
Jul 9 14:47:17.702670 systemd-logind[1533]: Session 32 logged out. Waiting for processes to exit.
Jul 9 14:47:17.711417 systemd[1]: Started sshd@30-172.24.4.161:22-172.24.4.1:44678.service - OpenSSH per-connection server daemon (172.24.4.1:44678).
Jul 9 14:47:17.715087 systemd-logind[1533]: Removed session 32.
Jul 9 14:47:19.076828 sshd[4493]: Accepted publickey for core from 172.24.4.1 port 44678 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:47:19.079492 sshd-session[4493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:47:19.093257 systemd-logind[1533]: New session 33 of user core.
Jul 9 14:47:19.106348 systemd[1]: Started session-33.scope - Session 33 of User core.
Jul 9 14:47:21.351690 kubelet[2829]: I0709 14:47:21.351231 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p9dhj" podStartSLOduration=221.351121091 podStartE2EDuration="3m41.351121091s" podCreationTimestamp="2025-07-09 14:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:44:06.990461159 +0000 UTC m=+32.969731665" watchObservedRunningTime="2025-07-09 14:47:21.351121091 +0000 UTC m=+227.330391587"
Jul 9 14:47:21.367054 containerd[1557]: time="2025-07-09T14:47:21.366724814Z" level=info msg="StopContainer for \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" with timeout 30 (s)"
Jul 9 14:47:21.370023 containerd[1557]: time="2025-07-09T14:47:21.368984566Z" level=info msg="Stop container \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" with signal terminated"
Jul 9 14:47:21.422413 systemd[1]: cri-containerd-b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b.scope: Deactivated successfully.
Jul 9 14:47:21.424101 systemd[1]: cri-containerd-b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b.scope: Consumed 1.303s CPU time, 25.3M memory peak, 4K written to disk.
Jul 9 14:47:21.431501 containerd[1557]: time="2025-07-09T14:47:21.431238469Z" level=info msg="received exit event container_id:\"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" id:\"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" pid:3398 exited_at:{seconds:1752072441 nanos:428388935}"
Jul 9 14:47:21.432874 containerd[1557]: time="2025-07-09T14:47:21.432638986Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" id:\"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" pid:3398 exited_at:{seconds:1752072441 nanos:428388935}"
Jul 9 14:47:21.462111 containerd[1557]: time="2025-07-09T14:47:21.462005462Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 14:47:21.473118 containerd[1557]: time="2025-07-09T14:47:21.473046520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" id:\"9fe18f7422411b2b7a1f4b542c9f613a261a5acf1b0663c001e1688fc23e8cc5\" pid:4522 exited_at:{seconds:1752072441 nanos:471529657}"
Jul 9 14:47:21.475306 containerd[1557]: time="2025-07-09T14:47:21.475260037Z" level=info msg="StopContainer for \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" with timeout 2 (s)"
Jul 9 14:47:21.475813 containerd[1557]: time="2025-07-09T14:47:21.475783244Z" level=info msg="Stop container \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" with signal terminated"
Jul 9 14:47:21.500358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b-rootfs.mount: Deactivated successfully.
Jul 9 14:47:21.518086 systemd-networkd[1452]: lxc_health: Link DOWN
Jul 9 14:47:21.519309 systemd-networkd[1452]: lxc_health: Lost carrier
Jul 9 14:47:21.538865 containerd[1557]: time="2025-07-09T14:47:21.538676271Z" level=info msg="StopContainer for \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" returns successfully"
Jul 9 14:47:21.543777 containerd[1557]: time="2025-07-09T14:47:21.543686722Z" level=info msg="StopPodSandbox for \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\""
Jul 9 14:47:21.544084 containerd[1557]: time="2025-07-09T14:47:21.544053939Z" level=info msg="Container to stop \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 14:47:21.552982 systemd[1]: cri-containerd-fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e.scope: Deactivated successfully.
Jul 9 14:47:21.553884 systemd[1]: cri-containerd-fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e.scope: Consumed 11.586s CPU time, 123.8M memory peak, 136K read from disk, 13.3M written to disk.
Jul 9 14:47:21.565406 containerd[1557]: time="2025-07-09T14:47:21.565074162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" id:\"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" pid:3467 exited_at:{seconds:1752072441 nanos:563586412}"
Jul 9 14:47:21.565406 containerd[1557]: time="2025-07-09T14:47:21.565249950Z" level=info msg="received exit event container_id:\"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" id:\"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" pid:3467 exited_at:{seconds:1752072441 nanos:563586412}"
Jul 9 14:47:21.571082 systemd[1]: cri-containerd-3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed.scope: Deactivated successfully.
Jul 9 14:47:21.574335 containerd[1557]: time="2025-07-09T14:47:21.574252911Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" id:\"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" pid:3062 exit_status:137 exited_at:{seconds:1752072441 nanos:573597346}"
Jul 9 14:47:21.614794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e-rootfs.mount: Deactivated successfully.
Jul 9 14:47:21.650609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed-rootfs.mount: Deactivated successfully.
Jul 9 14:47:21.652099 containerd[1557]: time="2025-07-09T14:47:21.652051807Z" level=info msg="shim disconnected" id=3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed namespace=k8s.io
Jul 9 14:47:21.652216 containerd[1557]: time="2025-07-09T14:47:21.652100519Z" level=warning msg="cleaning up after shim disconnected" id=3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed namespace=k8s.io
Jul 9 14:47:21.652216 containerd[1557]: time="2025-07-09T14:47:21.652116909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 14:47:21.654930 containerd[1557]: time="2025-07-09T14:47:21.654799651Z" level=info msg="StopContainer for \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" returns successfully"
Jul 9 14:47:21.656067 containerd[1557]: time="2025-07-09T14:47:21.655480965Z" level=info msg="StopPodSandbox for \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\""
Jul 9 14:47:21.656067 containerd[1557]: time="2025-07-09T14:47:21.655588495Z" level=info msg="Container to stop \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 14:47:21.656067 containerd[1557]: time="2025-07-09T14:47:21.655604696Z" level=info msg="Container to stop \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 14:47:21.656067 containerd[1557]: time="2025-07-09T14:47:21.655633399Z" level=info msg="Container to stop \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 14:47:21.656067 containerd[1557]: time="2025-07-09T14:47:21.655645842Z" level=info msg="Container to stop \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 14:47:21.656067 containerd[1557]: time="2025-07-09T14:47:21.655660019Z" level=info msg="Container to stop \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 14:47:21.665404 systemd[1]: cri-containerd-f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b.scope: Deactivated successfully.
Jul 9 14:47:21.690435 containerd[1557]: time="2025-07-09T14:47:21.689836713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" id:\"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" pid:2970 exit_status:137 exited_at:{seconds:1752072441 nanos:668684092}"
Jul 9 14:47:21.693669 containerd[1557]: time="2025-07-09T14:47:21.693566081Z" level=info msg="received exit event sandbox_id:\"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" exit_status:137 exited_at:{seconds:1752072441 nanos:573597346}"
Jul 9 14:47:21.695097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed-shm.mount: Deactivated successfully.
Jul 9 14:47:21.695252 containerd[1557]: time="2025-07-09T14:47:21.695086331Z" level=info msg="TearDown network for sandbox \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" successfully"
Jul 9 14:47:21.696472 containerd[1557]: time="2025-07-09T14:47:21.695353180Z" level=info msg="StopPodSandbox for \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" returns successfully"
Jul 9 14:47:21.710374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b-rootfs.mount: Deactivated successfully.
Jul 9 14:47:21.731152 containerd[1557]: time="2025-07-09T14:47:21.731065053Z" level=info msg="received exit event sandbox_id:\"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" exit_status:137 exited_at:{seconds:1752072441 nanos:668684092}"
Jul 9 14:47:21.731760 containerd[1557]: time="2025-07-09T14:47:21.731532526Z" level=info msg="TearDown network for sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" successfully"
Jul 9 14:47:21.731760 containerd[1557]: time="2025-07-09T14:47:21.731573723Z" level=info msg="StopPodSandbox for \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" returns successfully"
Jul 9 14:47:21.732448 containerd[1557]: time="2025-07-09T14:47:21.732384939Z" level=info msg="shim disconnected" id=f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b namespace=k8s.io
Jul 9 14:47:21.732448 containerd[1557]: time="2025-07-09T14:47:21.732414844Z" level=warning msg="cleaning up after shim disconnected" id=f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b namespace=k8s.io
Jul 9 14:47:21.732545 containerd[1557]: time="2025-07-09T14:47:21.732424152Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 14:47:21.815773 kubelet[2829]: I0709 14:47:21.815454 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-cgroup\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.815773 kubelet[2829]: I0709 14:47:21.815527 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjmcs\" (UniqueName: \"kubernetes.io/projected/493ebc7e-37bd-428d-874c-60437b167fa8-kube-api-access-bjmcs\") pod \"493ebc7e-37bd-428d-874c-60437b167fa8\" (UID: \"493ebc7e-37bd-428d-874c-60437b167fa8\") "
Jul 9 14:47:21.815773 kubelet[2829]: I0709 14:47:21.815560 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-lib-modules\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.815773 kubelet[2829]: I0709 14:47:21.815592 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-hubble-tls\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.815773 kubelet[2829]: I0709 14:47:21.815612 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-etc-cni-netd\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.815773 kubelet[2829]: I0709 14:47:21.815636 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-bpf-maps\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.816210 kubelet[2829]: I0709 14:47:21.815669 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-clustermesh-secrets\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.816210 kubelet[2829]: I0709 14:47:21.815690 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-hostproc\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.816210 kubelet[2829]: I0709 14:47:21.815708 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-run\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.817767 kubelet[2829]: I0709 14:47:21.817033 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfwrk\" (UniqueName: \"kubernetes.io/projected/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-kube-api-access-jfwrk\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.817767 kubelet[2829]: I0709 14:47:21.817061 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-host-proc-sys-net\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.817767 kubelet[2829]: I0709 14:47:21.817079 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cni-path\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.817767 kubelet[2829]: I0709 14:47:21.817107 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-config-path\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.817767 kubelet[2829]: I0709 14:47:21.817126 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-host-proc-sys-kernel\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.817767 kubelet[2829]: I0709 14:47:21.817158 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-xtables-lock\") pod \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\" (UID: \"fb867bd6-4a60-418e-80c9-f381f7f3bbd0\") "
Jul 9 14:47:21.818096 kubelet[2829]: I0709 14:47:21.817183 2829 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/493ebc7e-37bd-428d-874c-60437b167fa8-cilium-config-path\") pod \"493ebc7e-37bd-428d-874c-60437b167fa8\" (UID: \"493ebc7e-37bd-428d-874c-60437b167fa8\") "
Jul 9 14:47:21.818385 kubelet[2829]: I0709 14:47:21.818333 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 14:47:21.818550 kubelet[2829]: I0709 14:47:21.818518 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 14:47:21.819911 kubelet[2829]: I0709 14:47:21.819889 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 14:47:21.820178 kubelet[2829]: I0709 14:47:21.820120 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-hostproc" (OuterVolumeSpecName: "hostproc") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 14:47:21.820245 kubelet[2829]: I0709 14:47:21.820202 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 14:47:21.822173 kubelet[2829]: I0709 14:47:21.822139 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 14:47:21.824789 kubelet[2829]: I0709 14:47:21.823784 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 9 14:47:21.824789 kubelet[2829]: I0709 14:47:21.824100 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-kube-api-access-jfwrk" (OuterVolumeSpecName: "kube-api-access-jfwrk") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "kube-api-access-jfwrk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 9 14:47:21.824789 kubelet[2829]: I0709 14:47:21.824161 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 14:47:21.824789 kubelet[2829]: I0709 14:47:21.824182 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 14:47:21.824789 kubelet[2829]: I0709 14:47:21.824206 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 14:47:21.825021 kubelet[2829]: I0709 14:47:21.824224 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cni-path" (OuterVolumeSpecName: "cni-path") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 9 14:47:21.826790 kubelet[2829]: I0709 14:47:21.826685 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 9 14:47:21.828999 kubelet[2829]: I0709 14:47:21.828954 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fb867bd6-4a60-418e-80c9-f381f7f3bbd0" (UID: "fb867bd6-4a60-418e-80c9-f381f7f3bbd0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 9 14:47:21.829981 kubelet[2829]: I0709 14:47:21.829889 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/493ebc7e-37bd-428d-874c-60437b167fa8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "493ebc7e-37bd-428d-874c-60437b167fa8" (UID: "493ebc7e-37bd-428d-874c-60437b167fa8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 9 14:47:21.830567 kubelet[2829]: I0709 14:47:21.830525 2829 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/493ebc7e-37bd-428d-874c-60437b167fa8-kube-api-access-bjmcs" (OuterVolumeSpecName: "kube-api-access-bjmcs") pod "493ebc7e-37bd-428d-874c-60437b167fa8" (UID: "493ebc7e-37bd-428d-874c-60437b167fa8"). InnerVolumeSpecName "kube-api-access-bjmcs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 9 14:47:21.919273 kubelet[2829]: I0709 14:47:21.918462 2829 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bjmcs\" (UniqueName: \"kubernetes.io/projected/493ebc7e-37bd-428d-874c-60437b167fa8-kube-api-access-bjmcs\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.919273 kubelet[2829]: I0709 14:47:21.918544 2829 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-lib-modules\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.919273 kubelet[2829]: I0709 14:47:21.918592 2829 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-clustermesh-secrets\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.919273 kubelet[2829]: I0709 14:47:21.918637 2829 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-hubble-tls\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.919273 kubelet[2829]: I0709 14:47:21.918665 2829 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-etc-cni-netd\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.919273 kubelet[2829]: I0709 14:47:21.918692 2829 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-bpf-maps\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.919273 kubelet[2829]: I0709 14:47:21.918716 2829 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-host-proc-sys-net\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.920176 kubelet[2829]: I0709 14:47:21.918829 2829 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-hostproc\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.920176 kubelet[2829]: I0709 14:47:21.918864 2829 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-run\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.920176 kubelet[2829]: I0709 14:47:21.918969 2829 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jfwrk\" (UniqueName: \"kubernetes.io/projected/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-kube-api-access-jfwrk\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.920176 kubelet[2829]: I0709 14:47:21.919000 2829 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cni-path\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.920176 kubelet[2829]: I0709 14:47:21.919048 2829 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-config-path\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.920176 kubelet[2829]: I0709 14:47:21.919086 2829 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-host-proc-sys-kernel\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.920176 kubelet[2829]: I0709 14:47:21.919113 2829 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/493ebc7e-37bd-428d-874c-60437b167fa8-cilium-config-path\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.920941 kubelet[2829]: I0709 14:47:21.919161 2829 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-xtables-lock\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:21.920941 kubelet[2829]: I0709 14:47:21.919208 2829 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb867bd6-4a60-418e-80c9-f381f7f3bbd0-cilium-cgroup\") on node \"ci-9999-9-100-ea23d699c2.novalocal\" DevicePath \"\""
Jul 9 14:47:22.038210 kubelet[2829]: I0709 14:47:22.038080 2829 scope.go:117] "RemoveContainer" containerID="b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b"
Jul 9 14:47:22.051508 systemd[1]: Removed slice kubepods-besteffort-pod493ebc7e_37bd_428d_874c_60437b167fa8.slice - libcontainer container kubepods-besteffort-pod493ebc7e_37bd_428d_874c_60437b167fa8.slice.
Jul 9 14:47:22.051847 systemd[1]: kubepods-besteffort-pod493ebc7e_37bd_428d_874c_60437b167fa8.slice: Consumed 1.338s CPU time, 25.5M memory peak, 4K written to disk.
Jul 9 14:47:22.062508 containerd[1557]: time="2025-07-09T14:47:22.061832667Z" level=info msg="RemoveContainer for \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\""
Jul 9 14:47:22.096402 systemd[1]: Removed slice kubepods-burstable-podfb867bd6_4a60_418e_80c9_f381f7f3bbd0.slice - libcontainer container kubepods-burstable-podfb867bd6_4a60_418e_80c9_f381f7f3bbd0.slice.
Jul 9 14:47:22.096692 systemd[1]: kubepods-burstable-podfb867bd6_4a60_418e_80c9_f381f7f3bbd0.slice: Consumed 11.743s CPU time, 124.3M memory peak, 136K read from disk, 13.3M written to disk.
Jul 9 14:47:22.126398 containerd[1557]: time="2025-07-09T14:47:22.124534271Z" level=info msg="RemoveContainer for \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" returns successfully" Jul 9 14:47:22.129697 kubelet[2829]: I0709 14:47:22.127625 2829 scope.go:117] "RemoveContainer" containerID="b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b" Jul 9 14:47:22.130069 containerd[1557]: time="2025-07-09T14:47:22.129411507Z" level=error msg="ContainerStatus for \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\": not found" Jul 9 14:47:22.131725 kubelet[2829]: E0709 14:47:22.131574 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\": not found" containerID="b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b" Jul 9 14:47:22.132395 kubelet[2829]: I0709 14:47:22.131804 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b"} err="failed to get container status \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0306d4dcd6cea2c07bcb79a6a7ff21e4f1ffbf85f4ef639426fbcac1139322b\": not found" Jul 9 14:47:22.132395 kubelet[2829]: I0709 14:47:22.132358 2829 scope.go:117] "RemoveContainer" containerID="fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e" Jul 9 14:47:22.173638 containerd[1557]: time="2025-07-09T14:47:22.173289802Z" level=info msg="RemoveContainer for \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\"" Jul 9 14:47:22.191376 
containerd[1557]: time="2025-07-09T14:47:22.191324694Z" level=info msg="RemoveContainer for \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" returns successfully" Jul 9 14:47:22.193615 kubelet[2829]: I0709 14:47:22.193464 2829 scope.go:117] "RemoveContainer" containerID="4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3" Jul 9 14:47:22.208796 containerd[1557]: time="2025-07-09T14:47:22.208719465Z" level=info msg="RemoveContainer for \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\"" Jul 9 14:47:22.217897 containerd[1557]: time="2025-07-09T14:47:22.217775377Z" level=info msg="RemoveContainer for \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\" returns successfully" Jul 9 14:47:22.218424 kubelet[2829]: I0709 14:47:22.218331 2829 scope.go:117] "RemoveContainer" containerID="a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82" Jul 9 14:47:22.221384 containerd[1557]: time="2025-07-09T14:47:22.221350661Z" level=info msg="RemoveContainer for \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\"" Jul 9 14:47:22.241481 containerd[1557]: time="2025-07-09T14:47:22.241342514Z" level=info msg="RemoveContainer for \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\" returns successfully" Jul 9 14:47:22.241876 kubelet[2829]: I0709 14:47:22.241823 2829 scope.go:117] "RemoveContainer" containerID="491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f" Jul 9 14:47:22.243548 containerd[1557]: time="2025-07-09T14:47:22.243524294Z" level=info msg="RemoveContainer for \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\"" Jul 9 14:47:22.255513 containerd[1557]: time="2025-07-09T14:47:22.255485734Z" level=info msg="RemoveContainer for \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\" returns successfully" Jul 9 14:47:22.255919 kubelet[2829]: I0709 14:47:22.255863 2829 scope.go:117] "RemoveContainer" 
containerID="5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7" Jul 9 14:47:22.258017 containerd[1557]: time="2025-07-09T14:47:22.257963806Z" level=info msg="RemoveContainer for \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\"" Jul 9 14:47:22.266388 containerd[1557]: time="2025-07-09T14:47:22.266347677Z" level=info msg="RemoveContainer for \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\" returns successfully" Jul 9 14:47:22.266590 kubelet[2829]: I0709 14:47:22.266562 2829 scope.go:117] "RemoveContainer" containerID="fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e" Jul 9 14:47:22.266919 containerd[1557]: time="2025-07-09T14:47:22.266803386Z" level=error msg="ContainerStatus for \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\": not found" Jul 9 14:47:22.267325 kubelet[2829]: E0709 14:47:22.267113 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\": not found" containerID="fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e" Jul 9 14:47:22.267325 kubelet[2829]: I0709 14:47:22.267172 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e"} err="failed to get container status \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc5a7a75bdfd9eb5a6aa4d3e3cb2442860b5553c445ce2198a7e7ca02d92183e\": not found" Jul 9 14:47:22.267325 kubelet[2829]: I0709 14:47:22.267240 2829 scope.go:117] "RemoveContainer" 
containerID="4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3" Jul 9 14:47:22.267542 containerd[1557]: time="2025-07-09T14:47:22.267436374Z" level=error msg="ContainerStatus for \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\": not found" Jul 9 14:47:22.267585 kubelet[2829]: E0709 14:47:22.267568 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\": not found" containerID="4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3" Jul 9 14:47:22.267638 kubelet[2829]: I0709 14:47:22.267596 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3"} err="failed to get container status \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a270b040c9eb9e77ae85f3798863751f6105955f8e148925545ab3fab10a9a3\": not found" Jul 9 14:47:22.267638 kubelet[2829]: I0709 14:47:22.267621 2829 scope.go:117] "RemoveContainer" containerID="a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82" Jul 9 14:47:22.267868 containerd[1557]: time="2025-07-09T14:47:22.267838853Z" level=error msg="ContainerStatus for \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\": not found" Jul 9 14:47:22.267986 kubelet[2829]: E0709 14:47:22.267966 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\": not found" containerID="a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82" Jul 9 14:47:22.268023 kubelet[2829]: I0709 14:47:22.267987 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82"} err="failed to get container status \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\": rpc error: code = NotFound desc = an error occurred when try to find container \"a185c78c123aa543eb3d5352d814b6d451ab60ebd29cc05f6b68c8c9973f1a82\": not found" Jul 9 14:47:22.268023 kubelet[2829]: I0709 14:47:22.268007 2829 scope.go:117] "RemoveContainer" containerID="491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f" Jul 9 14:47:22.268183 containerd[1557]: time="2025-07-09T14:47:22.268151274Z" level=error msg="ContainerStatus for \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\": not found" Jul 9 14:47:22.268484 kubelet[2829]: E0709 14:47:22.268329 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\": not found" containerID="491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f" Jul 9 14:47:22.268484 kubelet[2829]: I0709 14:47:22.268396 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f"} err="failed to get container status \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"491fe9495c38b1bb96e3de9516f6a2bf863e4313ee7a2c80b919d0308ceb0b3f\": not found" Jul 9 14:47:22.268484 kubelet[2829]: I0709 14:47:22.268419 2829 scope.go:117] "RemoveContainer" containerID="5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7" Jul 9 14:47:22.268659 containerd[1557]: time="2025-07-09T14:47:22.268568270Z" level=error msg="ContainerStatus for \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\": not found" Jul 9 14:47:22.268856 kubelet[2829]: E0709 14:47:22.268778 2829 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\": not found" containerID="5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7" Jul 9 14:47:22.268856 kubelet[2829]: I0709 14:47:22.268826 2829 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7"} err="failed to get container status \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\": rpc error: code = NotFound desc = an error occurred when try to find container \"5fe0868ed381e3b5811d33f49252c0e505d977bbee66287c8c4205cea61abce7\": not found" Jul 9 14:47:22.411835 kubelet[2829]: I0709 14:47:22.411132 2829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="493ebc7e-37bd-428d-874c-60437b167fa8" path="/var/lib/kubelet/pods/493ebc7e-37bd-428d-874c-60437b167fa8/volumes" Jul 9 14:47:22.412832 kubelet[2829]: I0709 14:47:22.412724 2829 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb867bd6-4a60-418e-80c9-f381f7f3bbd0" path="/var/lib/kubelet/pods/fb867bd6-4a60-418e-80c9-f381f7f3bbd0/volumes" Jul 9 
14:47:22.503580 systemd[1]: var-lib-kubelet-pods-493ebc7e\x2d37bd\x2d428d\x2d874c\x2d60437b167fa8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbjmcs.mount: Deactivated successfully. Jul 9 14:47:22.504444 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b-shm.mount: Deactivated successfully. Jul 9 14:47:22.506102 systemd[1]: var-lib-kubelet-pods-fb867bd6\x2d4a60\x2d418e\x2d80c9\x2df381f7f3bbd0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djfwrk.mount: Deactivated successfully. Jul 9 14:47:22.506346 systemd[1]: var-lib-kubelet-pods-fb867bd6\x2d4a60\x2d418e\x2d80c9\x2df381f7f3bbd0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 9 14:47:22.506533 systemd[1]: var-lib-kubelet-pods-fb867bd6\x2d4a60\x2d418e\x2d80c9\x2df381f7f3bbd0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 9 14:47:23.392158 sshd[4496]: Connection closed by 172.24.4.1 port 44678 Jul 9 14:47:23.395379 sshd-session[4493]: pam_unix(sshd:session): session closed for user core Jul 9 14:47:23.427504 systemd[1]: sshd@30-172.24.4.161:22-172.24.4.1:44678.service: Deactivated successfully. Jul 9 14:47:23.439576 systemd[1]: session-33.scope: Deactivated successfully. Jul 9 14:47:23.440805 systemd[1]: session-33.scope: Consumed 1.300s CPU time, 25.8M memory peak. Jul 9 14:47:23.445165 systemd-logind[1533]: Session 33 logged out. Waiting for processes to exit. Jul 9 14:47:23.457588 systemd[1]: Started sshd@31-172.24.4.161:22-172.24.4.1:44694.service - OpenSSH per-connection server daemon (172.24.4.1:44694). Jul 9 14:47:23.462686 systemd-logind[1533]: Removed session 33. 
Jul 9 14:47:24.646301 kubelet[2829]: E0709 14:47:24.646103 2829 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 9 14:47:24.796401 sshd[4646]: Accepted publickey for core from 172.24.4.1 port 44694 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E Jul 9 14:47:24.800190 sshd-session[4646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 14:47:24.815865 systemd-logind[1533]: New session 34 of user core. Jul 9 14:47:24.823084 systemd[1]: Started session-34.scope - Session 34 of User core. Jul 9 14:47:26.292145 kubelet[2829]: I0709 14:47:26.292076 2829 memory_manager.go:355] "RemoveStaleState removing state" podUID="fb867bd6-4a60-418e-80c9-f381f7f3bbd0" containerName="cilium-agent" Jul 9 14:47:26.292145 kubelet[2829]: I0709 14:47:26.292119 2829 memory_manager.go:355] "RemoveStaleState removing state" podUID="493ebc7e-37bd-428d-874c-60437b167fa8" containerName="cilium-operator" Jul 9 14:47:26.309670 systemd[1]: Created slice kubepods-burstable-podae76115a_9c71_4b38_86a5_781d9d161691.slice - libcontainer container kubepods-burstable-podae76115a_9c71_4b38_86a5_781d9d161691.slice. 
Jul 9 14:47:26.353274 kubelet[2829]: I0709 14:47:26.353211 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ae76115a-9c71-4b38-86a5-781d9d161691-cilium-cgroup\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353274 kubelet[2829]: I0709 14:47:26.353268 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ae76115a-9c71-4b38-86a5-781d9d161691-hostproc\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353274 kubelet[2829]: I0709 14:47:26.353305 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae76115a-9c71-4b38-86a5-781d9d161691-lib-modules\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353572 kubelet[2829]: I0709 14:47:26.353348 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ae76115a-9c71-4b38-86a5-781d9d161691-cilium-ipsec-secrets\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353572 kubelet[2829]: I0709 14:47:26.353390 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ae76115a-9c71-4b38-86a5-781d9d161691-bpf-maps\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353572 kubelet[2829]: I0709 14:47:26.353411 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ae76115a-9c71-4b38-86a5-781d9d161691-etc-cni-netd\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353572 kubelet[2829]: I0709 14:47:26.353440 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ae76115a-9c71-4b38-86a5-781d9d161691-clustermesh-secrets\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353572 kubelet[2829]: I0709 14:47:26.353487 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ae76115a-9c71-4b38-86a5-781d9d161691-host-proc-sys-kernel\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353572 kubelet[2829]: I0709 14:47:26.353528 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ae76115a-9c71-4b38-86a5-781d9d161691-cilium-run\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353797 kubelet[2829]: I0709 14:47:26.353562 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ae76115a-9c71-4b38-86a5-781d9d161691-host-proc-sys-net\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353797 kubelet[2829]: I0709 14:47:26.353592 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ae76115a-9c71-4b38-86a5-781d9d161691-cni-path\") pod 
\"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353797 kubelet[2829]: I0709 14:47:26.353621 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ae76115a-9c71-4b38-86a5-781d9d161691-cilium-config-path\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353797 kubelet[2829]: I0709 14:47:26.353662 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfnm5\" (UniqueName: \"kubernetes.io/projected/ae76115a-9c71-4b38-86a5-781d9d161691-kube-api-access-vfnm5\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353797 kubelet[2829]: I0709 14:47:26.353715 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae76115a-9c71-4b38-86a5-781d9d161691-xtables-lock\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.353797 kubelet[2829]: I0709 14:47:26.353756 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ae76115a-9c71-4b38-86a5-781d9d161691-hubble-tls\") pod \"cilium-rktdg\" (UID: \"ae76115a-9c71-4b38-86a5-781d9d161691\") " pod="kube-system/cilium-rktdg" Jul 9 14:47:26.451555 sshd[4649]: Connection closed by 172.24.4.1 port 44694 Jul 9 14:47:26.450195 sshd-session[4646]: pam_unix(sshd:session): session closed for user core Jul 9 14:47:26.462071 systemd[1]: sshd@31-172.24.4.161:22-172.24.4.1:44694.service: Deactivated successfully. Jul 9 14:47:26.490923 systemd[1]: session-34.scope: Deactivated successfully. 
Jul 9 14:47:26.493823 systemd-logind[1533]: Session 34 logged out. Waiting for processes to exit. Jul 9 14:47:26.502094 systemd[1]: Started sshd@32-172.24.4.161:22-172.24.4.1:43828.service - OpenSSH per-connection server daemon (172.24.4.1:43828). Jul 9 14:47:26.504463 systemd-logind[1533]: Removed session 34. Jul 9 14:47:26.616020 containerd[1557]: time="2025-07-09T14:47:26.615958770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rktdg,Uid:ae76115a-9c71-4b38-86a5-781d9d161691,Namespace:kube-system,Attempt:0,}" Jul 9 14:47:26.654610 containerd[1557]: time="2025-07-09T14:47:26.654486722Z" level=info msg="connecting to shim 1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add" address="unix:///run/containerd/s/7d0debce9272aacf2e3a0ea6d8a9475f01d3369e9a8e10ecf6ebfaa9fa31dc04" namespace=k8s.io protocol=ttrpc version=3 Jul 9 14:47:26.694018 systemd[1]: Started cri-containerd-1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add.scope - libcontainer container 1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add. 
Jul 9 14:47:26.733694 containerd[1557]: time="2025-07-09T14:47:26.733607922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rktdg,Uid:ae76115a-9c71-4b38-86a5-781d9d161691,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\"" Jul 9 14:47:26.738795 containerd[1557]: time="2025-07-09T14:47:26.738605785Z" level=info msg="CreateContainer within sandbox \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 14:47:26.749649 containerd[1557]: time="2025-07-09T14:47:26.749570759Z" level=info msg="Container 76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:47:26.760619 containerd[1557]: time="2025-07-09T14:47:26.760561652Z" level=info msg="CreateContainer within sandbox \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d\"" Jul 9 14:47:26.762360 containerd[1557]: time="2025-07-09T14:47:26.761973480Z" level=info msg="StartContainer for \"76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d\"" Jul 9 14:47:26.764601 containerd[1557]: time="2025-07-09T14:47:26.764552148Z" level=info msg="connecting to shim 76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d" address="unix:///run/containerd/s/7d0debce9272aacf2e3a0ea6d8a9475f01d3369e9a8e10ecf6ebfaa9fa31dc04" protocol=ttrpc version=3 Jul 9 14:47:26.786926 systemd[1]: Started cri-containerd-76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d.scope - libcontainer container 76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d. 
Jul 9 14:47:26.827349 containerd[1557]: time="2025-07-09T14:47:26.827194564Z" level=info msg="StartContainer for \"76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d\" returns successfully" Jul 9 14:47:26.839982 systemd[1]: cri-containerd-76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d.scope: Deactivated successfully. Jul 9 14:47:26.844253 containerd[1557]: time="2025-07-09T14:47:26.844171311Z" level=info msg="received exit event container_id:\"76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d\" id:\"76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d\" pid:4726 exited_at:{seconds:1752072446 nanos:843383486}" Jul 9 14:47:26.845759 containerd[1557]: time="2025-07-09T14:47:26.845698440Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d\" id:\"76479ae2339446980452cd7604d9b02a5b433f97ef18c0df189abd1e0114495d\" pid:4726 exited_at:{seconds:1752072446 nanos:843383486}" Jul 9 14:47:27.160268 containerd[1557]: time="2025-07-09T14:47:27.160167243Z" level=info msg="CreateContainer within sandbox \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 14:47:27.192073 containerd[1557]: time="2025-07-09T14:47:27.191948123Z" level=info msg="Container 729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:47:27.210988 containerd[1557]: time="2025-07-09T14:47:27.209703805Z" level=info msg="CreateContainer within sandbox \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff\"" Jul 9 14:47:27.213775 containerd[1557]: time="2025-07-09T14:47:27.212991583Z" level=info msg="StartContainer for 
\"729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff\"" Jul 9 14:47:27.215489 containerd[1557]: time="2025-07-09T14:47:27.215351255Z" level=info msg="connecting to shim 729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff" address="unix:///run/containerd/s/7d0debce9272aacf2e3a0ea6d8a9475f01d3369e9a8e10ecf6ebfaa9fa31dc04" protocol=ttrpc version=3 Jul 9 14:47:27.252078 systemd[1]: Started cri-containerd-729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff.scope - libcontainer container 729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff. Jul 9 14:47:27.298899 containerd[1557]: time="2025-07-09T14:47:27.298835806Z" level=info msg="StartContainer for \"729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff\" returns successfully" Jul 9 14:47:27.310136 systemd[1]: cri-containerd-729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff.scope: Deactivated successfully. Jul 9 14:47:27.312327 containerd[1557]: time="2025-07-09T14:47:27.312239146Z" level=info msg="received exit event container_id:\"729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff\" id:\"729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff\" pid:4771 exited_at:{seconds:1752072447 nanos:311421853}" Jul 9 14:47:27.313429 containerd[1557]: time="2025-07-09T14:47:27.313259430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff\" id:\"729e2e065d6359aeb626efcb76db28429367287a49765f2280f52e8484af10ff\" pid:4771 exited_at:{seconds:1752072447 nanos:311421853}" Jul 9 14:47:27.682134 sshd[4664]: Accepted publickey for core from 172.24.4.1 port 43828 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E Jul 9 14:47:27.684496 sshd-session[4664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 14:47:27.697874 systemd-logind[1533]: New session 35 of user core. 
Jul 9 14:47:27.710061 systemd[1]: Started session-35.scope - Session 35 of User core. Jul 9 14:47:28.154178 containerd[1557]: time="2025-07-09T14:47:28.153912058Z" level=info msg="CreateContainer within sandbox \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 14:47:28.212795 containerd[1557]: time="2025-07-09T14:47:28.210721039Z" level=info msg="Container 75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4: CDI devices from CRI Config.CDIDevices: []" Jul 9 14:47:28.214726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659491877.mount: Deactivated successfully. Jul 9 14:47:28.231111 containerd[1557]: time="2025-07-09T14:47:28.230630836Z" level=info msg="CreateContainer within sandbox \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4\"" Jul 9 14:47:28.236136 containerd[1557]: time="2025-07-09T14:47:28.234587673Z" level=info msg="StartContainer for \"75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4\"" Jul 9 14:47:28.241354 containerd[1557]: time="2025-07-09T14:47:28.240680628Z" level=info msg="connecting to shim 75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4" address="unix:///run/containerd/s/7d0debce9272aacf2e3a0ea6d8a9475f01d3369e9a8e10ecf6ebfaa9fa31dc04" protocol=ttrpc version=3 Jul 9 14:47:28.257480 sshd[4802]: Connection closed by 172.24.4.1 port 43828 Jul 9 14:47:28.256072 sshd-session[4664]: pam_unix(sshd:session): session closed for user core Jul 9 14:47:28.269535 systemd[1]: sshd@32-172.24.4.161:22-172.24.4.1:43828.service: Deactivated successfully. Jul 9 14:47:28.272546 systemd[1]: session-35.scope: Deactivated successfully. Jul 9 14:47:28.275309 systemd-logind[1533]: Session 35 logged out. Waiting for processes to exit. 
Jul 9 14:47:28.280116 systemd[1]: Started sshd@33-172.24.4.161:22-172.24.4.1:43834.service - OpenSSH per-connection server daemon (172.24.4.1:43834).
Jul 9 14:47:28.294394 systemd[1]: Started cri-containerd-75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4.scope - libcontainer container 75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4.
Jul 9 14:47:28.296861 systemd-logind[1533]: Removed session 35.
Jul 9 14:47:28.357641 systemd[1]: cri-containerd-75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4.scope: Deactivated successfully.
Jul 9 14:47:28.359331 containerd[1557]: time="2025-07-09T14:47:28.359262641Z" level=info msg="StartContainer for \"75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4\" returns successfully"
Jul 9 14:47:28.361135 containerd[1557]: time="2025-07-09T14:47:28.361088376Z" level=info msg="received exit event container_id:\"75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4\" id:\"75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4\" pid:4822 exited_at:{seconds:1752072448 nanos:360660998}"
Jul 9 14:47:28.361667 containerd[1557]: time="2025-07-09T14:47:28.361593987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4\" id:\"75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4\" pid:4822 exited_at:{seconds:1752072448 nanos:360660998}"
Jul 9 14:47:28.473237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75a4aa01266db9b6535610237b6c0dc906664aead69a6a7bc3a2f56c57f58fd4-rootfs.mount: Deactivated successfully.
Jul 9 14:47:29.168225 containerd[1557]: time="2025-07-09T14:47:29.168130828Z" level=info msg="CreateContainer within sandbox \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 14:47:29.193907 containerd[1557]: time="2025-07-09T14:47:29.193040782Z" level=info msg="Container 758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:47:29.205591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447312920.mount: Deactivated successfully.
Jul 9 14:47:29.235421 containerd[1557]: time="2025-07-09T14:47:29.235354947Z" level=info msg="CreateContainer within sandbox \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa\""
Jul 9 14:47:29.244032 containerd[1557]: time="2025-07-09T14:47:29.243970125Z" level=info msg="StartContainer for \"758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa\""
Jul 9 14:47:29.245921 containerd[1557]: time="2025-07-09T14:47:29.245880152Z" level=info msg="connecting to shim 758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa" address="unix:///run/containerd/s/7d0debce9272aacf2e3a0ea6d8a9475f01d3369e9a8e10ecf6ebfaa9fa31dc04" protocol=ttrpc version=3
Jul 9 14:47:29.271994 systemd[1]: Started cri-containerd-758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa.scope - libcontainer container 758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa.
Jul 9 14:47:29.307209 systemd[1]: cri-containerd-758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa.scope: Deactivated successfully.
Jul 9 14:47:29.310401 containerd[1557]: time="2025-07-09T14:47:29.310353097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa\" id:\"758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa\" pid:4864 exited_at:{seconds:1752072449 nanos:309806946}"
Jul 9 14:47:29.311659 containerd[1557]: time="2025-07-09T14:47:29.311626158Z" level=info msg="received exit event container_id:\"758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa\" id:\"758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa\" pid:4864 exited_at:{seconds:1752072449 nanos:309806946}"
Jul 9 14:47:29.322479 containerd[1557]: time="2025-07-09T14:47:29.322417595Z" level=info msg="StartContainer for \"758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa\" returns successfully"
Jul 9 14:47:29.341011 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-758d8c559c2233a453a09dbc0aadbbafc3e8fb9d3d73280ca69bc9082e0cb0fa-rootfs.mount: Deactivated successfully.
Jul 9 14:47:29.605723 sshd[4820]: Accepted publickey for core from 172.24.4.1 port 43834 ssh2: RSA SHA256:RpjbNjJETt8jSicFeEb5c+P1rhb51pihPiw0RoN+r6E
Jul 9 14:47:29.606841 sshd-session[4820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 14:47:29.616240 systemd-logind[1533]: New session 36 of user core.
Jul 9 14:47:29.629294 systemd[1]: Started session-36.scope - Session 36 of User core.
Jul 9 14:47:29.648012 kubelet[2829]: E0709 14:47:29.647880 2829 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 9 14:47:30.011258 kubelet[2829]: I0709 14:47:30.011038 2829 setters.go:602] "Node became not ready" node="ci-9999-9-100-ea23d699c2.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-09T14:47:30Z","lastTransitionTime":"2025-07-09T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 9 14:47:30.175066 containerd[1557]: time="2025-07-09T14:47:30.174972020Z" level=info msg="CreateContainer within sandbox \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 14:47:30.203947 containerd[1557]: time="2025-07-09T14:47:30.203889662Z" level=info msg="Container b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef: CDI devices from CRI Config.CDIDevices: []"
Jul 9 14:47:30.231649 containerd[1557]: time="2025-07-09T14:47:30.231590333Z" level=info msg="CreateContainer within sandbox \"1fa97adbeac06f4d8b846239b071cb06bd0d240c740b0bf0ac5fe0cca3013add\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef\""
Jul 9 14:47:30.234081 containerd[1557]: time="2025-07-09T14:47:30.234045776Z" level=info msg="StartContainer for \"b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef\""
Jul 9 14:47:30.237732 containerd[1557]: time="2025-07-09T14:47:30.237686443Z" level=info msg="connecting to shim b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef" address="unix:///run/containerd/s/7d0debce9272aacf2e3a0ea6d8a9475f01d3369e9a8e10ecf6ebfaa9fa31dc04" protocol=ttrpc version=3
Jul 9 14:47:30.274930 systemd[1]: Started cri-containerd-b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef.scope - libcontainer container b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef.
Jul 9 14:47:30.320892 containerd[1557]: time="2025-07-09T14:47:30.320793663Z" level=info msg="StartContainer for \"b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef\" returns successfully"
Jul 9 14:47:30.474541 containerd[1557]: time="2025-07-09T14:47:30.474456733Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef\" id:\"61b9698c9ea53c98f9826af319956a2ddd7788fe6c6a34b4a435f08715e95f8c\" pid:4936 exited_at:{seconds:1752072450 nanos:473110036}"
Jul 9 14:47:31.010794 kernel: cryptd: max_cpu_qlen set to 1000
Jul 9 14:47:31.080846 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jul 9 14:47:31.224225 kubelet[2829]: I0709 14:47:31.224134 2829 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rktdg" podStartSLOduration=5.2241075 podStartE2EDuration="5.2241075s" podCreationTimestamp="2025-07-09 14:47:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 14:47:31.222303165 +0000 UTC m=+237.201573681" watchObservedRunningTime="2025-07-09 14:47:31.2241075 +0000 UTC m=+237.203378006"
Jul 9 14:47:32.536936 containerd[1557]: time="2025-07-09T14:47:32.536865825Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef\" id:\"08df4c409fe626e81556b7319dfbf5d8c18e0a414517220e3b25ffb66ee0c8f6\" pid:5034 exit_status:1 exited_at:{seconds:1752072452 nanos:535141024}"
Jul 9 14:47:34.392824 containerd[1557]: time="2025-07-09T14:47:34.392500833Z" level=info msg="StopPodSandbox for \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\""
Jul 9 14:47:34.396697 containerd[1557]: time="2025-07-09T14:47:34.393673309Z" level=info msg="TearDown network for sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" successfully"
Jul 9 14:47:34.396697 containerd[1557]: time="2025-07-09T14:47:34.393695188Z" level=info msg="StopPodSandbox for \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" returns successfully"
Jul 9 14:47:34.398127 containerd[1557]: time="2025-07-09T14:47:34.397168639Z" level=info msg="RemovePodSandbox for \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\""
Jul 9 14:47:34.398127 containerd[1557]: time="2025-07-09T14:47:34.397282192Z" level=info msg="Forcibly stopping sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\""
Jul 9 14:47:34.398127 containerd[1557]: time="2025-07-09T14:47:34.397472172Z" level=info msg="TearDown network for sandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" successfully"
Jul 9 14:47:34.401497 containerd[1557]: time="2025-07-09T14:47:34.401446087Z" level=info msg="Ensure that sandbox f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b in task-service has been cleanup successfully"
Jul 9 14:47:34.407452 containerd[1557]: time="2025-07-09T14:47:34.407373837Z" level=info msg="RemovePodSandbox \"f5c9e9ddff34387ba414a2dc09d5a118974854be743cd958368979c297c5c41b\" returns successfully"
Jul 9 14:47:34.409101 containerd[1557]: time="2025-07-09T14:47:34.409046036Z" level=info msg="StopPodSandbox for \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\""
Jul 9 14:47:34.409649 containerd[1557]: time="2025-07-09T14:47:34.409623069Z" level=info msg="TearDown network for sandbox \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" successfully"
Jul 9 14:47:34.409729 containerd[1557]: time="2025-07-09T14:47:34.409714131Z" level=info msg="StopPodSandbox for \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" returns successfully"
Jul 9 14:47:34.411122 containerd[1557]: time="2025-07-09T14:47:34.411088329Z" level=info msg="RemovePodSandbox for \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\""
Jul 9 14:47:34.411219 containerd[1557]: time="2025-07-09T14:47:34.411124363Z" level=info msg="Forcibly stopping sandbox \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\""
Jul 9 14:47:34.411804 containerd[1557]: time="2025-07-09T14:47:34.411767643Z" level=info msg="TearDown network for sandbox \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" successfully"
Jul 9 14:47:34.414710 containerd[1557]: time="2025-07-09T14:47:34.414670534Z" level=info msg="Ensure that sandbox 3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed in task-service has been cleanup successfully"
Jul 9 14:47:34.418947 containerd[1557]: time="2025-07-09T14:47:34.418824812Z" level=info msg="RemovePodSandbox \"3754e5c66e20099363ac20377b9dfc0c80835e2ea1137be92c5567b5f3f190ed\" returns successfully"
Jul 9 14:47:34.799645 containerd[1557]: time="2025-07-09T14:47:34.798506560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef\" id:\"fb05d59188e06a8c1d7d5e3736855271773dd90bf4ccbdd078675e2c4b9037d5\" pid:5439 exit_status:1 exited_at:{seconds:1752072454 nanos:797875291}"
Jul 9 14:47:34.854034 systemd-networkd[1452]: lxc_health: Link UP
Jul 9 14:47:34.854435 systemd-networkd[1452]: lxc_health: Gained carrier
Jul 9 14:47:36.142294 systemd-networkd[1452]: lxc_health: Gained IPv6LL
Jul 9 14:47:36.994203 containerd[1557]: time="2025-07-09T14:47:36.994051779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef\" id:\"3f9620cae477c684751035455f414b29456edd50f34443928efa10ad66da298f\" pid:5514 exited_at:{seconds:1752072456 nanos:991302679}"
Jul 9 14:47:39.286793 containerd[1557]: time="2025-07-09T14:47:39.285871370Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef\" id:\"927befb5424ea31f29109f4f4d9479e1ac27c3c137827c755bf01abe3b119f17\" pid:5540 exited_at:{seconds:1752072459 nanos:284068979}"
Jul 9 14:47:41.480275 containerd[1557]: time="2025-07-09T14:47:41.480202438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3be449fd407ea218b8507a5c0b7c43e52447681a5b75930ee4a06ae159802ef\" id:\"e9438d1e9f15d07f767aacaea95101c19dcc9cec295020d70d546b39f9e400d2\" pid:5572 exited_at:{seconds:1752072461 nanos:479468559}"
Jul 9 14:47:41.849228 sshd[4889]: Connection closed by 172.24.4.1 port 43834
Jul 9 14:47:41.852266 sshd-session[4820]: pam_unix(sshd:session): session closed for user core
Jul 9 14:47:41.872102 systemd[1]: sshd@33-172.24.4.161:22-172.24.4.1:43834.service: Deactivated successfully.
Jul 9 14:47:41.883223 systemd[1]: session-36.scope: Deactivated successfully.
Jul 9 14:47:41.890324 systemd-logind[1533]: Session 36 logged out. Waiting for processes to exit.
Jul 9 14:47:41.895209 systemd-logind[1533]: Removed session 36.