Jun 20 19:35:39.947126 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025
Jun 20 19:35:39.947153 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:35:39.947163 kernel: BIOS-provided physical RAM map:
Jun 20 19:35:39.947173 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jun 20 19:35:39.947180 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jun 20 19:35:39.947188 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jun 20 19:35:39.947196 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Jun 20 19:35:39.947204 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Jun 20 19:35:39.947211 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 20 19:35:39.947219 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jun 20 19:35:39.947226 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Jun 20 19:35:39.947234 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 20 19:35:39.947244 kernel: NX (Execute Disable) protection: active
Jun 20 19:35:39.947251 kernel: APIC: Static calls initialized
Jun 20 19:35:39.947260 kernel: SMBIOS 3.0.0 present.
Jun 20 19:35:39.947268 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Jun 20 19:35:39.947276 kernel: DMI: Memory slots populated: 1/1
Jun 20 19:35:39.947286 kernel: Hypervisor detected: KVM
Jun 20 19:35:39.947294 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 20 19:35:39.947302 kernel: kvm-clock: using sched offset of 4595189080 cycles
Jun 20 19:35:39.947310 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 20 19:35:39.947318 kernel: tsc: Detected 1996.249 MHz processor
Jun 20 19:35:39.947327 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:35:39.947335 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:35:39.947343 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Jun 20 19:35:39.947352 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jun 20 19:35:39.947363 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:35:39.947371 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Jun 20 19:35:39.947379 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:35:39.947387 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Jun 20 19:35:39.947395 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:35:39.947403 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:35:39.947412 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:35:39.947420 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Jun 20 19:35:39.947428 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:35:39.947438 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:35:39.947446 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Jun 20 19:35:39.947454 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Jun 20 19:35:39.947462 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Jun 20 19:35:39.947486 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Jun 20 19:35:39.947509 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Jun 20 19:35:39.947518 kernel: No NUMA configuration found
Jun 20 19:35:39.947528 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Jun 20 19:35:39.947537 kernel: NODE_DATA(0) allocated [mem 0x13fff5dc0-0x13fffcfff]
Jun 20 19:35:39.947545 kernel: Zone ranges:
Jun 20 19:35:39.947554 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:35:39.947562 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jun 20 19:35:39.947571 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Jun 20 19:35:39.947579 kernel: Device empty
Jun 20 19:35:39.947587 kernel: Movable zone start for each node
Jun 20 19:35:39.947598 kernel: Early memory node ranges
Jun 20 19:35:39.947607 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jun 20 19:35:39.947615 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Jun 20 19:35:39.947623 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Jun 20 19:35:39.947632 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Jun 20 19:35:39.947640 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:35:39.947648 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 20 19:35:39.947657 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Jun 20 19:35:39.947665 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 20 19:35:39.947675 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 20 19:35:39.947684 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:35:39.947693 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 20 19:35:39.947702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 20 19:35:39.947710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:35:39.947719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 20 19:35:39.947727 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 20 19:35:39.947736 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:35:39.947744 kernel: CPU topo: Max. logical packages: 2
Jun 20 19:35:39.947755 kernel: CPU topo: Max. logical dies: 2
Jun 20 19:35:39.947764 kernel: CPU topo: Max. dies per package: 1
Jun 20 19:35:39.947772 kernel: CPU topo: Max. threads per core: 1
Jun 20 19:35:39.947780 kernel: CPU topo: Num. cores per package: 1
Jun 20 19:35:39.947788 kernel: CPU topo: Num. threads per package: 1
Jun 20 19:35:39.947797 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jun 20 19:35:39.947805 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 20 19:35:39.947814 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jun 20 19:35:39.947822 kernel: Booting paravirtualized kernel on KVM
Jun 20 19:35:39.947832 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:35:39.947841 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jun 20 19:35:39.947849 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jun 20 19:35:39.947858 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jun 20 19:35:39.947866 kernel: pcpu-alloc: [0] 0 1
Jun 20 19:35:39.947874 kernel: kvm-guest: PV spinlocks disabled, no host support
Jun 20 19:35:39.947884 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:35:39.947893 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:35:39.947904 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:35:39.947912 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 20 19:35:39.947921 kernel: Fallback order for Node 0: 0
Jun 20 19:35:39.947929 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Jun 20 19:35:39.947938 kernel: Policy zone: Normal
Jun 20 19:35:39.947946 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:35:39.947954 kernel: software IO TLB: area num 2.
Jun 20 19:35:39.947963 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jun 20 19:35:39.947971 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 20 19:35:39.947982 kernel: ftrace: allocated 157 pages with 5 groups
Jun 20 19:35:39.947990 kernel: Dynamic Preempt: voluntary
Jun 20 19:35:39.947999 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:35:39.948008 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:35:39.948017 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jun 20 19:35:39.948026 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:35:39.948034 kernel: Rude variant of Tasks RCU enabled.
Jun 20 19:35:39.948042 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:35:39.948051 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:35:39.948059 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jun 20 19:35:39.948071 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:35:39.948079 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:35:39.948088 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jun 20 19:35:39.948096 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jun 20 19:35:39.948105 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:35:39.948113 kernel: Console: colour VGA+ 80x25
Jun 20 19:35:39.948122 kernel: printk: legacy console [tty0] enabled
Jun 20 19:35:39.948130 kernel: printk: legacy console [ttyS0] enabled
Jun 20 19:35:39.948138 kernel: ACPI: Core revision 20240827
Jun 20 19:35:39.948149 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:35:39.948157 kernel: x2apic enabled
Jun 20 19:35:39.948166 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:35:39.948174 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 20 19:35:39.948182 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jun 20 19:35:39.948198 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Jun 20 19:35:39.948209 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jun 20 19:35:39.948218 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jun 20 19:35:39.948227 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:35:39.948236 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 19:35:39.948245 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:35:39.948256 kernel: Speculative Store Bypass: Vulnerable
Jun 20 19:35:39.948266 kernel: x86/fpu: x87 FPU will use FXSAVE
Jun 20 19:35:39.948274 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:35:39.948283 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:35:39.948292 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 19:35:39.948303 kernel: landlock: Up and running.
Jun 20 19:35:39.948312 kernel: SELinux: Initializing.
Jun 20 19:35:39.948321 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:35:39.948330 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:35:39.948339 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Jun 20 19:35:39.948348 kernel: Performance Events: AMD PMU driver.
Jun 20 19:35:39.948357 kernel: ... version: 0
Jun 20 19:35:39.948365 kernel: ... bit width: 48
Jun 20 19:35:39.948374 kernel: ... generic registers: 4
Jun 20 19:35:39.948386 kernel: ... value mask: 0000ffffffffffff
Jun 20 19:35:39.948394 kernel: ... max period: 00007fffffffffff
Jun 20 19:35:39.948403 kernel: ... fixed-purpose events: 0
Jun 20 19:35:39.948412 kernel: ... event mask: 000000000000000f
Jun 20 19:35:39.948421 kernel: signal: max sigframe size: 1440
Jun 20 19:35:39.948430 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:35:39.948439 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:35:39.948448 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 19:35:39.948457 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:35:39.948480 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:35:39.948489 kernel: .... node #0, CPUs: #1
Jun 20 19:35:39.948498 kernel: smp: Brought up 1 node, 2 CPUs
Jun 20 19:35:39.948507 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Jun 20 19:35:39.948517 kernel: Memory: 3961272K/4193772K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 227296K reserved, 0K cma-reserved)
Jun 20 19:35:39.948526 kernel: devtmpfs: initialized
Jun 20 19:35:39.948535 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:35:39.948544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:35:39.948553 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jun 20 19:35:39.948565 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:35:39.948574 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:35:39.948583 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:35:39.948592 kernel: audit: type=2000 audit(1750448135.710:1): state=initialized audit_enabled=0 res=1
Jun 20 19:35:39.948600 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:35:39.948609 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:35:39.948618 kernel: cpuidle: using governor menu
Jun 20 19:35:39.948627 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:35:39.948636 kernel: dca service started, version 1.12.1
Jun 20 19:35:39.948648 kernel: PCI: Using configuration type 1 for base access
Jun 20 19:35:39.948656 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:35:39.948665 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:35:39.948674 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:35:39.948683 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:35:39.948692 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:35:39.948700 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:35:39.948709 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:35:39.948718 kernel: ACPI: Interpreter enabled
Jun 20 19:35:39.948727 kernel: ACPI: PM: (supports S0 S3 S5)
Jun 20 19:35:39.948738 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:35:39.948747 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:35:39.948756 kernel: PCI: Using E820 reservations for host bridge windows
Jun 20 19:35:39.948765 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jun 20 19:35:39.948773 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 20 19:35:39.948915 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jun 20 19:35:39.949006 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jun 20 19:35:39.949096 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jun 20 19:35:39.949109 kernel: acpiphp: Slot [3] registered
Jun 20 19:35:39.949118 kernel: acpiphp: Slot [4] registered
Jun 20 19:35:39.949127 kernel: acpiphp: Slot [5] registered
Jun 20 19:35:39.949136 kernel: acpiphp: Slot [6] registered
Jun 20 19:35:39.949144 kernel: acpiphp: Slot [7] registered
Jun 20 19:35:39.949153 kernel: acpiphp: Slot [8] registered
Jun 20 19:35:39.949162 kernel: acpiphp: Slot [9] registered
Jun 20 19:35:39.949170 kernel: acpiphp: Slot [10] registered
Jun 20 19:35:39.949184 kernel: acpiphp: Slot [11] registered
Jun 20 19:35:39.949192 kernel: acpiphp: Slot [12] registered
Jun 20 19:35:39.949201 kernel: acpiphp: Slot [13] registered
Jun 20 19:35:39.949210 kernel: acpiphp: Slot [14] registered
Jun 20 19:35:39.949219 kernel: acpiphp: Slot [15] registered
Jun 20 19:35:39.949227 kernel: acpiphp: Slot [16] registered
Jun 20 19:35:39.949236 kernel: acpiphp: Slot [17] registered
Jun 20 19:35:39.949245 kernel: acpiphp: Slot [18] registered
Jun 20 19:35:39.949253 kernel: acpiphp: Slot [19] registered
Jun 20 19:35:39.949264 kernel: acpiphp: Slot [20] registered
Jun 20 19:35:39.949273 kernel: acpiphp: Slot [21] registered
Jun 20 19:35:39.949281 kernel: acpiphp: Slot [22] registered
Jun 20 19:35:39.949290 kernel: acpiphp: Slot [23] registered
Jun 20 19:35:39.949299 kernel: acpiphp: Slot [24] registered
Jun 20 19:35:39.949308 kernel: acpiphp: Slot [25] registered
Jun 20 19:35:39.949316 kernel: acpiphp: Slot [26] registered
Jun 20 19:35:39.949325 kernel: acpiphp: Slot [27] registered
Jun 20 19:35:39.949334 kernel: acpiphp: Slot [28] registered
Jun 20 19:35:39.949343 kernel: acpiphp: Slot [29] registered
Jun 20 19:35:39.949354 kernel: acpiphp: Slot [30] registered
Jun 20 19:35:39.949362 kernel: acpiphp: Slot [31] registered
Jun 20 19:35:39.949371 kernel: PCI host bridge to bus 0000:00
Jun 20 19:35:39.949464 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 20 19:35:39.949569 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 20 19:35:39.949647 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 20 19:35:39.949723 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jun 20 19:35:39.949814 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Jun 20 19:35:39.949893 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 20 19:35:39.949998 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jun 20 19:35:39.950103 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jun 20 19:35:39.950202 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jun 20 19:35:39.950291 kernel: pci 0000:00:01.1: BAR 4 [io 0xc120-0xc12f]
Jun 20 19:35:39.950384 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Jun 20 19:35:39.950488 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Jun 20 19:35:39.950581 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Jun 20 19:35:39.950667 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Jun 20 19:35:39.950763 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jun 20 19:35:39.950851 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Jun 20 19:35:39.950937 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Jun 20 19:35:39.951038 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jun 20 19:35:39.951128 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jun 20 19:35:39.951215 kernel: pci 0000:00:02.0: BAR 2 [mem 0xc000000000-0xc000003fff 64bit pref]
Jun 20 19:35:39.951303 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jun 20 19:35:39.951390 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jun 20 19:35:39.951494 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 20 19:35:39.951598 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jun 20 19:35:39.951696 kernel: pci 0000:00:03.0: BAR 0 [io 0xc080-0xc0bf]
Jun 20 19:35:39.951783 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jun 20 19:35:39.951872 kernel: pci 0000:00:03.0: BAR 4 [mem 0xc000004000-0xc000007fff 64bit pref]
Jun 20 19:35:39.951961 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jun 20 19:35:39.952057 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jun 20 19:35:39.952146 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jun 20 19:35:39.952233 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jun 20 19:35:39.952324 kernel: pci 0000:00:04.0: BAR 4 [mem 0xc000008000-0xc00000bfff 64bit pref]
Jun 20 19:35:39.952422 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jun 20 19:35:39.953255 kernel: pci 0000:00:05.0: BAR 0 [io 0xc0c0-0xc0ff]
Jun 20 19:35:39.953360 kernel: pci 0000:00:05.0: BAR 4 [mem 0xc00000c000-0xc00000ffff 64bit pref]
Jun 20 19:35:39.953460 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jun 20 19:35:39.953578 kernel: pci 0000:00:06.0: BAR 0 [io 0xc100-0xc11f]
Jun 20 19:35:39.953672 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Jun 20 19:35:39.953759 kernel: pci 0000:00:06.0: BAR 4 [mem 0xc000010000-0xc000013fff 64bit pref]
Jun 20 19:35:39.953773 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 20 19:35:39.953782 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 20 19:35:39.953791 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 20 19:35:39.953800 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 20 19:35:39.953820 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jun 20 19:35:39.953829 kernel: iommu: Default domain type: Translated
Jun 20 19:35:39.953838 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:35:39.953851 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:35:39.953860 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 20 19:35:39.953869 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jun 20 19:35:39.953878 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Jun 20 19:35:39.953968 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jun 20 19:35:39.954057 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jun 20 19:35:39.954145 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 20 19:35:39.954158 kernel: vgaarb: loaded
Jun 20 19:35:39.954167 kernel: clocksource: Switched to clocksource kvm-clock
Jun 20 19:35:39.954181 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:35:39.954190 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:35:39.954198 kernel: pnp: PnP ACPI init
Jun 20 19:35:39.954292 kernel: pnp 00:03: [dma 2]
Jun 20 19:35:39.954307 kernel: pnp: PnP ACPI: found 5 devices
Jun 20 19:35:39.954316 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:35:39.954325 kernel: NET: Registered PF_INET protocol family
Jun 20 19:35:39.954335 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:35:39.954346 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 20 19:35:39.954356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:35:39.954365 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 19:35:39.954374 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 20 19:35:39.954383 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 20 19:35:39.954392 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:35:39.954401 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:35:39.954410 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:35:39.954418 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:35:39.954515 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 20 19:35:39.954593 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 20 19:35:39.954669 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 20 19:35:39.954745 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jun 20 19:35:39.954820 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Jun 20 19:35:39.954908 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jun 20 19:35:39.954998 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jun 20 19:35:39.955012 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:35:39.955025 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jun 20 19:35:39.955035 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Jun 20 19:35:39.955044 kernel: Initialise system trusted keyrings
Jun 20 19:35:39.955053 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 20 19:35:39.955062 kernel: Key type asymmetric registered
Jun 20 19:35:39.955071 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:35:39.955080 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 19:35:39.955089 kernel: io scheduler mq-deadline registered
Jun 20 19:35:39.955098 kernel: io scheduler kyber registered
Jun 20 19:35:39.955110 kernel: io scheduler bfq registered
Jun 20 19:35:39.955119 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:35:39.955128 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jun 20 19:35:39.955137 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jun 20 19:35:39.955146 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jun 20 19:35:39.955156 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jun 20 19:35:39.955165 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:35:39.955174 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:35:39.955183 kernel: random: crng init done
Jun 20 19:35:39.955195 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 20 19:35:39.955204 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 20 19:35:39.955213 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 20 19:35:39.955299 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 20 19:35:39.955316 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 20 19:35:39.955393 kernel: rtc_cmos 00:04: registered as rtc0
Jun 20 19:35:39.956874 kernel: rtc_cmos 00:04: setting system clock to 2025-06-20T19:35:39 UTC (1750448139)
Jun 20 19:35:39.956974 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jun 20 19:35:39.956993 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 20 19:35:39.957002 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:35:39.957011 kernel: Segment Routing with IPv6
Jun 20 19:35:39.957020 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:35:39.957029 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:35:39.957038 kernel: Key type dns_resolver registered
Jun 20 19:35:39.957047 kernel: IPI shorthand broadcast: enabled
Jun 20 19:35:39.957056 kernel: sched_clock: Marking stable (3647154594, 183252748)->(3859735754, -29328412)
Jun 20 19:35:39.957065 kernel: registered taskstats version 1
Jun 20 19:35:39.957077 kernel: Loading compiled-in X.509 certificates
Jun 20 19:35:39.957086 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94'
Jun 20 19:35:39.957095 kernel: Demotion targets for Node 0: null
Jun 20 19:35:39.957104 kernel: Key type .fscrypt registered
Jun 20 19:35:39.957113 kernel: Key type fscrypt-provisioning registered
Jun 20 19:35:39.957122 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 19:35:39.957131 kernel: ima: Allocated hash algorithm: sha1
Jun 20 19:35:39.957140 kernel: ima: No architecture policies found
Jun 20 19:35:39.957152 kernel: clk: Disabling unused clocks
Jun 20 19:35:39.957161 kernel: Warning: unable to open an initial console.
Jun 20 19:35:39.957170 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 20 19:35:39.957179 kernel: Write protecting the kernel read-only data: 24576k
Jun 20 19:35:39.957188 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 20 19:35:39.957197 kernel: Run /init as init process
Jun 20 19:35:39.957206 kernel: with arguments:
Jun 20 19:35:39.957215 kernel: /init
Jun 20 19:35:39.957223 kernel: with environment:
Jun 20 19:35:39.957235 kernel: HOME=/
Jun 20 19:35:39.957243 kernel: TERM=linux
Jun 20 19:35:39.957252 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 19:35:39.957262 systemd[1]: Successfully made /usr/ read-only.
Jun 20 19:35:39.957275 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:35:39.957285 systemd[1]: Detected virtualization kvm.
Jun 20 19:35:39.957295 systemd[1]: Detected architecture x86-64.
Jun 20 19:35:39.957318 systemd[1]: Running in initrd.
Jun 20 19:35:39.957330 systemd[1]: No hostname configured, using default hostname.
Jun 20 19:35:39.957340 systemd[1]: Hostname set to .
Jun 20 19:35:39.957350 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:35:39.957360 systemd[1]: Queued start job for default target initrd.target.
Jun 20 19:35:39.957369 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:35:39.957382 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:35:39.957393 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 19:35:39.957403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:35:39.957413 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jun 20 19:35:39.957424 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jun 20 19:35:39.957435 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jun 20 19:35:39.957445 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jun 20 19:35:39.957457 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:35:39.957558 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:35:39.957570 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:35:39.957580 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:35:39.957590 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:35:39.957600 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:35:39.957610 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:35:39.957620 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:35:39.957630 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jun 20 19:35:39.957644 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jun 20 19:35:39.957654 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:35:39.957664 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:35:39.957674 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:35:39.957683 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:35:39.957693 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jun 20 19:35:39.957703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:35:39.957713 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jun 20 19:35:39.957726 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jun 20 19:35:39.957736 systemd[1]: Starting systemd-fsck-usr.service...
Jun 20 19:35:39.957748 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:35:39.957758 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:35:39.957768 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:35:39.957781 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jun 20 19:35:39.957821 systemd-journald[214]: Collecting audit messages is disabled.
Jun 20 19:35:39.957849 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:35:39.957859 systemd[1]: Finished systemd-fsck-usr.service.
Jun 20 19:35:39.957871 systemd-journald[214]: Journal started
Jun 20 19:35:39.957897 systemd-journald[214]: Runtime Journal (/run/log/journal/6d4f581c1e564a83baec68a8abbccbdd) is 8M, max 78.5M, 70.5M free.
Jun 20 19:35:39.917526 systemd-modules-load[216]: Inserted module 'overlay'
Jun 20 19:35:40.004342 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jun 20 19:35:40.004375 kernel: Bridge firewalling registered
Jun 20 19:35:40.004398 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:35:39.962341 systemd-modules-load[216]: Inserted module 'br_netfilter'
Jun 20 19:35:40.004999 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:35:40.006082 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:35:40.009681 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jun 20 19:35:40.012525 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:35:40.027580 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:35:40.031231 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:35:40.038551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:35:40.049923 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jun 20 19:35:40.051130 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:35:40.052668 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:35:40.054566 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jun 20 19:35:40.056562 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:35:40.058519 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:35:40.070339 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:35:40.079145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:35:40.086427 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:35:40.121702 systemd-resolved[253]: Positive Trust Anchors:
Jun 20 19:35:40.121717 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:35:40.121759 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:35:40.128271 systemd-resolved[253]: Defaulting to hostname 'linux'.
Jun 20 19:35:40.129257 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:35:40.130061 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:35:40.168528 kernel: SCSI subsystem initialized
Jun 20 19:35:40.179506 kernel: Loading iSCSI transport class v2.0-870.
Jun 20 19:35:40.190507 kernel: iscsi: registered transport (tcp)
Jun 20 19:35:40.213854 kernel: iscsi: registered transport (qla4xxx)
Jun 20 19:35:40.213884 kernel: QLogic iSCSI HBA Driver
Jun 20 19:35:40.233069 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:35:40.257905 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:35:40.260092 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:35:40.305347 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:35:40.307349 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jun 20 19:35:40.360502 kernel: raid6: sse2x4 gen() 12752 MB/s
Jun 20 19:35:40.378508 kernel: raid6: sse2x2 gen() 14941 MB/s
Jun 20 19:35:40.396849 kernel: raid6: sse2x1 gen() 10163 MB/s
Jun 20 19:35:40.396877 kernel: raid6: using algorithm sse2x2 gen() 14941 MB/s
Jun 20 19:35:40.415868 kernel: raid6: .... xor() 9446 MB/s, rmw enabled
Jun 20 19:35:40.415904 kernel: raid6: using ssse3x2 recovery algorithm
Jun 20 19:35:40.438799 kernel: xor: measuring software checksum speed
Jun 20 19:35:40.438863 kernel: prefetch64-sse : 17197 MB/sec
Jun 20 19:35:40.440851 kernel: generic_sse : 16866 MB/sec
Jun 20 19:35:40.440926 kernel: xor: using function: prefetch64-sse (17197 MB/sec)
Jun 20 19:35:40.639533 kernel: Btrfs loaded, zoned=no, fsverity=no
Jun 20 19:35:40.648824 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:35:40.654754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:35:40.683036 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Jun 20 19:35:40.688143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:35:40.695530 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jun 20 19:35:40.719327 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Jun 20 19:35:40.757880 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:35:40.762412 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:35:40.837586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:35:40.842165 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jun 20 19:35:40.934544 kernel: libata version 3.00 loaded.
Jun 20 19:35:40.938158 kernel: ata_piix 0000:00:01.1: version 2.13
Jun 20 19:35:40.938329 kernel: scsi host0: ata_piix
Jun 20 19:35:40.939863 kernel: scsi host1: ata_piix
Jun 20 19:35:40.943875 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 lpm-pol 0
Jun 20 19:35:40.943900 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 lpm-pol 0
Jun 20 19:35:40.952523 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Jun 20 19:35:40.956812 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Jun 20 19:35:40.967495 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jun 20 19:35:40.967530 kernel: GPT:17805311 != 20971519
Jun 20 19:35:40.967542 kernel: GPT:Alternate GPT header not at the end of the disk.
Jun 20 19:35:40.968179 kernel: GPT:17805311 != 20971519
Jun 20 19:35:40.969097 kernel: GPT: Use GNU Parted to correct GPT errors.
Jun 20 19:35:40.970353 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:35:40.974397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:35:40.974582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:35:40.977685 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:35:40.980297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:35:40.981190 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:35:41.044552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:35:41.150384 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jun 20 19:35:41.197076 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jun 20 19:35:41.198639 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:35:41.224572 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jun 20 19:35:41.233824 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jun 20 19:35:41.236283 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jun 20 19:35:41.263738 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 20 19:35:41.265303 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:35:41.267909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:35:41.270625 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:35:41.276692 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jun 20 19:35:41.281703 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jun 20 19:35:41.318849 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:35:41.341238 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:35:41.341285 disk-uuid[569]: Primary Header is updated.
Jun 20 19:35:41.341285 disk-uuid[569]: Secondary Entries is updated.
Jun 20 19:35:41.341285 disk-uuid[569]: Secondary Header is updated.
Jun 20 19:35:42.356576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jun 20 19:35:42.356917 disk-uuid[577]: The operation has completed successfully.
Jun 20 19:35:42.439538 systemd[1]: disk-uuid.service: Deactivated successfully.
Jun 20 19:35:42.439651 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jun 20 19:35:42.486210 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jun 20 19:35:42.507080 sh[588]: Success
Jun 20 19:35:42.531956 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jun 20 19:35:42.532053 kernel: device-mapper: uevent: version 1.0.3
Jun 20 19:35:42.534795 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jun 20 19:35:42.548529 kernel: device-mapper: verity: sha256 using shash "sha256-ssse3"
Jun 20 19:35:42.625260 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jun 20 19:35:42.628596 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jun 20 19:35:42.641294 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jun 20 19:35:42.660519 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jun 20 19:35:42.666578 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (600)
Jun 20 19:35:42.672178 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966
Jun 20 19:35:42.672249 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:35:42.676191 kernel: BTRFS info (device dm-0): using free-space-tree
Jun 20 19:35:42.690951 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jun 20 19:35:42.694517 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:35:42.696764 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jun 20 19:35:42.698578 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jun 20 19:35:42.701870 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jun 20 19:35:42.753597 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (643)
Jun 20 19:35:42.767562 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:35:42.767629 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:35:42.767660 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:35:42.780546 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:35:42.780816 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jun 20 19:35:42.783573 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jun 20 19:35:42.814487 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:35:42.816902 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:35:42.855781 systemd-networkd[770]: lo: Link UP
Jun 20 19:35:42.855790 systemd-networkd[770]: lo: Gained carrier
Jun 20 19:35:42.856844 systemd-networkd[770]: Enumeration completed
Jun 20 19:35:42.857619 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:35:42.857781 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:35:42.857785 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:35:42.858204 systemd[1]: Reached target network.target - Network.
Jun 20 19:35:42.858207 systemd-networkd[770]: eth0: Link UP
Jun 20 19:35:42.858214 systemd-networkd[770]: eth0: Gained carrier
Jun 20 19:35:42.858225 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:35:42.873509 systemd-networkd[770]: eth0: DHCPv4 address 172.24.4.217/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jun 20 19:35:43.022891 ignition[725]: Ignition 2.21.0
Jun 20 19:35:43.022923 ignition[725]: Stage: fetch-offline
Jun 20 19:35:43.022993 ignition[725]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:35:43.025719 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:35:43.023014 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:35:43.026572 systemd-resolved[253]: Detected conflict on linux IN A 172.24.4.217
Jun 20 19:35:43.023204 ignition[725]: parsed url from cmdline: ""
Jun 20 19:35:43.026590 systemd-resolved[253]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Jun 20 19:35:43.023212 ignition[725]: no config URL provided
Jun 20 19:35:43.030570 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jun 20 19:35:43.023224 ignition[725]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:35:43.023240 ignition[725]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:35:43.023250 ignition[725]: failed to fetch config: resource requires networking
Jun 20 19:35:43.024064 ignition[725]: Ignition finished successfully
Jun 20 19:35:43.055850 ignition[781]: Ignition 2.21.0
Jun 20 19:35:43.055860 ignition[781]: Stage: fetch
Jun 20 19:35:43.055997 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:35:43.056007 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:35:43.056094 ignition[781]: parsed url from cmdline: ""
Jun 20 19:35:43.056097 ignition[781]: no config URL provided
Jun 20 19:35:43.056102 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jun 20 19:35:43.056109 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jun 20 19:35:43.056213 ignition[781]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jun 20 19:35:43.056370 ignition[781]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jun 20 19:35:43.056425 ignition[781]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jun 20 19:35:43.230542 ignition[781]: GET result: OK
Jun 20 19:35:43.230946 ignition[781]: parsing config with SHA512: a3f78dc2017a33b798a88f12b282f5d992b068d3bc6c18119c442fab76fe70534730e4c0f17e616896d158a1a2096cc8be7b1a7866ab032adb9c9526f6053163
Jun 20 19:35:43.251693 unknown[781]: fetched base config from "system"
Jun 20 19:35:43.251722 unknown[781]: fetched base config from "system"
Jun 20 19:35:43.252590 ignition[781]: fetch: fetch complete
Jun 20 19:35:43.251735 unknown[781]: fetched user config from "openstack"
Jun 20 19:35:43.252603 ignition[781]: fetch: fetch passed
Jun 20 19:35:43.257925 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jun 20 19:35:43.252688 ignition[781]: Ignition finished successfully
Jun 20 19:35:43.261744 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jun 20 19:35:43.325568 ignition[787]: Ignition 2.21.0
Jun 20 19:35:43.325598 ignition[787]: Stage: kargs
Jun 20 19:35:43.325932 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:35:43.325957 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:35:43.327920 ignition[787]: kargs: kargs passed
Jun 20 19:35:43.331850 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jun 20 19:35:43.328012 ignition[787]: Ignition finished successfully
Jun 20 19:35:43.337857 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jun 20 19:35:43.381214 ignition[793]: Ignition 2.21.0
Jun 20 19:35:43.381247 ignition[793]: Stage: disks
Jun 20 19:35:43.381673 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jun 20 19:35:43.381698 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:35:43.383929 ignition[793]: disks: disks passed
Jun 20 19:35:43.386770 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jun 20 19:35:43.384020 ignition[793]: Ignition finished successfully
Jun 20 19:35:43.390752 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jun 20 19:35:43.392663 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jun 20 19:35:43.395249 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:35:43.398135 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:35:43.401137 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:35:43.406701 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jun 20 19:35:43.465121 systemd-fsck[801]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jun 20 19:35:43.479553 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jun 20 19:35:43.484043 systemd[1]: Mounting sysroot.mount - /sysroot...
Jun 20 19:35:43.681573 kernel: EXT4-fs (vda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none.
Jun 20 19:35:43.683536 systemd[1]: Mounted sysroot.mount - /sysroot.
Jun 20 19:35:43.685421 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:35:43.689701 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:35:43.706622 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jun 20 19:35:43.711152 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jun 20 19:35:43.725536 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (809)
Jun 20 19:35:43.731516 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:35:43.731595 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jun 20 19:35:43.735586 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jun 20 19:35:43.735663 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:35:43.745232 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jun 20 19:35:43.752121 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:35:43.752167 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:35:43.752594 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jun 20 19:35:43.767561 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:35:43.881360 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jun 20 19:35:43.894373 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jun 20 19:35:43.904277 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jun 20 19:35:43.909507 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:35:43.910350 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jun 20 19:35:44.004564 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jun 20 19:35:44.006067 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jun 20 19:35:44.007142 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jun 20 19:35:44.024679 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jun 20 19:35:44.028360 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:35:44.047517 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jun 20 19:35:44.054751 ignition[927]: INFO : Ignition 2.21.0
Jun 20 19:35:44.055531 ignition[927]: INFO : Stage: mount
Jun 20 19:35:44.055531 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:35:44.056625 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:35:44.057305 ignition[927]: INFO : mount: mount passed
Jun 20 19:35:44.057305 ignition[927]: INFO : Ignition finished successfully
Jun 20 19:35:44.058133 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jun 20 19:35:44.928092 systemd-networkd[770]: eth0: Gained IPv6LL
Jun 20 19:35:44.945524 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:35:46.958543 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:35:50.969543 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:35:50.976714 coreos-metadata[811]: Jun 20 19:35:50.976 WARN failed to locate config-drive, using the metadata service API instead
Jun 20 19:35:51.016804 coreos-metadata[811]: Jun 20 19:35:51.016 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jun 20 19:35:51.032046 coreos-metadata[811]: Jun 20 19:35:51.031 INFO Fetch successful
Jun 20 19:35:51.034030 coreos-metadata[811]: Jun 20 19:35:51.032 INFO wrote hostname ci-4344-1-0-9-7ac33d8391.novalocal to /sysroot/etc/hostname
Jun 20 19:35:51.040968 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jun 20 19:35:51.041253 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jun 20 19:35:51.047878 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jun 20 19:35:51.095105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jun 20 19:35:51.129656 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (943)
Jun 20 19:35:51.141668 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32
Jun 20 19:35:51.141752 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jun 20 19:35:51.141831 kernel: BTRFS info (device vda6): using free-space-tree
Jun 20 19:35:51.156536 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jun 20 19:35:51.213347 ignition[961]: INFO : Ignition 2.21.0
Jun 20 19:35:51.213347 ignition[961]: INFO : Stage: files
Jun 20 19:35:51.218191 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:35:51.218191 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:35:51.222781 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Jun 20 19:35:51.225565 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jun 20 19:35:51.228133 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jun 20 19:35:51.232391 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jun 20 19:35:51.234988 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jun 20 19:35:51.234988 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jun 20 19:35:51.233680 unknown[961]: wrote ssh authorized keys file for user: core
Jun 20 19:35:51.241552 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:35:51.241552 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jun 20 19:35:51.304625 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jun 20 19:35:51.630005 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jun 20 19:35:51.630005 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:35:51.634591 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jun 20 19:35:52.306649 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jun 20 19:35:52.734922 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jun 20 19:35:52.734922 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jun 20 19:35:52.739187 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jun 20 19:35:52.739187 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:35:52.739187 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jun 20 19:35:52.739187 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:35:52.739187 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jun 20 19:35:52.739187 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:35:52.739187 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jun 20 19:35:52.752702 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:35:52.752702 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jun 20 19:35:52.752702 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:35:52.752702 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:35:52.752702 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:35:52.752702 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jun 20 19:35:53.314697 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jun 20 19:35:55.026809 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jun 20 19:35:55.026809 ignition[961]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jun 20 19:35:55.035294 ignition[961]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:35:55.044298 ignition[961]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jun 20 19:35:55.044298 ignition[961]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jun 20 19:35:55.044298 ignition[961]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jun 20 19:35:55.052543 ignition[961]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jun 20 19:35:55.052543 ignition[961]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:35:55.052543 ignition[961]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jun 20 19:35:55.052543 ignition[961]: INFO : files: files passed
Jun 20 19:35:55.052543 ignition[961]: INFO : Ignition finished successfully
Jun 20 19:35:55.047186 systemd[1]: Finished ignition-files.service - Ignition (files).
Jun 20 19:35:55.053588 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jun 20 19:35:55.065569 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jun 20 19:35:55.077701 systemd[1]: ignition-quench.service: Deactivated successfully.
Jun 20 19:35:55.077856 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jun 20 19:35:55.082292 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:35:55.082292 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:35:55.087087 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jun 20 19:35:55.087081 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:35:55.088069 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jun 20 19:35:55.090665 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jun 20 19:35:55.134272 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jun 20 19:35:55.134562 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jun 20 19:35:55.137330 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jun 20 19:35:55.139433 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jun 20 19:35:55.142299 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jun 20 19:35:55.144681 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jun 20 19:35:55.182892 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:35:55.187622 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jun 20 19:35:55.220788 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:35:55.224037 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:35:55.225957 systemd[1]: Stopped target timers.target - Timer Units.
Jun 20 19:35:55.228709 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jun 20 19:35:55.229003 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jun 20 19:35:55.232039 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jun 20 19:35:55.233998 systemd[1]: Stopped target basic.target - Basic System.
Jun 20 19:35:55.236799 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jun 20 19:35:55.239252 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jun 20 19:35:55.241644 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jun 20 19:35:55.244576 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jun 20 19:35:55.247424 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jun 20 19:35:55.250208 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jun 20 19:35:55.253217 systemd[1]: Stopped target sysinit.target - System Initialization.
Jun 20 19:35:55.255915 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jun 20 19:35:55.258797 systemd[1]: Stopped target swap.target - Swaps.
Jun 20 19:35:55.261566 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jun 20 19:35:55.262053 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jun 20 19:35:55.264775 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:35:55.266787 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:35:55.269281 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jun 20 19:35:55.270596 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:35:55.273811 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jun 20 19:35:55.274353 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jun 20 19:35:55.277365 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jun 20 19:35:55.277743 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jun 20 19:35:55.281240 systemd[1]: ignition-files.service: Deactivated successfully.
Jun 20 19:35:55.281676 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jun 20 19:35:55.286144 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jun 20 19:35:55.291932 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jun 20 19:35:55.296066 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jun 20 19:35:55.296382 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:35:55.299221 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jun 20 19:35:55.299377 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jun 20 19:35:55.306679 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jun 20 19:35:55.306764 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jun 20 19:35:55.327918 ignition[1014]: INFO : Ignition 2.21.0
Jun 20 19:35:55.329288 ignition[1014]: INFO : Stage: umount
Jun 20 19:35:55.329288 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jun 20 19:35:55.329288 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jun 20 19:35:55.332439 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jun 20 19:35:55.335848 ignition[1014]: INFO : umount: umount passed
Jun 20 19:35:55.335848 ignition[1014]: INFO : Ignition finished successfully
Jun 20 19:35:55.338329 systemd[1]: ignition-mount.service: Deactivated successfully.
Jun 20 19:35:55.338436 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jun 20 19:35:55.339819 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jun 20 19:35:55.339897 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jun 20 19:35:55.341081 systemd[1]: ignition-disks.service: Deactivated successfully.
Jun 20 19:35:55.341147 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jun 20 19:35:55.342244 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jun 20 19:35:55.342288 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jun 20 19:35:55.343284 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jun 20 19:35:55.343324 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jun 20 19:35:55.344384 systemd[1]: Stopped target network.target - Network.
Jun 20 19:35:55.345390 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jun 20 19:35:55.345435 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jun 20 19:35:55.346544 systemd[1]: Stopped target paths.target - Path Units.
Jun 20 19:35:55.347504 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jun 20 19:35:55.347739 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:35:55.348514 systemd[1]: Stopped target slices.target - Slice Units.
Jun 20 19:35:55.349452 systemd[1]: Stopped target sockets.target - Socket Units.
Jun 20 19:35:55.350699 systemd[1]: iscsid.socket: Deactivated successfully.
Jun 20 19:35:55.350734 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jun 20 19:35:55.351930 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jun 20 19:35:55.351962 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jun 20 19:35:55.352953 systemd[1]: ignition-setup.service: Deactivated successfully.
Jun 20 19:35:55.352995 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jun 20 19:35:55.354120 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jun 20 19:35:55.354161 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jun 20 19:35:55.359450 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jun 20 19:35:55.359517 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jun 20 19:35:55.360771 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jun 20 19:35:55.361895 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jun 20 19:35:55.367682 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jun 20 19:35:55.367772 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jun 20 19:35:55.371591 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jun 20 19:35:55.371835 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jun 20 19:35:55.371971 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jun 20 19:35:55.374121 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jun 20 19:35:55.374707 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jun 20 19:35:55.375830 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jun 20 19:35:55.375869 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:35:55.377630 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jun 20 19:35:55.379562 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jun 20 19:35:55.379610 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jun 20 19:35:55.380647 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jun 20 19:35:55.380691 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:35:55.383227 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jun 20 19:35:55.383271 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:35:55.384009 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jun 20 19:35:55.384048 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:35:55.385618 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:35:55.387049 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jun 20 19:35:55.387102 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:35:55.400589 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jun 20 19:35:55.400732 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:35:55.402237 systemd[1]: network-cleanup.service: Deactivated successfully.
Jun 20 19:35:55.402333 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jun 20 19:35:55.403631 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jun 20 19:35:55.403677 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:35:55.404412 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jun 20 19:35:55.404443 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:35:55.405515 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jun 20 19:35:55.405559 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jun 20 19:35:55.407330 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jun 20 19:35:55.407369 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jun 20 19:35:55.408454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jun 20 19:35:55.408514 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jun 20 19:35:55.410586 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jun 20 19:35:55.412764 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jun 20 19:35:55.412811 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:35:55.414209 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jun 20 19:35:55.414251 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:35:55.415883 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jun 20 19:35:55.415927 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:35:55.417137 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jun 20 19:35:55.417176 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:35:55.417957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jun 20 19:35:55.417997 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:35:55.420731 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jun 20 19:35:55.420778 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jun 20 19:35:55.420815 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jun 20 19:35:55.420851 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:35:55.425766 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jun 20 19:35:55.425868 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jun 20 19:35:55.427346 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jun 20 19:35:55.429438 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jun 20 19:35:55.446648 systemd[1]: Switching root.
Jun 20 19:35:55.487611 systemd-journald[214]: Journal stopped
Jun 20 19:35:57.363523 systemd-journald[214]: Received SIGTERM from PID 1 (systemd).
Jun 20 19:35:57.363572 kernel: SELinux: policy capability network_peer_controls=1
Jun 20 19:35:57.363589 kernel: SELinux: policy capability open_perms=1
Jun 20 19:35:57.363601 kernel: SELinux: policy capability extended_socket_class=1
Jun 20 19:35:57.363613 kernel: SELinux: policy capability always_check_network=0
Jun 20 19:35:57.363624 kernel: SELinux: policy capability cgroup_seclabel=1
Jun 20 19:35:57.363635 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jun 20 19:35:57.363652 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jun 20 19:35:57.363664 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jun 20 19:35:57.363675 kernel: SELinux: policy capability userspace_initial_context=0
Jun 20 19:35:57.363686 kernel: audit: type=1403 audit(1750448156.216:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jun 20 19:35:57.363704 systemd[1]: Successfully loaded SELinux policy in 87.954ms.
Jun 20 19:35:57.363733 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.794ms.
Jun 20 19:35:57.363747 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:35:57.363765 systemd[1]: Detected virtualization kvm.
Jun 20 19:35:57.363779 systemd[1]: Detected architecture x86-64.
Jun 20 19:35:57.363791 systemd[1]: Detected first boot.
Jun 20 19:35:57.363803 systemd[1]: Hostname set to .
Jun 20 19:35:57.363815 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:35:57.363827 zram_generator::config[1057]: No configuration found.
Jun 20 19:35:57.363840 kernel: Guest personality initialized and is inactive
Jun 20 19:35:57.363851 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jun 20 19:35:57.363862 kernel: Initialized host personality
Jun 20 19:35:57.363875 kernel: NET: Registered PF_VSOCK protocol family
Jun 20 19:35:57.363886 systemd[1]: Populated /etc with preset unit settings.
Jun 20 19:35:57.363899 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jun 20 19:35:57.363912 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jun 20 19:35:57.363924 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jun 20 19:35:57.363936 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:35:57.363949 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jun 20 19:35:57.363961 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jun 20 19:35:57.363973 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jun 20 19:35:57.363989 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jun 20 19:35:57.364003 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jun 20 19:35:57.364016 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jun 20 19:35:57.364028 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jun 20 19:35:57.364040 systemd[1]: Created slice user.slice - User and Session Slice.
Jun 20 19:35:57.364052 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:35:57.364064 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:35:57.364076 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jun 20 19:35:57.364090 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jun 20 19:35:57.364102 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jun 20 19:35:57.364115 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jun 20 19:35:57.364127 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jun 20 19:35:57.364139 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jun 20 19:35:57.364151 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jun 20 19:35:57.364163 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jun 20 19:35:57.364176 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jun 20 19:35:57.364189 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jun 20 19:35:57.364201 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jun 20 19:35:57.364213 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jun 20 19:35:57.364226 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jun 20 19:35:57.364238 systemd[1]: Reached target slices.target - Slice Units.
Jun 20 19:35:57.364250 systemd[1]: Reached target swap.target - Swaps.
Jun 20 19:35:57.364262 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jun 20 19:35:57.364274 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jun 20 19:35:57.364288 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jun 20 19:35:57.364301 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jun 20 19:35:57.364312 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jun 20 19:35:57.364324 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jun 20 19:35:57.364336 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jun 20 19:35:57.364349 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jun 20 19:35:57.364361 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jun 20 19:35:57.364373 systemd[1]: Mounting media.mount - External Media Directory...
Jun 20 19:35:57.364385 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:35:57.364399 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jun 20 19:35:57.364411 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jun 20 19:35:57.364423 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jun 20 19:35:57.364436 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jun 20 19:35:57.364448 systemd[1]: Reached target machines.target - Containers.
Jun 20 19:35:57.364460 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jun 20 19:35:57.368249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:35:57.368268 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jun 20 19:35:57.368285 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jun 20 19:35:57.368298 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:35:57.368310 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:35:57.368322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:35:57.368334 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jun 20 19:35:57.368346 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:35:57.368359 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jun 20 19:35:57.368371 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jun 20 19:35:57.368383 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jun 20 19:35:57.368397 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jun 20 19:35:57.368409 systemd[1]: Stopped systemd-fsck-usr.service.
Jun 20 19:35:57.368422 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:35:57.368434 systemd[1]: Starting systemd-journald.service - Journal Service...
Jun 20 19:35:57.368448 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jun 20 19:35:57.368462 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jun 20 19:35:57.369638 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jun 20 19:35:57.369656 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jun 20 19:35:57.369668 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jun 20 19:35:57.369680 kernel: fuse: init (API version 7.41)
Jun 20 19:35:57.369696 systemd[1]: verity-setup.service: Deactivated successfully.
Jun 20 19:35:57.369709 systemd[1]: Stopped verity-setup.service.
Jun 20 19:35:57.369722 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:35:57.369734 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jun 20 19:35:57.369746 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jun 20 19:35:57.369758 systemd[1]: Mounted media.mount - External Media Directory.
Jun 20 19:35:57.369790 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jun 20 19:35:57.369803 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jun 20 19:35:57.369817 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jun 20 19:35:57.369829 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jun 20 19:35:57.369841 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jun 20 19:35:57.369861 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jun 20 19:35:57.369874 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:35:57.369888 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:35:57.369901 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:35:57.369913 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:35:57.369925 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jun 20 19:35:57.369939 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jun 20 19:35:57.369952 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jun 20 19:35:57.369964 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jun 20 19:35:57.369976 kernel: ACPI: bus type drm_connector registered
Jun 20 19:35:57.369988 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jun 20 19:35:57.370000 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:35:57.370012 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:35:57.370024 kernel: loop: module loaded
Jun 20 19:35:57.370036 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jun 20 19:35:57.370050 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:35:57.370064 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:35:57.370076 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jun 20 19:35:57.370114 systemd-journald[1144]: Collecting audit messages is disabled.
Jun 20 19:35:57.370142 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jun 20 19:35:57.370155 systemd-journald[1144]: Journal started
Jun 20 19:35:57.370180 systemd-journald[1144]: Runtime Journal (/run/log/journal/6d4f581c1e564a83baec68a8abbccbdd) is 8M, max 78.5M, 70.5M free.
Jun 20 19:35:56.940944 systemd[1]: Queued start job for default target multi-user.target.
Jun 20 19:35:56.966641 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jun 20 19:35:56.967097 systemd[1]: systemd-journald.service: Deactivated successfully.
Jun 20 19:35:57.382521 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jun 20 19:35:57.385824 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jun 20 19:35:57.385858 systemd[1]: Reached target local-fs.target - Local File Systems.
Jun 20 19:35:57.390492 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jun 20 19:35:57.396498 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jun 20 19:35:57.400584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:35:57.409619 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jun 20 19:35:57.409672 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:35:57.416482 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jun 20 19:35:57.422691 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:35:57.428556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jun 20 19:35:57.433508 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jun 20 19:35:57.442493 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jun 20 19:35:57.448287 systemd[1]: Started systemd-journald.service - Journal Service.
Jun 20 19:35:57.448337 kernel: loop0: detected capacity change from 0 to 8
Jun 20 19:35:57.451366 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jun 20 19:35:57.452804 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jun 20 19:35:57.455705 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jun 20 19:35:57.456307 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jun 20 19:35:57.471039 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jun 20 19:35:57.458493 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jun 20 19:35:57.479520 kernel: loop1: detected capacity change from 0 to 224512
Jun 20 19:35:57.481178 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jun 20 19:35:57.487737 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jun 20 19:35:57.492429 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jun 20 19:35:57.511734 systemd-journald[1144]: Time spent on flushing to /var/log/journal/6d4f581c1e564a83baec68a8abbccbdd is 33.482ms for 987 entries.
Jun 20 19:35:57.511734 systemd-journald[1144]: System Journal (/var/log/journal/6d4f581c1e564a83baec68a8abbccbdd) is 8M, max 584.8M, 576.8M free.
Jun 20 19:35:57.570068 systemd-journald[1144]: Received client request to flush runtime journal.
Jun 20 19:35:57.521345 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Jun 20 19:35:57.521358 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Jun 20 19:35:57.530343 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jun 20 19:35:57.531451 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jun 20 19:35:57.535599 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jun 20 19:35:57.572988 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jun 20 19:35:57.574737 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jun 20 19:35:57.608506 kernel: loop2: detected capacity change from 0 to 146240
Jun 20 19:35:57.622202 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jun 20 19:35:57.626635 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jun 20 19:35:57.657253 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jun 20 19:35:57.657590 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Jun 20 19:35:57.666729 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jun 20 19:35:57.675499 kernel: loop3: detected capacity change from 0 to 113872
Jun 20 19:35:57.720567 kernel: loop4: detected capacity change from 0 to 8
Jun 20 19:35:57.723962 kernel: loop5: detected capacity change from 0 to 224512
Jun 20 19:35:57.790504 kernel: loop6: detected capacity change from 0 to 146240
Jun 20 19:35:57.844926 kernel: loop7: detected capacity change from 0 to 113872
Jun 20 19:35:57.876285 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jun 20 19:35:57.876698 (sd-merge)[1223]: Merged extensions into '/usr'.
Jun 20 19:35:57.887173 systemd[1]: Reload requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Jun 20 19:35:57.887187 systemd[1]: Reloading...
Jun 20 19:35:57.966240 zram_generator::config[1248]: No configuration found.
Jun 20 19:35:58.104910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:35:58.230957 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jun 20 19:35:58.231511 systemd[1]: Reloading finished in 343 ms.
Jun 20 19:35:58.247620 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jun 20 19:35:58.261855 systemd[1]: Starting ensure-sysext.service...
Jun 20 19:35:58.265301 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jun 20 19:35:58.307395 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jun 20 19:35:58.307442 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jun 20 19:35:58.308042 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jun 20 19:35:58.308525 systemd[1]: Reload requested from client PID 1304 ('systemctl') (unit ensure-sysext.service)...
Jun 20 19:35:58.308540 systemd[1]: Reloading...
Jun 20 19:35:58.309649 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jun 20 19:35:58.310410 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jun 20 19:35:58.313742 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Jun 20 19:35:58.313818 systemd-tmpfiles[1305]: ACLs are not supported, ignoring.
Jun 20 19:35:58.322497 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:35:58.322507 systemd-tmpfiles[1305]: Skipping /boot
Jun 20 19:35:58.337194 systemd-tmpfiles[1305]: Detected autofs mount point /boot during canonicalization of boot.
Jun 20 19:35:58.337206 systemd-tmpfiles[1305]: Skipping /boot
Jun 20 19:35:58.396518 zram_generator::config[1329]: No configuration found.
Jun 20 19:35:58.424843 ldconfig[1172]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jun 20 19:35:58.520331 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jun 20 19:35:58.621751 systemd[1]: Reloading finished in 312 ms.
Jun 20 19:35:58.631763 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jun 20 19:35:58.632770 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jun 20 19:35:58.633636 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jun 20 19:35:58.645596 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:35:58.651830 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jun 20 19:35:58.655677 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jun 20 19:35:58.660441 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jun 20 19:35:58.664530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jun 20 19:35:58.667727 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jun 20 19:35:58.678338 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:35:58.679731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:35:58.681112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jun 20 19:35:58.688247 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jun 20 19:35:58.703366 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jun 20 19:35:58.704201 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:35:58.704408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:35:58.704730 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:35:58.711273 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jun 20 19:35:58.717435 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:35:58.717861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:35:58.718124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:35:58.718317 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:35:58.718543 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:35:58.723586 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jun 20 19:35:58.729642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jun 20 19:35:58.730273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jun 20 19:35:58.732458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:35:58.733800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jun 20 19:35:58.739317 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jun 20 19:35:58.740188 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jun 20 19:35:58.740443 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jun 20 19:35:58.740689 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jun 20 19:35:58.744831 systemd[1]: Finished ensure-sysext.service.
Jun 20 19:35:58.750129 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jun 20 19:35:58.752771 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jun 20 19:35:58.755291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jun 20 19:35:58.756294 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jun 20 19:35:58.757410 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jun 20 19:35:58.764911 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jun 20 19:35:58.768145 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jun 20 19:35:58.768324 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jun 20 19:35:58.770376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jun 20 19:35:58.770604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jun 20 19:35:58.771760 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jun 20 19:35:58.782868 systemd-udevd[1396]: Using default interface naming scheme 'v255'.
Jun 20 19:35:58.795608 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jun 20 19:35:58.807084 augenrules[1433]: No rules
Jun 20 19:35:58.808117 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:35:58.808348 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:35:58.820572 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jun 20 19:35:58.826955 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jun 20 19:35:58.831755 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jun 20 19:35:58.853823 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jun 20 19:35:58.855125 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jun 20 19:35:58.907141 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jun 20 19:35:58.909580 systemd[1]: Reached target time-set.target - System Time Set.
Jun 20 19:35:58.990412 systemd-networkd[1445]: lo: Link UP
Jun 20 19:35:58.992746 systemd-networkd[1445]: lo: Gained carrier
Jun 20 19:35:58.993328 systemd-networkd[1445]: Enumeration completed
Jun 20 19:35:58.993415 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jun 20 19:35:58.996657 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jun 20 19:35:58.997550 systemd-resolved[1395]: Positive Trust Anchors:
Jun 20 19:35:58.997757 systemd-resolved[1395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jun 20 19:35:58.997872 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jun 20 19:35:59.001642 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jun 20 19:35:59.004967 systemd-resolved[1395]: Using system hostname 'ci-4344-1-0-9-7ac33d8391.novalocal'.
Jun 20 19:35:59.006446 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jun 20 19:35:59.008577 systemd[1]: Reached target network.target - Network.
Jun 20 19:35:59.009024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jun 20 19:35:59.009564 systemd[1]: Reached target sysinit.target - System Initialization.
Jun 20 19:35:59.010139 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jun 20 19:35:59.011410 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jun 20 19:35:59.011965 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jun 20 19:35:59.012597 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jun 20 19:35:59.013584 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jun 20 19:35:59.014541 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jun 20 19:35:59.015619 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jun 20 19:35:59.015654 systemd[1]: Reached target paths.target - Path Units.
Jun 20 19:35:59.022589 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 19:35:59.024256 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jun 20 19:35:59.027697 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jun 20 19:35:59.033291 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jun 20 19:35:59.034296 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jun 20 19:35:59.035277 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jun 20 19:35:59.045369 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jun 20 19:35:59.047751 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jun 20 19:35:59.049016 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jun 20 19:35:59.050849 systemd[1]: Reached target sockets.target - Socket Units.
Jun 20 19:35:59.051700 systemd[1]: Reached target basic.target - Basic System.
Jun 20 19:35:59.053156 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:35:59.053189 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jun 20 19:35:59.058635 systemd[1]: Starting containerd.service - containerd container runtime...
Jun 20 19:35:59.062664 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jun 20 19:35:59.067632 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jun 20 19:35:59.074385 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jun 20 19:35:59.092487 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:35:59.091427 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jun 20 19:35:59.095142 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jun 20 19:35:59.096533 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jun 20 19:35:59.099430 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jun 20 19:35:59.105680 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jun 20 19:35:59.114064 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jun 20 19:35:59.118315 jq[1488]: false
Jun 20 19:35:59.120374 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jun 20 19:35:59.123660 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jun 20 19:35:59.135002 systemd[1]: Starting systemd-logind.service - User Login Management...
Jun 20 19:35:59.138314 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jun 20 19:35:59.138866 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jun 20 19:35:59.143937 systemd[1]: Starting update-engine.service - Update Engine...
Jun 20 19:35:59.150280 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jun 20 19:35:59.151910 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jun 20 19:35:59.154780 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jun 20 19:35:59.156786 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jun 20 19:35:59.156950 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jun 20 19:35:59.165398 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing passwd entry cache
Jun 20 19:35:59.165400 oslogin_cache_refresh[1492]: Refreshing passwd entry cache
Jun 20 19:35:59.170606 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jun 20 19:35:59.174500 update_engine[1503]: I20250620 19:35:59.173197 1503 main.cc:92] Flatcar Update Engine starting
Jun 20 19:35:59.187646 extend-filesystems[1490]: Found /dev/vda6
Jun 20 19:35:59.193549 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting users, quitting
Jun 20 19:35:59.193549 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:35:59.193549 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Refreshing group entry cache
Jun 20 19:35:59.193549 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Failure getting groups, quitting
Jun 20 19:35:59.193549 google_oslogin_nss_cache[1492]: oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:35:59.192619 oslogin_cache_refresh[1492]: Failure getting users, quitting
Jun 20 19:35:59.192635 oslogin_cache_refresh[1492]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jun 20 19:35:59.192677 oslogin_cache_refresh[1492]: Refreshing group entry cache
Jun 20 19:35:59.193154 oslogin_cache_refresh[1492]: Failure getting groups, quitting
Jun 20 19:35:59.193160 oslogin_cache_refresh[1492]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jun 20 19:35:59.196485 extend-filesystems[1490]: Found /dev/vda9
Jun 20 19:35:59.197031 jq[1504]: true
Jun 20 19:35:59.204839 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Jun 20 19:35:59.206525 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Jun 20 19:35:59.207345 systemd[1]: motdgen.service: Deactivated successfully.
Jun 20 19:35:59.207549 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jun 20 19:35:59.208662 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jun 20 19:35:59.208822 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jun 20 19:35:59.212865 extend-filesystems[1490]: Checking size of /dev/vda9
Jun 20 19:35:59.232160 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jun 20 19:35:59.251828 jq[1524]: true
Jun 20 19:35:59.261455 tar[1506]: linux-amd64/LICENSE
Jun 20 19:35:59.261455 tar[1506]: linux-amd64/helm
Jun 20 19:35:59.261551 dbus-daemon[1485]: [system] SELinux support is enabled
Jun 20 19:35:59.261692 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jun 20 19:35:59.264872 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jun 20 19:35:59.264905 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jun 20 19:35:59.267564 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jun 20 19:35:59.267590 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jun 20 19:35:59.274347 extend-filesystems[1490]: Resized partition /dev/vda9
Jun 20 19:35:59.277591 extend-filesystems[1538]: resize2fs 1.47.2 (1-Jan-2025)
Jun 20 19:35:59.291462 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks
Jun 20 19:35:59.291528 update_engine[1503]: I20250620 19:35:59.290635 1503 update_check_scheduler.cc:74] Next update check in 2m34s
Jun 20 19:35:59.287384 systemd[1]: Started update-engine.service - Update Engine.
Jun 20 19:35:59.298586 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jun 20 19:35:59.312912 kernel: EXT4-fs (vda9): resized filesystem to 2014203
Jun 20 19:35:59.352853 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:35:59.352955 systemd-networkd[1445]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jun 20 19:35:59.354305 systemd-networkd[1445]: eth0: Link UP
Jun 20 19:35:59.354934 systemd-networkd[1445]: eth0: Gained carrier
Jun 20 19:35:59.354949 systemd-networkd[1445]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jun 20 19:35:59.356916 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jun 20 19:35:59.363654 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jun 20 19:35:59.364133 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jun 20 19:35:59.364133 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1
Jun 20 19:35:59.364133 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long.
Jun 20 19:35:59.379825 extend-filesystems[1490]: Resized filesystem in /dev/vda9
Jun 20 19:35:59.366769 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jun 20 19:35:59.366971 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jun 20 19:35:59.370522 systemd-networkd[1445]: eth0: DHCPv4 address 172.24.4.217/24, gateway 172.24.4.1 acquired from 172.24.4.1
Jun 20 19:35:59.377454 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection.
Jun 20 19:35:59.388497 bash[1552]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:35:59.390933 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jun 20 19:35:59.403259 systemd[1]: Starting sshkeys.service...
Jun 20 19:35:59.451733 systemd-logind[1498]: New seat seat0.
Jun 20 19:35:59.455169 systemd[1]: Started systemd-logind.service - User Login Management.
Jun 20 19:35:59.458441 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jun 20 19:35:59.467774 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jun 20 19:35:59.470558 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jun 20 19:35:59.497298 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:35:59.563400 locksmithd[1543]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jun 20 19:35:59.582494 kernel: mousedev: PS/2 mouse device common for all mice
Jun 20 19:35:59.589497 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jun 20 19:35:59.604499 kernel: ACPI: button: Power Button [PWRF]
Jun 20 19:35:59.669674 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 19:35:59.744058 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jun 20 19:35:59.768507 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jun 20 19:35:59.774857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:35:59.783614 containerd[1530]: time="2025-06-20T19:35:59Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:35:59.789318 containerd[1530]: time="2025-06-20T19:35:59.789266805Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:35:59.804642 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jun 20 19:35:59.804698 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jun 20 19:35:59.815487 kernel: Console: switching to colour dummy device 80x25 Jun 20 19:35:59.822682 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 20 19:35:59.822732 kernel: [drm] features: -context_init Jun 20 19:35:59.850850 systemd-logind[1498]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:35:59.858002 systemd-logind[1498]: Watching system buttons on /dev/input/event2 (Power Button) Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886390370Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.768µs" Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886428702Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886449371Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886619931Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886637544Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 
19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886663803Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886720539Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886734165Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886934551Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886950911Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886961922Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jun 20 19:35:59.888955 containerd[1530]: time="2025-06-20T19:35:59.886971450Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jun 20 19:35:59.889257 containerd[1530]: time="2025-06-20T19:35:59.887041481Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jun 20 19:35:59.889257 containerd[1530]: time="2025-06-20T19:35:59.887226899Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:35:59.889257 containerd[1530]: time="2025-06-20T19:35:59.887255252Z" 
level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jun 20 19:35:59.889257 containerd[1530]: time="2025-06-20T19:35:59.887267385Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jun 20 19:35:59.889257 containerd[1530]: time="2025-06-20T19:35:59.887294065Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jun 20 19:35:59.894741 kernel: [drm] number of scanouts: 1 Jun 20 19:35:59.894783 kernel: [drm] number of cap sets: 0 Jun 20 19:35:59.894804 containerd[1530]: time="2025-06-20T19:35:59.889604468Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jun 20 19:35:59.894804 containerd[1530]: time="2025-06-20T19:35:59.889679018Z" level=info msg="metadata content store policy set" policy=shared Jun 20 19:35:59.896496 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Jun 20 19:35:59.908815 containerd[1530]: time="2025-06-20T19:35:59.908773299Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jun 20 19:35:59.908875 containerd[1530]: time="2025-06-20T19:35:59.908840776Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jun 20 19:35:59.908875 containerd[1530]: time="2025-06-20T19:35:59.908859261Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jun 20 19:35:59.908919 containerd[1530]: time="2025-06-20T19:35:59.908873848Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jun 20 19:35:59.908919 containerd[1530]: time="2025-06-20T19:35:59.908888666Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 
Jun 20 19:35:59.908919 containerd[1530]: time="2025-06-20T19:35:59.908901279Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jun 20 19:35:59.908919 containerd[1530]: time="2025-06-20T19:35:59.908916007Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jun 20 19:35:59.909013 containerd[1530]: time="2025-06-20T19:35:59.908932017Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jun 20 19:35:59.909013 containerd[1530]: time="2025-06-20T19:35:59.908945823Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jun 20 19:35:59.909013 containerd[1530]: time="2025-06-20T19:35:59.908957104Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jun 20 19:35:59.909013 containerd[1530]: time="2025-06-20T19:35:59.908968265Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jun 20 19:35:59.909013 containerd[1530]: time="2025-06-20T19:35:59.908982452Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jun 20 19:35:59.909116 containerd[1530]: time="2025-06-20T19:35:59.909100954Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jun 20 19:35:59.909140 containerd[1530]: time="2025-06-20T19:35:59.909126101Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909162599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909181956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jun 20 19:35:59.910007 
containerd[1530]: time="2025-06-20T19:35:59.909194940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909206332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909221560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909233833Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909247298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909261335Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909275752Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909338800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909355982Z" level=info msg="Start snapshots syncer" Jun 20 19:35:59.910007 containerd[1530]: time="2025-06-20T19:35:59.909382732Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jun 20 19:35:59.909651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:35:59.909842 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 20 19:35:59.912817 containerd[1530]: time="2025-06-20T19:35:59.910899036Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 20 19:35:59.912817 containerd[1530]: time="2025-06-20T19:35:59.910973145Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 20 19:35:59.912945 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jun 20 19:35:59.912088 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912525136Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912650241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912675257Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912687070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912698491Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912711626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912722837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912734248Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912757111Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912769274Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 20 19:35:59.913069 containerd[1530]: time="2025-06-20T19:35:59.912781447Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913078484Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913101777Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913112267Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913161359Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913174063Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913185384Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913201274Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913222624Z" level=info msg="runtime interface created"
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913229016Z" level=info msg="created NRI interface"
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913240067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913252771Z" level=info msg="Connect containerd service"
Jun 20 19:35:59.913285 containerd[1530]: time="2025-06-20T19:35:59.913279330Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 20 19:35:59.920567 containerd[1530]: time="2025-06-20T19:35:59.918031243Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:35:59.923711 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jun 20 19:35:59.981705 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jun 20 19:35:59.986286 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jun 20 19:35:59.992484 systemd[1]: Started sshd@0-172.24.4.217:22-172.24.4.1:47506.service - OpenSSH per-connection server daemon (172.24.4.1:47506).
Jun 20 19:36:00.004087 systemd[1]: issuegen.service: Deactivated successfully.
Jun 20 19:36:00.004411 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jun 20 19:36:00.009726 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jun 20 19:36:00.051218 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jun 20 19:36:00.053308 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jun 20 19:36:00.056218 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jun 20 19:36:00.058797 systemd[1]: Reached target getty.target - Login Prompts.
Jun 20 19:36:00.083635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jun 20 19:36:00.142943 containerd[1530]: time="2025-06-20T19:36:00.142909462Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 20 19:36:00.143176 containerd[1530]: time="2025-06-20T19:36:00.143054794Z" level=info msg="Start subscribing containerd event"
Jun 20 19:36:00.143279 containerd[1530]: time="2025-06-20T19:36:00.143252074Z" level=info msg="Start recovering state"
Jun 20 19:36:00.143449 containerd[1530]: time="2025-06-20T19:36:00.143435418Z" level=info msg="Start event monitor"
Jun 20 19:36:00.143544 containerd[1530]: time="2025-06-20T19:36:00.143530466Z" level=info msg="Start cni network conf syncer for default"
Jun 20 19:36:00.143656 containerd[1530]: time="2025-06-20T19:36:00.143586832Z" level=info msg="Start streaming server"
Jun 20 19:36:00.143748 containerd[1530]: time="2025-06-20T19:36:00.143735090Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jun 20 19:36:00.143913 containerd[1530]: time="2025-06-20T19:36:00.143898817Z" level=info msg="runtime interface starting up..."
Jun 20 19:36:00.143986 containerd[1530]: time="2025-06-20T19:36:00.143956656Z" level=info msg="starting plugins..."
Jun 20 19:36:00.144044 containerd[1530]: time="2025-06-20T19:36:00.144032839Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jun 20 19:36:00.144221 containerd[1530]: time="2025-06-20T19:36:00.144205683Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 20 19:36:00.144598 systemd[1]: Started containerd.service - containerd container runtime.
Jun 20 19:36:00.144717 containerd[1530]: time="2025-06-20T19:36:00.144700350Z" level=info msg="containerd successfully booted in 0.361365s"
Jun 20 19:36:00.325085 tar[1506]: linux-amd64/README.md
Jun 20 19:36:00.350458 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 20 19:36:00.671753 systemd-networkd[1445]: eth0: Gained IPv6LL
Jun 20 19:36:00.672812 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection.
Jun 20 19:36:00.675334 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 19:36:00.677981 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 19:36:00.683305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:36:00.685946 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 19:36:00.742250 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 19:36:00.930297 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:36:00.930430 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:36:01.316209 sshd[1609]: Accepted publickey for core from 172.24.4.1 port 47506 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:36:01.320080 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:36:01.338323 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 20 19:36:01.341365 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 20 19:36:01.367376 systemd-logind[1498]: New session 1 of user core.
Jun 20 19:36:01.381878 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 20 19:36:01.387121 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jun 20 19:36:01.405820 (systemd)[1646]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 20 19:36:01.409822 systemd-logind[1498]: New session c1 of user core.
Jun 20 19:36:01.582697 systemd[1646]: Queued start job for default target default.target.
Jun 20 19:36:01.587314 systemd[1646]: Created slice app.slice - User Application Slice.
Jun 20 19:36:01.587341 systemd[1646]: Reached target paths.target - Paths.
Jun 20 19:36:01.587379 systemd[1646]: Reached target timers.target - Timers.
Jun 20 19:36:01.590571 systemd[1646]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 20 19:36:01.598378 systemd[1646]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 20 19:36:01.599283 systemd[1646]: Reached target sockets.target - Sockets.
Jun 20 19:36:01.599328 systemd[1646]: Reached target basic.target - Basic System.
Jun 20 19:36:01.599359 systemd[1646]: Reached target default.target - Main User Target.
Jun 20 19:36:01.599386 systemd[1646]: Startup finished in 181ms.
Jun 20 19:36:01.599839 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 20 19:36:01.605653 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 20 19:36:02.060898 systemd[1]: Started sshd@1-172.24.4.217:22-172.24.4.1:47522.service - OpenSSH per-connection server daemon (172.24.4.1:47522).
Jun 20 19:36:02.805293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:36:02.820077 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:36:02.955511 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:36:02.955620 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:36:03.501078 sshd[1657]: Accepted publickey for core from 172.24.4.1 port 47522 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:36:03.503043 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:36:03.513894 systemd-logind[1498]: New session 2 of user core.
Jun 20 19:36:03.519798 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 20 19:36:03.967516 kubelet[1664]: E0620 19:36:03.967414 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:36:03.972049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:36:03.972502 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:36:03.973553 systemd[1]: kubelet.service: Consumed 2.195s CPU time, 264.1M memory peak.
Jun 20 19:36:04.187286 sshd[1672]: Connection closed by 172.24.4.1 port 47522
Jun 20 19:36:04.187127 sshd-session[1657]: pam_unix(sshd:session): session closed for user core
Jun 20 19:36:04.205058 systemd[1]: sshd@1-172.24.4.217:22-172.24.4.1:47522.service: Deactivated successfully.
Jun 20 19:36:04.208734 systemd[1]: session-2.scope: Deactivated successfully.
Jun 20 19:36:04.211092 systemd-logind[1498]: Session 2 logged out. Waiting for processes to exit.
Jun 20 19:36:04.216802 systemd[1]: Started sshd@2-172.24.4.217:22-172.24.4.1:44870.service - OpenSSH per-connection server daemon (172.24.4.1:44870).
Jun 20 19:36:04.220963 systemd-logind[1498]: Removed session 2.
Jun 20 19:36:05.131743 login[1619]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 20 19:36:05.133913 login[1618]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jun 20 19:36:05.146540 systemd-logind[1498]: New session 3 of user core.
Jun 20 19:36:05.155882 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 19:36:05.163327 systemd-logind[1498]: New session 4 of user core.
Jun 20 19:36:05.171060 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 20 19:36:05.485168 sshd[1679]: Accepted publickey for core from 172.24.4.1 port 44870 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:36:05.488044 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:36:05.499873 systemd-logind[1498]: New session 5 of user core.
Jun 20 19:36:05.508863 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 19:36:06.021628 sshd[1707]: Connection closed by 172.24.4.1 port 44870
Jun 20 19:36:06.022898 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
Jun 20 19:36:06.031882 systemd[1]: sshd@2-172.24.4.217:22-172.24.4.1:44870.service: Deactivated successfully.
Jun 20 19:36:06.036744 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 19:36:06.039253 systemd-logind[1498]: Session 5 logged out. Waiting for processes to exit.
Jun 20 19:36:06.044407 systemd-logind[1498]: Removed session 5.
Jun 20 19:36:06.977520 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:36:06.991563 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jun 20 19:36:06.993880 coreos-metadata[1484]: Jun 20 19:36:06.993 WARN failed to locate config-drive, using the metadata service API instead
Jun 20 19:36:07.008636 coreos-metadata[1565]: Jun 20 19:36:07.008 WARN failed to locate config-drive, using the metadata service API instead
Jun 20 19:36:07.045659 coreos-metadata[1484]: Jun 20 19:36:07.045 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jun 20 19:36:07.050834 coreos-metadata[1565]: Jun 20 19:36:07.050 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jun 20 19:36:07.231353 coreos-metadata[1484]: Jun 20 19:36:07.231 INFO Fetch successful
Jun 20 19:36:07.231353 coreos-metadata[1484]: Jun 20 19:36:07.231 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jun 20 19:36:07.240323 coreos-metadata[1565]: Jun 20 19:36:07.240 INFO Fetch successful
Jun 20 19:36:07.240323 coreos-metadata[1565]: Jun 20 19:36:07.240 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jun 20 19:36:07.247765 coreos-metadata[1484]: Jun 20 19:36:07.247 INFO Fetch successful
Jun 20 19:36:07.248000 coreos-metadata[1484]: Jun 20 19:36:07.247 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jun 20 19:36:07.255025 coreos-metadata[1565]: Jun 20 19:36:07.254 INFO Fetch successful
Jun 20 19:36:07.260318 coreos-metadata[1484]: Jun 20 19:36:07.260 INFO Fetch successful
Jun 20 19:36:07.260574 coreos-metadata[1484]: Jun 20 19:36:07.260 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jun 20 19:36:07.261740 unknown[1565]: wrote ssh authorized keys file for user: core
Jun 20 19:36:07.278020 coreos-metadata[1484]: Jun 20 19:36:07.277 INFO Fetch successful
Jun 20 19:36:07.278278 coreos-metadata[1484]: Jun 20 19:36:07.278 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jun 20 19:36:07.292241 coreos-metadata[1484]: Jun 20 19:36:07.292 INFO Fetch successful
Jun 20 19:36:07.292241 coreos-metadata[1484]: Jun 20 19:36:07.292 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jun 20 19:36:07.307687 coreos-metadata[1484]: Jun 20 19:36:07.307 INFO Fetch successful
Jun 20 19:36:07.322573 update-ssh-keys[1716]: Updated "/home/core/.ssh/authorized_keys"
Jun 20 19:36:07.323188 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jun 20 19:36:07.328868 systemd[1]: Finished sshkeys.service.
Jun 20 19:36:07.357349 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jun 20 19:36:07.358362 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 19:36:07.358737 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 20 19:36:07.361821 systemd[1]: Startup finished in 3.779s (kernel) + 16.483s (initrd) + 11.230s (userspace) = 31.493s.
Jun 20 19:36:14.134179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:36:14.137545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:36:14.549623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:36:14.567042 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:36:14.667522 kubelet[1733]: E0620 19:36:14.667342 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:36:14.671147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:36:14.671446 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:36:14.672382 systemd[1]: kubelet.service: Consumed 329ms CPU time, 111.1M memory peak.
Jun 20 19:36:16.039214 systemd[1]: Started sshd@3-172.24.4.217:22-172.24.4.1:35952.service - OpenSSH per-connection server daemon (172.24.4.1:35952).
Jun 20 19:36:17.496952 sshd[1741]: Accepted publickey for core from 172.24.4.1 port 35952 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:36:17.498511 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:36:17.514577 systemd-logind[1498]: New session 6 of user core.
Jun 20 19:36:17.520790 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 19:36:18.212164 sshd[1743]: Connection closed by 172.24.4.1 port 35952
Jun 20 19:36:18.212374 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Jun 20 19:36:18.232351 systemd[1]: sshd@3-172.24.4.217:22-172.24.4.1:35952.service: Deactivated successfully.
Jun 20 19:36:18.236768 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 19:36:18.238970 systemd-logind[1498]: Session 6 logged out. Waiting for processes to exit.
Jun 20 19:36:18.244858 systemd[1]: Started sshd@4-172.24.4.217:22-172.24.4.1:35968.service - OpenSSH per-connection server daemon (172.24.4.1:35968).
Jun 20 19:36:18.247280 systemd-logind[1498]: Removed session 6.
Jun 20 19:36:19.720750 sshd[1749]: Accepted publickey for core from 172.24.4.1 port 35968 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:36:19.726940 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:36:19.742572 systemd-logind[1498]: New session 7 of user core.
Jun 20 19:36:19.750794 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 19:36:20.540534 sshd[1751]: Connection closed by 172.24.4.1 port 35968
Jun 20 19:36:20.540441 sshd-session[1749]: pam_unix(sshd:session): session closed for user core
Jun 20 19:36:20.555837 systemd[1]: sshd@4-172.24.4.217:22-172.24.4.1:35968.service: Deactivated successfully.
Jun 20 19:36:20.559650 systemd[1]: session-7.scope: Deactivated successfully.
Jun 20 19:36:20.562947 systemd-logind[1498]: Session 7 logged out. Waiting for processes to exit.
Jun 20 19:36:20.568247 systemd[1]: Started sshd@5-172.24.4.217:22-172.24.4.1:35976.service - OpenSSH per-connection server daemon (172.24.4.1:35976).
Jun 20 19:36:20.571068 systemd-logind[1498]: Removed session 7.
Jun 20 19:36:21.918861 sshd[1757]: Accepted publickey for core from 172.24.4.1 port 35976 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:36:21.922038 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:36:21.937604 systemd-logind[1498]: New session 8 of user core.
Jun 20 19:36:21.949921 systemd[1]: Started session-8.scope - Session 8 of User core.
Jun 20 19:36:22.776378 sshd[1759]: Connection closed by 172.24.4.1 port 35976
Jun 20 19:36:22.776209 sshd-session[1757]: pam_unix(sshd:session): session closed for user core
Jun 20 19:36:22.793344 systemd[1]: sshd@5-172.24.4.217:22-172.24.4.1:35976.service: Deactivated successfully.
Jun 20 19:36:22.796834 systemd[1]: session-8.scope: Deactivated successfully.
Jun 20 19:36:22.798750 systemd-logind[1498]: Session 8 logged out. Waiting for processes to exit.
Jun 20 19:36:22.805566 systemd[1]: Started sshd@6-172.24.4.217:22-172.24.4.1:35992.service - OpenSSH per-connection server daemon (172.24.4.1:35992).
Jun 20 19:36:22.808267 systemd-logind[1498]: Removed session 8.
Jun 20 19:36:24.199264 sshd[1765]: Accepted publickey for core from 172.24.4.1 port 35992 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:36:24.201954 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:36:24.212299 systemd-logind[1498]: New session 9 of user core.
Jun 20 19:36:24.233870 systemd[1]: Started session-9.scope - Session 9 of User core.
Jun 20 19:36:24.687368 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 19:36:24.688703 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:36:24.691168 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jun 20 19:36:24.695703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:36:24.710087 sudo[1768]: pam_unix(sudo:session): session closed for user root
Jun 20 19:36:24.967707 sshd[1767]: Connection closed by 172.24.4.1 port 35992
Jun 20 19:36:24.970068 sshd-session[1765]: pam_unix(sshd:session): session closed for user core
Jun 20 19:36:24.987409 systemd[1]: sshd@6-172.24.4.217:22-172.24.4.1:35992.service: Deactivated successfully.
Jun 20 19:36:24.993421 systemd[1]: session-9.scope: Deactivated successfully.
Jun 20 19:36:24.998060 systemd-logind[1498]: Session 9 logged out. Waiting for processes to exit.
Jun 20 19:36:25.006053 systemd[1]: Started sshd@7-172.24.4.217:22-172.24.4.1:43238.service - OpenSSH per-connection server daemon (172.24.4.1:43238).
Jun 20 19:36:25.010870 systemd-logind[1498]: Removed session 9.
Jun 20 19:36:25.096851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:36:25.106708 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:36:25.221524 kubelet[1783]: E0620 19:36:25.220777 1783 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:36:25.226322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:36:25.226678 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:36:25.227690 systemd[1]: kubelet.service: Consumed 294ms CPU time, 108M memory peak.
Jun 20 19:36:26.168017 sshd[1777]: Accepted publickey for core from 172.24.4.1 port 43238 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:36:26.171169 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:36:26.184596 systemd-logind[1498]: New session 10 of user core.
Jun 20 19:36:26.193904 systemd[1]: Started session-10.scope - Session 10 of User core.
Jun 20 19:36:26.635578 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 19:36:26.636409 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:36:26.649529 sudo[1793]: pam_unix(sudo:session): session closed for user root
Jun 20 19:36:26.661117 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 19:36:26.662042 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:36:26.685132 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:36:26.764835 augenrules[1815]: No rules
Jun 20 19:36:26.767052 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:36:26.767708 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:36:26.769378 sudo[1792]: pam_unix(sudo:session): session closed for user root
Jun 20 19:36:27.075653 sshd[1791]: Connection closed by 172.24.4.1 port 43238
Jun 20 19:36:27.075222 sshd-session[1777]: pam_unix(sshd:session): session closed for user core
Jun 20 19:36:27.091241 systemd[1]: sshd@7-172.24.4.217:22-172.24.4.1:43238.service: Deactivated successfully.
Jun 20 19:36:27.095678 systemd[1]: session-10.scope: Deactivated successfully.
Jun 20 19:36:27.097869 systemd-logind[1498]: Session 10 logged out. Waiting for processes to exit.
Jun 20 19:36:27.104020 systemd[1]: Started sshd@8-172.24.4.217:22-172.24.4.1:43242.service - OpenSSH per-connection server daemon (172.24.4.1:43242).
Jun 20 19:36:27.105900 systemd-logind[1498]: Removed session 10.
Jun 20 19:36:28.191883 sshd[1824]: Accepted publickey for core from 172.24.4.1 port 43242 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:36:28.194328 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:36:28.205587 systemd-logind[1498]: New session 11 of user core.
Jun 20 19:36:28.215766 systemd[1]: Started session-11.scope - Session 11 of User core.
Jun 20 19:36:28.657014 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 19:36:28.658119 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:36:29.305735 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 19:36:29.320962 (dockerd)[1844]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 19:36:29.824447 dockerd[1844]: time="2025-06-20T19:36:29.824369372Z" level=info msg="Starting up"
Jun 20 19:36:29.828098 dockerd[1844]: time="2025-06-20T19:36:29.827894974Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 20 19:36:29.952650 dockerd[1844]: time="2025-06-20T19:36:29.952541028Z" level=info msg="Loading containers: start."
Jun 20 19:36:29.977528 kernel: Initializing XFRM netlink socket
Jun 20 19:36:30.322318 systemd-timesyncd[1416]: Network configuration changed, trying to establish connection.
Jun 20 19:36:30.388929 systemd-networkd[1445]: docker0: Link UP
Jun 20 19:36:30.395747 dockerd[1844]: time="2025-06-20T19:36:30.395700243Z" level=info msg="Loading containers: done."
Jun 20 19:36:30.412108 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2257645472-merged.mount: Deactivated successfully.
Jun 20 19:36:30.416241 dockerd[1844]: time="2025-06-20T19:36:30.416181235Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 19:36:30.416336 dockerd[1844]: time="2025-06-20T19:36:30.416306099Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 20 19:36:30.416553 dockerd[1844]: time="2025-06-20T19:36:30.416510443Z" level=info msg="Initializing buildkit"
Jun 20 19:36:30.453651 dockerd[1844]: time="2025-06-20T19:36:30.453575258Z" level=info msg="Completed buildkit initialization"
Jun 20 19:36:30.461944 dockerd[1844]: time="2025-06-20T19:36:30.461872128Z" level=info msg="Daemon has completed initialization"
Jun 20 19:36:30.462090 dockerd[1844]: time="2025-06-20T19:36:30.462036857Z" level=info msg="API listen on /run/docker.sock"
Jun 20 19:36:30.462243 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 19:36:30.482825 systemd-timesyncd[1416]: Contacted time server 104.233.211.205:123 (2.flatcar.pool.ntp.org).
Jun 20 19:36:30.483120 systemd-timesyncd[1416]: Initial clock synchronization to Fri 2025-06-20 19:36:30.273211 UTC.
Jun 20 19:36:31.867552 containerd[1530]: time="2025-06-20T19:36:31.867321424Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jun 20 19:36:32.775291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501210253.mount: Deactivated successfully.
Jun 20 19:36:34.567120 containerd[1530]: time="2025-06-20T19:36:34.567036336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:36:34.568424 containerd[1530]: time="2025-06-20T19:36:34.568382887Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799053"
Jun 20 19:36:34.569970 containerd[1530]: time="2025-06-20T19:36:34.569924549Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:36:34.572984 containerd[1530]: time="2025-06-20T19:36:34.572942368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:36:34.574151 containerd[1530]: time="2025-06-20T19:36:34.574014297Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.706625727s"
Jun 20 19:36:34.574151 containerd[1530]: time="2025-06-20T19:36:34.574046877Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\""
Jun 20 19:36:34.574786 containerd[1530]: time="2025-06-20T19:36:34.574742187Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jun 20 19:36:35.383973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jun 20 19:36:35.388851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:36:36.012160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:36:36.028287 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:36:36.106970 kubelet[2113]: E0620 19:36:36.106911 2113 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:36:36.108769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:36:36.108903 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:36:36.109243 systemd[1]: kubelet.service: Consumed 266ms CPU time, 110.1M memory peak.
Jun 20 19:36:36.761242 containerd[1530]: time="2025-06-20T19:36:36.761176962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:36.762568 containerd[1530]: time="2025-06-20T19:36:36.762519311Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783920" Jun 20 19:36:36.763776 containerd[1530]: time="2025-06-20T19:36:36.763742257Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:36.766378 containerd[1530]: time="2025-06-20T19:36:36.766313215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:36.767882 containerd[1530]: time="2025-06-20T19:36:36.767839669Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 2.193058988s" Jun 20 19:36:36.767882 containerd[1530]: time="2025-06-20T19:36:36.767873849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jun 20 19:36:36.768858 containerd[1530]: time="2025-06-20T19:36:36.768806117Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jun 20 19:36:38.538586 containerd[1530]: time="2025-06-20T19:36:38.538529921Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:38.540859 containerd[1530]: time="2025-06-20T19:36:38.540828282Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176924" Jun 20 19:36:38.542246 containerd[1530]: time="2025-06-20T19:36:38.542203686Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:38.546267 containerd[1530]: time="2025-06-20T19:36:38.546219042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:38.547517 containerd[1530]: time="2025-06-20T19:36:38.546874022Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.7780363s" Jun 20 19:36:38.547517 containerd[1530]: time="2025-06-20T19:36:38.546908876Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jun 20 19:36:38.548447 containerd[1530]: time="2025-06-20T19:36:38.548362943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jun 20 19:36:40.042082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761597768.mount: Deactivated successfully. 
Jun 20 19:36:40.646646 containerd[1530]: time="2025-06-20T19:36:40.646579668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:40.647785 containerd[1530]: time="2025-06-20T19:36:40.647739542Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895371" Jun 20 19:36:40.649054 containerd[1530]: time="2025-06-20T19:36:40.649008556Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:40.651355 containerd[1530]: time="2025-06-20T19:36:40.651310811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:40.652155 containerd[1530]: time="2025-06-20T19:36:40.651960983Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.103538929s" Jun 20 19:36:40.652155 containerd[1530]: time="2025-06-20T19:36:40.652008396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jun 20 19:36:40.652709 containerd[1530]: time="2025-06-20T19:36:40.652615022Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:36:41.304185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2120697077.mount: Deactivated successfully. 
Jun 20 19:36:42.694678 containerd[1530]: time="2025-06-20T19:36:42.694601825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:42.698092 containerd[1530]: time="2025-06-20T19:36:42.698004004Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jun 20 19:36:42.699816 containerd[1530]: time="2025-06-20T19:36:42.699669822Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:42.706323 containerd[1530]: time="2025-06-20T19:36:42.706271588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:42.709649 containerd[1530]: time="2025-06-20T19:36:42.709267673Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.05660949s" Jun 20 19:36:42.709649 containerd[1530]: time="2025-06-20T19:36:42.709413505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:36:42.711821 containerd[1530]: time="2025-06-20T19:36:42.710171680Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:36:43.348098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350883067.mount: Deactivated successfully. 
Jun 20 19:36:43.363070 containerd[1530]: time="2025-06-20T19:36:43.362992219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:36:43.364959 containerd[1530]: time="2025-06-20T19:36:43.364912235Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jun 20 19:36:43.367807 containerd[1530]: time="2025-06-20T19:36:43.367753237Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:36:43.372883 containerd[1530]: time="2025-06-20T19:36:43.372793472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:36:43.374532 containerd[1530]: time="2025-06-20T19:36:43.374425022Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 664.199523ms" Jun 20 19:36:43.374532 containerd[1530]: time="2025-06-20T19:36:43.374535884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:36:43.375296 containerd[1530]: time="2025-06-20T19:36:43.375145961Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jun 20 19:36:44.094872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687613979.mount: Deactivated 
successfully. Jun 20 19:36:44.873609 update_engine[1503]: I20250620 19:36:44.873561 1503 update_attempter.cc:509] Updating boot flags... Jun 20 19:36:46.133061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 19:36:46.136748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:36:46.284578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:36:46.293966 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:36:46.682488 kubelet[2264]: E0620 19:36:46.682428 2264 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:36:46.686254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:36:46.686403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:36:46.687592 systemd[1]: kubelet.service: Consumed 168ms CPU time, 108.1M memory peak. 
Jun 20 19:36:47.789722 containerd[1530]: time="2025-06-20T19:36:47.789619630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:47.792848 containerd[1530]: time="2025-06-20T19:36:47.792670224Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" Jun 20 19:36:47.794886 containerd[1530]: time="2025-06-20T19:36:47.794730094Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:47.804632 containerd[1530]: time="2025-06-20T19:36:47.804570118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:36:47.807453 containerd[1530]: time="2025-06-20T19:36:47.807223496Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.432020807s" Jun 20 19:36:47.807453 containerd[1530]: time="2025-06-20T19:36:47.807311110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jun 20 19:36:52.023837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:36:52.024145 systemd[1]: kubelet.service: Consumed 168ms CPU time, 108.1M memory peak. Jun 20 19:36:52.027690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:36:52.075732 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-11.scope)... 
Jun 20 19:36:52.075748 systemd[1]: Reloading... Jun 20 19:36:52.185512 zram_generator::config[2357]: No configuration found. Jun 20 19:36:52.311623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:36:52.490048 systemd[1]: Reloading finished in 413 ms. Jun 20 19:36:52.569174 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:36:52.569633 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:36:52.570456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:36:52.570686 systemd[1]: kubelet.service: Consumed 174ms CPU time, 98.3M memory peak. Jun 20 19:36:52.574050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:36:52.847226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:36:52.863573 (kubelet)[2412]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:36:52.947650 kubelet[2412]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:36:52.947650 kubelet[2412]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 19:36:52.947650 kubelet[2412]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 19:36:52.947650 kubelet[2412]: I0620 19:36:52.947453 2412 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:36:53.513586 kubelet[2412]: I0620 19:36:53.513398 2412 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:36:53.513586 kubelet[2412]: I0620 19:36:53.513511 2412 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:36:53.514362 kubelet[2412]: I0620 19:36:53.514287 2412 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:36:54.229330 kubelet[2412]: E0620 19:36:54.229047 2412 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.217:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.217:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:36:54.232064 kubelet[2412]: I0620 19:36:54.229598 2412 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:36:54.271044 kubelet[2412]: I0620 19:36:54.269325 2412 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:36:54.282796 kubelet[2412]: I0620 19:36:54.282668 2412 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:36:54.283350 kubelet[2412]: I0620 19:36:54.283237 2412 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:36:54.283864 kubelet[2412]: I0620 19:36:54.283319 2412 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-1-0-9-7ac33d8391.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:36:54.285720 kubelet[2412]: I0620 19:36:54.285627 2412 topology_manager.go:138] "Creating topology 
manager with none policy" Jun 20 19:36:54.285720 kubelet[2412]: I0620 19:36:54.285675 2412 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:36:54.286038 kubelet[2412]: I0620 19:36:54.285948 2412 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:36:54.297771 kubelet[2412]: I0620 19:36:54.297656 2412 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:36:54.297771 kubelet[2412]: I0620 19:36:54.297771 2412 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:36:54.298132 kubelet[2412]: I0620 19:36:54.297854 2412 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:36:54.298132 kubelet[2412]: I0620 19:36:54.297895 2412 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:36:54.304534 kubelet[2412]: W0620 19:36:54.304318 2412 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-0-9-7ac33d8391.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Jun 20 19:36:54.304761 kubelet[2412]: E0620 19:36:54.304464 2412 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.217:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4344-1-0-9-7ac33d8391.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.217:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:36:54.307176 kubelet[2412]: W0620 19:36:54.307054 2412 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.217:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Jun 20 19:36:54.309738 kubelet[2412]: E0620 19:36:54.307198 2412 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.217:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.217:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:36:54.309738 kubelet[2412]: I0620 19:36:54.308833 2412 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:36:54.312250 kubelet[2412]: I0620 19:36:54.312186 2412 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:36:54.312424 kubelet[2412]: W0620 19:36:54.312378 2412 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 19:36:54.319018 kubelet[2412]: I0620 19:36:54.317784 2412 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:36:54.319018 kubelet[2412]: I0620 19:36:54.317880 2412 server.go:1287] "Started kubelet" Jun 20 19:36:54.353834 kubelet[2412]: I0620 19:36:54.353548 2412 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:36:54.354154 kubelet[2412]: E0620 19:36:54.350428 2412 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.217:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.217:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4344-1-0-9-7ac33d8391.novalocal.184ad7606c219775 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4344-1-0-9-7ac33d8391.novalocal,UID:ci-4344-1-0-9-7ac33d8391.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4344-1-0-9-7ac33d8391.novalocal,},FirstTimestamp:2025-06-20 19:36:54.317832053 +0000 UTC m=+1.441473508,LastTimestamp:2025-06-20 19:36:54.317832053 
+0000 UTC m=+1.441473508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4344-1-0-9-7ac33d8391.novalocal,}" Jun 20 19:36:54.360341 kubelet[2412]: I0620 19:36:54.360273 2412 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:36:54.361596 kubelet[2412]: I0620 19:36:54.361565 2412 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:36:54.362962 kubelet[2412]: I0620 19:36:54.362934 2412 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:36:54.363274 kubelet[2412]: E0620 19:36:54.363240 2412 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" Jun 20 19:36:54.365495 kubelet[2412]: I0620 19:36:54.365429 2412 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:36:54.365736 kubelet[2412]: I0620 19:36:54.365710 2412 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:36:54.367209 kubelet[2412]: W0620 19:36:54.366918 2412 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Jun 20 19:36:54.367209 kubelet[2412]: E0620 19:36:54.367026 2412 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.217:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:36:54.367969 kubelet[2412]: E0620 19:36:54.367901 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.24.4.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-9-7ac33d8391.novalocal?timeout=10s\": dial tcp 172.24.4.217:6443: connect: connection refused" interval="200ms" Jun 20 19:36:54.369518 kubelet[2412]: I0620 19:36:54.368639 2412 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:36:54.371856 kubelet[2412]: E0620 19:36:54.371826 2412 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:36:54.374669 kubelet[2412]: I0620 19:36:54.374621 2412 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:36:54.377754 kubelet[2412]: I0620 19:36:54.377644 2412 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:36:54.379698 kubelet[2412]: I0620 19:36:54.376561 2412 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:36:54.379698 kubelet[2412]: I0620 19:36:54.379691 2412 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:36:54.380260 kubelet[2412]: I0620 19:36:54.380177 2412 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:36:54.404683 kubelet[2412]: I0620 19:36:54.404639 2412 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:36:54.405020 kubelet[2412]: I0620 19:36:54.404994 2412 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:36:54.405167 kubelet[2412]: I0620 19:36:54.405150 2412 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:36:54.415417 kubelet[2412]: I0620 19:36:54.415366 2412 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Jun 20 19:36:54.417684 kubelet[2412]: I0620 19:36:54.417662 2412 policy_none.go:49] "None policy: Start" Jun 20 19:36:54.418418 kubelet[2412]: I0620 19:36:54.418133 2412 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:36:54.418418 kubelet[2412]: I0620 19:36:54.418153 2412 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:36:54.418577 kubelet[2412]: I0620 19:36:54.418096 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:36:54.418657 kubelet[2412]: I0620 19:36:54.418645 2412 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:36:54.419617 kubelet[2412]: I0620 19:36:54.419583 2412 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:36:54.419617 kubelet[2412]: I0620 19:36:54.419610 2412 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:36:54.419726 kubelet[2412]: E0620 19:36:54.419668 2412 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:36:54.425345 kubelet[2412]: W0620 19:36:54.425285 2412 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Jun 20 19:36:54.425627 kubelet[2412]: E0620 19:36:54.425554 2412 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.217:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.217:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:36:54.431605 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Jun 20 19:36:54.443714 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:36:54.448259 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 19:36:54.463918 kubelet[2412]: E0620 19:36:54.463846 2412 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" Jun 20 19:36:54.468395 kubelet[2412]: I0620 19:36:54.468371 2412 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:36:54.468686 kubelet[2412]: I0620 19:36:54.468669 2412 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:36:54.468792 kubelet[2412]: I0620 19:36:54.468755 2412 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:36:54.469223 kubelet[2412]: I0620 19:36:54.469118 2412 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:36:54.471711 kubelet[2412]: E0620 19:36:54.471671 2412 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:36:54.471772 kubelet[2412]: E0620 19:36:54.471744 2412 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" Jun 20 19:36:54.548550 systemd[1]: Created slice kubepods-burstable-poddb749958ada660b8343d3b788508856b.slice - libcontainer container kubepods-burstable-poddb749958ada660b8343d3b788508856b.slice. 
Jun 20 19:36:54.570976 kubelet[2412]: E0620 19:36:54.570805 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-9-7ac33d8391.novalocal?timeout=10s\": dial tcp 172.24.4.217:6443: connect: connection refused" interval="400ms" Jun 20 19:36:54.572704 kubelet[2412]: E0620 19:36:54.572008 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.575106 kubelet[2412]: I0620 19:36:54.575054 2412 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.576335 kubelet[2412]: E0620 19:36:54.576090 2412 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.217:6443/api/v1/nodes\": dial tcp 172.24.4.217:6443: connect: connection refused" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.587907 systemd[1]: Created slice kubepods-burstable-pod63a18b84cd01da5965e954325fc41ac3.slice - libcontainer container kubepods-burstable-pod63a18b84cd01da5965e954325fc41ac3.slice. Jun 20 19:36:54.593833 kubelet[2412]: E0620 19:36:54.593779 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.600071 systemd[1]: Created slice kubepods-burstable-pod9c1400b987c6e9fc3d0fe38ec5fbc7b9.slice - libcontainer container kubepods-burstable-pod9c1400b987c6e9fc3d0fe38ec5fbc7b9.slice. 
Jun 20 19:36:54.611124 kubelet[2412]: E0620 19:36:54.611045 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.667189 kubelet[2412]: I0620 19:36:54.666889 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db749958ada660b8343d3b788508856b-ca-certs\") pod \"kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"db749958ada660b8343d3b788508856b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.667189 kubelet[2412]: I0620 19:36:54.666994 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c1400b987c6e9fc3d0fe38ec5fbc7b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"9c1400b987c6e9fc3d0fe38ec5fbc7b9\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.667189 kubelet[2412]: I0620 19:36:54.667060 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63a18b84cd01da5965e954325fc41ac3-kubeconfig\") pod \"kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"63a18b84cd01da5965e954325fc41ac3\") " pod="kube-system/kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.667189 kubelet[2412]: I0620 19:36:54.667111 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c1400b987c6e9fc3d0fe38ec5fbc7b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"9c1400b987c6e9fc3d0fe38ec5fbc7b9\") " 
pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.668170 kubelet[2412]: I0620 19:36:54.667864 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db749958ada660b8343d3b788508856b-k8s-certs\") pod \"kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"db749958ada660b8343d3b788508856b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.668170 kubelet[2412]: I0620 19:36:54.667945 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db749958ada660b8343d3b788508856b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"db749958ada660b8343d3b788508856b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.668170 kubelet[2412]: I0620 19:36:54.667992 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c1400b987c6e9fc3d0fe38ec5fbc7b9-ca-certs\") pod \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"9c1400b987c6e9fc3d0fe38ec5fbc7b9\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.668170 kubelet[2412]: I0620 19:36:54.668033 2412 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c1400b987c6e9fc3d0fe38ec5fbc7b9-k8s-certs\") pod \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"9c1400b987c6e9fc3d0fe38ec5fbc7b9\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.668527 kubelet[2412]: I0620 19:36:54.668079 2412 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c1400b987c6e9fc3d0fe38ec5fbc7b9-kubeconfig\") pod \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"9c1400b987c6e9fc3d0fe38ec5fbc7b9\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.782674 kubelet[2412]: I0620 19:36:54.782556 2412 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.783994 kubelet[2412]: E0620 19:36:54.783841 2412 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.217:6443/api/v1/nodes\": dial tcp 172.24.4.217:6443: connect: connection refused" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:54.880353 containerd[1530]: time="2025-06-20T19:36:54.879331436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal,Uid:db749958ada660b8343d3b788508856b,Namespace:kube-system,Attempt:0,}" Jun 20 19:36:54.895598 containerd[1530]: time="2025-06-20T19:36:54.895402469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal,Uid:63a18b84cd01da5965e954325fc41ac3,Namespace:kube-system,Attempt:0,}" Jun 20 19:36:54.916175 containerd[1530]: time="2025-06-20T19:36:54.915581947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal,Uid:9c1400b987c6e9fc3d0fe38ec5fbc7b9,Namespace:kube-system,Attempt:0,}" Jun 20 19:36:54.981264 kubelet[2412]: E0620 19:36:54.981078 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.217:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4344-1-0-9-7ac33d8391.novalocal?timeout=10s\": dial tcp 172.24.4.217:6443: connect: connection refused" interval="800ms" Jun 20 19:36:55.000530 containerd[1530]: 
time="2025-06-20T19:36:54.999939641Z" level=info msg="connecting to shim d6d906052d17127580e30f6e0ac7bcd760fc2151a05a044b26534dd8acc2b5ba" address="unix:///run/containerd/s/4feb0cc2a0dce4229f3c8228a25f7097bdca8335ab43842262ce98300234011e" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:36:55.012546 containerd[1530]: time="2025-06-20T19:36:55.012461862Z" level=info msg="connecting to shim 61f405fd278291849b3ccb43e68faa55f37925494602436f0061950393074d6f" address="unix:///run/containerd/s/eb26ec32ab0807adb41c9b3461106246b0f79cd1894fd7948cb26552b748c6c2" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:36:55.037358 containerd[1530]: time="2025-06-20T19:36:55.037211535Z" level=info msg="connecting to shim e3efe7997862a8308ba309ecb8433ed6f756c0a4daf79b3cd5719f39d68b33bc" address="unix:///run/containerd/s/d8cc16a365f1240e9436b9876a8bf78b1da47030d2efa4783d10a07aab966e1a" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:36:55.069298 systemd[1]: Started cri-containerd-61f405fd278291849b3ccb43e68faa55f37925494602436f0061950393074d6f.scope - libcontainer container 61f405fd278291849b3ccb43e68faa55f37925494602436f0061950393074d6f. Jun 20 19:36:55.080399 systemd[1]: Started cri-containerd-d6d906052d17127580e30f6e0ac7bcd760fc2151a05a044b26534dd8acc2b5ba.scope - libcontainer container d6d906052d17127580e30f6e0ac7bcd760fc2151a05a044b26534dd8acc2b5ba. Jun 20 19:36:55.082217 systemd[1]: Started cri-containerd-e3efe7997862a8308ba309ecb8433ed6f756c0a4daf79b3cd5719f39d68b33bc.scope - libcontainer container e3efe7997862a8308ba309ecb8433ed6f756c0a4daf79b3cd5719f39d68b33bc. 
Jun 20 19:36:55.171370 containerd[1530]: time="2025-06-20T19:36:55.171266967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal,Uid:9c1400b987c6e9fc3d0fe38ec5fbc7b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3efe7997862a8308ba309ecb8433ed6f756c0a4daf79b3cd5719f39d68b33bc\"" Jun 20 19:36:55.174934 containerd[1530]: time="2025-06-20T19:36:55.174883263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal,Uid:db749958ada660b8343d3b788508856b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6d906052d17127580e30f6e0ac7bcd760fc2151a05a044b26534dd8acc2b5ba\"" Jun 20 19:36:55.179959 containerd[1530]: time="2025-06-20T19:36:55.179917884Z" level=info msg="CreateContainer within sandbox \"d6d906052d17127580e30f6e0ac7bcd760fc2151a05a044b26534dd8acc2b5ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:36:55.180971 containerd[1530]: time="2025-06-20T19:36:55.180557565Z" level=info msg="CreateContainer within sandbox \"e3efe7997862a8308ba309ecb8433ed6f756c0a4daf79b3cd5719f39d68b33bc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:36:55.183110 containerd[1530]: time="2025-06-20T19:36:55.183054570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal,Uid:63a18b84cd01da5965e954325fc41ac3,Namespace:kube-system,Attempt:0,} returns sandbox id \"61f405fd278291849b3ccb43e68faa55f37925494602436f0061950393074d6f\"" Jun 20 19:36:55.187681 containerd[1530]: time="2025-06-20T19:36:55.187521940Z" level=info msg="CreateContainer within sandbox \"61f405fd278291849b3ccb43e68faa55f37925494602436f0061950393074d6f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:36:55.187795 kubelet[2412]: I0620 19:36:55.187592 2412 kubelet_node_status.go:75] "Attempting to register node" 
node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:55.188175 kubelet[2412]: E0620 19:36:55.188133 2412 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.24.4.217:6443/api/v1/nodes\": dial tcp 172.24.4.217:6443: connect: connection refused" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:55.206492 containerd[1530]: time="2025-06-20T19:36:55.206403141Z" level=info msg="Container 69ba0ba92122e63a77292146eed361639f55c8f1b85f09a0dd779e9aa5d7e331: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:36:55.224342 containerd[1530]: time="2025-06-20T19:36:55.224239432Z" level=info msg="Container 1199a731520620ed0fe83e4f703e0d226dba14acd170cb30e99a9a97de0c3272: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:36:55.226311 kubelet[2412]: W0620 19:36:55.226245 2412 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.217:6443: connect: connection refused Jun 20 19:36:55.226490 kubelet[2412]: E0620 19:36:55.226320 2412 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.217:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.217:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:36:55.229447 containerd[1530]: time="2025-06-20T19:36:55.229332801Z" level=info msg="Container 5b668d4fe7516fe9f800f2557d5404f3fda9fbb1170955d389235ee7e060390b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:36:55.242810 containerd[1530]: time="2025-06-20T19:36:55.242630165Z" level=info msg="CreateContainer within sandbox \"d6d906052d17127580e30f6e0ac7bcd760fc2151a05a044b26534dd8acc2b5ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"69ba0ba92122e63a77292146eed361639f55c8f1b85f09a0dd779e9aa5d7e331\"" Jun 20 19:36:55.244599 containerd[1530]: time="2025-06-20T19:36:55.244449386Z" level=info msg="StartContainer for \"69ba0ba92122e63a77292146eed361639f55c8f1b85f09a0dd779e9aa5d7e331\"" Jun 20 19:36:55.247912 containerd[1530]: time="2025-06-20T19:36:55.247834325Z" level=info msg="connecting to shim 69ba0ba92122e63a77292146eed361639f55c8f1b85f09a0dd779e9aa5d7e331" address="unix:///run/containerd/s/4feb0cc2a0dce4229f3c8228a25f7097bdca8335ab43842262ce98300234011e" protocol=ttrpc version=3 Jun 20 19:36:55.260311 containerd[1530]: time="2025-06-20T19:36:55.260114328Z" level=info msg="CreateContainer within sandbox \"61f405fd278291849b3ccb43e68faa55f37925494602436f0061950393074d6f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5b668d4fe7516fe9f800f2557d5404f3fda9fbb1170955d389235ee7e060390b\"" Jun 20 19:36:55.262067 containerd[1530]: time="2025-06-20T19:36:55.261938464Z" level=info msg="StartContainer for \"5b668d4fe7516fe9f800f2557d5404f3fda9fbb1170955d389235ee7e060390b\"" Jun 20 19:36:55.265867 containerd[1530]: time="2025-06-20T19:36:55.265776922Z" level=info msg="connecting to shim 5b668d4fe7516fe9f800f2557d5404f3fda9fbb1170955d389235ee7e060390b" address="unix:///run/containerd/s/eb26ec32ab0807adb41c9b3461106246b0f79cd1894fd7948cb26552b748c6c2" protocol=ttrpc version=3 Jun 20 19:36:55.271171 containerd[1530]: time="2025-06-20T19:36:55.271103401Z" level=info msg="CreateContainer within sandbox \"e3efe7997862a8308ba309ecb8433ed6f756c0a4daf79b3cd5719f39d68b33bc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1199a731520620ed0fe83e4f703e0d226dba14acd170cb30e99a9a97de0c3272\"" Jun 20 19:36:55.272973 containerd[1530]: time="2025-06-20T19:36:55.272945381Z" level=info msg="StartContainer for \"1199a731520620ed0fe83e4f703e0d226dba14acd170cb30e99a9a97de0c3272\"" Jun 20 19:36:55.275412 containerd[1530]: 
time="2025-06-20T19:36:55.275266022Z" level=info msg="connecting to shim 1199a731520620ed0fe83e4f703e0d226dba14acd170cb30e99a9a97de0c3272" address="unix:///run/containerd/s/d8cc16a365f1240e9436b9876a8bf78b1da47030d2efa4783d10a07aab966e1a" protocol=ttrpc version=3 Jun 20 19:36:55.290856 systemd[1]: Started cri-containerd-69ba0ba92122e63a77292146eed361639f55c8f1b85f09a0dd779e9aa5d7e331.scope - libcontainer container 69ba0ba92122e63a77292146eed361639f55c8f1b85f09a0dd779e9aa5d7e331. Jun 20 19:36:55.313689 systemd[1]: Started cri-containerd-1199a731520620ed0fe83e4f703e0d226dba14acd170cb30e99a9a97de0c3272.scope - libcontainer container 1199a731520620ed0fe83e4f703e0d226dba14acd170cb30e99a9a97de0c3272. Jun 20 19:36:55.326704 systemd[1]: Started cri-containerd-5b668d4fe7516fe9f800f2557d5404f3fda9fbb1170955d389235ee7e060390b.scope - libcontainer container 5b668d4fe7516fe9f800f2557d5404f3fda9fbb1170955d389235ee7e060390b. Jun 20 19:36:55.414253 containerd[1530]: time="2025-06-20T19:36:55.413710599Z" level=info msg="StartContainer for \"69ba0ba92122e63a77292146eed361639f55c8f1b85f09a0dd779e9aa5d7e331\" returns successfully" Jun 20 19:36:55.438433 containerd[1530]: time="2025-06-20T19:36:55.437713854Z" level=info msg="StartContainer for \"1199a731520620ed0fe83e4f703e0d226dba14acd170cb30e99a9a97de0c3272\" returns successfully" Jun 20 19:36:55.446115 kubelet[2412]: E0620 19:36:55.446027 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:55.457911 kubelet[2412]: E0620 19:36:55.457866 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:55.471751 containerd[1530]: time="2025-06-20T19:36:55.471711831Z" level=info msg="StartContainer for 
\"5b668d4fe7516fe9f800f2557d5404f3fda9fbb1170955d389235ee7e060390b\" returns successfully" Jun 20 19:36:55.991051 kubelet[2412]: I0620 19:36:55.991015 2412 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:56.480978 kubelet[2412]: E0620 19:36:56.480878 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:56.483180 kubelet[2412]: E0620 19:36:56.482616 2412 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:57.196855 kubelet[2412]: I0620 19:36:57.196802 2412 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:57.196855 kubelet[2412]: E0620 19:36:57.196855 2412 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4344-1-0-9-7ac33d8391.novalocal\": node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" Jun 20 19:36:57.263895 kubelet[2412]: I0620 19:36:57.263840 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:57.310089 kubelet[2412]: I0620 19:36:57.310046 2412 apiserver.go:52] "Watching apiserver" Jun 20 19:36:57.318676 kubelet[2412]: E0620 19:36:57.318619 2412 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Jun 20 19:36:57.340754 kubelet[2412]: E0620 19:36:57.340665 2412 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:57.341160 kubelet[2412]: I0620 19:36:57.340712 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:57.344136 kubelet[2412]: E0620 19:36:57.344078 2412 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:57.344136 kubelet[2412]: I0620 19:36:57.344134 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:57.346329 kubelet[2412]: E0620 19:36:57.346288 2412 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:57.366490 kubelet[2412]: I0620 19:36:57.366427 2412 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:36:57.476731 kubelet[2412]: I0620 19:36:57.476611 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:57.480495 kubelet[2412]: E0620 19:36:57.479446 2412 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:58.481987 kubelet[2412]: I0620 19:36:58.481852 2412 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:36:58.510390 
kubelet[2412]: W0620 19:36:58.510124 2412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:37:00.200530 systemd[1]: Reload requested from client PID 2680 ('systemctl') (unit session-11.scope)... Jun 20 19:37:00.200719 systemd[1]: Reloading... Jun 20 19:37:00.363565 zram_generator::config[2722]: No configuration found. Jun 20 19:37:00.514594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:37:00.706510 systemd[1]: Reloading finished in 504 ms. Jun 20 19:37:00.756588 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:37:00.772811 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:37:00.774405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:37:00.774828 systemd[1]: kubelet.service: Consumed 1.655s CPU time, 130.6M memory peak. Jun 20 19:37:00.781971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:37:01.166947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:37:01.181413 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:37:01.292403 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:37:01.293767 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jun 20 19:37:01.293767 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:37:01.293767 kubelet[2789]: I0620 19:37:01.293044 2789 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:37:01.307849 kubelet[2789]: I0620 19:37:01.307713 2789 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jun 20 19:37:01.307849 kubelet[2789]: I0620 19:37:01.307774 2789 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:37:01.309589 kubelet[2789]: I0620 19:37:01.308409 2789 server.go:954] "Client rotation is on, will bootstrap in background" Jun 20 19:37:01.310538 kubelet[2789]: I0620 19:37:01.310460 2789 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 20 19:37:01.314963 kubelet[2789]: I0620 19:37:01.314891 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:37:01.333658 kubelet[2789]: I0620 19:37:01.333605 2789 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:37:01.344734 kubelet[2789]: I0620 19:37:01.344255 2789 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 19:37:01.346269 kubelet[2789]: I0620 19:37:01.345486 2789 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:37:01.346269 kubelet[2789]: I0620 19:37:01.345632 2789 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4344-1-0-9-7ac33d8391.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:37:01.346269 kubelet[2789]: I0620 19:37:01.346031 2789 topology_manager.go:138] "Creating topology 
manager with none policy" Jun 20 19:37:01.346269 kubelet[2789]: I0620 19:37:01.346046 2789 container_manager_linux.go:304] "Creating device plugin manager" Jun 20 19:37:01.347440 kubelet[2789]: I0620 19:37:01.346185 2789 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:37:01.350799 kubelet[2789]: I0620 19:37:01.350748 2789 kubelet.go:446] "Attempting to sync node with API server" Jun 20 19:37:01.352407 kubelet[2789]: I0620 19:37:01.352156 2789 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:37:01.352407 kubelet[2789]: I0620 19:37:01.352250 2789 kubelet.go:352] "Adding apiserver pod source" Jun 20 19:37:01.352407 kubelet[2789]: I0620 19:37:01.352303 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:37:01.361074 kubelet[2789]: I0620 19:37:01.358665 2789 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:37:01.361438 kubelet[2789]: I0620 19:37:01.361393 2789 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:37:01.366432 kubelet[2789]: I0620 19:37:01.364855 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 19:37:01.366432 kubelet[2789]: I0620 19:37:01.365014 2789 server.go:1287] "Started kubelet" Jun 20 19:37:01.373231 kubelet[2789]: I0620 19:37:01.373187 2789 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:37:01.374853 kubelet[2789]: I0620 19:37:01.374829 2789 server.go:479] "Adding debug handlers to kubelet server" Jun 20 19:37:01.375728 kubelet[2789]: I0620 19:37:01.374439 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:37:01.376867 kubelet[2789]: I0620 19:37:01.376848 2789 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:37:01.381046 kubelet[2789]: I0620 
19:37:01.380659 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:37:01.381263 kubelet[2789]: I0620 19:37:01.381222 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:37:01.396046 kubelet[2789]: I0620 19:37:01.396006 2789 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 19:37:01.400686 kubelet[2789]: I0620 19:37:01.396294 2789 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 19:37:01.400686 kubelet[2789]: E0620 19:37:01.396722 2789 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4344-1-0-9-7ac33d8391.novalocal\" not found" Jun 20 19:37:01.400686 kubelet[2789]: I0620 19:37:01.399781 2789 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:37:01.407507 kubelet[2789]: I0620 19:37:01.407410 2789 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:37:01.408613 kubelet[2789]: I0620 19:37:01.408557 2789 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:37:01.410762 kubelet[2789]: E0620 19:37:01.410708 2789 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:37:01.414375 kubelet[2789]: I0620 19:37:01.414336 2789 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:37:01.431882 kubelet[2789]: I0620 19:37:01.431214 2789 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:37:01.440012 kubelet[2789]: I0620 19:37:01.439968 2789 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 20 19:37:01.440140 kubelet[2789]: I0620 19:37:01.440062 2789 status_manager.go:227] "Starting to sync pod status with apiserver" Jun 20 19:37:01.440140 kubelet[2789]: I0620 19:37:01.440095 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 19:37:01.440140 kubelet[2789]: I0620 19:37:01.440122 2789 kubelet.go:2382] "Starting kubelet main sync loop" Jun 20 19:37:01.440278 kubelet[2789]: E0620 19:37:01.440172 2789 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:37:01.453048 sudo[2818]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 19:37:01.454481 sudo[2818]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 19:37:01.486254 kubelet[2789]: I0620 19:37:01.485746 2789 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 19:37:01.486254 kubelet[2789]: I0620 19:37:01.485767 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 19:37:01.486254 kubelet[2789]: I0620 19:37:01.485795 2789 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:37:01.486254 kubelet[2789]: I0620 19:37:01.485992 2789 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:37:01.486254 kubelet[2789]: I0620 19:37:01.486009 2789 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:37:01.486254 kubelet[2789]: I0620 19:37:01.486047 2789 policy_none.go:49] "None policy: Start" Jun 20 19:37:01.486254 kubelet[2789]: I0620 19:37:01.486111 2789 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 19:37:01.486254 kubelet[2789]: I0620 19:37:01.486143 2789 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:37:01.486804 kubelet[2789]: I0620 19:37:01.486787 2789 state_mem.go:75] "Updated machine memory 
state" Jun 20 19:37:01.493928 kubelet[2789]: I0620 19:37:01.493892 2789 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:37:01.494160 kubelet[2789]: I0620 19:37:01.494140 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:37:01.494218 kubelet[2789]: I0620 19:37:01.494164 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:37:01.498490 kubelet[2789]: I0620 19:37:01.498368 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:37:01.503464 kubelet[2789]: E0620 19:37:01.502415 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 19:37:01.542362 kubelet[2789]: I0620 19:37:01.542134 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.544509 kubelet[2789]: I0620 19:37:01.544239 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.545381 kubelet[2789]: I0620 19:37:01.544365 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.559691 kubelet[2789]: W0620 19:37:01.559644 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:37:01.563500 kubelet[2789]: W0620 19:37:01.563280 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:37:01.564140 kubelet[2789]: E0620 19:37:01.563908 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.564640 kubelet[2789]: W0620 19:37:01.564310 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:37:01.603177 kubelet[2789]: I0620 19:37:01.602457 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.621660 kubelet[2789]: I0620 19:37:01.621580 2789 kubelet_node_status.go:124] "Node was previously registered" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.622171 kubelet[2789]: I0620 19:37:01.621923 2789 kubelet_node_status.go:78] "Successfully registered node" node="ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.702268 kubelet[2789]: I0620 19:37:01.701550 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c1400b987c6e9fc3d0fe38ec5fbc7b9-ca-certs\") pod \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"9c1400b987c6e9fc3d0fe38ec5fbc7b9\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.702268 kubelet[2789]: I0620 19:37:01.701607 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c1400b987c6e9fc3d0fe38ec5fbc7b9-k8s-certs\") pod \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"9c1400b987c6e9fc3d0fe38ec5fbc7b9\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.702268 kubelet[2789]: I0620 19:37:01.701633 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/9c1400b987c6e9fc3d0fe38ec5fbc7b9-kubeconfig\") pod \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"9c1400b987c6e9fc3d0fe38ec5fbc7b9\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.702268 kubelet[2789]: I0620 19:37:01.701658 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db749958ada660b8343d3b788508856b-ca-certs\") pod \"kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"db749958ada660b8343d3b788508856b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.702268 kubelet[2789]: I0620 19:37:01.701679 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db749958ada660b8343d3b788508856b-k8s-certs\") pod \"kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"db749958ada660b8343d3b788508856b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.702574 kubelet[2789]: I0620 19:37:01.701701 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db749958ada660b8343d3b788508856b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"db749958ada660b8343d3b788508856b\") " pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.702574 kubelet[2789]: I0620 19:37:01.701726 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9c1400b987c6e9fc3d0fe38ec5fbc7b9-flexvolume-dir\") pod \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"9c1400b987c6e9fc3d0fe38ec5fbc7b9\") " 
pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.702574 kubelet[2789]: I0620 19:37:01.701747 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c1400b987c6e9fc3d0fe38ec5fbc7b9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"9c1400b987c6e9fc3d0fe38ec5fbc7b9\") " pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:01.702574 kubelet[2789]: I0620 19:37:01.701768 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/63a18b84cd01da5965e954325fc41ac3-kubeconfig\") pod \"kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal\" (UID: \"63a18b84cd01da5965e954325fc41ac3\") " pod="kube-system/kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:02.149035 sudo[2818]: pam_unix(sudo:session): session closed for user root Jun 20 19:37:02.370413 kubelet[2789]: I0620 19:37:02.368530 2789 apiserver.go:52] "Watching apiserver" Jun 20 19:37:02.401305 kubelet[2789]: I0620 19:37:02.400979 2789 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 19:37:02.473490 kubelet[2789]: I0620 19:37:02.472244 2789 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:02.506766 kubelet[2789]: W0620 19:37:02.506736 2789 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jun 20 19:37:02.507027 kubelet[2789]: E0620 19:37:02.507005 2789 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal\" already exists" 
pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" Jun 20 19:37:02.558642 kubelet[2789]: I0620 19:37:02.558551 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4344-1-0-9-7ac33d8391.novalocal" podStartSLOduration=1.558421632 podStartE2EDuration="1.558421632s" podCreationTimestamp="2025-06-20 19:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:37:02.542710316 +0000 UTC m=+1.351576157" watchObservedRunningTime="2025-06-20 19:37:02.558421632 +0000 UTC m=+1.367287473" Jun 20 19:37:02.573232 kubelet[2789]: I0620 19:37:02.573082 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4344-1-0-9-7ac33d8391.novalocal" podStartSLOduration=1.573060312 podStartE2EDuration="1.573060312s" podCreationTimestamp="2025-06-20 19:37:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:37:02.559372459 +0000 UTC m=+1.368238300" watchObservedRunningTime="2025-06-20 19:37:02.573060312 +0000 UTC m=+1.381926153" Jun 20 19:37:02.586480 kubelet[2789]: I0620 19:37:02.586414 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4344-1-0-9-7ac33d8391.novalocal" podStartSLOduration=4.586397914 podStartE2EDuration="4.586397914s" podCreationTimestamp="2025-06-20 19:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:37:02.573688469 +0000 UTC m=+1.382554310" watchObservedRunningTime="2025-06-20 19:37:02.586397914 +0000 UTC m=+1.395263755" Jun 20 19:37:04.956148 sudo[1827]: pam_unix(sudo:session): session closed for user root Jun 20 19:37:05.272814 sshd[1826]: Connection closed by 172.24.4.1 port 43242 Jun 
20 19:37:05.276602 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Jun 20 19:37:05.291014 systemd-logind[1498]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:37:05.292747 systemd[1]: sshd@8-172.24.4.217:22-172.24.4.1:43242.service: Deactivated successfully. Jun 20 19:37:05.307938 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:37:05.309005 systemd[1]: session-11.scope: Consumed 7.967s CPU time, 272.6M memory peak. Jun 20 19:37:05.319604 systemd-logind[1498]: Removed session 11. Jun 20 19:37:06.248542 kubelet[2789]: I0620 19:37:06.247823 2789 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:37:06.252841 kubelet[2789]: I0620 19:37:06.251792 2789 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:37:06.253041 containerd[1530]: time="2025-06-20T19:37:06.251197952Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:37:06.887954 systemd[1]: Created slice kubepods-besteffort-pod6b67bd7a_0cf6_4fc2_8576_77c5d5f962e3.slice - libcontainer container kubepods-besteffort-pod6b67bd7a_0cf6_4fc2_8576_77c5d5f962e3.slice. Jun 20 19:37:06.921256 systemd[1]: Created slice kubepods-burstable-pod856b4ef3_aa72_41ae_b22a_feb15c63f816.slice - libcontainer container kubepods-burstable-pod856b4ef3_aa72_41ae_b22a_feb15c63f816.slice. 
Jun 20 19:37:07.042839 kubelet[2789]: I0620 19:37:07.042698 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b67bd7a-0cf6-4fc2-8576-77c5d5f962e3-kube-proxy\") pod \"kube-proxy-hl47h\" (UID: \"6b67bd7a-0cf6-4fc2-8576-77c5d5f962e3\") " pod="kube-system/kube-proxy-hl47h" Jun 20 19:37:07.042839 kubelet[2789]: I0620 19:37:07.042772 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b67bd7a-0cf6-4fc2-8576-77c5d5f962e3-xtables-lock\") pod \"kube-proxy-hl47h\" (UID: \"6b67bd7a-0cf6-4fc2-8576-77c5d5f962e3\") " pod="kube-system/kube-proxy-hl47h" Jun 20 19:37:07.042839 kubelet[2789]: I0620 19:37:07.042834 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2vv4\" (UniqueName: \"kubernetes.io/projected/856b4ef3-aa72-41ae-b22a-feb15c63f816-kube-api-access-j2vv4\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043347 kubelet[2789]: I0620 19:37:07.042904 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-run\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043347 kubelet[2789]: I0620 19:37:07.042950 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-xtables-lock\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043347 kubelet[2789]: I0620 19:37:07.042979 2789 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/856b4ef3-aa72-41ae-b22a-feb15c63f816-clustermesh-secrets\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043347 kubelet[2789]: I0620 19:37:07.043007 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqmhb\" (UniqueName: \"kubernetes.io/projected/6b67bd7a-0cf6-4fc2-8576-77c5d5f962e3-kube-api-access-zqmhb\") pod \"kube-proxy-hl47h\" (UID: \"6b67bd7a-0cf6-4fc2-8576-77c5d5f962e3\") " pod="kube-system/kube-proxy-hl47h" Jun 20 19:37:07.043347 kubelet[2789]: I0620 19:37:07.043042 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-etc-cni-netd\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043572 kubelet[2789]: I0620 19:37:07.043077 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-config-path\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043572 kubelet[2789]: I0620 19:37:07.043123 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-hostproc\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043572 kubelet[2789]: I0620 19:37:07.043149 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/856b4ef3-aa72-41ae-b22a-feb15c63f816-hubble-tls\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043572 kubelet[2789]: I0620 19:37:07.043185 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b67bd7a-0cf6-4fc2-8576-77c5d5f962e3-lib-modules\") pod \"kube-proxy-hl47h\" (UID: \"6b67bd7a-0cf6-4fc2-8576-77c5d5f962e3\") " pod="kube-system/kube-proxy-hl47h" Jun 20 19:37:07.043572 kubelet[2789]: I0620 19:37:07.043207 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-cgroup\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043572 kubelet[2789]: I0620 19:37:07.043224 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-host-proc-sys-net\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043788 kubelet[2789]: I0620 19:37:07.043259 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cni-path\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043788 kubelet[2789]: I0620 19:37:07.043289 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-host-proc-sys-kernel\") pod \"cilium-2z5kj\" (UID: 
\"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043788 kubelet[2789]: I0620 19:37:07.043312 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-bpf-maps\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.043788 kubelet[2789]: I0620 19:37:07.043329 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-lib-modules\") pod \"cilium-2z5kj\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") " pod="kube-system/cilium-2z5kj" Jun 20 19:37:07.229332 containerd[1530]: time="2025-06-20T19:37:07.228956671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2z5kj,Uid:856b4ef3-aa72-41ae-b22a-feb15c63f816,Namespace:kube-system,Attempt:0,}" Jun 20 19:37:07.293026 systemd[1]: Created slice kubepods-besteffort-pod65e9be8b_9429_42bf_b704_bd8e99a88c5e.slice - libcontainer container kubepods-besteffort-pod65e9be8b_9429_42bf_b704_bd8e99a88c5e.slice. 
Jun 20 19:37:07.325603 containerd[1530]: time="2025-06-20T19:37:07.325446937Z" level=info msg="connecting to shim 1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96" address="unix:///run/containerd/s/ee865cc2fad01a82ae0dc460539e73f8105e9ca9ddcbb0d65e4c9bb2a4144f17" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:37:07.347797 kubelet[2789]: I0620 19:37:07.347218 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65e9be8b-9429-42bf-b704-bd8e99a88c5e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-w5zwz\" (UID: \"65e9be8b-9429-42bf-b704-bd8e99a88c5e\") " pod="kube-system/cilium-operator-6c4d7847fc-w5zwz" Jun 20 19:37:07.348437 kubelet[2789]: I0620 19:37:07.348019 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg4mj\" (UniqueName: \"kubernetes.io/projected/65e9be8b-9429-42bf-b704-bd8e99a88c5e-kube-api-access-cg4mj\") pod \"cilium-operator-6c4d7847fc-w5zwz\" (UID: \"65e9be8b-9429-42bf-b704-bd8e99a88c5e\") " pod="kube-system/cilium-operator-6c4d7847fc-w5zwz" Jun 20 19:37:07.378627 systemd[1]: Started cri-containerd-1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96.scope - libcontainer container 1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96. 
Jun 20 19:37:07.425000 containerd[1530]: time="2025-06-20T19:37:07.424921588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2z5kj,Uid:856b4ef3-aa72-41ae-b22a-feb15c63f816,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\"" Jun 20 19:37:07.429585 containerd[1530]: time="2025-06-20T19:37:07.429077052Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 19:37:07.510576 containerd[1530]: time="2025-06-20T19:37:07.510082047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hl47h,Uid:6b67bd7a-0cf6-4fc2-8576-77c5d5f962e3,Namespace:kube-system,Attempt:0,}" Jun 20 19:37:07.564865 containerd[1530]: time="2025-06-20T19:37:07.564685905Z" level=info msg="connecting to shim fd8c38e7243dc1133038960163dac70d8f2510c6d073306f5681a1942fcc64b6" address="unix:///run/containerd/s/0ad75537c0ff965393b2606b5b2ff80ba56319305af8b4baa7b4f90797c6cb37" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:37:07.601118 containerd[1530]: time="2025-06-20T19:37:07.601068692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w5zwz,Uid:65e9be8b-9429-42bf-b704-bd8e99a88c5e,Namespace:kube-system,Attempt:0,}" Jun 20 19:37:07.622668 systemd[1]: Started cri-containerd-fd8c38e7243dc1133038960163dac70d8f2510c6d073306f5681a1942fcc64b6.scope - libcontainer container fd8c38e7243dc1133038960163dac70d8f2510c6d073306f5681a1942fcc64b6. 
Jun 20 19:37:07.661522 containerd[1530]: time="2025-06-20T19:37:07.661435357Z" level=info msg="connecting to shim ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce" address="unix:///run/containerd/s/c92a094cc235336e671b8cf299fcab854394bda41763de6debc030455c68c5e6" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:37:07.667196 containerd[1530]: time="2025-06-20T19:37:07.667096368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hl47h,Uid:6b67bd7a-0cf6-4fc2-8576-77c5d5f962e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd8c38e7243dc1133038960163dac70d8f2510c6d073306f5681a1942fcc64b6\"" Jun 20 19:37:07.671873 containerd[1530]: time="2025-06-20T19:37:07.671835807Z" level=info msg="CreateContainer within sandbox \"fd8c38e7243dc1133038960163dac70d8f2510c6d073306f5681a1942fcc64b6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:37:07.703514 containerd[1530]: time="2025-06-20T19:37:07.702250848Z" level=info msg="Container 2144c82708f1c4be49e1efec39b44b3e91f9b798d25fff4a3592927af963246a: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:37:07.704702 systemd[1]: Started cri-containerd-ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce.scope - libcontainer container ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce. 
Jun 20 19:37:07.726427 containerd[1530]: time="2025-06-20T19:37:07.726383899Z" level=info msg="CreateContainer within sandbox \"fd8c38e7243dc1133038960163dac70d8f2510c6d073306f5681a1942fcc64b6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2144c82708f1c4be49e1efec39b44b3e91f9b798d25fff4a3592927af963246a\"" Jun 20 19:37:07.727329 containerd[1530]: time="2025-06-20T19:37:07.727300872Z" level=info msg="StartContainer for \"2144c82708f1c4be49e1efec39b44b3e91f9b798d25fff4a3592927af963246a\"" Jun 20 19:37:07.730140 containerd[1530]: time="2025-06-20T19:37:07.730107256Z" level=info msg="connecting to shim 2144c82708f1c4be49e1efec39b44b3e91f9b798d25fff4a3592927af963246a" address="unix:///run/containerd/s/0ad75537c0ff965393b2606b5b2ff80ba56319305af8b4baa7b4f90797c6cb37" protocol=ttrpc version=3 Jun 20 19:37:07.762671 systemd[1]: Started cri-containerd-2144c82708f1c4be49e1efec39b44b3e91f9b798d25fff4a3592927af963246a.scope - libcontainer container 2144c82708f1c4be49e1efec39b44b3e91f9b798d25fff4a3592927af963246a. 
Jun 20 19:37:07.787903 containerd[1530]: time="2025-06-20T19:37:07.787727942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w5zwz,Uid:65e9be8b-9429-42bf-b704-bd8e99a88c5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\"" Jun 20 19:37:07.824145 containerd[1530]: time="2025-06-20T19:37:07.824080171Z" level=info msg="StartContainer for \"2144c82708f1c4be49e1efec39b44b3e91f9b798d25fff4a3592927af963246a\" returns successfully" Jun 20 19:37:08.935126 kubelet[2789]: I0620 19:37:08.934942 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hl47h" podStartSLOduration=2.934886155 podStartE2EDuration="2.934886155s" podCreationTimestamp="2025-06-20 19:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:37:08.571986046 +0000 UTC m=+7.380851937" watchObservedRunningTime="2025-06-20 19:37:08.934886155 +0000 UTC m=+7.743752006" Jun 20 19:37:12.737815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3823274786.mount: Deactivated successfully. 
Jun 20 19:37:15.659521 containerd[1530]: time="2025-06-20T19:37:15.659262204Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:37:15.665569 containerd[1530]: time="2025-06-20T19:37:15.665434916Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 19:37:15.666279 containerd[1530]: time="2025-06-20T19:37:15.666135176Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:37:15.671067 containerd[1530]: time="2025-06-20T19:37:15.670990285Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.241803034s" Jun 20 19:37:15.671402 containerd[1530]: time="2025-06-20T19:37:15.671354117Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 19:37:15.676428 containerd[1530]: time="2025-06-20T19:37:15.675969131Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 19:37:15.685082 containerd[1530]: time="2025-06-20T19:37:15.683893149Z" level=info msg="CreateContainer within sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:37:15.722715 containerd[1530]: time="2025-06-20T19:37:15.722649990Z" level=info msg="Container b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:37:15.734944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607059875.mount: Deactivated successfully. Jun 20 19:37:15.740355 containerd[1530]: time="2025-06-20T19:37:15.740250434Z" level=info msg="CreateContainer within sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\"" Jun 20 19:37:15.740857 containerd[1530]: time="2025-06-20T19:37:15.740830505Z" level=info msg="StartContainer for \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\"" Jun 20 19:37:15.742206 containerd[1530]: time="2025-06-20T19:37:15.742166873Z" level=info msg="connecting to shim b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012" address="unix:///run/containerd/s/ee865cc2fad01a82ae0dc460539e73f8105e9ca9ddcbb0d65e4c9bb2a4144f17" protocol=ttrpc version=3 Jun 20 19:37:15.779687 systemd[1]: Started cri-containerd-b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012.scope - libcontainer container b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012. Jun 20 19:37:15.826496 containerd[1530]: time="2025-06-20T19:37:15.826331816Z" level=info msg="StartContainer for \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\" returns successfully" Jun 20 19:37:15.835930 systemd[1]: cri-containerd-b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012.scope: Deactivated successfully. 
Jun 20 19:37:15.839354 containerd[1530]: time="2025-06-20T19:37:15.839308749Z" level=info msg="received exit event container_id:\"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\" id:\"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\" pid:3202 exited_at:{seconds:1750448235 nanos:837986719}" Jun 20 19:37:15.839428 containerd[1530]: time="2025-06-20T19:37:15.839384303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\" id:\"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\" pid:3202 exited_at:{seconds:1750448235 nanos:837986719}" Jun 20 19:37:15.864190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012-rootfs.mount: Deactivated successfully. Jun 20 19:37:17.597596 containerd[1530]: time="2025-06-20T19:37:17.597395653Z" level=info msg="CreateContainer within sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:37:17.648542 containerd[1530]: time="2025-06-20T19:37:17.645747426Z" level=info msg="Container d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:37:17.650232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2167044214.mount: Deactivated successfully. 
Jun 20 19:37:17.671833 containerd[1530]: time="2025-06-20T19:37:17.671787189Z" level=info msg="CreateContainer within sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\"" Jun 20 19:37:17.672703 containerd[1530]: time="2025-06-20T19:37:17.672663671Z" level=info msg="StartContainer for \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\"" Jun 20 19:37:17.674134 containerd[1530]: time="2025-06-20T19:37:17.674092451Z" level=info msg="connecting to shim d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f" address="unix:///run/containerd/s/ee865cc2fad01a82ae0dc460539e73f8105e9ca9ddcbb0d65e4c9bb2a4144f17" protocol=ttrpc version=3 Jun 20 19:37:17.707666 systemd[1]: Started cri-containerd-d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f.scope - libcontainer container d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f. Jun 20 19:37:17.745680 containerd[1530]: time="2025-06-20T19:37:17.745566553Z" level=info msg="StartContainer for \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\" returns successfully" Jun 20 19:37:17.763165 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:37:17.764274 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:37:17.764907 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:37:17.768414 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:37:17.771086 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:37:17.772214 systemd[1]: cri-containerd-d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f.scope: Deactivated successfully. 
Jun 20 19:37:17.773869 containerd[1530]: time="2025-06-20T19:37:17.773834001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\" id:\"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\" pid:3247 exited_at:{seconds:1750448237 nanos:772552390}" Jun 20 19:37:17.776092 containerd[1530]: time="2025-06-20T19:37:17.775912693Z" level=info msg="received exit event container_id:\"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\" id:\"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\" pid:3247 exited_at:{seconds:1750448237 nanos:772552390}" Jun 20 19:37:17.802251 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:37:18.603150 containerd[1530]: time="2025-06-20T19:37:18.603113100Z" level=info msg="CreateContainer within sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:37:18.626626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f-rootfs.mount: Deactivated successfully. Jun 20 19:37:18.639703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239832062.mount: Deactivated successfully. Jun 20 19:37:18.644542 containerd[1530]: time="2025-06-20T19:37:18.642922330Z" level=info msg="Container 6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:37:18.648076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1728589641.mount: Deactivated successfully. 
Jun 20 19:37:18.664638 containerd[1530]: time="2025-06-20T19:37:18.664582353Z" level=info msg="CreateContainer within sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\"" Jun 20 19:37:18.665642 containerd[1530]: time="2025-06-20T19:37:18.665618146Z" level=info msg="StartContainer for \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\"" Jun 20 19:37:18.669124 containerd[1530]: time="2025-06-20T19:37:18.669098114Z" level=info msg="connecting to shim 6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230" address="unix:///run/containerd/s/ee865cc2fad01a82ae0dc460539e73f8105e9ca9ddcbb0d65e4c9bb2a4144f17" protocol=ttrpc version=3 Jun 20 19:37:18.702677 systemd[1]: Started cri-containerd-6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230.scope - libcontainer container 6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230. Jun 20 19:37:18.771400 systemd[1]: cri-containerd-6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230.scope: Deactivated successfully. 
Jun 20 19:37:18.775053 containerd[1530]: time="2025-06-20T19:37:18.775012885Z" level=info msg="received exit event container_id:\"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\" id:\"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\" pid:3303 exited_at:{seconds:1750448238 nanos:774354377}"
Jun 20 19:37:18.775188 containerd[1530]: time="2025-06-20T19:37:18.775163821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\" id:\"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\" pid:3303 exited_at:{seconds:1750448238 nanos:774354377}"
Jun 20 19:37:18.776771 containerd[1530]: time="2025-06-20T19:37:18.776743405Z" level=info msg="StartContainer for \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\" returns successfully"
Jun 20 19:37:19.218251 containerd[1530]: time="2025-06-20T19:37:19.218204530Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:37:19.220525 containerd[1530]: time="2025-06-20T19:37:19.220502685Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Jun 20 19:37:19.221811 containerd[1530]: time="2025-06-20T19:37:19.221786989Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:37:19.224443 containerd[1530]: time="2025-06-20T19:37:19.224417533Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.548367357s"
Jun 20 19:37:19.224588 containerd[1530]: time="2025-06-20T19:37:19.224567528Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jun 20 19:37:19.227443 containerd[1530]: time="2025-06-20T19:37:19.227418950Z" level=info msg="CreateContainer within sandbox \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jun 20 19:37:19.242104 containerd[1530]: time="2025-06-20T19:37:19.242024204Z" level=info msg="Container 0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:37:19.261532 containerd[1530]: time="2025-06-20T19:37:19.261408084Z" level=info msg="CreateContainer within sandbox \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\""
Jun 20 19:37:19.262216 containerd[1530]: time="2025-06-20T19:37:19.262138608Z" level=info msg="StartContainer for \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\""
Jun 20 19:37:19.265821 containerd[1530]: time="2025-06-20T19:37:19.265744871Z" level=info msg="connecting to shim 0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704" address="unix:///run/containerd/s/c92a094cc235336e671b8cf299fcab854394bda41763de6debc030455c68c5e6" protocol=ttrpc version=3
Jun 20 19:37:19.291713 systemd[1]: Started cri-containerd-0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704.scope - libcontainer container 0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704.
Jun 20 19:37:19.339166 containerd[1530]: time="2025-06-20T19:37:19.339132007Z" level=info msg="StartContainer for \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" returns successfully"
Jun 20 19:37:19.603768 containerd[1530]: time="2025-06-20T19:37:19.603644281Z" level=info msg="CreateContainer within sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:37:19.620058 containerd[1530]: time="2025-06-20T19:37:19.620008369Z" level=info msg="Container de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:37:19.636310 containerd[1530]: time="2025-06-20T19:37:19.635448886Z" level=info msg="CreateContainer within sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\""
Jun 20 19:37:19.637598 containerd[1530]: time="2025-06-20T19:37:19.637232695Z" level=info msg="StartContainer for \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\""
Jun 20 19:37:19.640426 containerd[1530]: time="2025-06-20T19:37:19.640394968Z" level=info msg="connecting to shim de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3" address="unix:///run/containerd/s/ee865cc2fad01a82ae0dc460539e73f8105e9ca9ddcbb0d65e4c9bb2a4144f17" protocol=ttrpc version=3
Jun 20 19:37:19.688854 systemd[1]: Started cri-containerd-de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3.scope - libcontainer container de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3.
Jun 20 19:37:19.743388 systemd[1]: cri-containerd-de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3.scope: Deactivated successfully.
Jun 20 19:37:19.747905 containerd[1530]: time="2025-06-20T19:37:19.747547392Z" level=info msg="received exit event container_id:\"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\" id:\"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\" pid:3381 exited_at:{seconds:1750448239 nanos:746614164}" Jun 20 19:37:19.749425 containerd[1530]: time="2025-06-20T19:37:19.748959589Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\" id:\"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\" pid:3381 exited_at:{seconds:1750448239 nanos:746614164}" Jun 20 19:37:19.749425 containerd[1530]: time="2025-06-20T19:37:19.749119792Z" level=info msg="StartContainer for \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\" returns successfully" Jun 20 19:37:19.789840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3-rootfs.mount: Deactivated successfully. 
Jun 20 19:37:20.645097 containerd[1530]: time="2025-06-20T19:37:20.643772648Z" level=info msg="CreateContainer within sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:37:20.692522 containerd[1530]: time="2025-06-20T19:37:20.692315194Z" level=info msg="Container 8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:37:20.699490 kubelet[2789]: I0620 19:37:20.697405 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-w5zwz" podStartSLOduration=2.262058647 podStartE2EDuration="13.6973029s" podCreationTimestamp="2025-06-20 19:37:07 +0000 UTC" firstStartedPulling="2025-06-20 19:37:07.790159008 +0000 UTC m=+6.599024859" lastFinishedPulling="2025-06-20 19:37:19.225403271 +0000 UTC m=+18.034269112" observedRunningTime="2025-06-20 19:37:19.675659467 +0000 UTC m=+18.484525308" watchObservedRunningTime="2025-06-20 19:37:20.6973029 +0000 UTC m=+19.506168741" Jun 20 19:37:20.702541 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31046738.mount: Deactivated successfully. 
Jun 20 19:37:20.710721 containerd[1530]: time="2025-06-20T19:37:20.710622907Z" level=info msg="CreateContainer within sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\"" Jun 20 19:37:20.710721 containerd[1530]: time="2025-06-20T19:37:20.711226631Z" level=info msg="StartContainer for \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\"" Jun 20 19:37:20.712895 containerd[1530]: time="2025-06-20T19:37:20.712854395Z" level=info msg="connecting to shim 8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411" address="unix:///run/containerd/s/ee865cc2fad01a82ae0dc460539e73f8105e9ca9ddcbb0d65e4c9bb2a4144f17" protocol=ttrpc version=3 Jun 20 19:37:20.743649 systemd[1]: Started cri-containerd-8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411.scope - libcontainer container 8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411. Jun 20 19:37:20.791690 containerd[1530]: time="2025-06-20T19:37:20.791644016Z" level=info msg="StartContainer for \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" returns successfully" Jun 20 19:37:20.891680 containerd[1530]: time="2025-06-20T19:37:20.891630694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" id:\"92a489a6748381480b5ef6e6a5b29e9e6b4d2834b4ed71cd9de9701d5ddd227f\" pid:3447 exited_at:{seconds:1750448240 nanos:890680364}" Jun 20 19:37:20.909606 kubelet[2789]: I0620 19:37:20.908825 2789 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 19:37:20.970544 systemd[1]: Created slice kubepods-burstable-pod4a9e9bca_c5e1_43e8_98ea_b593ae3c5b35.slice - libcontainer container kubepods-burstable-pod4a9e9bca_c5e1_43e8_98ea_b593ae3c5b35.slice. 
Jun 20 19:37:20.981052 systemd[1]: Created slice kubepods-burstable-pod47aa8fd1_c2ba_41cc_8ce9_eed5ae8670b0.slice - libcontainer container kubepods-burstable-pod47aa8fd1_c2ba_41cc_8ce9_eed5ae8670b0.slice.
Jun 20 19:37:21.063701 kubelet[2789]: I0620 19:37:21.063651 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47aa8fd1-c2ba-41cc-8ce9-eed5ae8670b0-config-volume\") pod \"coredns-668d6bf9bc-nxzck\" (UID: \"47aa8fd1-c2ba-41cc-8ce9-eed5ae8670b0\") " pod="kube-system/coredns-668d6bf9bc-nxzck"
Jun 20 19:37:21.063701 kubelet[2789]: I0620 19:37:21.063703 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j59nz\" (UniqueName: \"kubernetes.io/projected/4a9e9bca-c5e1-43e8-98ea-b593ae3c5b35-kube-api-access-j59nz\") pod \"coredns-668d6bf9bc-jbww4\" (UID: \"4a9e9bca-c5e1-43e8-98ea-b593ae3c5b35\") " pod="kube-system/coredns-668d6bf9bc-jbww4"
Jun 20 19:37:21.063971 kubelet[2789]: I0620 19:37:21.063727 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a9e9bca-c5e1-43e8-98ea-b593ae3c5b35-config-volume\") pod \"coredns-668d6bf9bc-jbww4\" (UID: \"4a9e9bca-c5e1-43e8-98ea-b593ae3c5b35\") " pod="kube-system/coredns-668d6bf9bc-jbww4"
Jun 20 19:37:21.064520 kubelet[2789]: I0620 19:37:21.063749 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd29v\" (UniqueName: \"kubernetes.io/projected/47aa8fd1-c2ba-41cc-8ce9-eed5ae8670b0-kube-api-access-dd29v\") pod \"coredns-668d6bf9bc-nxzck\" (UID: \"47aa8fd1-c2ba-41cc-8ce9-eed5ae8670b0\") " pod="kube-system/coredns-668d6bf9bc-nxzck"
Jun 20 19:37:21.277087 containerd[1530]: time="2025-06-20T19:37:21.276803613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jbww4,Uid:4a9e9bca-c5e1-43e8-98ea-b593ae3c5b35,Namespace:kube-system,Attempt:0,}"
Jun 20 19:37:21.286742 containerd[1530]: time="2025-06-20T19:37:21.286685020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nxzck,Uid:47aa8fd1-c2ba-41cc-8ce9-eed5ae8670b0,Namespace:kube-system,Attempt:0,}"
Jun 20 19:37:21.733616 kubelet[2789]: I0620 19:37:21.733423 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2z5kj" podStartSLOduration=7.486284495 podStartE2EDuration="15.733391878s" podCreationTimestamp="2025-06-20 19:37:06 +0000 UTC" firstStartedPulling="2025-06-20 19:37:07.427819518 +0000 UTC m=+6.236685359" lastFinishedPulling="2025-06-20 19:37:15.674926851 +0000 UTC m=+14.483792742" observedRunningTime="2025-06-20 19:37:21.731899091 +0000 UTC m=+20.540764963" watchObservedRunningTime="2025-06-20 19:37:21.733391878 +0000 UTC m=+20.542257719"
Jun 20 19:37:23.089172 systemd-networkd[1445]: cilium_host: Link UP
Jun 20 19:37:23.089634 systemd-networkd[1445]: cilium_net: Link UP
Jun 20 19:37:23.089989 systemd-networkd[1445]: cilium_net: Gained carrier
Jun 20 19:37:23.090248 systemd-networkd[1445]: cilium_host: Gained carrier
Jun 20 19:37:23.224237 systemd-networkd[1445]: cilium_vxlan: Link UP
Jun 20 19:37:23.224249 systemd-networkd[1445]: cilium_vxlan: Gained carrier
Jun 20 19:37:23.596558 kernel: NET: Registered PF_ALG protocol family
Jun 20 19:37:23.679977 systemd-networkd[1445]: cilium_net: Gained IPv6LL
Jun 20 19:37:23.999613 systemd-networkd[1445]: cilium_host: Gained IPv6LL
Jun 20 19:37:24.484091 systemd-networkd[1445]: lxc_health: Link UP
Jun 20 19:37:24.496802 systemd-networkd[1445]: lxc_health: Gained carrier
Jun 20 19:37:24.639688 systemd-networkd[1445]: cilium_vxlan: Gained IPv6LL
Jun 20 19:37:24.832287 systemd-networkd[1445]: lxcaba825e60a71: Link UP
Jun 20 19:37:24.843505 kernel: eth0: renamed from tmp52050
Jun 20 19:37:24.843716 systemd-networkd[1445]: lxcaba825e60a71: Gained carrier
Jun 20 19:37:24.876496 kernel: eth0: renamed from tmpbcfbb
Jun 20 19:37:24.874552 systemd-networkd[1445]: lxcf2bf5ace18df: Link UP
Jun 20 19:37:24.883637 systemd-networkd[1445]: lxcf2bf5ace18df: Gained carrier
Jun 20 19:37:26.239692 systemd-networkd[1445]: lxc_health: Gained IPv6LL
Jun 20 19:37:26.623667 systemd-networkd[1445]: lxcf2bf5ace18df: Gained IPv6LL
Jun 20 19:37:26.879819 systemd-networkd[1445]: lxcaba825e60a71: Gained IPv6LL
Jun 20 19:37:29.957996 containerd[1530]: time="2025-06-20T19:37:29.957649339Z" level=info msg="connecting to shim 520509fcd0f754fb40fc13438d152cf37d47cebe9b92e78c6bc65b9d9df6aa68" address="unix:///run/containerd/s/b62ce9143b59d3c54ab827fa3a52320ccfc2d19591d3f4e965b1fea20b8f56d7" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:37:30.033922 systemd[1]: Started cri-containerd-520509fcd0f754fb40fc13438d152cf37d47cebe9b92e78c6bc65b9d9df6aa68.scope - libcontainer container 520509fcd0f754fb40fc13438d152cf37d47cebe9b92e78c6bc65b9d9df6aa68.
Jun 20 19:37:30.042797 containerd[1530]: time="2025-06-20T19:37:30.042700106Z" level=info msg="connecting to shim bcfbbf3847698f64d99bd0a61a7f5c91d0993b1c798ce2a73daf709f30ad1897" address="unix:///run/containerd/s/708661686f782d31ec5b13231c05b44eba354a67deea29a80f60c03ef32c76f2" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:37:30.080728 systemd[1]: Started cri-containerd-bcfbbf3847698f64d99bd0a61a7f5c91d0993b1c798ce2a73daf709f30ad1897.scope - libcontainer container bcfbbf3847698f64d99bd0a61a7f5c91d0993b1c798ce2a73daf709f30ad1897.
Jun 20 19:37:30.171665 containerd[1530]: time="2025-06-20T19:37:30.171606645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jbww4,Uid:4a9e9bca-c5e1-43e8-98ea-b593ae3c5b35,Namespace:kube-system,Attempt:0,} returns sandbox id \"520509fcd0f754fb40fc13438d152cf37d47cebe9b92e78c6bc65b9d9df6aa68\""
Jun 20 19:37:30.181464 containerd[1530]: time="2025-06-20T19:37:30.181407094Z" level=info msg="CreateContainer within sandbox \"520509fcd0f754fb40fc13438d152cf37d47cebe9b92e78c6bc65b9d9df6aa68\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 20 19:37:30.183917 containerd[1530]: time="2025-06-20T19:37:30.183852099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nxzck,Uid:47aa8fd1-c2ba-41cc-8ce9-eed5ae8670b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcfbbf3847698f64d99bd0a61a7f5c91d0993b1c798ce2a73daf709f30ad1897\""
Jun 20 19:37:30.192325 containerd[1530]: time="2025-06-20T19:37:30.190612319Z" level=info msg="CreateContainer within sandbox \"bcfbbf3847698f64d99bd0a61a7f5c91d0993b1c798ce2a73daf709f30ad1897\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jun 20 19:37:30.207771 containerd[1530]: time="2025-06-20T19:37:30.207715793Z" level=info msg="Container 94ffcdb2aee46fa41ed4eeac85830ae67fd3276184bf853d6023a8628e492ec6: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:37:30.215766 containerd[1530]: time="2025-06-20T19:37:30.215664276Z" level=info msg="Container bc5116a82b12d4a2f9de008fde329692054166ec9cbcf8291e8ff251d869a036: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:37:30.227324 containerd[1530]: time="2025-06-20T19:37:30.227260665Z" level=info msg="CreateContainer within sandbox \"520509fcd0f754fb40fc13438d152cf37d47cebe9b92e78c6bc65b9d9df6aa68\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94ffcdb2aee46fa41ed4eeac85830ae67fd3276184bf853d6023a8628e492ec6\""
Jun 20 19:37:30.229381 containerd[1530]: time="2025-06-20T19:37:30.229338867Z" level=info msg="StartContainer for \"94ffcdb2aee46fa41ed4eeac85830ae67fd3276184bf853d6023a8628e492ec6\""
Jun 20 19:37:30.233395 containerd[1530]: time="2025-06-20T19:37:30.233333007Z" level=info msg="CreateContainer within sandbox \"bcfbbf3847698f64d99bd0a61a7f5c91d0993b1c798ce2a73daf709f30ad1897\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc5116a82b12d4a2f9de008fde329692054166ec9cbcf8291e8ff251d869a036\""
Jun 20 19:37:30.234534 containerd[1530]: time="2025-06-20T19:37:30.234452239Z" level=info msg="connecting to shim 94ffcdb2aee46fa41ed4eeac85830ae67fd3276184bf853d6023a8628e492ec6" address="unix:///run/containerd/s/b62ce9143b59d3c54ab827fa3a52320ccfc2d19591d3f4e965b1fea20b8f56d7" protocol=ttrpc version=3
Jun 20 19:37:30.234866 containerd[1530]: time="2025-06-20T19:37:30.234834291Z" level=info msg="StartContainer for \"bc5116a82b12d4a2f9de008fde329692054166ec9cbcf8291e8ff251d869a036\""
Jun 20 19:37:30.236505 containerd[1530]: time="2025-06-20T19:37:30.236443628Z" level=info msg="connecting to shim bc5116a82b12d4a2f9de008fde329692054166ec9cbcf8291e8ff251d869a036" address="unix:///run/containerd/s/708661686f782d31ec5b13231c05b44eba354a67deea29a80f60c03ef32c76f2" protocol=ttrpc version=3
Jun 20 19:37:30.261812 systemd[1]: Started cri-containerd-94ffcdb2aee46fa41ed4eeac85830ae67fd3276184bf853d6023a8628e492ec6.scope - libcontainer container 94ffcdb2aee46fa41ed4eeac85830ae67fd3276184bf853d6023a8628e492ec6.
Jun 20 19:37:30.271802 systemd[1]: Started cri-containerd-bc5116a82b12d4a2f9de008fde329692054166ec9cbcf8291e8ff251d869a036.scope - libcontainer container bc5116a82b12d4a2f9de008fde329692054166ec9cbcf8291e8ff251d869a036.
Jun 20 19:37:30.337125 containerd[1530]: time="2025-06-20T19:37:30.337059616Z" level=info msg="StartContainer for \"bc5116a82b12d4a2f9de008fde329692054166ec9cbcf8291e8ff251d869a036\" returns successfully" Jun 20 19:37:30.337912 containerd[1530]: time="2025-06-20T19:37:30.337868343Z" level=info msg="StartContainer for \"94ffcdb2aee46fa41ed4eeac85830ae67fd3276184bf853d6023a8628e492ec6\" returns successfully" Jun 20 19:37:30.777538 kubelet[2789]: I0620 19:37:30.774967 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nxzck" podStartSLOduration=23.774651675 podStartE2EDuration="23.774651675s" podCreationTimestamp="2025-06-20 19:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:37:30.758016214 +0000 UTC m=+29.566882156" watchObservedRunningTime="2025-06-20 19:37:30.774651675 +0000 UTC m=+29.583517566" Jun 20 19:37:30.838136 kubelet[2789]: I0620 19:37:30.837857 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jbww4" podStartSLOduration=23.837829897 podStartE2EDuration="23.837829897s" podCreationTimestamp="2025-06-20 19:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:37:30.834782906 +0000 UTC m=+29.643648757" watchObservedRunningTime="2025-06-20 19:37:30.837829897 +0000 UTC m=+29.646695738" Jun 20 19:37:30.939200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2910225148.mount: Deactivated successfully. Jun 20 19:38:23.150651 systemd[1]: Started sshd@9-172.24.4.217:22-172.24.4.1:45304.service - OpenSSH per-connection server daemon (172.24.4.1:45304). 
Jun 20 19:38:24.288764 sshd[4107]: Accepted publickey for core from 172.24.4.1 port 45304 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:38:24.293701 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:38:24.325905 systemd-logind[1498]: New session 12 of user core.
Jun 20 19:38:24.340107 systemd[1]: Started session-12.scope - Session 12 of User core.
Jun 20 19:38:25.228193 sshd[4109]: Connection closed by 172.24.4.1 port 45304
Jun 20 19:38:25.229992 sshd-session[4107]: pam_unix(sshd:session): session closed for user core
Jun 20 19:38:25.238089 systemd[1]: sshd@9-172.24.4.217:22-172.24.4.1:45304.service: Deactivated successfully.
Jun 20 19:38:25.246897 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:38:25.250605 systemd-logind[1498]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:38:25.255430 systemd-logind[1498]: Removed session 12.
Jun 20 19:38:30.257313 systemd[1]: Started sshd@10-172.24.4.217:22-172.24.4.1:39768.service - OpenSSH per-connection server daemon (172.24.4.1:39768).
Jun 20 19:38:31.523753 sshd[4123]: Accepted publickey for core from 172.24.4.1 port 39768 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:38:31.529176 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:38:31.550885 systemd-logind[1498]: New session 13 of user core.
Jun 20 19:38:31.554834 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:38:32.350712 sshd[4125]: Connection closed by 172.24.4.1 port 39768
Jun 20 19:38:32.350901 sshd-session[4123]: pam_unix(sshd:session): session closed for user core
Jun 20 19:38:32.365673 systemd[1]: sshd@10-172.24.4.217:22-172.24.4.1:39768.service: Deactivated successfully.
Jun 20 19:38:32.376411 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:38:32.383964 systemd-logind[1498]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:38:32.392442 systemd-logind[1498]: Removed session 13.
Jun 20 19:38:33.870338 update_engine[1503]: I20250620 19:38:33.869972 1503 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jun 20 19:38:33.870338 update_engine[1503]: I20250620 19:38:33.870206 1503 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jun 20 19:38:33.873143 update_engine[1503]: I20250620 19:38:33.871280 1503 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jun 20 19:38:33.875172 update_engine[1503]: I20250620 19:38:33.874557 1503 omaha_request_params.cc:62] Current group set to beta
Jun 20 19:38:33.875172 update_engine[1503]: I20250620 19:38:33.875140 1503 update_attempter.cc:499] Already updated boot flags. Skipping.
Jun 20 19:38:33.875172 update_engine[1503]: I20250620 19:38:33.875169 1503 update_attempter.cc:643] Scheduling an action processor start.
Jun 20 19:38:33.875673 update_engine[1503]: I20250620 19:38:33.875226 1503 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jun 20 19:38:33.875673 update_engine[1503]: I20250620 19:38:33.875447 1503 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jun 20 19:38:33.877557 update_engine[1503]: I20250620 19:38:33.876894 1503 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jun 20 19:38:33.877557 update_engine[1503]: I20250620 19:38:33.876937 1503 omaha_request_action.cc:272] Request:
Jun 20 19:38:33.877557 update_engine[1503]:
Jun 20 19:38:33.877557 update_engine[1503]:
Jun 20 19:38:33.877557 update_engine[1503]:
Jun 20 19:38:33.877557 update_engine[1503]:
Jun 20 19:38:33.877557 update_engine[1503]:
Jun 20 19:38:33.877557 update_engine[1503]:
Jun 20 19:38:33.877557 update_engine[1503]:
Jun 20 19:38:33.877557 update_engine[1503]:
Jun 20 19:38:33.877557 update_engine[1503]: I20250620 19:38:33.876960 1503 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 19:38:33.882837 update_engine[1503]: I20250620 19:38:33.882774 1503 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 19:38:33.883002 locksmithd[1543]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jun 20 19:38:33.884121 update_engine[1503]: I20250620 19:38:33.883989 1503 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 19:38:33.891911 update_engine[1503]: E20250620 19:38:33.891809 1503 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 19:38:33.892144 update_engine[1503]: I20250620 19:38:33.892019 1503 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jun 20 19:38:37.396086 systemd[1]: Started sshd@11-172.24.4.217:22-172.24.4.1:54914.service - OpenSSH per-connection server daemon (172.24.4.1:54914).
Jun 20 19:38:38.600277 sshd[4138]: Accepted publickey for core from 172.24.4.1 port 54914 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:38:38.601561 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:38:38.612755 systemd-logind[1498]: New session 14 of user core.
Jun 20 19:38:38.617646 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:38:39.625144 sshd[4143]: Connection closed by 172.24.4.1 port 54914
Jun 20 19:38:39.626084 sshd-session[4138]: pam_unix(sshd:session): session closed for user core
Jun 20 19:38:39.644882 systemd[1]: sshd@11-172.24.4.217:22-172.24.4.1:54914.service: Deactivated successfully.
Jun 20 19:38:39.652661 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:38:39.655541 systemd-logind[1498]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:38:39.663330 systemd[1]: Started sshd@12-172.24.4.217:22-172.24.4.1:54924.service - OpenSSH per-connection server daemon (172.24.4.1:54924).
Jun 20 19:38:39.665974 systemd-logind[1498]: Removed session 14. Jun 20 19:38:41.407506 sshd[4156]: Accepted publickey for core from 172.24.4.1 port 54924 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:38:41.411571 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:38:41.425051 systemd-logind[1498]: New session 15 of user core. Jun 20 19:38:41.444121 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 19:38:42.143305 sshd[4158]: Connection closed by 172.24.4.1 port 54924 Jun 20 19:38:42.144089 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Jun 20 19:38:42.153517 systemd[1]: sshd@12-172.24.4.217:22-172.24.4.1:54924.service: Deactivated successfully. Jun 20 19:38:42.156665 systemd[1]: session-15.scope: Deactivated successfully. Jun 20 19:38:42.161559 systemd-logind[1498]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:38:42.164300 systemd[1]: Started sshd@13-172.24.4.217:22-172.24.4.1:54932.service - OpenSSH per-connection server daemon (172.24.4.1:54932). Jun 20 19:38:42.169522 systemd-logind[1498]: Removed session 15. Jun 20 19:38:43.289346 sshd[4167]: Accepted publickey for core from 172.24.4.1 port 54932 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:38:43.293632 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:38:43.308997 systemd-logind[1498]: New session 16 of user core. Jun 20 19:38:43.325897 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 20 19:38:43.869999 update_engine[1503]: I20250620 19:38:43.869444 1503 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:38:43.871967 update_engine[1503]: I20250620 19:38:43.871351 1503 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:38:43.872551 update_engine[1503]: I20250620 19:38:43.872388 1503 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 19:38:43.877862 update_engine[1503]: E20250620 19:38:43.877744 1503 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:38:43.878011 update_engine[1503]: I20250620 19:38:43.877928 1503 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 20 19:38:43.945725 sshd[4169]: Connection closed by 172.24.4.1 port 54932 Jun 20 19:38:43.946419 sshd-session[4167]: pam_unix(sshd:session): session closed for user core Jun 20 19:38:43.955804 systemd[1]: sshd@13-172.24.4.217:22-172.24.4.1:54932.service: Deactivated successfully. Jun 20 19:38:43.961213 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:38:43.964521 systemd-logind[1498]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:38:43.968728 systemd-logind[1498]: Removed session 16. Jun 20 19:38:48.975675 systemd[1]: Started sshd@14-172.24.4.217:22-172.24.4.1:52834.service - OpenSSH per-connection server daemon (172.24.4.1:52834). Jun 20 19:38:50.155933 sshd[4181]: Accepted publickey for core from 172.24.4.1 port 52834 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:38:50.159329 sshd-session[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:38:50.173514 systemd-logind[1498]: New session 17 of user core. Jun 20 19:38:50.189797 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jun 20 19:38:51.021701 sshd[4183]: Connection closed by 172.24.4.1 port 52834 Jun 20 19:38:51.023065 sshd-session[4181]: pam_unix(sshd:session): session closed for user core Jun 20 19:38:51.033146 systemd[1]: sshd@14-172.24.4.217:22-172.24.4.1:52834.service: Deactivated successfully. Jun 20 19:38:51.040059 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:38:51.043081 systemd-logind[1498]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:38:51.048552 systemd-logind[1498]: Removed session 17. Jun 20 19:38:53.872593 update_engine[1503]: I20250620 19:38:53.871410 1503 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 19:38:53.872593 update_engine[1503]: I20250620 19:38:53.871979 1503 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 19:38:53.873161 update_engine[1503]: I20250620 19:38:53.872717 1503 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 19:38:53.878334 update_engine[1503]: E20250620 19:38:53.878232 1503 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 19:38:53.878410 update_engine[1503]: I20250620 19:38:53.878360 1503 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 20 19:38:56.049081 systemd[1]: Started sshd@15-172.24.4.217:22-172.24.4.1:43832.service - OpenSSH per-connection server daemon (172.24.4.1:43832). Jun 20 19:38:57.409926 sshd[4195]: Accepted publickey for core from 172.24.4.1 port 43832 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:38:57.412205 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:38:57.424645 systemd-logind[1498]: New session 18 of user core. Jun 20 19:38:57.428711 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 20 19:38:58.023518 sshd[4197]: Connection closed by 172.24.4.1 port 43832 Jun 20 19:38:58.025658 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Jun 20 19:38:58.041946 systemd[1]: sshd@15-172.24.4.217:22-172.24.4.1:43832.service: Deactivated successfully. Jun 20 19:38:58.047414 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:38:58.052611 systemd-logind[1498]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:38:58.060935 systemd[1]: Started sshd@16-172.24.4.217:22-172.24.4.1:43836.service - OpenSSH per-connection server daemon (172.24.4.1:43836). Jun 20 19:38:58.064458 systemd-logind[1498]: Removed session 18. Jun 20 19:38:59.349292 sshd[4208]: Accepted publickey for core from 172.24.4.1 port 43836 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY Jun 20 19:38:59.353531 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:38:59.374741 systemd-logind[1498]: New session 19 of user core. Jun 20 19:38:59.380829 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:39:00.227522 sshd[4210]: Connection closed by 172.24.4.1 port 43836 Jun 20 19:39:00.225204 sshd-session[4208]: pam_unix(sshd:session): session closed for user core Jun 20 19:39:00.246285 systemd[1]: sshd@16-172.24.4.217:22-172.24.4.1:43836.service: Deactivated successfully. Jun 20 19:39:00.252109 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:39:00.255634 systemd-logind[1498]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:39:00.263737 systemd[1]: Started sshd@17-172.24.4.217:22-172.24.4.1:43838.service - OpenSSH per-connection server daemon (172.24.4.1:43838). Jun 20 19:39:00.267039 systemd-logind[1498]: Removed session 19. 
Jun 20 19:39:01.509340 sshd[4220]: Accepted publickey for core from 172.24.4.1 port 43838 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:39:01.512437 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:01.527913 systemd-logind[1498]: New session 20 of user core.
Jun 20 19:39:01.533207 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:39:03.681560 sshd[4224]: Connection closed by 172.24.4.1 port 43838
Jun 20 19:39:03.682982 sshd-session[4220]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:03.703737 systemd[1]: sshd@17-172.24.4.217:22-172.24.4.1:43838.service: Deactivated successfully.
Jun 20 19:39:03.710173 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:39:03.717107 systemd-logind[1498]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:39:03.723726 systemd[1]: Started sshd@18-172.24.4.217:22-172.24.4.1:58868.service - OpenSSH per-connection server daemon (172.24.4.1:58868).
Jun 20 19:39:03.728240 systemd-logind[1498]: Removed session 20.
Jun 20 19:39:03.869247 update_engine[1503]: I20250620 19:39:03.868818 1503 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 19:39:03.870784 update_engine[1503]: I20250620 19:39:03.870687 1503 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 19:39:03.872000 update_engine[1503]: I20250620 19:39:03.871894 1503 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 19:39:03.877458 update_engine[1503]: E20250620 19:39:03.877296 1503 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 19:39:03.877458 update_engine[1503]: I20250620 19:39:03.877429 1503 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jun 20 19:39:03.877875 update_engine[1503]: I20250620 19:39:03.877547 1503 omaha_request_action.cc:617] Omaha request response:
Jun 20 19:39:03.877875 update_engine[1503]: E20250620 19:39:03.877836 1503 omaha_request_action.cc:636] Omaha request network transfer failed.
Jun 20 19:39:03.878424 update_engine[1503]: I20250620 19:39:03.878345 1503 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jun 20 19:39:03.878424 update_engine[1503]: I20250620 19:39:03.878376 1503 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 20 19:39:03.878424 update_engine[1503]: I20250620 19:39:03.878396 1503 update_attempter.cc:306] Processing Done.
Jun 20 19:39:03.878711 update_engine[1503]: E20250620 19:39:03.878540 1503 update_attempter.cc:619] Update failed.
Jun 20 19:39:03.878711 update_engine[1503]: I20250620 19:39:03.878575 1503 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jun 20 19:39:03.878711 update_engine[1503]: I20250620 19:39:03.878587 1503 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jun 20 19:39:03.878711 update_engine[1503]: I20250620 19:39:03.878598 1503 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jun 20 19:39:03.879077 update_engine[1503]: I20250620 19:39:03.878924 1503 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jun 20 19:39:03.879077 update_engine[1503]: I20250620 19:39:03.879063 1503 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jun 20 19:39:03.879215 update_engine[1503]: I20250620 19:39:03.879079 1503 omaha_request_action.cc:272] Request:
Jun 20 19:39:03.879215 update_engine[1503]:
Jun 20 19:39:03.879215 update_engine[1503]:
Jun 20 19:39:03.879215 update_engine[1503]:
Jun 20 19:39:03.879215 update_engine[1503]:
Jun 20 19:39:03.879215 update_engine[1503]:
Jun 20 19:39:03.879215 update_engine[1503]:
Jun 20 19:39:03.879215 update_engine[1503]: I20250620 19:39:03.879093 1503 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jun 20 19:39:03.881064 update_engine[1503]: I20250620 19:39:03.879386 1503 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jun 20 19:39:03.881064 update_engine[1503]: I20250620 19:39:03.879892 1503 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jun 20 19:39:03.884435 locksmithd[1543]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jun 20 19:39:03.885879 update_engine[1503]: E20250620 19:39:03.885766 1503 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jun 20 19:39:03.885879 update_engine[1503]: I20250620 19:39:03.885869 1503 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jun 20 19:39:03.886062 update_engine[1503]: I20250620 19:39:03.885887 1503 omaha_request_action.cc:617] Omaha request response:
Jun 20 19:39:03.886062 update_engine[1503]: I20250620 19:39:03.885902 1503 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 20 19:39:03.886062 update_engine[1503]: I20250620 19:39:03.885914 1503 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jun 20 19:39:03.886062 update_engine[1503]: I20250620 19:39:03.885926 1503 update_attempter.cc:306] Processing Done.
Jun 20 19:39:03.886062 update_engine[1503]: I20250620 19:39:03.885938 1503 update_attempter.cc:310] Error event sent.
Jun 20 19:39:03.886062 update_engine[1503]: I20250620 19:39:03.885982 1503 update_check_scheduler.cc:74] Next update check in 44m43s
Jun 20 19:39:03.887239 locksmithd[1543]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jun 20 19:39:04.973580 sshd[4241]: Accepted publickey for core from 172.24.4.1 port 58868 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:39:04.976970 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:04.991096 systemd-logind[1498]: New session 21 of user core.
Jun 20 19:39:05.007597 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:39:06.008505 sshd[4243]: Connection closed by 172.24.4.1 port 58868
Jun 20 19:39:06.009348 sshd-session[4241]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:06.030674 systemd[1]: sshd@18-172.24.4.217:22-172.24.4.1:58868.service: Deactivated successfully.
Jun 20 19:39:06.036644 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:39:06.039634 systemd-logind[1498]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:39:06.048416 systemd[1]: Started sshd@19-172.24.4.217:22-172.24.4.1:58874.service - OpenSSH per-connection server daemon (172.24.4.1:58874).
Jun 20 19:39:06.053186 systemd-logind[1498]: Removed session 21.
Jun 20 19:39:07.245753 sshd[4253]: Accepted publickey for core from 172.24.4.1 port 58874 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:39:07.247537 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:07.262732 systemd-logind[1498]: New session 22 of user core.
Jun 20 19:39:07.270903 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:39:08.034268 sshd[4255]: Connection closed by 172.24.4.1 port 58874
Jun 20 19:39:08.035542 sshd-session[4253]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:08.043343 systemd[1]: sshd@19-172.24.4.217:22-172.24.4.1:58874.service: Deactivated successfully.
Jun 20 19:39:08.048710 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:39:08.052076 systemd-logind[1498]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:39:08.057121 systemd-logind[1498]: Removed session 22.
Jun 20 19:39:13.063729 systemd[1]: Started sshd@20-172.24.4.217:22-172.24.4.1:58876.service - OpenSSH per-connection server daemon (172.24.4.1:58876).
Jun 20 19:39:14.231356 sshd[4271]: Accepted publickey for core from 172.24.4.1 port 58876 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:39:14.235789 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:14.250808 systemd-logind[1498]: New session 23 of user core.
Jun 20 19:39:14.274868 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:39:15.095545 sshd[4273]: Connection closed by 172.24.4.1 port 58876
Jun 20 19:39:15.097972 sshd-session[4271]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:15.108151 systemd[1]: sshd@20-172.24.4.217:22-172.24.4.1:58876.service: Deactivated successfully.
Jun 20 19:39:15.116338 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:39:15.120538 systemd-logind[1498]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:39:15.124535 systemd-logind[1498]: Removed session 23.
Jun 20 19:39:20.123392 systemd[1]: Started sshd@21-172.24.4.217:22-172.24.4.1:56760.service - OpenSSH per-connection server daemon (172.24.4.1:56760).
Jun 20 19:39:21.296595 sshd[4285]: Accepted publickey for core from 172.24.4.1 port 56760 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:39:21.300169 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:21.342387 systemd-logind[1498]: New session 24 of user core.
Jun 20 19:39:21.354269 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 19:39:22.030620 sshd[4287]: Connection closed by 172.24.4.1 port 56760
Jun 20 19:39:22.032571 sshd-session[4285]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:22.047015 systemd[1]: sshd@21-172.24.4.217:22-172.24.4.1:56760.service: Deactivated successfully.
Jun 20 19:39:22.055102 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 19:39:22.057322 systemd-logind[1498]: Session 24 logged out. Waiting for processes to exit.
Jun 20 19:39:22.061114 systemd-logind[1498]: Removed session 24.
Jun 20 19:39:27.065026 systemd[1]: Started sshd@22-172.24.4.217:22-172.24.4.1:34518.service - OpenSSH per-connection server daemon (172.24.4.1:34518).
Jun 20 19:39:28.230140 sshd[4298]: Accepted publickey for core from 172.24.4.1 port 34518 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:39:28.234326 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:28.257890 systemd-logind[1498]: New session 25 of user core.
Jun 20 19:39:28.273361 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 20 19:39:29.096226 sshd[4300]: Connection closed by 172.24.4.1 port 34518
Jun 20 19:39:29.095982 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:29.114606 systemd[1]: sshd@22-172.24.4.217:22-172.24.4.1:34518.service: Deactivated successfully.
Jun 20 19:39:29.121378 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 19:39:29.124203 systemd-logind[1498]: Session 25 logged out. Waiting for processes to exit.
Jun 20 19:39:29.131866 systemd[1]: Started sshd@23-172.24.4.217:22-172.24.4.1:34532.service - OpenSSH per-connection server daemon (172.24.4.1:34532).
Jun 20 19:39:29.134871 systemd-logind[1498]: Removed session 25.
Jun 20 19:39:30.338751 sshd[4312]: Accepted publickey for core from 172.24.4.1 port 34532 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:39:30.340244 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:30.359289 systemd-logind[1498]: New session 26 of user core.
Jun 20 19:39:30.378891 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 20 19:39:32.686517 containerd[1530]: time="2025-06-20T19:39:32.685867857Z" level=info msg="StopContainer for \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" with timeout 30 (s)"
Jun 20 19:39:32.688848 containerd[1530]: time="2025-06-20T19:39:32.688805517Z" level=info msg="Stop container \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" with signal terminated"
Jun 20 19:39:32.731053 systemd[1]: cri-containerd-0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704.scope: Deactivated successfully.
Jun 20 19:39:32.737147 containerd[1530]: time="2025-06-20T19:39:32.737085764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" id:\"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" pid:3347 exited_at:{seconds:1750448372 nanos:735702119}"
Jun 20 19:39:32.737426 containerd[1530]: time="2025-06-20T19:39:32.737394796Z" level=info msg="received exit event container_id:\"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" id:\"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" pid:3347 exited_at:{seconds:1750448372 nanos:735702119}"
Jun 20 19:39:32.751043 containerd[1530]: time="2025-06-20T19:39:32.750984698Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:39:32.758662 containerd[1530]: time="2025-06-20T19:39:32.758623148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" id:\"50a534b4c6b5d2d44d2b7512d6d1228215ec1904bca5e3202201fd246316ba75\" pid:4335 exited_at:{seconds:1750448372 nanos:757321267}"
Jun 20 19:39:32.765197 containerd[1530]: time="2025-06-20T19:39:32.765160624Z" level=info msg="StopContainer for \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" with timeout 2 (s)"
Jun 20 19:39:32.765946 containerd[1530]: time="2025-06-20T19:39:32.765909955Z" level=info msg="Stop container \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" with signal terminated"
Jun 20 19:39:32.780606 systemd-networkd[1445]: lxc_health: Link DOWN
Jun 20 19:39:32.780624 systemd-networkd[1445]: lxc_health: Lost carrier
Jun 20 19:39:32.802907 systemd[1]: cri-containerd-8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411.scope: Deactivated successfully.
Jun 20 19:39:32.804213 systemd[1]: cri-containerd-8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411.scope: Consumed 9.904s CPU time, 125.9M memory peak, 144K read from disk, 13.3M written to disk.
Jun 20 19:39:32.806781 containerd[1530]: time="2025-06-20T19:39:32.806665617Z" level=info msg="received exit event container_id:\"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" id:\"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" pid:3416 exited_at:{seconds:1750448372 nanos:805973123}"
Jun 20 19:39:32.807348 containerd[1530]: time="2025-06-20T19:39:32.806675105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" id:\"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" pid:3416 exited_at:{seconds:1750448372 nanos:805973123}"
Jun 20 19:39:32.812822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704-rootfs.mount: Deactivated successfully.
Jun 20 19:39:32.839331 containerd[1530]: time="2025-06-20T19:39:32.839288870Z" level=info msg="StopContainer for \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" returns successfully"
Jun 20 19:39:32.841211 containerd[1530]: time="2025-06-20T19:39:32.841175621Z" level=info msg="StopPodSandbox for \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\""
Jun 20 19:39:32.841365 containerd[1530]: time="2025-06-20T19:39:32.841335272Z" level=info msg="Container to stop \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:39:32.845733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411-rootfs.mount: Deactivated successfully.
Jun 20 19:39:32.857699 systemd[1]: cri-containerd-ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce.scope: Deactivated successfully.
Jun 20 19:39:32.864129 containerd[1530]: time="2025-06-20T19:39:32.863969319Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" id:\"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" pid:2986 exit_status:137 exited_at:{seconds:1750448372 nanos:863256447}"
Jun 20 19:39:32.866188 containerd[1530]: time="2025-06-20T19:39:32.866159463Z" level=info msg="StopContainer for \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" returns successfully"
Jun 20 19:39:32.867294 containerd[1530]: time="2025-06-20T19:39:32.867039920Z" level=info msg="StopPodSandbox for \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\""
Jun 20 19:39:32.867294 containerd[1530]: time="2025-06-20T19:39:32.867139938Z" level=info msg="Container to stop \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:39:32.867294 containerd[1530]: time="2025-06-20T19:39:32.867156841Z" level=info msg="Container to stop \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:39:32.867294 containerd[1530]: time="2025-06-20T19:39:32.867173963Z" level=info msg="Container to stop \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:39:32.867294 containerd[1530]: time="2025-06-20T19:39:32.867185254Z" level=info msg="Container to stop \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:39:32.867294 containerd[1530]: time="2025-06-20T19:39:32.867196024Z" level=info msg="Container to stop \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:39:32.885656 systemd[1]: cri-containerd-1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96.scope: Deactivated successfully.
Jun 20 19:39:32.924367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce-rootfs.mount: Deactivated successfully.
Jun 20 19:39:32.933895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96-rootfs.mount: Deactivated successfully.
Jun 20 19:39:32.949201 containerd[1530]: time="2025-06-20T19:39:32.949071162Z" level=info msg="shim disconnected" id=ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce namespace=k8s.io
Jun 20 19:39:32.951098 containerd[1530]: time="2025-06-20T19:39:32.950595782Z" level=warning msg="cleaning up after shim disconnected" id=ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce namespace=k8s.io
Jun 20 19:39:32.951098 containerd[1530]: time="2025-06-20T19:39:32.950628343Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:39:32.951970 containerd[1530]: time="2025-06-20T19:39:32.951922469Z" level=info msg="shim disconnected" id=1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96 namespace=k8s.io
Jun 20 19:39:32.952092 containerd[1530]: time="2025-06-20T19:39:32.951981120Z" level=warning msg="cleaning up after shim disconnected" id=1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96 namespace=k8s.io
Jun 20 19:39:32.952092 containerd[1530]: time="2025-06-20T19:39:32.951994876Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:39:32.975495 containerd[1530]: time="2025-06-20T19:39:32.972889390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" id:\"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" pid:2895 exit_status:137 exited_at:{seconds:1750448372 nanos:887015243}"
Jun 20 19:39:32.975231 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce-shm.mount: Deactivated successfully.
Jun 20 19:39:32.975929 containerd[1530]: time="2025-06-20T19:39:32.975873698Z" level=info msg="TearDown network for sandbox \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" successfully"
Jun 20 19:39:32.975986 containerd[1530]: time="2025-06-20T19:39:32.975934391Z" level=info msg="StopPodSandbox for \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" returns successfully"
Jun 20 19:39:32.976842 containerd[1530]: time="2025-06-20T19:39:32.976450453Z" level=info msg="received exit event sandbox_id:\"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" exit_status:137 exited_at:{seconds:1750448372 nanos:863256447}"
Jun 20 19:39:32.978547 containerd[1530]: time="2025-06-20T19:39:32.978503828Z" level=info msg="received exit event sandbox_id:\"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" exit_status:137 exited_at:{seconds:1750448372 nanos:887015243}"
Jun 20 19:39:32.979666 containerd[1530]: time="2025-06-20T19:39:32.979634707Z" level=info msg="TearDown network for sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" successfully"
Jun 20 19:39:32.979774 containerd[1530]: time="2025-06-20T19:39:32.979756897Z" level=info msg="StopPodSandbox for \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" returns successfully"
Jun 20 19:39:33.085948 kubelet[2789]: I0620 19:39:33.085802 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg4mj\" (UniqueName: \"kubernetes.io/projected/65e9be8b-9429-42bf-b704-bd8e99a88c5e-kube-api-access-cg4mj\") pod \"65e9be8b-9429-42bf-b704-bd8e99a88c5e\" (UID: \"65e9be8b-9429-42bf-b704-bd8e99a88c5e\") "
Jun 20 19:39:33.085948 kubelet[2789]: I0620 19:39:33.085969 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2vv4\" (UniqueName: \"kubernetes.io/projected/856b4ef3-aa72-41ae-b22a-feb15c63f816-kube-api-access-j2vv4\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.087949 kubelet[2789]: I0620 19:39:33.086028 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-xtables-lock\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.087949 kubelet[2789]: I0620 19:39:33.086099 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-etc-cni-netd\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.087949 kubelet[2789]: I0620 19:39:33.086153 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/856b4ef3-aa72-41ae-b22a-feb15c63f816-hubble-tls\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.087949 kubelet[2789]: I0620 19:39:33.086232 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65e9be8b-9429-42bf-b704-bd8e99a88c5e-cilium-config-path\") pod \"65e9be8b-9429-42bf-b704-bd8e99a88c5e\" (UID: \"65e9be8b-9429-42bf-b704-bd8e99a88c5e\") "
Jun 20 19:39:33.087949 kubelet[2789]: I0620 19:39:33.086277 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-cgroup\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.087949 kubelet[2789]: I0620 19:39:33.086323 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-bpf-maps\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.089506 kubelet[2789]: I0620 19:39:33.086392 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/856b4ef3-aa72-41ae-b22a-feb15c63f816-clustermesh-secrets\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.089506 kubelet[2789]: I0620 19:39:33.086524 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-host-proc-sys-kernel\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.089506 kubelet[2789]: I0620 19:39:33.086578 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-lib-modules\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.089506 kubelet[2789]: I0620 19:39:33.086632 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-host-proc-sys-net\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.089506 kubelet[2789]: I0620 19:39:33.086686 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-hostproc\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.089506 kubelet[2789]: I0620 19:39:33.086737 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-run\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.091896 kubelet[2789]: I0620 19:39:33.086783 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-config-path\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.091896 kubelet[2789]: I0620 19:39:33.086843 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cni-path\") pod \"856b4ef3-aa72-41ae-b22a-feb15c63f816\" (UID: \"856b4ef3-aa72-41ae-b22a-feb15c63f816\") "
Jun 20 19:39:33.091896 kubelet[2789]: I0620 19:39:33.087127 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cni-path" (OuterVolumeSpecName: "cni-path") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:39:33.091896 kubelet[2789]: I0620 19:39:33.087636 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:39:33.092740 kubelet[2789]: I0620 19:39:33.092652 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:39:33.093134 kubelet[2789]: I0620 19:39:33.093090 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:39:33.093438 kubelet[2789]: I0620 19:39:33.093132 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:39:33.093944 kubelet[2789]: I0620 19:39:33.093878 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-hostproc" (OuterVolumeSpecName: "hostproc") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:39:33.094264 kubelet[2789]: I0620 19:39:33.093954 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:39:33.105657 kubelet[2789]: I0620 19:39:33.102701 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:39:33.108798 kubelet[2789]: I0620 19:39:33.093378 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:39:33.109566 kubelet[2789]: I0620 19:39:33.093828 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jun 20 19:39:33.119978 kubelet[2789]: I0620 19:39:33.119822 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856b4ef3-aa72-41ae-b22a-feb15c63f816-kube-api-access-j2vv4" (OuterVolumeSpecName: "kube-api-access-j2vv4") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "kube-api-access-j2vv4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:39:33.120290 kubelet[2789]: I0620 19:39:33.120227 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/856b4ef3-aa72-41ae-b22a-feb15c63f816-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jun 20 19:39:33.120896 kubelet[2789]: I0620 19:39:33.120459 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e9be8b-9429-42bf-b704-bd8e99a88c5e-kube-api-access-cg4mj" (OuterVolumeSpecName: "kube-api-access-cg4mj") pod "65e9be8b-9429-42bf-b704-bd8e99a88c5e" (UID: "65e9be8b-9429-42bf-b704-bd8e99a88c5e"). InnerVolumeSpecName "kube-api-access-cg4mj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:39:33.127385 kubelet[2789]: I0620 19:39:33.127276 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65e9be8b-9429-42bf-b704-bd8e99a88c5e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "65e9be8b-9429-42bf-b704-bd8e99a88c5e" (UID: "65e9be8b-9429-42bf-b704-bd8e99a88c5e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 19:39:33.127840 kubelet[2789]: I0620 19:39:33.127313 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jun 20 19:39:33.128447 kubelet[2789]: I0620 19:39:33.128369 2789 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/856b4ef3-aa72-41ae-b22a-feb15c63f816-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "856b4ef3-aa72-41ae-b22a-feb15c63f816" (UID: "856b4ef3-aa72-41ae-b22a-feb15c63f816"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jun 20 19:39:33.187987 kubelet[2789]: I0620 19:39:33.187917 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cg4mj\" (UniqueName: \"kubernetes.io/projected/65e9be8b-9429-42bf-b704-bd8e99a88c5e-kube-api-access-cg4mj\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\""
Jun 20 19:39:33.187987 kubelet[2789]: I0620 19:39:33.187971 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j2vv4\" (UniqueName: \"kubernetes.io/projected/856b4ef3-aa72-41ae-b22a-feb15c63f816-kube-api-access-j2vv4\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\""
Jun 20 19:39:33.187987 kubelet[2789]: I0620 19:39:33.187998 2789 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-xtables-lock\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\""
Jun 20 19:39:33.187987 kubelet[2789]: I0620 19:39:33.188021 2789 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-etc-cni-netd\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\""
Jun 20 19:39:33.188703 kubelet[2789]: I0620 19:39:33.188046 2789 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/856b4ef3-aa72-41ae-b22a-feb15c63f816-hubble-tls\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\""
Jun 20 19:39:33.188703 kubelet[2789]: I0620 19:39:33.188072 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65e9be8b-9429-42bf-b704-bd8e99a88c5e-cilium-config-path\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\""
Jun 20 19:39:33.188703 kubelet[2789]: I0620 19:39:33.188094 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-cgroup\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\""
Jun 20 19:39:33.188703 kubelet[2789]: I0620 19:39:33.188116 2789 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-bpf-maps\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\""
Jun 20 19:39:33.188703 kubelet[2789]: I0620 19:39:33.188137 2789 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/856b4ef3-aa72-41ae-b22a-feb15c63f816-clustermesh-secrets\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\""
Jun 20 19:39:33.188703 kubelet[2789]: I0620 19:39:33.188159 2789 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-host-proc-sys-kernel\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\""
Jun 20 19:39:33.188703 kubelet[2789]: I0620 19:39:33.188180 2789
reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-lib-modules\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\"" Jun 20 19:39:33.189246 kubelet[2789]: I0620 19:39:33.188204 2789 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-host-proc-sys-net\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\"" Jun 20 19:39:33.189246 kubelet[2789]: I0620 19:39:33.188226 2789 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-hostproc\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\"" Jun 20 19:39:33.189246 kubelet[2789]: I0620 19:39:33.188275 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-run\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\"" Jun 20 19:39:33.189246 kubelet[2789]: I0620 19:39:33.188300 2789 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-config-path\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\"" Jun 20 19:39:33.189246 kubelet[2789]: I0620 19:39:33.188332 2789 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cni-path\") on node \"ci-4344-1-0-9-7ac33d8391.novalocal\" DevicePath \"\"" Jun 20 19:39:33.459851 systemd[1]: Removed slice kubepods-burstable-pod856b4ef3_aa72_41ae_b22a_feb15c63f816.slice - libcontainer container kubepods-burstable-pod856b4ef3_aa72_41ae_b22a_feb15c63f816.slice. 
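The teardown and detach entries above all share one shape: a volume unique name of the form `<plugin>/<pod-uid>-<volume>`. As an editor's illustration only (not part of the node's tooling; `parse_teardown` is a hypothetical helper), a minimal sketch of pulling the plugin, pod UID, and volume name out of such a line:

```python
import re

# Matches kubelet "UnmountVolume.TearDown succeeded" entries; the unique name
# is "<plugin>/<36-char pod UID>-<volume>", e.g.
# "kubernetes.io/host-path/856b4ef3-aa72-41ae-b22a-feb15c63f816-cilium-run".
TEARDOWN_RE = re.compile(
    r'UnmountVolume\.TearDown succeeded for volume '
    r'"(?P<plugin>[^"]+?)/(?P<uid>[0-9a-f-]{36})-(?P<volume>[^"]+)"'
)

def parse_teardown(line: str):
    """Return (plugin, pod_uid, volume) for a TearDown entry, or None."""
    m = TEARDOWN_RE.search(line)
    return (m.group("plugin"), m.group("uid"), m.group("volume")) if m else None
```

The non-greedy plugin group lets the 36-character UID pattern decide where the plugin path (which itself contains slashes) ends.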
Jun 20 19:39:33.460773 systemd[1]: kubepods-burstable-pod856b4ef3_aa72_41ae_b22a_feb15c63f816.slice: Consumed 10.023s CPU time, 126.3M memory peak, 144K read from disk, 13.3M written to disk.
Jun 20 19:39:33.469826 systemd[1]: Removed slice kubepods-besteffort-pod65e9be8b_9429_42bf_b704_bd8e99a88c5e.slice - libcontainer container kubepods-besteffort-pod65e9be8b_9429_42bf_b704_bd8e99a88c5e.slice.
Jun 20 19:39:33.497269 kubelet[2789]: I0620 19:39:33.495168 2789 scope.go:117] "RemoveContainer" containerID="0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704"
Jun 20 19:39:33.517342 containerd[1530]: time="2025-06-20T19:39:33.508889988Z" level=info msg="RemoveContainer for \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\""
Jun 20 19:39:33.555589 containerd[1530]: time="2025-06-20T19:39:33.555515419Z" level=info msg="RemoveContainer for \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" returns successfully"
Jun 20 19:39:33.556115 kubelet[2789]: I0620 19:39:33.556066 2789 scope.go:117] "RemoveContainer" containerID="0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704"
Jun 20 19:39:33.557818 containerd[1530]: time="2025-06-20T19:39:33.557608570Z" level=error msg="ContainerStatus for \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\": not found"
Jun 20 19:39:33.558149 kubelet[2789]: E0620 19:39:33.558101 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\": not found" containerID="0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704"
Jun 20 19:39:33.558541 kubelet[2789]: I0620 19:39:33.558325 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704"} err="failed to get container status \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\": rpc error: code = NotFound desc = an error occurred when try to find container \"0e647be69e2ff44a96010f5fc5dd0b7d077568d6ff96872c0aee727a9e628704\": not found"
Jun 20 19:39:33.558644 kubelet[2789]: I0620 19:39:33.558628 2789 scope.go:117] "RemoveContainer" containerID="8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411"
Jun 20 19:39:33.562624 containerd[1530]: time="2025-06-20T19:39:33.562569708Z" level=info msg="RemoveContainer for \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\""
Jun 20 19:39:33.574936 containerd[1530]: time="2025-06-20T19:39:33.574124099Z" level=info msg="RemoveContainer for \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" returns successfully"
Jun 20 19:39:33.575299 kubelet[2789]: I0620 19:39:33.575278 2789 scope.go:117] "RemoveContainer" containerID="de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3"
Jun 20 19:39:33.578776 containerd[1530]: time="2025-06-20T19:39:33.578744316Z" level=info msg="RemoveContainer for \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\""
Jun 20 19:39:33.592375 containerd[1530]: time="2025-06-20T19:39:33.592315734Z" level=info msg="RemoveContainer for \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\" returns successfully"
Jun 20 19:39:33.592959 kubelet[2789]: I0620 19:39:33.592925 2789 scope.go:117] "RemoveContainer" containerID="6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230"
Jun 20 19:39:33.596278 containerd[1530]: time="2025-06-20T19:39:33.596236505Z" level=info msg="RemoveContainer for \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\""
Jun 20 19:39:33.601213 containerd[1530]: time="2025-06-20T19:39:33.601179249Z" level=info msg="RemoveContainer for \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\" returns successfully"
Jun 20 19:39:33.601659 kubelet[2789]: I0620 19:39:33.601605 2789 scope.go:117] "RemoveContainer" containerID="d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f"
Jun 20 19:39:33.603713 containerd[1530]: time="2025-06-20T19:39:33.603684686Z" level=info msg="RemoveContainer for \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\""
Jun 20 19:39:33.608166 containerd[1530]: time="2025-06-20T19:39:33.608128481Z" level=info msg="RemoveContainer for \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\" returns successfully"
Jun 20 19:39:33.608394 kubelet[2789]: I0620 19:39:33.608343 2789 scope.go:117] "RemoveContainer" containerID="b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012"
Jun 20 19:39:33.610628 containerd[1530]: time="2025-06-20T19:39:33.610069104Z" level=info msg="RemoveContainer for \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\""
Jun 20 19:39:33.613873 containerd[1530]: time="2025-06-20T19:39:33.613845393Z" level=info msg="RemoveContainer for \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\" returns successfully"
Jun 20 19:39:33.614827 kubelet[2789]: I0620 19:39:33.614758 2789 scope.go:117] "RemoveContainer" containerID="8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411"
Jun 20 19:39:33.615719 containerd[1530]: time="2025-06-20T19:39:33.615639610Z" level=error msg="ContainerStatus for \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\": not found"
Jun 20 19:39:33.615980 kubelet[2789]: E0620 19:39:33.615951 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\": not found" containerID="8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411"
Jun 20 19:39:33.616128 kubelet[2789]: I0620 19:39:33.616099 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411"} err="failed to get container status \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\": rpc error: code = NotFound desc = an error occurred when try to find container \"8c405a5f159f3e47c6d94a38d424c2b9ade6bc20820a9942b947ee3edd54c411\": not found"
Jun 20 19:39:33.616211 kubelet[2789]: I0620 19:39:33.616198 2789 scope.go:117] "RemoveContainer" containerID="de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3"
Jun 20 19:39:33.616962 containerd[1530]: time="2025-06-20T19:39:33.616884945Z" level=error msg="ContainerStatus for \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\": not found"
Jun 20 19:39:33.617204 kubelet[2789]: E0620 19:39:33.617091 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\": not found" containerID="de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3"
Jun 20 19:39:33.617204 kubelet[2789]: I0620 19:39:33.617122 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3"} err="failed to get container status \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"de10756f0ebd8cafb4b130bff024bfc1d1e302d79ba297358748eff96fc711e3\": not found"
Jun 20 19:39:33.617204 kubelet[2789]: I0620 19:39:33.617138 2789 scope.go:117] "RemoveContainer" containerID="6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230"
Jun 20 19:39:33.617583 containerd[1530]: time="2025-06-20T19:39:33.617547231Z" level=error msg="ContainerStatus for \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\": not found"
Jun 20 19:39:33.617962 kubelet[2789]: E0620 19:39:33.617905 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\": not found" containerID="6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230"
Jun 20 19:39:33.618049 kubelet[2789]: I0620 19:39:33.617976 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230"} err="failed to get container status \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\": rpc error: code = NotFound desc = an error occurred when try to find container \"6408e29a7e033697be930f9f8a9cbf67c68c5b4de27c9790d7bf6e3a094da230\": not found"
Jun 20 19:39:33.618151 kubelet[2789]: I0620 19:39:33.618039 2789 scope.go:117] "RemoveContainer" containerID="d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f"
Jun 20 19:39:33.618961 containerd[1530]: time="2025-06-20T19:39:33.618671597Z" level=error msg="ContainerStatus for \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\": not found"
Jun 20 19:39:33.619245 kubelet[2789]: E0620 19:39:33.619191 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\": not found" containerID="d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f"
Jun 20 19:39:33.619384 kubelet[2789]: I0620 19:39:33.619254 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f"} err="failed to get container status \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\": rpc error: code = NotFound desc = an error occurred when try to find container \"d35a7c8dabe4eff9d8f10ced638ba36193208d9096bb2e915ede492f64eb166f\": not found"
Jun 20 19:39:33.619384 kubelet[2789]: I0620 19:39:33.619372 2789 scope.go:117] "RemoveContainer" containerID="b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012"
Jun 20 19:39:33.619817 containerd[1530]: time="2025-06-20T19:39:33.619765898Z" level=error msg="ContainerStatus for \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\": not found"
Jun 20 19:39:33.620049 kubelet[2789]: E0620 19:39:33.620006 2789 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\": not found" containerID="b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012"
Jun 20 19:39:33.620143 kubelet[2789]: I0620 19:39:33.620065 2789 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012"} err="failed to get container status \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3820b517276a622d40ccd9e2a29ab0c6aa7cbdd7e8a4cfbc61d2e2bb0c96012\": not found"
Jun 20 19:39:33.813857 systemd[1]: var-lib-kubelet-pods-65e9be8b\x2d9429\x2d42bf\x2db704\x2dbd8e99a88c5e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcg4mj.mount: Deactivated successfully.
Jun 20 19:39:33.815075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96-shm.mount: Deactivated successfully.
Jun 20 19:39:33.815302 systemd[1]: var-lib-kubelet-pods-856b4ef3\x2daa72\x2d41ae\x2db22a\x2dfeb15c63f816-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj2vv4.mount: Deactivated successfully.
Jun 20 19:39:33.815955 systemd[1]: var-lib-kubelet-pods-856b4ef3\x2daa72\x2d41ae\x2db22a\x2dfeb15c63f816-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jun 20 19:39:33.816167 systemd[1]: var-lib-kubelet-pods-856b4ef3\x2daa72\x2d41ae\x2db22a\x2dfeb15c63f816-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jun 20 19:39:34.703769 sshd[4314]: Connection closed by 172.24.4.1 port 34532
Jun 20 19:39:34.705755 sshd-session[4312]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:34.727460 systemd[1]: sshd@23-172.24.4.217:22-172.24.4.1:34532.service: Deactivated successfully.
Jun 20 19:39:34.735899 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 19:39:34.736844 systemd[1]: session-26.scope: Consumed 1.270s CPU time, 23.7M memory peak.
Jun 20 19:39:34.740174 systemd-logind[1498]: Session 26 logged out. Waiting for processes to exit.
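The pattern above — each RemoveContainer returning successfully, then a second deletion pass probing the same ID and getting NotFound — is the kubelet re-checking containers that are already gone, so the NotFound errors are expected follow-ups rather than fresh failures. A rough sketch (editor's illustration only; `classify_remove_events` is a hypothetical helper working on raw journal lines like those above) that pairs the two kinds of entries per container ID:

```python
import re

# containerd container IDs are 64 lowercase hex characters.
HEX_ID = re.compile(r"\b[0-9a-f]{64}\b")

def classify_remove_events(lines):
    """Tally per-container outcomes: 'removed' for a successful
    RemoveContainer, 'already_gone' for the follow-up status probe
    that containerd answered with NotFound."""
    outcome = {}
    for line in lines:
        m = HEX_ID.search(line)
        if not m:
            continue
        cid = m.group()
        if "RemoveContainer for" in line and "returns successfully" in line:
            outcome.setdefault(cid, []).append("removed")
        elif "ContainerStatus for" in line and "not found" in line:
            outcome.setdefault(cid, []).append("already_gone")
    return outcome
```

A container whose history reads `["removed", "already_gone"]` followed the benign double-delete sequence seen in this log.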
Jun 20 19:39:34.747710 systemd[1]: Started sshd@24-172.24.4.217:22-172.24.4.1:42718.service - OpenSSH per-connection server daemon (172.24.4.1:42718).
Jun 20 19:39:34.752893 systemd-logind[1498]: Removed session 26.
Jun 20 19:39:35.447569 kubelet[2789]: I0620 19:39:35.447270 2789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e9be8b-9429-42bf-b704-bd8e99a88c5e" path="/var/lib/kubelet/pods/65e9be8b-9429-42bf-b704-bd8e99a88c5e/volumes"
Jun 20 19:39:35.450462 kubelet[2789]: I0620 19:39:35.449614 2789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="856b4ef3-aa72-41ae-b22a-feb15c63f816" path="/var/lib/kubelet/pods/856b4ef3-aa72-41ae-b22a-feb15c63f816/volumes"
Jun 20 19:39:36.008515 sshd[4467]: Accepted publickey for core from 172.24.4.1 port 42718 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:39:36.012463 sshd-session[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:36.027338 systemd-logind[1498]: New session 27 of user core.
Jun 20 19:39:36.039974 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 20 19:39:36.606653 kubelet[2789]: E0620 19:39:36.606390 2789 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 19:39:37.362260 kubelet[2789]: I0620 19:39:37.360761 2789 memory_manager.go:355] "RemoveStaleState removing state" podUID="856b4ef3-aa72-41ae-b22a-feb15c63f816" containerName="cilium-agent"
Jun 20 19:39:37.362260 kubelet[2789]: I0620 19:39:37.360806 2789 memory_manager.go:355] "RemoveStaleState removing state" podUID="65e9be8b-9429-42bf-b704-bd8e99a88c5e" containerName="cilium-operator"
Jun 20 19:39:37.375880 systemd[1]: Created slice kubepods-burstable-pod163d9b71_6a49_42cf_adf1_e24803d9a42f.slice - libcontainer container kubepods-burstable-pod163d9b71_6a49_42cf_adf1_e24803d9a42f.slice.
Jun 20 19:39:37.426049 kubelet[2789]: I0620 19:39:37.425966 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/163d9b71-6a49-42cf-adf1-e24803d9a42f-cilium-ipsec-secrets\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.426365 kubelet[2789]: I0620 19:39:37.426240 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/163d9b71-6a49-42cf-adf1-e24803d9a42f-cilium-cgroup\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.426365 kubelet[2789]: I0620 19:39:37.426315 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82krt\" (UniqueName: \"kubernetes.io/projected/163d9b71-6a49-42cf-adf1-e24803d9a42f-kube-api-access-82krt\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.426584 kubelet[2789]: I0620 19:39:37.426529 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/163d9b71-6a49-42cf-adf1-e24803d9a42f-cni-path\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.426726 kubelet[2789]: I0620 19:39:37.426652 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/163d9b71-6a49-42cf-adf1-e24803d9a42f-etc-cni-netd\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.426945 kubelet[2789]: I0620 19:39:37.426704 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/163d9b71-6a49-42cf-adf1-e24803d9a42f-cilium-run\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.426945 kubelet[2789]: I0620 19:39:37.426861 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/163d9b71-6a49-42cf-adf1-e24803d9a42f-lib-modules\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.427138 kubelet[2789]: I0620 19:39:37.427085 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/163d9b71-6a49-42cf-adf1-e24803d9a42f-host-proc-sys-kernel\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.427324 kubelet[2789]: I0620 19:39:37.427282 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/163d9b71-6a49-42cf-adf1-e24803d9a42f-clustermesh-secrets\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.427464 kubelet[2789]: I0620 19:39:37.427415 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/163d9b71-6a49-42cf-adf1-e24803d9a42f-host-proc-sys-net\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.428450 kubelet[2789]: I0620 19:39:37.428425 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/163d9b71-6a49-42cf-adf1-e24803d9a42f-hostproc\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.428564 kubelet[2789]: I0620 19:39:37.428550 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/163d9b71-6a49-42cf-adf1-e24803d9a42f-xtables-lock\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.428726 kubelet[2789]: I0620 19:39:37.428635 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/163d9b71-6a49-42cf-adf1-e24803d9a42f-bpf-maps\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.428863 kubelet[2789]: I0620 19:39:37.428655 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/163d9b71-6a49-42cf-adf1-e24803d9a42f-cilium-config-path\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.428863 kubelet[2789]: I0620 19:39:37.428812 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/163d9b71-6a49-42cf-adf1-e24803d9a42f-hubble-tls\") pod \"cilium-kkvnx\" (UID: \"163d9b71-6a49-42cf-adf1-e24803d9a42f\") " pod="kube-system/cilium-kkvnx"
Jun 20 19:39:37.594914 sshd[4469]: Connection closed by 172.24.4.1 port 42718
Jun 20 19:39:37.594711 sshd-session[4467]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:37.633320 systemd[1]: sshd@24-172.24.4.217:22-172.24.4.1:42718.service: Deactivated successfully.
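Each attach entry above declares one volume of the new cilium-kkvnx pod, again with the `<plugin>/<pod-uid>-<volume>` unique-name shape. A small sketch (editor's illustration; `volumes_by_plugin` is a hypothetical helper, assuming the escaped-quote journal formatting shown above) that groups the declared volumes by plugin type:

```python
import re

# Pull the volume name and plugin type out of kubelet
# "VerifyControllerAttachedVolume started" entries; quotes inside the
# structured message appear backslash-escaped in the journal text.
ATTACH_RE = re.compile(
    r'VerifyControllerAttachedVolume started for volume \\?"(?P<name>[^"\\]+)\\?"'
    r'.*?UniqueName: \\?"(?P<plugin>kubernetes\.io/[a-z-]+)/'
)

def volumes_by_plugin(lines):
    """Group declared pod volumes by their kubelet volume plugin."""
    groups = {}
    for line in lines:
        for m in ATTACH_RE.finditer(line):
            groups.setdefault(m.group("plugin"), []).append(m.group("name"))
    return groups
```

Applied to the block above, this would separate the host-path mounts from the secret, projected, and configmap volumes.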
Jun 20 19:39:37.635519 systemd[1]: session-27.scope: Deactivated successfully.
Jun 20 19:39:37.636560 systemd-logind[1498]: Session 27 logged out. Waiting for processes to exit.
Jun 20 19:39:37.641410 systemd[1]: Started sshd@25-172.24.4.217:22-172.24.4.1:42732.service - OpenSSH per-connection server daemon (172.24.4.1:42732).
Jun 20 19:39:37.643823 systemd-logind[1498]: Removed session 27.
Jun 20 19:39:37.683318 containerd[1530]: time="2025-06-20T19:39:37.683238292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kkvnx,Uid:163d9b71-6a49-42cf-adf1-e24803d9a42f,Namespace:kube-system,Attempt:0,}"
Jun 20 19:39:37.715745 containerd[1530]: time="2025-06-20T19:39:37.715525669Z" level=info msg="connecting to shim 750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7" address="unix:///run/containerd/s/b3804232eff40b46cd2fea7d1ca92992b3eed0f5d45a58e7599a6dabb9a10c01" namespace=k8s.io protocol=ttrpc version=3
Jun 20 19:39:37.752842 systemd[1]: Started cri-containerd-750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7.scope - libcontainer container 750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7.
Jun 20 19:39:37.805562 containerd[1530]: time="2025-06-20T19:39:37.805262789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kkvnx,Uid:163d9b71-6a49-42cf-adf1-e24803d9a42f,Namespace:kube-system,Attempt:0,} returns sandbox id \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\""
Jun 20 19:39:37.812066 containerd[1530]: time="2025-06-20T19:39:37.812004288Z" level=info msg="CreateContainer within sandbox \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jun 20 19:39:37.822734 containerd[1530]: time="2025-06-20T19:39:37.821948197Z" level=info msg="Container 591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:39:37.848930 containerd[1530]: time="2025-06-20T19:39:37.848892095Z" level=info msg="CreateContainer within sandbox \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063\""
Jun 20 19:39:37.850070 containerd[1530]: time="2025-06-20T19:39:37.849806056Z" level=info msg="StartContainer for \"591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063\""
Jun 20 19:39:37.851230 containerd[1530]: time="2025-06-20T19:39:37.851195361Z" level=info msg="connecting to shim 591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063" address="unix:///run/containerd/s/b3804232eff40b46cd2fea7d1ca92992b3eed0f5d45a58e7599a6dabb9a10c01" protocol=ttrpc version=3
Jun 20 19:39:37.876689 systemd[1]: Started cri-containerd-591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063.scope - libcontainer container 591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063.
Jun 20 19:39:37.927008 containerd[1530]: time="2025-06-20T19:39:37.926321978Z" level=info msg="StartContainer for \"591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063\" returns successfully"
Jun 20 19:39:37.946013 systemd[1]: cri-containerd-591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063.scope: Deactivated successfully.
Jun 20 19:39:37.950835 containerd[1530]: time="2025-06-20T19:39:37.950786759Z" level=info msg="TaskExit event in podsandbox handler container_id:\"591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063\" id:\"591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063\" pid:4544 exited_at:{seconds:1750448377 nanos:950290656}"
Jun 20 19:39:37.950941 containerd[1530]: time="2025-06-20T19:39:37.950916523Z" level=info msg="received exit event container_id:\"591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063\" id:\"591e2b2c257613cc0d946fd6124db2b765c3bfb81905c5e72b9d959cf5424063\" pid:4544 exited_at:{seconds:1750448377 nanos:950290656}"
Jun 20 19:39:38.629317 containerd[1530]: time="2025-06-20T19:39:38.629012815Z" level=info msg="CreateContainer within sandbox \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jun 20 19:39:38.661839 containerd[1530]: time="2025-06-20T19:39:38.659790740Z" level=info msg="Container 2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:39:38.687900 containerd[1530]: time="2025-06-20T19:39:38.687817927Z" level=info msg="CreateContainer within sandbox \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f\""
Jun 20 19:39:38.689901 containerd[1530]: time="2025-06-20T19:39:38.689852828Z" level=info msg="StartContainer for \"2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f\""
Jun 20 19:39:38.692292 containerd[1530]: time="2025-06-20T19:39:38.692240321Z" level=info msg="connecting to shim 2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f" address="unix:///run/containerd/s/b3804232eff40b46cd2fea7d1ca92992b3eed0f5d45a58e7599a6dabb9a10c01" protocol=ttrpc version=3
Jun 20 19:39:38.720667 systemd[1]: Started cri-containerd-2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f.scope - libcontainer container 2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f.
Jun 20 19:39:38.758853 containerd[1530]: time="2025-06-20T19:39:38.758755136Z" level=info msg="StartContainer for \"2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f\" returns successfully"
Jun 20 19:39:38.770641 systemd[1]: cri-containerd-2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f.scope: Deactivated successfully.
Jun 20 19:39:38.772325 containerd[1530]: time="2025-06-20T19:39:38.772108061Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f\" id:\"2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f\" pid:4589 exited_at:{seconds:1750448378 nanos:770935274}"
Jun 20 19:39:38.772575 containerd[1530]: time="2025-06-20T19:39:38.772298520Z" level=info msg="received exit event container_id:\"2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f\" id:\"2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f\" pid:4589 exited_at:{seconds:1750448378 nanos:770935274}"
Jun 20 19:39:38.794942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bd9fa0f30a5e3df14f478276c3872ed483997455ce919ae16fb21bfc9c9ec4f-rootfs.mount: Deactivated successfully.
Jun 20 19:39:38.955135 sshd[4483]: Accepted publickey for core from 172.24.4.1 port 42732 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:39:38.959887 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:38.975612 systemd-logind[1498]: New session 28 of user core.
Jun 20 19:39:38.986833 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 20 19:39:39.628526 sshd[4621]: Connection closed by 172.24.4.1 port 42732
Jun 20 19:39:39.630288 sshd-session[4483]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:39.652106 systemd[1]: sshd@25-172.24.4.217:22-172.24.4.1:42732.service: Deactivated successfully.
Jun 20 19:39:39.658161 systemd[1]: session-28.scope: Deactivated successfully.
Jun 20 19:39:39.665346 systemd-logind[1498]: Session 28 logged out. Waiting for processes to exit.
Jun 20 19:39:39.677312 systemd[1]: Started sshd@26-172.24.4.217:22-172.24.4.1:42742.service - OpenSSH per-connection server daemon (172.24.4.1:42742).
Jun 20 19:39:39.690122 systemd-logind[1498]: Removed session 28.
Jun 20 19:39:39.694962 containerd[1530]: time="2025-06-20T19:39:39.687402732Z" level=info msg="CreateContainer within sandbox \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jun 20 19:39:39.734832 containerd[1530]: time="2025-06-20T19:39:39.734744154Z" level=info msg="Container 5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:39:39.739436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414548925.mount: Deactivated successfully.
Jun 20 19:39:39.782106 containerd[1530]: time="2025-06-20T19:39:39.782045431Z" level=info msg="CreateContainer within sandbox \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192\""
Jun 20 19:39:39.784129 containerd[1530]: time="2025-06-20T19:39:39.784077616Z" level=info msg="StartContainer for \"5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192\""
Jun 20 19:39:39.787249 containerd[1530]: time="2025-06-20T19:39:39.787107590Z" level=info msg="connecting to shim 5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192" address="unix:///run/containerd/s/b3804232eff40b46cd2fea7d1ca92992b3eed0f5d45a58e7599a6dabb9a10c01" protocol=ttrpc version=3
Jun 20 19:39:39.823667 systemd[1]: Started cri-containerd-5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192.scope - libcontainer container 5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192.
Jun 20 19:39:39.879049 systemd[1]: cri-containerd-5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192.scope: Deactivated successfully.
Jun 20 19:39:39.882907 containerd[1530]: time="2025-06-20T19:39:39.882777894Z" level=info msg="StartContainer for \"5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192\" returns successfully"
Jun 20 19:39:39.885184 containerd[1530]: time="2025-06-20T19:39:39.885134560Z" level=info msg="received exit event container_id:\"5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192\" id:\"5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192\" pid:4641 exited_at:{seconds:1750448379 nanos:884304457}"
Jun 20 19:39:39.885826 containerd[1530]: time="2025-06-20T19:39:39.885784363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192\" id:\"5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192\" pid:4641 exited_at:{seconds:1750448379 nanos:884304457}"
Jun 20 19:39:39.932133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5302f495f1086e34cf6edaad14b430e8e7ab1974c672468e652c92b74ad95192-rootfs.mount: Deactivated successfully.
Jun 20 19:39:40.691992 containerd[1530]: time="2025-06-20T19:39:40.691886327Z" level=info msg="CreateContainer within sandbox \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jun 20 19:39:40.712541 containerd[1530]: time="2025-06-20T19:39:40.712413427Z" level=info msg="Container 39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:39:40.743042 containerd[1530]: time="2025-06-20T19:39:40.742799462Z" level=info msg="CreateContainer within sandbox \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab\""
Jun 20 19:39:40.745700 containerd[1530]: time="2025-06-20T19:39:40.745167581Z" level=info msg="StartContainer for \"39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab\""
Jun 20 19:39:40.750260 containerd[1530]: time="2025-06-20T19:39:40.750052215Z" level=info msg="connecting to shim 39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab" address="unix:///run/containerd/s/b3804232eff40b46cd2fea7d1ca92992b3eed0f5d45a58e7599a6dabb9a10c01" protocol=ttrpc version=3
Jun 20 19:39:40.777613 systemd[1]: Started cri-containerd-39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab.scope - libcontainer container 39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab.
Jun 20 19:39:40.815134 systemd[1]: cri-containerd-39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab.scope: Deactivated successfully.
Jun 20 19:39:40.816390 containerd[1530]: time="2025-06-20T19:39:40.815883290Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab\" id:\"39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab\" pid:4680 exited_at:{seconds:1750448380 nanos:815442050}"
Jun 20 19:39:40.826056 containerd[1530]: time="2025-06-20T19:39:40.825889334Z" level=info msg="received exit event container_id:\"39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab\" id:\"39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab\" pid:4680 exited_at:{seconds:1750448380 nanos:815442050}"
Jun 20 19:39:40.834353 containerd[1530]: time="2025-06-20T19:39:40.834312259Z" level=info msg="StartContainer for \"39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab\" returns successfully"
Jun 20 19:39:40.850210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39eefa05056759396b475af599fb50be5914dc1db1e7a95472c3726b74aa7cab-rootfs.mount: Deactivated successfully.
Jun 20 19:39:40.971102 sshd[4628]: Accepted publickey for core from 172.24.4.1 port 42742 ssh2: RSA SHA256:LYn+fusd8YWkzHw8aAHCykt0zs9fuaIug0oT7GKHECY
Jun 20 19:39:40.973302 sshd-session[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:39:40.985274 systemd-logind[1498]: New session 29 of user core.
Jun 20 19:39:40.995742 systemd[1]: Started session-29.scope - Session 29 of User core.
Jun 20 19:39:41.609598 kubelet[2789]: E0620 19:39:41.609351    2789 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jun 20 19:39:41.709074 containerd[1530]: time="2025-06-20T19:39:41.708966539Z" level=info msg="CreateContainer within sandbox \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jun 20 19:39:41.739504 containerd[1530]: time="2025-06-20T19:39:41.739385907Z" level=info msg="Container 9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d: CDI devices from CRI Config.CDIDevices: []"
Jun 20 19:39:41.769164 containerd[1530]: time="2025-06-20T19:39:41.769099197Z" level=info msg="CreateContainer within sandbox \"750692b3a9566a422c1c749fc9088c9f198e91c3685cf6c4d14c5d6b7ef58df7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d\""
Jun 20 19:39:41.772082 containerd[1530]: time="2025-06-20T19:39:41.771029320Z" level=info msg="StartContainer for \"9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d\""
Jun 20 19:39:41.775037 containerd[1530]: time="2025-06-20T19:39:41.774985527Z" level=info msg="connecting to shim 9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d" address="unix:///run/containerd/s/b3804232eff40b46cd2fea7d1ca92992b3eed0f5d45a58e7599a6dabb9a10c01" protocol=ttrpc version=3
Jun 20 19:39:41.810641 systemd[1]: Started cri-containerd-9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d.scope - libcontainer container 9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d.
Jun 20 19:39:41.862509 containerd[1530]: time="2025-06-20T19:39:41.862360032Z" level=info msg="StartContainer for \"9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d\" returns successfully"
Jun 20 19:39:42.021103 containerd[1530]: time="2025-06-20T19:39:42.021053802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d\" id:\"edcfe37d91e24780aad10700fcdfb43a3c6ebe92e6c2aeb406c4c8d2cf1eff7a\" pid:4755 exited_at:{seconds:1750448382 nanos:20497334}"
Jun 20 19:39:42.459529 kernel: cryptd: max_cpu_qlen set to 1000
Jun 20 19:39:42.528619 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Jun 20 19:39:42.764300 kubelet[2789]: I0620 19:39:42.764055    2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kkvnx" podStartSLOduration=5.764003779 podStartE2EDuration="5.764003779s" podCreationTimestamp="2025-06-20 19:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:39:42.761966243 +0000 UTC m=+161.570832084" watchObservedRunningTime="2025-06-20 19:39:42.764003779 +0000 UTC m=+161.572869620"
Jun 20 19:39:43.442534 kubelet[2789]: E0620 19:39:43.441327    2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-nxzck" podUID="47aa8fd1-c2ba-41cc-8ce9-eed5ae8670b0"
Jun 20 19:39:43.711212 containerd[1530]: time="2025-06-20T19:39:43.711109845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d\" id:\"1ed1e7d833bb0f1509024c587cff624b1df62a8692bb2ab6c634bb1000a7e2f9\" pid:4855 exit_status:1 exited_at:{seconds:1750448383 nanos:710427049}"
Jun 20 19:39:44.956512 kubelet[2789]: I0620 19:39:44.955593    2789 setters.go:602] "Node became not ready" node="ci-4344-1-0-9-7ac33d8391.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:39:44Z","lastTransitionTime":"2025-06-20T19:39:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jun 20 19:39:45.441496 kubelet[2789]: E0620 19:39:45.441325    2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-nxzck" podUID="47aa8fd1-c2ba-41cc-8ce9-eed5ae8670b0"
Jun 20 19:39:45.870306 containerd[1530]: time="2025-06-20T19:39:45.869990787Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d\" id:\"5c36fd7327fa21166c7a98ee6c67138743db4d5cc991226532757ff16960ba7e\" pid:5226 exit_status:1 exited_at:{seconds:1750448385 nanos:869293344}"
Jun 20 19:39:45.977157 systemd-networkd[1445]: lxc_health: Link UP
Jun 20 19:39:45.988806 systemd-networkd[1445]: lxc_health: Gained carrier
Jun 20 19:39:47.743629 systemd-networkd[1445]: lxc_health: Gained IPv6LL
Jun 20 19:39:48.121310 containerd[1530]: time="2025-06-20T19:39:48.121162581Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d\" id:\"c3cd192a4e177bacf8aa7b1f2493ad8d9b900ec089bd12d944568c7647743d73\" pid:5315 exited_at:{seconds:1750448388 nanos:120129546}"
Jun 20 19:39:50.454418 containerd[1530]: time="2025-06-20T19:39:50.454362596Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d\" id:\"9cd1bb580c4b1550130a6dfe7baed07344049a59b2c307dcf7fc840ce462aac0\" pid:5341 exited_at:{seconds:1750448390 nanos:452746756}"
Jun 20 19:39:52.672342 containerd[1530]: time="2025-06-20T19:39:52.672244820Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c0f9fc1dcdd1e11989786466504ff51211e97c13979197a6df39ad87ccc2a7d\" id:\"7373818e8543bef6e5e52ee48ea7207a53044374f4f8f7b3c07d5d63f348e660\" pid:5385 exited_at:{seconds:1750448392 nanos:671372689}"
Jun 20 19:39:52.855883 sshd[4706]: Connection closed by 172.24.4.1 port 42742
Jun 20 19:39:52.858408 sshd-session[4628]: pam_unix(sshd:session): session closed for user core
Jun 20 19:39:52.870298 systemd[1]: sshd@26-172.24.4.217:22-172.24.4.1:42742.service: Deactivated successfully.
Jun 20 19:39:52.880180 systemd[1]: session-29.scope: Deactivated successfully.
Jun 20 19:39:52.885346 systemd-logind[1498]: Session 29 logged out. Waiting for processes to exit.
Jun 20 19:39:52.891390 systemd-logind[1498]: Removed session 29.
Jun 20 19:40:01.404815 containerd[1530]: time="2025-06-20T19:40:01.403319782Z" level=info msg="StopPodSandbox for \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\""
Jun 20 19:40:01.410557 containerd[1530]: time="2025-06-20T19:40:01.407573157Z" level=info msg="TearDown network for sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" successfully"
Jun 20 19:40:01.410557 containerd[1530]: time="2025-06-20T19:40:01.407648599Z" level=info msg="StopPodSandbox for \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" returns successfully"
Jun 20 19:40:01.410900 containerd[1530]: time="2025-06-20T19:40:01.410815961Z" level=info msg="RemovePodSandbox for \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\""
Jun 20 19:40:01.411139 containerd[1530]: time="2025-06-20T19:40:01.410948650Z" level=info msg="Forcibly stopping sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\""
Jun 20 19:40:01.411680 containerd[1530]: time="2025-06-20T19:40:01.411541676Z" level=info msg="TearDown network for sandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" successfully"
Jun 20 19:40:01.419517 containerd[1530]: time="2025-06-20T19:40:01.419318172Z" level=info msg="Ensure that sandbox 1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96 in task-service has been cleanup successfully"
Jun 20 19:40:01.428555 containerd[1530]: time="2025-06-20T19:40:01.428410634Z" level=info msg="RemovePodSandbox \"1f34d84524119e7624c4626a117c8a1b41d3f0aaa4b1653c14e89f544c8cfa96\" returns successfully"
Jun 20 19:40:01.429949 containerd[1530]: time="2025-06-20T19:40:01.429756367Z" level=info msg="StopPodSandbox for \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\""
Jun 20 19:40:01.430796 containerd[1530]: time="2025-06-20T19:40:01.430737564Z" level=info msg="TearDown network for sandbox \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" successfully"
Jun 20 19:40:01.431158 containerd[1530]: time="2025-06-20T19:40:01.431001861Z" level=info msg="StopPodSandbox for \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" returns successfully"
Jun 20 19:40:01.432521 containerd[1530]: time="2025-06-20T19:40:01.432211748Z" level=info msg="RemovePodSandbox for \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\""
Jun 20 19:40:01.432521 containerd[1530]: time="2025-06-20T19:40:01.432375106Z" level=info msg="Forcibly stopping sandbox \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\""
Jun 20 19:40:01.432977 containerd[1530]: time="2025-06-20T19:40:01.432925973Z" level=info msg="TearDown network for sandbox \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" successfully"
Jun 20 19:40:01.436459 containerd[1530]: time="2025-06-20T19:40:01.436400442Z" level=info msg="Ensure that sandbox ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce in task-service has been cleanup successfully"
Jun 20 19:40:01.451386 containerd[1530]: time="2025-06-20T19:40:01.451086902Z" level=info msg="RemovePodSandbox \"ade2489e3fd91a3c5e4181c7c614c140ac146ed71b782a0b7612193453ecbbce\" returns successfully"