Mar 21 13:23:23.087947 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 21 10:52:59 -00 2025
Mar 21 13:23:23.087981 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2
Mar 21 13:23:23.087992 kernel: BIOS-provided physical RAM map:
Mar 21 13:23:23.088001 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 21 13:23:23.088009 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 21 13:23:23.088019 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 21 13:23:23.088029 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable
Mar 21 13:23:23.088037 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved
Mar 21 13:23:23.088045 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 21 13:23:23.088053 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 21 13:23:23.088062 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable
Mar 21 13:23:23.088070 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 21 13:23:23.088078 kernel: NX (Execute Disable) protection: active
Mar 21 13:23:23.088086 kernel: APIC: Static calls initialized
Mar 21 13:23:23.088099 kernel: SMBIOS 3.0.0 present.
Mar 21 13:23:23.088108 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Mar 21 13:23:23.088117 kernel: Hypervisor detected: KVM
Mar 21 13:23:23.088125 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 21 13:23:23.088133 kernel: kvm-clock: using sched offset of 3685470967 cycles
Mar 21 13:23:23.088142 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 21 13:23:23.088154 kernel: tsc: Detected 1996.249 MHz processor
Mar 21 13:23:23.088164 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 21 13:23:23.088173 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 21 13:23:23.088182 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000
Mar 21 13:23:23.088191 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 21 13:23:23.088200 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 21 13:23:23.088208 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000
Mar 21 13:23:23.088217 kernel: ACPI: Early table checksum verification disabled
Mar 21 13:23:23.088228 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS )
Mar 21 13:23:23.088237 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 13:23:23.088246 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 13:23:23.088255 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 13:23:23.088263 kernel: ACPI: FACS 0x00000000BFFE0000 000040
Mar 21 13:23:23.088272 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 13:23:23.088281 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 13:23:23.088290 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc]
Mar 21 13:23:23.088299 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48]
Mar 21 13:23:23.088310 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f]
Mar 21 13:23:23.088318 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c]
Mar 21 13:23:23.088327 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64]
Mar 21 13:23:23.088340 kernel: No NUMA configuration found
Mar 21 13:23:23.088349 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff]
Mar 21 13:23:23.088358 kernel: NODE_DATA(0) allocated [mem 0x13fff7000-0x13fffcfff]
Mar 21 13:23:23.088367 kernel: Zone ranges:
Mar 21 13:23:23.088378 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 21 13:23:23.088388 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Mar 21 13:23:23.088397 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff]
Mar 21 13:23:23.088406 kernel: Movable zone start for each node
Mar 21 13:23:23.088415 kernel: Early memory node ranges
Mar 21 13:23:23.088424 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 21 13:23:23.088433 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff]
Mar 21 13:23:23.088442 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff]
Mar 21 13:23:23.088453 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff]
Mar 21 13:23:23.088462 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 21 13:23:23.088471 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 21 13:23:23.088481 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Mar 21 13:23:23.088490 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 21 13:23:23.088499 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 21 13:23:23.088508 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 21 13:23:23.088517 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 21 13:23:23.088527 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 21 13:23:23.088538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 21 13:23:23.088547 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 21 13:23:23.088556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 21 13:23:23.088565 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 21 13:23:23.088574 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 21 13:23:23.088584 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 21 13:23:23.088593 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Mar 21 13:23:23.088602 kernel: Booting paravirtualized kernel on KVM
Mar 21 13:23:23.088611 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 21 13:23:23.088623 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 21 13:23:23.088632 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 21 13:23:23.088641 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 21 13:23:23.088650 kernel: pcpu-alloc: [0] 0 1
Mar 21 13:23:23.088659 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 21 13:23:23.088670 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2
Mar 21 13:23:23.088680 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 21 13:23:23.088691 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 21 13:23:23.088702 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 21 13:23:23.088710 kernel: Fallback order for Node 0: 0
Mar 21 13:23:23.088719 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Mar 21 13:23:23.088727 kernel: Policy zone: Normal
Mar 21 13:23:23.088736 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 21 13:23:23.088744 kernel: software IO TLB: area num 2.
Mar 21 13:23:23.088753 kernel: Memory: 3962108K/4193772K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43588K init, 1476K bss, 231404K reserved, 0K cma-reserved)
Mar 21 13:23:23.088762 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 21 13:23:23.088771 kernel: ftrace: allocating 37985 entries in 149 pages
Mar 21 13:23:23.088781 kernel: ftrace: allocated 149 pages with 4 groups
Mar 21 13:23:23.088790 kernel: Dynamic Preempt: voluntary
Mar 21 13:23:23.088798 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 21 13:23:23.088808 kernel: rcu: RCU event tracing is enabled.
Mar 21 13:23:23.088816 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 21 13:23:23.088825 kernel: Trampoline variant of Tasks RCU enabled.
Mar 21 13:23:23.088834 kernel: Rude variant of Tasks RCU enabled.
Mar 21 13:23:23.088842 kernel: Tracing variant of Tasks RCU enabled.
Mar 21 13:23:23.088851 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 21 13:23:23.088861 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 21 13:23:23.088870 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 21 13:23:23.088960 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 21 13:23:23.088970 kernel: Console: colour VGA+ 80x25
Mar 21 13:23:23.088978 kernel: printk: console [tty0] enabled
Mar 21 13:23:23.088987 kernel: printk: console [ttyS0] enabled
Mar 21 13:23:23.088995 kernel: ACPI: Core revision 20230628
Mar 21 13:23:23.089004 kernel: APIC: Switch to symmetric I/O mode setup
Mar 21 13:23:23.089012 kernel: x2apic enabled
Mar 21 13:23:23.089024 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 21 13:23:23.089033 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 21 13:23:23.089041 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 21 13:23:23.089050 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Mar 21 13:23:23.089058 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 21 13:23:23.089067 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 21 13:23:23.089076 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 21 13:23:23.089084 kernel: Spectre V2 : Mitigation: Retpolines
Mar 21 13:23:23.089093 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 21 13:23:23.089103 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 21 13:23:23.089112 kernel: Speculative Store Bypass: Vulnerable
Mar 21 13:23:23.089121 kernel: x86/fpu: x87 FPU will use FXSAVE
Mar 21 13:23:23.089129 kernel: Freeing SMP alternatives memory: 32K
Mar 21 13:23:23.089144 kernel: pid_max: default: 32768 minimum: 301
Mar 21 13:23:23.089155 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 21 13:23:23.089164 kernel: landlock: Up and running.
Mar 21 13:23:23.089172 kernel: SELinux: Initializing.
Mar 21 13:23:23.089181 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 21 13:23:23.089190 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 21 13:23:23.089199 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Mar 21 13:23:23.089209 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 21 13:23:23.089220 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 21 13:23:23.089229 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 21 13:23:23.089238 kernel: Performance Events: AMD PMU driver.
Mar 21 13:23:23.089247 kernel: ... version: 0
Mar 21 13:23:23.089258 kernel: ... bit width: 48
Mar 21 13:23:23.089266 kernel: ... generic registers: 4
Mar 21 13:23:23.089275 kernel: ... value mask: 0000ffffffffffff
Mar 21 13:23:23.089284 kernel: ... max period: 00007fffffffffff
Mar 21 13:23:23.089293 kernel: ... fixed-purpose events: 0
Mar 21 13:23:23.089302 kernel: ... event mask: 000000000000000f
Mar 21 13:23:23.089311 kernel: signal: max sigframe size: 1440
Mar 21 13:23:23.089320 kernel: rcu: Hierarchical SRCU implementation.
Mar 21 13:23:23.089329 kernel: rcu: Max phase no-delay instances is 400.
Mar 21 13:23:23.089338 kernel: smp: Bringing up secondary CPUs ...
Mar 21 13:23:23.089348 kernel: smpboot: x86: Booting SMP configuration:
Mar 21 13:23:23.089357 kernel: .... node #0, CPUs: #1
Mar 21 13:23:23.089366 kernel: smp: Brought up 1 node, 2 CPUs
Mar 21 13:23:23.089375 kernel: smpboot: Max logical packages: 2
Mar 21 13:23:23.089384 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Mar 21 13:23:23.089393 kernel: devtmpfs: initialized
Mar 21 13:23:23.089402 kernel: x86/mm: Memory block size: 128MB
Mar 21 13:23:23.089411 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 21 13:23:23.089420 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 21 13:23:23.089431 kernel: pinctrl core: initialized pinctrl subsystem
Mar 21 13:23:23.089439 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 21 13:23:23.089448 kernel: audit: initializing netlink subsys (disabled)
Mar 21 13:23:23.089457 kernel: audit: type=2000 audit(1742563402.492:1): state=initialized audit_enabled=0 res=1
Mar 21 13:23:23.089466 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 21 13:23:23.089475 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 21 13:23:23.089484 kernel: cpuidle: using governor menu
Mar 21 13:23:23.089493 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 21 13:23:23.089502 kernel: dca service started, version 1.12.1
Mar 21 13:23:23.089513 kernel: PCI: Using configuration type 1 for base access
Mar 21 13:23:23.089522 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 21 13:23:23.089531 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 21 13:23:23.089540 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 21 13:23:23.089549 kernel: ACPI: Added _OSI(Module Device)
Mar 21 13:23:23.089558 kernel: ACPI: Added _OSI(Processor Device)
Mar 21 13:23:23.089567 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 21 13:23:23.089576 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 21 13:23:23.089585 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 21 13:23:23.089595 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 21 13:23:23.089604 kernel: ACPI: Interpreter enabled
Mar 21 13:23:23.089613 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 21 13:23:23.089622 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 21 13:23:23.089631 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 21 13:23:23.089640 kernel: PCI: Using E820 reservations for host bridge windows
Mar 21 13:23:23.089649 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 21 13:23:23.089658 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 21 13:23:23.089809 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 21 13:23:23.089935 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 21 13:23:23.090033 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 21 13:23:23.090047 kernel: acpiphp: Slot [3] registered
Mar 21 13:23:23.090056 kernel: acpiphp: Slot [4] registered
Mar 21 13:23:23.090065 kernel: acpiphp: Slot [5] registered
Mar 21 13:23:23.090074 kernel: acpiphp: Slot [6] registered
Mar 21 13:23:23.090083 kernel: acpiphp: Slot [7] registered
Mar 21 13:23:23.090095 kernel: acpiphp: Slot [8] registered
Mar 21 13:23:23.090104 kernel: acpiphp: Slot [9] registered
Mar 21 13:23:23.090112 kernel: acpiphp: Slot [10] registered
Mar 21 13:23:23.090121 kernel: acpiphp: Slot [11] registered
Mar 21 13:23:23.090130 kernel: acpiphp: Slot [12] registered
Mar 21 13:23:23.090139 kernel: acpiphp: Slot [13] registered
Mar 21 13:23:23.090148 kernel: acpiphp: Slot [14] registered
Mar 21 13:23:23.090156 kernel: acpiphp: Slot [15] registered
Mar 21 13:23:23.090165 kernel: acpiphp: Slot [16] registered
Mar 21 13:23:23.090174 kernel: acpiphp: Slot [17] registered
Mar 21 13:23:23.090185 kernel: acpiphp: Slot [18] registered
Mar 21 13:23:23.090194 kernel: acpiphp: Slot [19] registered
Mar 21 13:23:23.090202 kernel: acpiphp: Slot [20] registered
Mar 21 13:23:23.090211 kernel: acpiphp: Slot [21] registered
Mar 21 13:23:23.090220 kernel: acpiphp: Slot [22] registered
Mar 21 13:23:23.090229 kernel: acpiphp: Slot [23] registered
Mar 21 13:23:23.090238 kernel: acpiphp: Slot [24] registered
Mar 21 13:23:23.090246 kernel: acpiphp: Slot [25] registered
Mar 21 13:23:23.090255 kernel: acpiphp: Slot [26] registered
Mar 21 13:23:23.090266 kernel: acpiphp: Slot [27] registered
Mar 21 13:23:23.090274 kernel: acpiphp: Slot [28] registered
Mar 21 13:23:23.090283 kernel: acpiphp: Slot [29] registered
Mar 21 13:23:23.090292 kernel: acpiphp: Slot [30] registered
Mar 21 13:23:23.090301 kernel: acpiphp: Slot [31] registered
Mar 21 13:23:23.090310 kernel: PCI host bridge to bus 0000:00
Mar 21 13:23:23.090404 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 21 13:23:23.090490 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 21 13:23:23.090577 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 21 13:23:23.090659 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 21 13:23:23.090756 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window]
Mar 21 13:23:23.090843 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 21 13:23:23.091031 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 21 13:23:23.091144 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 21 13:23:23.091254 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 21 13:23:23.091360 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Mar 21 13:23:23.091461 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 21 13:23:23.091561 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 21 13:23:23.091662 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 21 13:23:23.091760 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 21 13:23:23.091859 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 21 13:23:23.092030 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 21 13:23:23.092122 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 21 13:23:23.092222 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 21 13:23:23.092315 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 21 13:23:23.092407 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref]
Mar 21 13:23:23.092501 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Mar 21 13:23:23.092592 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Mar 21 13:23:23.092690 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 21 13:23:23.092791 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 21 13:23:23.092925 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Mar 21 13:23:23.093022 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Mar 21 13:23:23.093114 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref]
Mar 21 13:23:23.093204 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Mar 21 13:23:23.093303 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 21 13:23:23.093400 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 21 13:23:23.093490 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Mar 21 13:23:23.093580 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref]
Mar 21 13:23:23.093685 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Mar 21 13:23:23.093777 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Mar 21 13:23:23.093868 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref]
Mar 21 13:23:23.094011 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Mar 21 13:23:23.094108 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Mar 21 13:23:23.094198 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff]
Mar 21 13:23:23.094288 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref]
Mar 21 13:23:23.094302 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 21 13:23:23.094311 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 21 13:23:23.094321 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 21 13:23:23.094330 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 21 13:23:23.094339 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 21 13:23:23.094351 kernel: iommu: Default domain type: Translated
Mar 21 13:23:23.094360 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 21 13:23:23.094369 kernel: PCI: Using ACPI for IRQ routing
Mar 21 13:23:23.094379 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 21 13:23:23.094388 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 21 13:23:23.094397 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff]
Mar 21 13:23:23.094486 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 21 13:23:23.094576 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 21 13:23:23.094677 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 21 13:23:23.094695 kernel: vgaarb: loaded
Mar 21 13:23:23.094704 kernel: clocksource: Switched to clocksource kvm-clock
Mar 21 13:23:23.094713 kernel: VFS: Disk quotas dquot_6.6.0
Mar 21 13:23:23.094722 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 21 13:23:23.094731 kernel: pnp: PnP ACPI init
Mar 21 13:23:23.094834 kernel: pnp 00:03: [dma 2]
Mar 21 13:23:23.094850 kernel: pnp: PnP ACPI: found 5 devices
Mar 21 13:23:23.094860 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 21 13:23:23.094916 kernel: NET: Registered PF_INET protocol family
Mar 21 13:23:23.094928 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 21 13:23:23.094938 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 21 13:23:23.094947 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 21 13:23:23.094957 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 21 13:23:23.094967 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 21 13:23:23.094977 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 21 13:23:23.094987 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 13:23:23.094997 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 13:23:23.095011 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 21 13:23:23.095020 kernel: NET: Registered PF_XDP protocol family
Mar 21 13:23:23.095113 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 21 13:23:23.095200 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 21 13:23:23.095285 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 21 13:23:23.095370 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Mar 21 13:23:23.095456 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window]
Mar 21 13:23:23.095554 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 21 13:23:23.095657 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 21 13:23:23.095672 kernel: PCI: CLS 0 bytes, default 64
Mar 21 13:23:23.095682 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 21 13:23:23.095692 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB)
Mar 21 13:23:23.095702 kernel: Initialise system trusted keyrings
Mar 21 13:23:23.095712 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 21 13:23:23.095722 kernel: Key type asymmetric registered
Mar 21 13:23:23.095732 kernel: Asymmetric key parser 'x509' registered
Mar 21 13:23:23.095746 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 21 13:23:23.095755 kernel: io scheduler mq-deadline registered
Mar 21 13:23:23.095764 kernel: io scheduler kyber registered
Mar 21 13:23:23.095773 kernel: io scheduler bfq registered
Mar 21 13:23:23.095782 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 21 13:23:23.095792 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 21 13:23:23.095801 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 21 13:23:23.095810 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 21 13:23:23.095819 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 21 13:23:23.095828 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 21 13:23:23.095840 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 21 13:23:23.095851 kernel: random: crng init done
Mar 21 13:23:23.095860 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 21 13:23:23.095870 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 21 13:23:23.095920 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 21 13:23:23.096022 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 21 13:23:23.096114 kernel: rtc_cmos 00:04: registered as rtc0
Mar 21 13:23:23.096129 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 21 13:23:23.096219 kernel: rtc_cmos 00:04: setting system clock to 2025-03-21T13:23:22 UTC (1742563402)
Mar 21 13:23:23.096308 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 21 13:23:23.096322 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 21 13:23:23.096331 kernel: NET: Registered PF_INET6 protocol family
Mar 21 13:23:23.096341 kernel: Segment Routing with IPv6
Mar 21 13:23:23.096351 kernel: In-situ OAM (IOAM) with IPv6
Mar 21 13:23:23.096360 kernel: NET: Registered PF_PACKET protocol family
Mar 21 13:23:23.096370 kernel: Key type dns_resolver registered
Mar 21 13:23:23.096383 kernel: IPI shorthand broadcast: enabled
Mar 21 13:23:23.096393 kernel: sched_clock: Marking stable (1009007994, 168981422)->(1212982989, -34993573)
Mar 21 13:23:23.096403 kernel: registered taskstats version 1
Mar 21 13:23:23.096412 kernel: Loading compiled-in X.509 certificates
Mar 21 13:23:23.096422 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: d76f2258ffed89096a9428010e5ac0a0babcea9e'
Mar 21 13:23:23.096432 kernel: Key type .fscrypt registered
Mar 21 13:23:23.096441 kernel: Key type fscrypt-provisioning registered
Mar 21 13:23:23.096451 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 21 13:23:23.096460 kernel: ima: Allocated hash algorithm: sha1
Mar 21 13:23:23.096472 kernel: ima: No architecture policies found
Mar 21 13:23:23.096481 kernel: clk: Disabling unused clocks
Mar 21 13:23:23.096491 kernel: Freeing unused kernel image (initmem) memory: 43588K
Mar 21 13:23:23.096500 kernel: Write protecting the kernel read-only data: 40960k
Mar 21 13:23:23.096510 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K
Mar 21 13:23:23.096520 kernel: Run /init as init process
Mar 21 13:23:23.096529 kernel: with arguments:
Mar 21 13:23:23.096538 kernel: /init
Mar 21 13:23:23.096548 kernel: with environment:
Mar 21 13:23:23.096559 kernel: HOME=/
Mar 21 13:23:23.096568 kernel: TERM=linux
Mar 21 13:23:23.096577 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 21 13:23:23.096588 systemd[1]: Successfully made /usr/ read-only.
Mar 21 13:23:23.096602 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 13:23:23.096613 systemd[1]: Detected virtualization kvm.
Mar 21 13:23:23.096624 systemd[1]: Detected architecture x86-64.
Mar 21 13:23:23.096636 systemd[1]: Running in initrd.
Mar 21 13:23:23.096646 systemd[1]: No hostname configured, using default hostname.
Mar 21 13:23:23.096657 systemd[1]: Hostname set to .
Mar 21 13:23:23.096667 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 13:23:23.096678 systemd[1]: Queued start job for default target initrd.target.
Mar 21 13:23:23.096688 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 13:23:23.096699 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 13:23:23.096720 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 21 13:23:23.096733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 13:23:23.096744 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 21 13:23:23.096756 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 21 13:23:23.096768 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 21 13:23:23.096779 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 21 13:23:23.096793 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 13:23:23.096804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 21 13:23:23.096815 systemd[1]: Reached target paths.target - Path Units.
Mar 21 13:23:23.096826 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 13:23:23.096836 systemd[1]: Reached target swap.target - Swaps.
Mar 21 13:23:23.096847 systemd[1]: Reached target timers.target - Timer Units.
Mar 21 13:23:23.096858 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 21 13:23:23.096869 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 21 13:23:23.096919 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 21 13:23:23.096933 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 21 13:23:23.096944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 13:23:23.096955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 13:23:23.096966 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 13:23:23.096976 systemd[1]: Reached target sockets.target - Socket Units.
Mar 21 13:23:23.096987 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 21 13:23:23.096998 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 13:23:23.097009 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 21 13:23:23.097022 systemd[1]: Starting systemd-fsck-usr.service...
Mar 21 13:23:23.097032 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 13:23:23.097043 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 13:23:23.097054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 13:23:23.097065 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 21 13:23:23.097076 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 13:23:23.097089 systemd[1]: Finished systemd-fsck-usr.service.
Mar 21 13:23:23.097100 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 21 13:23:23.097135 systemd-journald[184]: Collecting audit messages is disabled.
Mar 21 13:23:23.097165 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 21 13:23:23.097175 kernel: Bridge firewalling registered
Mar 21 13:23:23.097186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 13:23:23.097198 systemd-journald[184]: Journal started
Mar 21 13:23:23.097225 systemd-journald[184]: Runtime Journal (/run/log/journal/1547b9fbff214587a51ba7bab2b64454) is 8M, max 78.2M, 70.2M free.
Mar 21 13:23:23.053991 systemd-modules-load[185]: Inserted module 'overlay'
Mar 21 13:23:23.134264 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 13:23:23.085894 systemd-modules-load[185]: Inserted module 'br_netfilter'
Mar 21 13:23:23.135088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 13:23:23.136494 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 13:23:23.143080 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 13:23:23.146091 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 13:23:23.161967 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 21 13:23:23.164271 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 21 13:23:23.173181 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 13:23:23.176947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 13:23:23.183121 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 21 13:23:23.190782 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 13:23:23.198196 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 13:23:23.212099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 21 13:23:23.214932 dracut-cmdline[218]: dracut-dracut-053
Mar 21 13:23:23.219142 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2
Mar 21 13:23:23.267690 systemd-resolved[225]: Positive Trust Anchors:
Mar 21 13:23:23.267705 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 21 13:23:23.267749 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 21 13:23:23.274333 systemd-resolved[225]: Defaulting to hostname 'linux'.
Mar 21 13:23:23.276235 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 21 13:23:23.276809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 21 13:23:23.292932 kernel: SCSI subsystem initialized
Mar 21 13:23:23.302911 kernel: Loading iSCSI transport class v2.0-870.
Mar 21 13:23:23.314939 kernel: iscsi: registered transport (tcp)
Mar 21 13:23:23.370006 kernel: iscsi: registered transport (qla4xxx)
Mar 21 13:23:23.370096 kernel: QLogic iSCSI HBA Driver
Mar 21 13:23:23.431114 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 21 13:23:23.434212 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 21 13:23:23.492811 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 21 13:23:23.492892 kernel: device-mapper: uevent: version 1.0.3
Mar 21 13:23:23.495901 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 21 13:23:23.555955 kernel: raid6: sse2x4 gen() 5141 MB/s
Mar 21 13:23:23.573966 kernel: raid6: sse2x2 gen() 5998 MB/s
Mar 21 13:23:23.592261 kernel: raid6: sse2x1 gen() 9667 MB/s
Mar 21 13:23:23.592323 kernel: raid6: using algorithm sse2x1 gen() 9667 MB/s
Mar 21 13:23:23.611320 kernel: raid6: .... xor() 7322 MB/s, rmw enabled
Mar 21 13:23:23.611391 kernel: raid6: using ssse3x2 recovery algorithm
Mar 21 13:23:23.634199 kernel: xor: measuring software checksum speed
Mar 21 13:23:23.634262 kernel: prefetch64-sse : 18247 MB/sec
Mar 21 13:23:23.635575 kernel: generic_sse : 16675 MB/sec
Mar 21 13:23:23.635636 kernel: xor: using function: prefetch64-sse (18247 MB/sec)
Mar 21 13:23:23.815928 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 21 13:23:23.833989 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 13:23:23.839318 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 13:23:23.865536 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Mar 21 13:23:23.870384 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 13:23:23.877855 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 21 13:23:23.901724 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation
Mar 21 13:23:23.947112 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 21 13:23:23.952534 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 21 13:23:24.010691 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 13:23:24.016739 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 21 13:23:24.054630 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 21 13:23:24.066766 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 21 13:23:24.069340 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 13:23:24.070755 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 21 13:23:24.076959 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 21 13:23:24.100062 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 21 13:23:24.115413 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues
Mar 21 13:23:24.149288 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
Mar 21 13:23:24.149415 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 21 13:23:24.149431 kernel: GPT:17805311 != 20971519
Mar 21 13:23:24.149443 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 21 13:23:24.149455 kernel: GPT:17805311 != 20971519
Mar 21 13:23:24.149466 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 21 13:23:24.149477 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 13:23:24.149488 kernel: libata version 3.00 loaded.
Mar 21 13:23:24.149500 kernel: ata_piix 0000:00:01.1: version 2.13
Mar 21 13:23:24.161369 kernel: scsi host0: ata_piix
Mar 21 13:23:24.161526 kernel: scsi host1: ata_piix
Mar 21 13:23:24.161649 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Mar 21 13:23:24.161664 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Mar 21 13:23:24.146292 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 21 13:23:24.146497 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 13:23:24.147268 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 13:23:24.148870 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 13:23:24.149033 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 13:23:24.149647 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 13:23:24.151323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 13:23:24.152241 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 21 13:23:24.207902 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (453)
Mar 21 13:23:24.213908 kernel: BTRFS: device fsid c99b4410-5d95-4377-8189-88a588aa2514 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (473)
Mar 21 13:23:24.231232 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 21 13:23:24.247124 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 13:23:24.258943 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 21 13:23:24.269934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 21 13:23:24.278692 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 21 13:23:24.279281 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 21 13:23:24.283299 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 21 13:23:24.301045 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 13:23:24.323971 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 13:23:24.324078 disk-uuid[508]: Primary Header is updated.
Mar 21 13:23:24.324078 disk-uuid[508]: Secondary Entries is updated.
Mar 21 13:23:24.324078 disk-uuid[508]: Secondary Header is updated.
Mar 21 13:23:24.360437 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 13:23:25.362010 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 13:23:25.362073 disk-uuid[513]: The operation has completed successfully.
Mar 21 13:23:25.407374 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 21 13:23:25.407559 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 21 13:23:25.486903 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 21 13:23:25.508363 sh[531]: Success
Mar 21 13:23:25.526916 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Mar 21 13:23:25.588724 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 21 13:23:25.590774 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 21 13:23:25.596361 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 21 13:23:25.613928 kernel: BTRFS info (device dm-0): first mount of filesystem c99b4410-5d95-4377-8189-88a588aa2514
Mar 21 13:23:25.613986 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 21 13:23:25.614000 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 21 13:23:25.617501 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 21 13:23:25.617545 kernel: BTRFS info (device dm-0): using free space tree
Mar 21 13:23:25.634829 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 21 13:23:25.636972 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 21 13:23:25.639993 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 21 13:23:25.646091 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 21 13:23:25.685937 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 13:23:25.686040 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 21 13:23:25.686063 kernel: BTRFS info (device vda6): using free space tree
Mar 21 13:23:25.692958 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 21 13:23:25.699960 kernel: BTRFS info (device vda6): last unmount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 13:23:25.707742 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 21 13:23:25.712021 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 21 13:23:25.790957 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 21 13:23:25.796078 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 21 13:23:25.844604 systemd-networkd[710]: lo: Link UP
Mar 21 13:23:25.844618 systemd-networkd[710]: lo: Gained carrier
Mar 21 13:23:25.850106 systemd-networkd[710]: Enumeration completed
Mar 21 13:23:25.850481 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 21 13:23:25.851469 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 13:23:25.851473 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 21 13:23:25.852416 systemd[1]: Reached target network.target - Network.
Mar 21 13:23:25.852947 systemd-networkd[710]: eth0: Link UP
Mar 21 13:23:25.852952 systemd-networkd[710]: eth0: Gained carrier
Mar 21 13:23:25.852965 systemd-networkd[710]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 13:23:25.877962 systemd-networkd[710]: eth0: DHCPv4 address 172.24.4.44/24, gateway 172.24.4.1 acquired from 172.24.4.1
Mar 21 13:23:25.880658 ignition[637]: Ignition 2.20.0
Mar 21 13:23:25.880672 ignition[637]: Stage: fetch-offline
Mar 21 13:23:25.882206 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 21 13:23:25.880711 ignition[637]: no configs at "/usr/lib/ignition/base.d"
Mar 21 13:23:25.880723 ignition[637]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 21 13:23:25.880823 ignition[637]: parsed url from cmdline: ""
Mar 21 13:23:25.880828 ignition[637]: no config URL provided
Mar 21 13:23:25.880834 ignition[637]: reading system config file "/usr/lib/ignition/user.ign"
Mar 21 13:23:25.880843 ignition[637]: no config at "/usr/lib/ignition/user.ign"
Mar 21 13:23:25.885997 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 21 13:23:25.880848 ignition[637]: failed to fetch config: resource requires networking
Mar 21 13:23:25.881055 ignition[637]: Ignition finished successfully
Mar 21 13:23:25.910329 ignition[720]: Ignition 2.20.0
Mar 21 13:23:25.910959 ignition[720]: Stage: fetch
Mar 21 13:23:25.911214 ignition[720]: no configs at "/usr/lib/ignition/base.d"
Mar 21 13:23:25.911229 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 21 13:23:25.911328 ignition[720]: parsed url from cmdline: ""
Mar 21 13:23:25.911333 ignition[720]: no config URL provided
Mar 21 13:23:25.911340 ignition[720]: reading system config file "/usr/lib/ignition/user.ign"
Mar 21 13:23:25.911349 ignition[720]: no config at "/usr/lib/ignition/user.ign"
Mar 21 13:23:25.911445 ignition[720]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 21 13:23:25.912465 ignition[720]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 21 13:23:25.912485 ignition[720]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 21 13:23:26.049492 ignition[720]: GET result: OK
Mar 21 13:23:26.049676 ignition[720]: parsing config with SHA512: 078fd61d297990eaa18fe730d264dee2e4df2d8b4f1ba43e652f96689c67a331a9cd40bf107fefe21316f6702f806b7717830b9cd06718676ae3f839f2e11985
Mar 21 13:23:26.068699 unknown[720]: fetched base config from "system"
Mar 21 13:23:26.068730 unknown[720]: fetched base config from "system"
Mar 21 13:23:26.068820 unknown[720]: fetched user config from "openstack"
Mar 21 13:23:26.072380 ignition[720]: fetch: fetch complete
Mar 21 13:23:26.072522 ignition[720]: fetch: fetch passed
Mar 21 13:23:26.072626 ignition[720]: Ignition finished successfully
Mar 21 13:23:26.076259 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 21 13:23:26.080440 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 21 13:23:26.126348 ignition[727]: Ignition 2.20.0
Mar 21 13:23:26.127295 ignition[727]: Stage: kargs
Mar 21 13:23:26.127687 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Mar 21 13:23:26.127712 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 21 13:23:26.134539 ignition[727]: kargs: kargs passed
Mar 21 13:23:26.134643 ignition[727]: Ignition finished successfully
Mar 21 13:23:26.139130 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 21 13:23:26.143408 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 21 13:23:26.188401 ignition[734]: Ignition 2.20.0
Mar 21 13:23:26.188437 ignition[734]: Stage: disks
Mar 21 13:23:26.191139 ignition[734]: no configs at "/usr/lib/ignition/base.d"
Mar 21 13:23:26.191173 ignition[734]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 21 13:23:26.197367 ignition[734]: disks: disks passed
Mar 21 13:23:26.198495 ignition[734]: Ignition finished successfully
Mar 21 13:23:26.200434 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 21 13:23:26.202737 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 21 13:23:26.204522 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 21 13:23:26.207327 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 21 13:23:26.210567 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 21 13:23:26.213077 systemd[1]: Reached target basic.target - Basic System.
Mar 21 13:23:26.217746 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 21 13:23:26.265461 systemd-fsck[742]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 21 13:23:26.276480 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 21 13:23:26.280684 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 21 13:23:26.435912 kernel: EXT4-fs (vda9): mounted filesystem c540419e-275b-4bd7-8ebd-24b19ec75c0b r/w with ordered data mode. Quota mode: none.
Mar 21 13:23:26.436905 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 21 13:23:26.438443 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 21 13:23:26.441319 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 21 13:23:26.443970 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 21 13:23:26.445212 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 21 13:23:26.447715 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 21 13:23:26.450013 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 21 13:23:26.451015 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 21 13:23:26.460510 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 21 13:23:26.466093 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 21 13:23:26.486949 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (750)
Mar 21 13:23:26.505136 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 13:23:26.505204 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 21 13:23:26.507064 kernel: BTRFS info (device vda6): using free space tree
Mar 21 13:23:26.518368 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 21 13:23:26.517053 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 21 13:23:26.583525 initrd-setup-root[779]: cut: /sysroot/etc/passwd: No such file or directory
Mar 21 13:23:26.589423 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory
Mar 21 13:23:26.595425 initrd-setup-root[793]: cut: /sysroot/etc/shadow: No such file or directory
Mar 21 13:23:26.600860 initrd-setup-root[800]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 21 13:23:26.690699 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 21 13:23:26.692609 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 21 13:23:26.695995 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 21 13:23:26.707716 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 21 13:23:26.710775 kernel: BTRFS info (device vda6): last unmount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 13:23:26.741228 ignition[868]: INFO : Ignition 2.20.0
Mar 21 13:23:26.741228 ignition[868]: INFO : Stage: mount
Mar 21 13:23:26.744255 ignition[868]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 13:23:26.744255 ignition[868]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 21 13:23:26.744255 ignition[868]: INFO : mount: mount passed
Mar 21 13:23:26.744255 ignition[868]: INFO : Ignition finished successfully
Mar 21 13:23:26.745459 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 21 13:23:26.753839 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 21 13:23:27.316308 systemd-networkd[710]: eth0: Gained IPv6LL
Mar 21 13:23:33.643093 coreos-metadata[752]: Mar 21 13:23:33.643 WARN failed to locate config-drive, using the metadata service API instead
Mar 21 13:23:33.684101 coreos-metadata[752]: Mar 21 13:23:33.684 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 21 13:23:33.699382 coreos-metadata[752]: Mar 21 13:23:33.699 INFO Fetch successful
Mar 21 13:23:33.700826 coreos-metadata[752]: Mar 21 13:23:33.700 INFO wrote hostname ci-9999-0-3-0-e42165490f.novalocal to /sysroot/etc/hostname
Mar 21 13:23:33.703409 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 21 13:23:33.703691 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 21 13:23:33.711335 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 21 13:23:33.739310 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 21 13:23:33.775955 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (884)
Mar 21 13:23:33.785095 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 13:23:33.785163 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 21 13:23:33.789293 kernel: BTRFS info (device vda6): using free space tree
Mar 21 13:23:33.800982 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 21 13:23:33.805775 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 21 13:23:33.855713 ignition[902]: INFO : Ignition 2.20.0
Mar 21 13:23:33.855713 ignition[902]: INFO : Stage: files
Mar 21 13:23:33.858678 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 13:23:33.858678 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 21 13:23:33.858678 ignition[902]: DEBUG : files: compiled without relabeling support, skipping
Mar 21 13:23:33.864450 ignition[902]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 21 13:23:33.864450 ignition[902]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 21 13:23:33.868852 ignition[902]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 21 13:23:33.868852 ignition[902]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 21 13:23:33.873588 ignition[902]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 21 13:23:33.870215 unknown[902]: wrote ssh authorized keys file for user: core
Mar 21 13:23:33.877690 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 21 13:23:33.877690 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 21 13:23:33.943293 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 21 13:23:34.302964 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 21 13:23:34.302964 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 21 13:23:34.302964 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 21 13:23:34.747079 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 21 13:23:35.164042 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 21 13:23:35.164042 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 21 13:23:35.164042 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 21 13:23:35.170950 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 21 13:23:35.748060 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 21 13:23:37.743132 ignition[902]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 21 13:23:37.743132 ignition[902]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 21 13:23:37.750544 ignition[902]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 21 13:23:37.750544 ignition[902]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 21 13:23:37.750544 ignition[902]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 21 13:23:37.750544 ignition[902]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 21 13:23:37.750544 ignition[902]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 21 13:23:37.750544 ignition[902]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 21 13:23:37.750544 ignition[902]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 21 13:23:37.750544 ignition[902]: INFO : files: files passed
Mar 21 13:23:37.750544 ignition[902]: INFO : Ignition finished successfully
Mar 21 13:23:37.747104 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 21 13:23:37.751005 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 21 13:23:37.756082 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 21 13:23:37.768383 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 21 13:23:37.768475 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 21 13:23:37.783039 initrd-setup-root-after-ignition[932]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 13:23:37.783039 initrd-setup-root-after-ignition[932]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 13:23:37.786582 initrd-setup-root-after-ignition[936]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 13:23:37.790451 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 21 13:23:37.793077 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 21 13:23:37.800658 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 21 13:23:37.858202 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 21 13:23:37.858452 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 21 13:23:37.861058 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 21 13:23:37.862382 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 21 13:23:37.864433 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 21 13:23:37.866170 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 21 13:23:37.908530 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 21 13:23:37.915175 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 21 13:23:37.949053 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 21 13:23:37.950697 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 13:23:37.953080 systemd[1]: Stopped target timers.target - Timer Units.
Mar 21 13:23:37.954849 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 21 13:23:37.955237 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 21 13:23:37.958045 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 21 13:23:37.960116 systemd[1]: Stopped target basic.target - Basic System.
Mar 21 13:23:37.962060 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 21 13:23:37.964273 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 21 13:23:37.966604 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 21 13:23:37.968852 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 21 13:23:37.971070 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 21 13:23:37.973406 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 21 13:23:37.975504 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 21 13:23:37.977436 systemd[1]: Stopped target swap.target - Swaps.
Mar 21 13:23:37.979181 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 21 13:23:37.979543 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 21 13:23:37.981759 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 21 13:23:37.983365 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 13:23:37.985240 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 21 13:23:37.985479 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 13:23:37.987296 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 21 13:23:37.987593 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 21 13:23:37.989238 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 21 13:23:37.989380 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 21 13:23:37.990658 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 21 13:23:37.990771 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 21 13:23:37.995104 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 21 13:23:37.995668 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 21 13:23:37.995844 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 13:23:37.997971 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 21 13:23:38.000270 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 21 13:23:38.000405 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 13:23:38.002535 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 21 13:23:38.002662 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 21 13:23:38.010151 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 21 13:23:38.010846 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 21 13:23:38.023916 ignition[956]: INFO : Ignition 2.20.0
Mar 21 13:23:38.023916 ignition[956]: INFO : Stage: umount
Mar 21 13:23:38.023916 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 13:23:38.023916 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 21 13:23:38.028779 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 21 13:23:38.030860 ignition[956]: INFO : umount: umount passed
Mar 21 13:23:38.030860 ignition[956]: INFO : Ignition finished successfully
Mar 21 13:23:38.033025 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 21 13:23:38.033686 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 21 13:23:38.035170 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 21 13:23:38.035286 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 21 13:23:38.037082 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 21 13:23:38.037176 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 21 13:23:38.038358 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 21 13:23:38.038406 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 21 13:23:38.039383 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 21 13:23:38.039427 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 21 13:23:38.040403 systemd[1]: Stopped target network.target - Network.
Mar 21 13:23:38.041368 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 21 13:23:38.041415 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 21 13:23:38.042421 systemd[1]: Stopped target paths.target - Path Units.
Mar 21 13:23:38.043375 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 21 13:23:38.047953 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 13:23:38.049221 systemd[1]: Stopped target slices.target - Slice Units.
Mar 21 13:23:38.049705 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 21 13:23:38.050705 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 21 13:23:38.050747 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 21 13:23:38.051734 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 21 13:23:38.051770 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 21 13:23:38.052738 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 21 13:23:38.052786 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 21 13:23:38.053736 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 21 13:23:38.053776 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 21 13:23:38.054741 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 21 13:23:38.054787 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 21 13:23:38.055929 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 21 13:23:38.057014 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 21 13:23:38.059805 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 21 13:23:38.059942 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 21 13:23:38.063601 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 21 13:23:38.064189 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 21 13:23:38.064257 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 13:23:38.070152 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 21 13:23:38.070405 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 21 13:23:38.070504 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 21 13:23:38.072943 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 21 13:23:38.073415 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 21 13:23:38.073595 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 13:23:38.076015 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 21 13:23:38.076670 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 21 13:23:38.076724 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 21 13:23:38.078256 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 21 13:23:38.078304 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 21 13:23:38.080913 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 21 13:23:38.080962 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 21 13:23:38.081626 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 13:23:38.083959 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 21 13:23:38.089537 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 21 13:23:38.090295 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 13:23:38.091798 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 21 13:23:38.091839 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 21 13:23:38.092366 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 21 13:23:38.092397 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 13:23:38.092864 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 21 13:23:38.095064 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 13:23:38.096390 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 21 13:23:38.096433 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 21 13:23:38.097377 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 21 13:23:38.097442 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 13:23:38.101018 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 21 13:23:38.101671 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 21 13:23:38.101733 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 13:23:38.103130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 13:23:38.103178 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 13:23:38.109296 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 21 13:23:38.109391 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 21 13:23:38.114275 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 21 13:23:38.114386 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 21 13:23:38.115853 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 21 13:23:38.117995 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 21 13:23:38.135808 systemd[1]: Switching root.
Mar 21 13:23:38.178304 systemd-journald[184]: Journal stopped
Mar 21 13:23:40.118209 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Mar 21 13:23:40.118274 kernel: SELinux:  policy capability network_peer_controls=1
Mar 21 13:23:40.118299 kernel: SELinux:  policy capability open_perms=1
Mar 21 13:23:40.118311 kernel: SELinux:  policy capability extended_socket_class=1
Mar 21 13:23:40.118327 kernel: SELinux:  policy capability always_check_network=0
Mar 21 13:23:40.118339 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar 21 13:23:40.118350 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar 21 13:23:40.118366 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Mar 21 13:23:40.118377 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Mar 21 13:23:40.118388 kernel: audit: type=1403 audit(1742563418.797:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 21 13:23:40.118401 systemd[1]: Successfully loaded SELinux policy in 86.952ms.
Mar 21 13:23:40.118427 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 27.661ms.
Mar 21 13:23:40.118441 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 13:23:40.118453 systemd[1]: Detected virtualization kvm.
Mar 21 13:23:40.118466 systemd[1]: Detected architecture x86-64.
Mar 21 13:23:40.118480 systemd[1]: Detected first boot.
Mar 21 13:23:40.118493 systemd[1]: Hostname set to .
Mar 21 13:23:40.118505 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 13:23:40.118518 zram_generator::config[1002]: No configuration found.
Mar 21 13:23:40.118531 kernel: Guest personality initialized and is inactive
Mar 21 13:23:40.118543 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 21 13:23:40.118554 kernel: Initialized host personality
Mar 21 13:23:40.118565 kernel: NET: Registered PF_VSOCK protocol family
Mar 21 13:23:40.118576 systemd[1]: Populated /etc with preset unit settings.
Mar 21 13:23:40.118592 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 21 13:23:40.118604 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 21 13:23:40.118634 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 21 13:23:40.118648 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 21 13:23:40.118660 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 21 13:23:40.118672 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 21 13:23:40.118685 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 21 13:23:40.118697 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 21 13:23:40.118710 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 21 13:23:40.118726 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 21 13:23:40.118738 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 21 13:23:40.118751 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 21 13:23:40.118763 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 13:23:40.118775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 13:23:40.118788 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 21 13:23:40.118800 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 21 13:23:40.118815 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 21 13:23:40.118828 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 13:23:40.118840 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 21 13:23:40.118853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 13:23:40.118865 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 21 13:23:40.122180 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 21 13:23:40.122204 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 21 13:23:40.122222 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 21 13:23:40.122236 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 13:23:40.122249 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 21 13:23:40.122262 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 13:23:40.122275 systemd[1]: Reached target swap.target - Swaps.
Mar 21 13:23:40.122288 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 21 13:23:40.122301 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 21 13:23:40.122314 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 21 13:23:40.122327 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 13:23:40.122340 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 13:23:40.122355 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 13:23:40.122368 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 21 13:23:40.122381 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 21 13:23:40.122393 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 21 13:23:40.122411 systemd[1]: Mounting media.mount - External Media Directory...
Mar 21 13:23:40.122424 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 13:23:40.122442 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 21 13:23:40.122454 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 21 13:23:40.122468 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 21 13:23:40.122482 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 21 13:23:40.122494 systemd[1]: Reached target machines.target - Containers.
Mar 21 13:23:40.122507 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 21 13:23:40.122519 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 13:23:40.122532 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 13:23:40.122544 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 21 13:23:40.122557 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 13:23:40.122570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 21 13:23:40.122584 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 13:23:40.122597 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 21 13:23:40.122609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 13:23:40.122639 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 21 13:23:40.122652 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 21 13:23:40.122664 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 21 13:23:40.122676 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 21 13:23:40.122689 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 21 13:23:40.122704 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 13:23:40.122717 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 13:23:40.122729 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 13:23:40.122742 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 21 13:23:40.122754 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 21 13:23:40.122766 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 21 13:23:40.122779 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 21 13:23:40.122793 kernel: loop: module loaded
Mar 21 13:23:40.122807 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 21 13:23:40.122819 systemd[1]: Stopped verity-setup.service.
Mar 21 13:23:40.122831 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 13:23:40.122844 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 21 13:23:40.122857 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 21 13:23:40.122871 kernel: fuse: init (API version 7.39)
Mar 21 13:23:40.122903 systemd[1]: Mounted media.mount - External Media Directory.
Mar 21 13:23:40.122916 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 21 13:23:40.122931 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 21 13:23:40.122944 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 21 13:23:40.122956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 13:23:40.122973 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 21 13:23:40.122992 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 21 13:23:40.123008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 13:23:40.123024 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 13:23:40.123042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 13:23:40.123060 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 13:23:40.123076 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 21 13:23:40.123095 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 21 13:23:40.123114 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 13:23:40.123138 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 13:23:40.123158 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 13:23:40.123213 systemd-journald[1089]: Collecting audit messages is disabled.
Mar 21 13:23:40.123259 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 21 13:23:40.123282 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 21 13:23:40.123306 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 21 13:23:40.123328 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 21 13:23:40.123353 systemd-journald[1089]: Journal started
Mar 21 13:23:40.123395 systemd-journald[1089]: Runtime Journal (/run/log/journal/1547b9fbff214587a51ba7bab2b64454) is 8M, max 78.2M, 70.2M free.
Mar 21 13:23:40.141952 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 21 13:23:39.730679 systemd[1]: Queued start job for default target multi-user.target.
Mar 21 13:23:39.743538 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 21 13:23:39.744000 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 21 13:23:40.152842 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 21 13:23:40.153858 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 21 13:23:40.153919 kernel: ACPI: bus type drm_connector registered
Mar 21 13:23:40.158925 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 21 13:23:40.164893 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 21 13:23:40.169915 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 21 13:23:40.173922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 13:23:40.190534 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 21 13:23:40.190595 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 13:23:40.198241 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 21 13:23:40.198310 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 13:23:40.205913 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 13:23:40.209916 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 21 13:23:40.211921 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 13:23:40.220730 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 21 13:23:40.221498 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 21 13:23:40.221657 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 21 13:23:40.222406 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 21 13:23:40.223297 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 13:23:40.223945 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 21 13:23:40.224527 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 21 13:23:40.225312 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 21 13:23:40.243782 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 21 13:23:40.246087 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 21 13:23:40.249320 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 21 13:23:40.264429 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 13:23:40.267410 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 21 13:23:40.270067 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 21 13:23:40.272376 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 21 13:23:40.278487 kernel: loop0: detected capacity change from 0 to 109808
Mar 21 13:23:40.279219 systemd-journald[1089]: Time spent on flushing to /var/log/journal/1547b9fbff214587a51ba7bab2b64454 is 32.901ms for 967 entries.
Mar 21 13:23:40.279219 systemd-journald[1089]: System Journal (/var/log/journal/1547b9fbff214587a51ba7bab2b64454) is 8M, max 584.8M, 576.8M free.
Mar 21 13:23:40.368113 systemd-journald[1089]: Received client request to flush runtime journal.
Mar 21 13:23:40.301144 udevadm[1148]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 21 13:23:40.371272 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 21 13:23:40.372357 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 21 13:23:40.374261 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 21 13:23:40.407807 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 21 13:23:40.405524 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 21 13:23:40.416761 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Mar 21 13:23:40.416782 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Mar 21 13:23:40.424624 kernel: loop1: detected capacity change from 0 to 8
Mar 21 13:23:40.423760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 13:23:40.441908 kernel: loop2: detected capacity change from 0 to 205544
Mar 21 13:23:40.507986 kernel: loop3: detected capacity change from 0 to 151640
Mar 21 13:23:40.586920 kernel: loop4: detected capacity change from 0 to 109808
Mar 21 13:23:40.621920 kernel: loop5: detected capacity change from 0 to 8
Mar 21 13:23:40.629933 kernel: loop6: detected capacity change from 0 to 205544
Mar 21 13:23:40.684926 kernel: loop7: detected capacity change from 0 to 151640
Mar 21 13:23:40.725329 (sd-merge)[1167]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 21 13:23:40.725776 (sd-merge)[1167]: Merged extensions into '/usr'.
Mar 21 13:23:40.735737 systemd[1]: Reload requested from client PID 1122 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 21 13:23:40.735755 systemd[1]: Reloading...
Mar 21 13:23:40.838906 zram_generator::config[1194]: No configuration found.
Mar 21 13:23:41.121555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 13:23:41.198807 ldconfig[1118]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 21 13:23:41.217648 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 21 13:23:41.217783 systemd[1]: Reloading finished in 481 ms.
Mar 21 13:23:41.238194 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 21 13:23:41.239100 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 21 13:23:41.239856 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 21 13:23:41.252318 systemd[1]: Starting ensure-sysext.service...
Mar 21 13:23:41.255999 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 21 13:23:41.258159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 13:23:41.285147 systemd[1]: Reload requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)...
Mar 21 13:23:41.285167 systemd[1]: Reloading...
Mar 21 13:23:41.312468 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 21 13:23:41.312748 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 21 13:23:41.313728 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 21 13:23:41.314058 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Mar 21 13:23:41.314120 systemd-tmpfiles[1253]: ACLs are not supported, ignoring.
Mar 21 13:23:41.321754 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Mar 21 13:23:41.321764 systemd-tmpfiles[1253]: Skipping /boot
Mar 21 13:23:41.337540 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot.
Mar 21 13:23:41.337553 systemd-tmpfiles[1253]: Skipping /boot
Mar 21 13:23:41.349163 systemd-udevd[1254]: Using default interface naming scheme 'v255'.
Mar 21 13:23:41.385913 zram_generator::config[1286]: No configuration found.
Mar 21 13:23:41.480134 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1316)
Mar 21 13:23:41.585127 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar 21 13:23:41.627855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 13:23:41.645983 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 21 13:23:41.662953 kernel: ACPI: button: Power Button [PWRF]
Mar 21 13:23:41.669943 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 21 13:23:41.683913 kernel: mousedev: PS/2 mouse device common for all mice
Mar 21 13:23:41.691907 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Mar 21 13:23:41.691955 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Mar 21 13:23:41.695890 kernel: Console: switching to colour dummy device 80x25
Mar 21 13:23:41.696901 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 21 13:23:41.696939 kernel: [drm] features: -context_init
Mar 21 13:23:41.700898 kernel: [drm] number of scanouts: 1
Mar 21 13:23:41.702904 kernel: [drm] number of cap sets: 0
Mar 21 13:23:41.702942 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Mar 21 13:23:41.710698 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar 21 13:23:41.710759 kernel: Console: switching to colour frame buffer device 160x50
Mar 21 13:23:41.717921 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 21 13:23:41.761953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 21 13:23:41.764459 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 21 13:23:41.765110 systemd[1]: Reloading finished in 479 ms.
Mar 21 13:23:41.776597 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 13:23:41.786410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 13:23:41.814951 systemd[1]: Finished ensure-sysext.service.
Mar 21 13:23:41.825542 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 21 13:23:41.839720 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 21 13:23:41.841008 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 21 13:23:41.846870 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 21 13:23:41.847121 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 21 13:23:41.854004 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 21 13:23:41.856341 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 21 13:23:41.861012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 21 13:23:41.864055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 21 13:23:41.868582 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 21 13:23:41.869720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 21 13:23:41.881737 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 21 13:23:41.882735 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 21 13:23:41.885064 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 21 13:23:41.891003 lvm[1375]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 21 13:23:41.897051 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 21 13:23:41.905744 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 21 13:23:41.910333 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 21 13:23:41.917023 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 21 13:23:41.923679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 13:23:41.924674 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 21 13:23:41.925619 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 21 13:23:41.925810 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 21 13:23:41.926153 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 21 13:23:41.926318 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 21 13:23:41.926640 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 21 13:23:41.926796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 21 13:23:41.929221 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 21 13:23:41.929929 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 21 13:23:41.939371 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 21 13:23:41.939444 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 21 13:23:41.944029 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 21 13:23:41.964628 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 21 13:23:41.967397 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 21 13:23:41.975932 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Mar 21 13:23:41.981495 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 21 13:23:41.986213 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 21 13:23:42.015232 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 21 13:23:42.032025 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 21 13:23:42.037173 augenrules[1418]: No rules Mar 21 13:23:42.042127 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 21 13:23:42.043758 systemd[1]: audit-rules.service: Deactivated successfully. Mar 21 13:23:42.046159 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 21 13:23:42.057741 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 21 13:23:42.058854 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 21 13:23:42.059699 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 21 13:23:42.066557 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 21 13:23:42.071125 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 21 13:23:42.138915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 13:23:42.179619 systemd-networkd[1388]: lo: Link UP Mar 21 13:23:42.179629 systemd-networkd[1388]: lo: Gained carrier Mar 21 13:23:42.180861 systemd-networkd[1388]: Enumeration completed Mar 21 13:23:42.180985 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 21 13:23:42.184274 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Mar 21 13:23:42.186087 systemd-resolved[1389]: Positive Trust Anchors: Mar 21 13:23:42.186988 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 21 13:23:42.187628 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 21 13:23:42.187935 systemd-resolved[1389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 21 13:23:42.187982 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 21 13:23:42.190229 systemd[1]: Reached target time-set.target - System Time Set. Mar 21 13:23:42.192097 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 21 13:23:42.192108 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 21 13:23:42.193068 systemd-networkd[1388]: eth0: Link UP Mar 21 13:23:42.193077 systemd-networkd[1388]: eth0: Gained carrier Mar 21 13:23:42.193091 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 21 13:23:42.198459 systemd-resolved[1389]: Using system hostname 'ci-9999-0-3-0-e42165490f.novalocal'. Mar 21 13:23:42.201014 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 21 13:23:42.201831 systemd[1]: Reached target network.target - Network. 
Mar 21 13:23:42.202348 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 21 13:23:42.202795 systemd[1]: Reached target sysinit.target - System Initialization. Mar 21 13:23:42.205440 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 21 13:23:42.207216 systemd-networkd[1388]: eth0: DHCPv4 address 172.24.4.44/24, gateway 172.24.4.1 acquired from 172.24.4.1 Mar 21 13:23:42.207422 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 21 13:23:42.208216 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Mar 21 13:23:42.209089 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 21 13:23:42.209600 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 21 13:23:42.211605 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 21 13:23:42.213639 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 21 13:23:42.213682 systemd[1]: Reached target paths.target - Path Units. Mar 21 13:23:42.214160 systemd[1]: Reached target timers.target - Timer Units. Mar 21 13:23:42.218105 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 21 13:23:42.221685 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 21 13:23:42.227315 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 21 13:23:42.229834 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 21 13:23:42.232025 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 21 13:23:42.242722 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Mar 21 13:23:42.246982 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 21 13:23:42.249591 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 21 13:23:42.253401 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 21 13:23:42.257205 systemd[1]: Reached target sockets.target - Socket Units. Mar 21 13:23:42.258639 systemd[1]: Reached target basic.target - Basic System. Mar 21 13:23:42.260733 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 21 13:23:42.260766 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 21 13:23:42.263605 systemd[1]: Starting containerd.service - containerd container runtime... Mar 21 13:23:42.274749 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 21 13:23:42.290001 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 21 13:23:42.292559 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 21 13:23:42.299325 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 21 13:23:42.301441 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 21 13:23:42.307823 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 21 13:23:42.320016 jq[1451]: false Mar 21 13:23:42.314062 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 21 13:23:42.322512 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 21 13:23:42.327741 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 21 13:23:42.336449 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 21 13:23:42.343642 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 21 13:23:42.344265 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 21 13:23:42.350008 systemd[1]: Starting update-engine.service - Update Engine... Mar 21 13:23:42.353386 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 21 13:23:42.361866 extend-filesystems[1452]: Found loop4 Mar 21 13:23:42.361866 extend-filesystems[1452]: Found loop5 Mar 21 13:23:42.361866 extend-filesystems[1452]: Found loop6 Mar 21 13:23:42.361866 extend-filesystems[1452]: Found loop7 Mar 21 13:23:42.361866 extend-filesystems[1452]: Found vda Mar 21 13:23:42.361866 extend-filesystems[1452]: Found vda1 Mar 21 13:23:42.361866 extend-filesystems[1452]: Found vda2 Mar 21 13:23:42.361866 extend-filesystems[1452]: Found vda3 Mar 21 13:23:42.361866 extend-filesystems[1452]: Found usr Mar 21 13:23:42.361866 extend-filesystems[1452]: Found vda4 Mar 21 13:23:42.361866 extend-filesystems[1452]: Found vda6 Mar 21 13:23:43.301693 extend-filesystems[1452]: Found vda7 Mar 21 13:23:43.301693 extend-filesystems[1452]: Found vda9 Mar 21 13:23:43.301693 extend-filesystems[1452]: Checking size of /dev/vda9 Mar 21 13:23:42.378059 dbus-daemon[1448]: [system] SELinux support is enabled Mar 21 13:23:42.363854 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 21 13:23:42.364939 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 21 13:23:42.368172 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 21 13:23:42.368385 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 21 13:23:42.380614 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 21 13:23:43.293261 systemd-resolved[1389]: Clock change detected. Flushing caches. Mar 21 13:23:43.293292 systemd-timesyncd[1391]: Contacted time server 83.147.242.172:123 (0.flatcar.pool.ntp.org). Mar 21 13:23:43.293353 systemd-timesyncd[1391]: Initial clock synchronization to Fri 2025-03-21 13:23:43.293171 UTC. Mar 21 13:23:43.303359 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 21 13:23:43.303388 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 21 13:23:43.308187 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 21 13:23:43.308217 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 21 13:23:43.319286 update_engine[1459]: I20250321 13:23:43.318935 1459 main.cc:92] Flatcar Update Engine starting Mar 21 13:23:43.319603 jq[1460]: true Mar 21 13:23:43.320977 update_engine[1459]: I20250321 13:23:43.320822 1459 update_check_scheduler.cc:74] Next update check in 2m44s Mar 21 13:23:43.321614 systemd[1]: Started update-engine.service - Update Engine. Mar 21 13:23:43.349096 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1299) Mar 21 13:23:43.352093 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Mar 21 13:23:43.355167 extend-filesystems[1452]: Resized partition /dev/vda9 Mar 21 13:23:43.368429 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 21 13:23:43.374074 jq[1475]: true Mar 21 13:23:43.374391 extend-filesystems[1485]: resize2fs 1.47.2 (1-Jan-2025) Mar 21 13:23:43.375230 systemd[1]: motdgen.service: Deactivated successfully. Mar 21 13:23:43.375496 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 21 13:23:43.402942 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Mar 21 13:23:43.403057 tar[1464]: linux-amd64/helm Mar 21 13:23:43.424180 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Mar 21 13:23:43.439330 systemd-logind[1458]: New seat seat0. Mar 21 13:23:43.482593 systemd-logind[1458]: Watching system buttons on /dev/input/event1 (Power Button) Mar 21 13:23:43.501325 extend-filesystems[1485]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 21 13:23:43.501325 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 21 13:23:43.501325 extend-filesystems[1485]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. Mar 21 13:23:43.482614 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 21 13:23:43.520997 extend-filesystems[1452]: Resized filesystem in /dev/vda9 Mar 21 13:23:43.487123 systemd[1]: Started systemd-logind.service - User Login Management. Mar 21 13:23:43.493256 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 21 13:23:43.495088 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 21 13:23:43.537083 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Mar 21 13:23:43.539929 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 21 13:23:43.562254 systemd[1]: Starting sshkeys.service... 
Mar 21 13:23:43.605920 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 21 13:23:43.611163 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 21 13:23:43.661155 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 21 13:23:43.803163 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 21 13:23:43.855036 containerd[1479]: time="2025-03-21T13:23:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 21 13:23:43.856440 containerd[1479]: time="2025-03-21T13:23:43.855828385Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 21 13:23:43.862344 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 21 13:23:43.872473 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Mar 21 13:23:43.876916 containerd[1479]: time="2025-03-21T13:23:43.876885237Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.353µs" Mar 21 13:23:43.877098 containerd[1479]: time="2025-03-21T13:23:43.877080152Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 21 13:23:43.877797 containerd[1479]: time="2025-03-21T13:23:43.877745029Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 21 13:23:43.878065 containerd[1479]: time="2025-03-21T13:23:43.877938833Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 21 13:23:43.878065 containerd[1479]: time="2025-03-21T13:23:43.877973989Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 21 13:23:43.878065 containerd[1479]: time="2025-03-21T13:23:43.878002803Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 21 13:23:43.878178 containerd[1479]: time="2025-03-21T13:23:43.878152564Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 21 13:23:43.878178 containerd[1479]: time="2025-03-21T13:23:43.878173593Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 21 13:23:43.878988 containerd[1479]: time="2025-03-21T13:23:43.878445934Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 21 13:23:43.878988 containerd[1479]: time="2025-03-21T13:23:43.878490117Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 21 13:23:43.878988 containerd[1479]: time="2025-03-21T13:23:43.878504143Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 21 13:23:43.878988 containerd[1479]: time="2025-03-21T13:23:43.878514532Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 21 13:23:43.878988 containerd[1479]: time="2025-03-21T13:23:43.878647802Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 21 13:23:43.878988 containerd[1479]: time="2025-03-21T13:23:43.878903983Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 21 13:23:43.878988 containerd[1479]: time="2025-03-21T13:23:43.878945200Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 21 13:23:43.878988 containerd[1479]: time="2025-03-21T13:23:43.878958335Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 21 13:23:43.878988 containerd[1479]: time="2025-03-21T13:23:43.878988692Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 21 13:23:43.879518 containerd[1479]: time="2025-03-21T13:23:43.879317759Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 21 13:23:43.879518 containerd[1479]: time="2025-03-21T13:23:43.879382591Z" level=info msg="metadata content store policy set" policy=shared Mar 21 13:23:43.890936 systemd[1]: issuegen.service: Deactivated successfully. 
Mar 21 13:23:43.891505 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896207806Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896302544Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896324725Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896362196Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896381662Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896396240Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896410346Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896444650Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896458987Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896472152Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896484255Z" level=info 
msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896497259Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896660335Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 21 13:23:43.896740 containerd[1479]: time="2025-03-21T13:23:43.896684530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896707093Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896724656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896741607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896754481Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896767296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896783586Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896796931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896809855Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 
Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896822990Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896894324Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896911175Z" level=info msg="Start snapshots syncer" Mar 21 13:23:43.899573 containerd[1479]: time="2025-03-21T13:23:43.896933317Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 21 13:23:43.897479 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 21 13:23:43.899922 containerd[1479]: time="2025-03-21T13:23:43.897213212Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"
disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 21 13:23:43.899922 containerd[1479]: time="2025-03-21T13:23:43.897270850Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897341072Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897442051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897468100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897481365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897494169Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897509728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897521801Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897533803Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897557488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897577014Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897588325Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897625395Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897647927Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 21 13:23:43.900070 containerd[1479]: time="2025-03-21T13:23:43.897664799Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 21 13:23:43.900389 containerd[1479]: time="2025-03-21T13:23:43.897683414Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 21 13:23:43.900389 containerd[1479]: time="2025-03-21T13:23:43.897698382Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 21 13:23:43.900389 containerd[1479]: time="2025-03-21T13:23:43.897716656Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 21 13:23:43.900389 containerd[1479]: time="2025-03-21T13:23:43.897738317Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 21 13:23:43.900389 containerd[1479]: time="2025-03-21T13:23:43.897768293Z" level=info msg="runtime interface created"
Mar 21 13:23:43.900389 containerd[1479]: time="2025-03-21T13:23:43.897799712Z" level=info msg="created NRI interface"
Mar 21 13:23:43.900389 containerd[1479]: time="2025-03-21T13:23:43.897819509Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 21 13:23:43.900389 containerd[1479]: time="2025-03-21T13:23:43.897838034Z" level=info msg="Connect containerd service"
Mar 21 13:23:43.900389 containerd[1479]: time="2025-03-21T13:23:43.897884851Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 21 13:23:43.901693 containerd[1479]: time="2025-03-21T13:23:43.900876092Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 21 13:23:43.930331 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 21 13:23:43.944261 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 21 13:23:43.947945 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 21 13:23:43.948770 systemd[1]: Reached target getty.target - Login Prompts.
Mar 21 13:23:44.085348 containerd[1479]: time="2025-03-21T13:23:44.085243452Z" level=info msg="Start subscribing containerd event"
Mar 21 13:23:44.085541 containerd[1479]: time="2025-03-21T13:23:44.085503941Z" level=info msg="Start recovering state"
Mar 21 13:23:44.086102 containerd[1479]: time="2025-03-21T13:23:44.085456011Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 21 13:23:44.086303 containerd[1479]: time="2025-03-21T13:23:44.086072047Z" level=info msg="Start event monitor"
Mar 21 13:23:44.086303 containerd[1479]: time="2025-03-21T13:23:44.086171333Z" level=info msg="Start cni network conf syncer for default"
Mar 21 13:23:44.086303 containerd[1479]: time="2025-03-21T13:23:44.086149722Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 21 13:23:44.086303 containerd[1479]: time="2025-03-21T13:23:44.086183285Z" level=info msg="Start streaming server"
Mar 21 13:23:44.086303 containerd[1479]: time="2025-03-21T13:23:44.086212460Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 21 13:23:44.086303 containerd[1479]: time="2025-03-21T13:23:44.086221617Z" level=info msg="runtime interface starting up..."
Mar 21 13:23:44.086303 containerd[1479]: time="2025-03-21T13:23:44.086228580Z" level=info msg="starting plugins..."
Mar 21 13:23:44.086303 containerd[1479]: time="2025-03-21T13:23:44.086247516Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 21 13:23:44.088331 systemd[1]: Started containerd.service - containerd container runtime.
Mar 21 13:23:44.091980 containerd[1479]: time="2025-03-21T13:23:44.090452692Z" level=info msg="containerd successfully booted in 0.235755s"
Mar 21 13:23:44.159245 systemd-networkd[1388]: eth0: Gained IPv6LL
Mar 21 13:23:44.163478 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 21 13:23:44.170636 systemd[1]: Reached target network-online.target - Network is Online.
Mar 21 13:23:44.179225 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 13:23:44.185595 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 21 13:23:44.191331 tar[1464]: linux-amd64/LICENSE
Mar 21 13:23:44.191538 tar[1464]: linux-amd64/README.md
Mar 21 13:23:44.207143 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 21 13:23:44.235093 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 21 13:23:46.062582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 13:23:46.078756 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 21 13:23:47.547474 kubelet[1573]: E0321 13:23:47.547383 1573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 21 13:23:47.551709 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 13:23:47.551854 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 21 13:23:47.552416 systemd[1]: kubelet.service: Consumed 2.235s CPU time, 238.6M memory peak.
Mar 21 13:23:49.029974 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 21 13:23:49.034786 systemd[1]: Started sshd@0-172.24.4.44:22-172.24.4.1:56562.service - OpenSSH per-connection server daemon (172.24.4.1:56562).
Mar 21 13:23:49.080493 login[1539]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 21 13:23:49.087558 login[1540]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 21 13:23:49.096088 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 21 13:23:49.097317 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 21 13:23:49.105110 systemd-logind[1458]: New session 2 of user core.
Mar 21 13:23:49.119300 systemd-logind[1458]: New session 1 of user core.
Mar 21 13:23:49.128842 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 21 13:23:49.131647 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 21 13:23:49.147522 (systemd)[1590]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 21 13:23:49.149724 systemd-logind[1458]: New session c1 of user core.
Mar 21 13:23:49.304530 systemd[1590]: Queued start job for default target default.target.
Mar 21 13:23:49.311364 systemd[1590]: Created slice app.slice - User Application Slice.
Mar 21 13:23:49.311398 systemd[1590]: Reached target paths.target - Paths.
Mar 21 13:23:49.311447 systemd[1590]: Reached target timers.target - Timers.
Mar 21 13:23:49.313017 systemd[1590]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 21 13:23:49.333359 systemd[1590]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 21 13:23:49.333423 systemd[1590]: Reached target sockets.target - Sockets.
Mar 21 13:23:49.333465 systemd[1590]: Reached target basic.target - Basic System.
Mar 21 13:23:49.333502 systemd[1590]: Reached target default.target - Main User Target.
Mar 21 13:23:49.333527 systemd[1590]: Startup finished in 178ms.
Mar 21 13:23:49.334414 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 21 13:23:49.344726 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 21 13:23:49.347499 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 21 13:23:50.268628 coreos-metadata[1447]: Mar 21 13:23:50.268 WARN failed to locate config-drive, using the metadata service API instead
Mar 21 13:23:50.320462 coreos-metadata[1447]: Mar 21 13:23:50.320 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 21 13:23:50.470273 coreos-metadata[1447]: Mar 21 13:23:50.470 INFO Fetch successful
Mar 21 13:23:50.470273 coreos-metadata[1447]: Mar 21 13:23:50.470 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 21 13:23:50.479989 coreos-metadata[1447]: Mar 21 13:23:50.479 INFO Fetch successful
Mar 21 13:23:50.479989 coreos-metadata[1447]: Mar 21 13:23:50.479 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 21 13:23:50.494215 coreos-metadata[1447]: Mar 21 13:23:50.494 INFO Fetch successful
Mar 21 13:23:50.494215 coreos-metadata[1447]: Mar 21 13:23:50.494 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 21 13:23:50.508194 coreos-metadata[1447]: Mar 21 13:23:50.508 INFO Fetch successful
Mar 21 13:23:50.508194 coreos-metadata[1447]: Mar 21 13:23:50.508 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 21 13:23:50.521932 coreos-metadata[1447]: Mar 21 13:23:50.521 INFO Fetch successful
Mar 21 13:23:50.521932 coreos-metadata[1447]: Mar 21 13:23:50.521 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 21 13:23:50.535145 coreos-metadata[1447]: Mar 21 13:23:50.535 INFO Fetch successful
Mar 21 13:23:50.560291 sshd[1587]: Accepted publickey for core from 172.24.4.1 port 56562 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:23:50.563657 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:23:50.577020 systemd-logind[1458]: New session 3 of user core.
Mar 21 13:23:50.582742 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 21 13:23:50.585508 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 21 13:23:50.588142 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 21 13:23:50.734608 coreos-metadata[1513]: Mar 21 13:23:50.734 WARN failed to locate config-drive, using the metadata service API instead
Mar 21 13:23:50.776927 coreos-metadata[1513]: Mar 21 13:23:50.776 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 21 13:23:50.792798 coreos-metadata[1513]: Mar 21 13:23:50.792 INFO Fetch successful
Mar 21 13:23:50.792798 coreos-metadata[1513]: Mar 21 13:23:50.792 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 21 13:23:50.805928 coreos-metadata[1513]: Mar 21 13:23:50.805 INFO Fetch successful
Mar 21 13:23:50.811321 unknown[1513]: wrote ssh authorized keys file for user: core
Mar 21 13:23:50.855749 update-ssh-keys[1632]: Updated "/home/core/.ssh/authorized_keys"
Mar 21 13:23:50.858529 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 21 13:23:50.861184 systemd[1]: Finished sshkeys.service.
Mar 21 13:23:50.866626 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 21 13:23:50.866974 systemd[1]: Startup finished in 1.232s (kernel) + 15.965s (initrd) + 11.246s (userspace) = 28.444s.
Mar 21 13:23:51.189975 systemd[1]: Started sshd@1-172.24.4.44:22-172.24.4.1:56578.service - OpenSSH per-connection server daemon (172.24.4.1:56578).
Mar 21 13:23:52.259660 sshd[1637]: Accepted publickey for core from 172.24.4.1 port 56578 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:23:52.262322 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:23:52.274555 systemd-logind[1458]: New session 4 of user core.
Mar 21 13:23:52.282501 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 21 13:23:53.045509 sshd[1639]: Connection closed by 172.24.4.1 port 56578
Mar 21 13:23:53.046681 sshd-session[1637]: pam_unix(sshd:session): session closed for user core
Mar 21 13:23:53.064458 systemd[1]: sshd@1-172.24.4.44:22-172.24.4.1:56578.service: Deactivated successfully.
Mar 21 13:23:53.067780 systemd[1]: session-4.scope: Deactivated successfully.
Mar 21 13:23:53.072966 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit.
Mar 21 13:23:53.075099 systemd[1]: Started sshd@2-172.24.4.44:22-172.24.4.1:56588.service - OpenSSH per-connection server daemon (172.24.4.1:56588).
Mar 21 13:23:53.077855 systemd-logind[1458]: Removed session 4.
Mar 21 13:23:54.545548 sshd[1644]: Accepted publickey for core from 172.24.4.1 port 56588 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:23:54.548224 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:23:54.560517 systemd-logind[1458]: New session 5 of user core.
Mar 21 13:23:54.572425 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 21 13:23:55.188982 sshd[1647]: Connection closed by 172.24.4.1 port 56588
Mar 21 13:23:55.190224 sshd-session[1644]: pam_unix(sshd:session): session closed for user core
Mar 21 13:23:55.206371 systemd[1]: sshd@2-172.24.4.44:22-172.24.4.1:56588.service: Deactivated successfully.
Mar 21 13:23:55.210112 systemd[1]: session-5.scope: Deactivated successfully.
Mar 21 13:23:55.213567 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit.
Mar 21 13:23:55.216995 systemd[1]: Started sshd@3-172.24.4.44:22-172.24.4.1:34758.service - OpenSSH per-connection server daemon (172.24.4.1:34758).
Mar 21 13:23:55.220041 systemd-logind[1458]: Removed session 5.
Mar 21 13:23:56.726425 sshd[1652]: Accepted publickey for core from 172.24.4.1 port 34758 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:23:56.729083 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:23:56.740645 systemd-logind[1458]: New session 6 of user core.
Mar 21 13:23:56.751354 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 21 13:23:57.369734 sshd[1655]: Connection closed by 172.24.4.1 port 34758
Mar 21 13:23:57.371221 sshd-session[1652]: pam_unix(sshd:session): session closed for user core
Mar 21 13:23:57.394303 systemd[1]: sshd@3-172.24.4.44:22-172.24.4.1:34758.service: Deactivated successfully.
Mar 21 13:23:57.398283 systemd[1]: session-6.scope: Deactivated successfully.
Mar 21 13:23:57.402673 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit.
Mar 21 13:23:57.405895 systemd[1]: Started sshd@4-172.24.4.44:22-172.24.4.1:34768.service - OpenSSH per-connection server daemon (172.24.4.1:34768).
Mar 21 13:23:57.410227 systemd-logind[1458]: Removed session 6.
Mar 21 13:23:57.803424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 21 13:23:57.807224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 13:23:58.087486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 13:23:58.099715 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 21 13:23:58.174715 kubelet[1671]: E0321 13:23:58.174675 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 21 13:23:58.181142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 13:23:58.181446 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 21 13:23:58.182353 systemd[1]: kubelet.service: Consumed 264ms CPU time, 95.5M memory peak.
Mar 21 13:23:59.000426 sshd[1660]: Accepted publickey for core from 172.24.4.1 port 34768 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:23:59.003138 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:23:59.015107 systemd-logind[1458]: New session 7 of user core.
Mar 21 13:23:59.021379 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 21 13:23:59.477019 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 21 13:23:59.477703 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 21 13:23:59.495114 sudo[1679]: pam_unix(sudo:session): session closed for user root
Mar 21 13:23:59.724679 sshd[1678]: Connection closed by 172.24.4.1 port 34768
Mar 21 13:23:59.721978 sshd-session[1660]: pam_unix(sshd:session): session closed for user core
Mar 21 13:23:59.738838 systemd[1]: sshd@4-172.24.4.44:22-172.24.4.1:34768.service: Deactivated successfully.
Mar 21 13:23:59.742136 systemd[1]: session-7.scope: Deactivated successfully.
Mar 21 13:23:59.745517 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit.
Mar 21 13:23:59.748553 systemd[1]: Started sshd@5-172.24.4.44:22-172.24.4.1:34770.service - OpenSSH per-connection server daemon (172.24.4.1:34770).
Mar 21 13:23:59.751924 systemd-logind[1458]: Removed session 7.
Mar 21 13:24:01.056374 sshd[1684]: Accepted publickey for core from 172.24.4.1 port 34770 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:24:01.060851 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:24:01.073138 systemd-logind[1458]: New session 8 of user core.
Mar 21 13:24:01.082378 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 21 13:24:01.534274 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 21 13:24:01.535771 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 21 13:24:01.543018 sudo[1689]: pam_unix(sudo:session): session closed for user root
Mar 21 13:24:01.554685 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 21 13:24:01.555357 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 21 13:24:01.575992 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 21 13:24:01.651421 augenrules[1711]: No rules
Mar 21 13:24:01.653110 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 21 13:24:01.653546 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 21 13:24:01.656478 sudo[1688]: pam_unix(sudo:session): session closed for user root
Mar 21 13:24:01.933837 sshd[1687]: Connection closed by 172.24.4.1 port 34770
Mar 21 13:24:01.936647 sshd-session[1684]: pam_unix(sshd:session): session closed for user core
Mar 21 13:24:01.950102 systemd[1]: sshd@5-172.24.4.44:22-172.24.4.1:34770.service: Deactivated successfully.
Mar 21 13:24:01.953635 systemd[1]: session-8.scope: Deactivated successfully.
Mar 21 13:24:01.957382 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit.
Mar 21 13:24:01.960407 systemd[1]: Started sshd@6-172.24.4.44:22-172.24.4.1:34786.service - OpenSSH per-connection server daemon (172.24.4.1:34786).
Mar 21 13:24:01.963930 systemd-logind[1458]: Removed session 8.
Mar 21 13:24:03.124121 sshd[1719]: Accepted publickey for core from 172.24.4.1 port 34786 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:24:03.126791 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:24:03.137627 systemd-logind[1458]: New session 9 of user core.
Mar 21 13:24:03.146396 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 21 13:24:03.533274 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 21 13:24:03.533911 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 21 13:24:04.339504 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 21 13:24:04.354174 (dockerd)[1742]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 21 13:24:04.988731 dockerd[1742]: time="2025-03-21T13:24:04.988506126Z" level=info msg="Starting up"
Mar 21 13:24:04.991599 dockerd[1742]: time="2025-03-21T13:24:04.991426804Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 21 13:24:05.047544 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1922260879-merged.mount: Deactivated successfully.
Mar 21 13:24:05.090801 dockerd[1742]: time="2025-03-21T13:24:05.090724176Z" level=info msg="Loading containers: start."
Mar 21 13:24:05.302198 kernel: Initializing XFRM netlink socket
Mar 21 13:24:05.421648 systemd-networkd[1388]: docker0: Link UP
Mar 21 13:24:05.472627 dockerd[1742]: time="2025-03-21T13:24:05.472551934Z" level=info msg="Loading containers: done."
Mar 21 13:24:05.492001 dockerd[1742]: time="2025-03-21T13:24:05.491936700Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 21 13:24:05.492314 dockerd[1742]: time="2025-03-21T13:24:05.492022772Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1
Mar 21 13:24:05.492314 dockerd[1742]: time="2025-03-21T13:24:05.492151193Z" level=info msg="Daemon has completed initialization"
Mar 21 13:24:05.545005 dockerd[1742]: time="2025-03-21T13:24:05.544869045Z" level=info msg="API listen on /run/docker.sock"
Mar 21 13:24:05.545772 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 21 13:24:07.054479 containerd[1479]: time="2025-03-21T13:24:07.054368801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 21 13:24:07.842737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55492602.mount: Deactivated successfully.
Mar 21 13:24:08.375259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 21 13:24:08.378118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 13:24:08.492549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 13:24:08.500426 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 13:24:08.730131 kubelet[1997]: E0321 13:24:08.729667 1997 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 13:24:08.734535 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 13:24:08.735041 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 13:24:08.735928 systemd[1]: kubelet.service: Consumed 155ms CPU time, 95.5M memory peak. Mar 21 13:24:10.153454 containerd[1479]: time="2025-03-21T13:24:10.153408780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:10.154539 containerd[1479]: time="2025-03-21T13:24:10.154483005Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959276" Mar 21 13:24:10.155875 containerd[1479]: time="2025-03-21T13:24:10.155842985Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:10.158876 containerd[1479]: time="2025-03-21T13:24:10.158844895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:10.159998 containerd[1479]: time="2025-03-21T13:24:10.159963233Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id 
\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 3.105529479s" Mar 21 13:24:10.160120 containerd[1479]: time="2025-03-21T13:24:10.160101723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\"" Mar 21 13:24:10.162480 containerd[1479]: time="2025-03-21T13:24:10.162453854Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 21 13:24:12.240625 containerd[1479]: time="2025-03-21T13:24:12.240460244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:12.241550 containerd[1479]: time="2025-03-21T13:24:12.241489665Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713784" Mar 21 13:24:12.242907 containerd[1479]: time="2025-03-21T13:24:12.242863702Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:12.245972 containerd[1479]: time="2025-03-21T13:24:12.245909484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:12.247074 containerd[1479]: time="2025-03-21T13:24:12.246899621Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 2.084412726s" Mar 21 13:24:12.247074 containerd[1479]: time="2025-03-21T13:24:12.246935108Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\"" Mar 21 13:24:12.247343 containerd[1479]: time="2025-03-21T13:24:12.247308689Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 21 13:24:13.955985 containerd[1479]: time="2025-03-21T13:24:13.955933945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:13.957115 containerd[1479]: time="2025-03-21T13:24:13.957058133Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780376" Mar 21 13:24:13.958416 containerd[1479]: time="2025-03-21T13:24:13.958372819Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:13.961221 containerd[1479]: time="2025-03-21T13:24:13.961179473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:13.962346 containerd[1479]: time="2025-03-21T13:24:13.962219984Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 1.714882492s" Mar 21 13:24:13.962346 
containerd[1479]: time="2025-03-21T13:24:13.962260019Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\"" Mar 21 13:24:13.963089 containerd[1479]: time="2025-03-21T13:24:13.962797097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 21 13:24:15.498093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819745612.mount: Deactivated successfully. Mar 21 13:24:16.044784 containerd[1479]: time="2025-03-21T13:24:16.044602671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:16.046019 containerd[1479]: time="2025-03-21T13:24:16.045718908Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354638" Mar 21 13:24:16.047166 containerd[1479]: time="2025-03-21T13:24:16.047104363Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:16.052835 containerd[1479]: time="2025-03-21T13:24:16.052681984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:16.053154 containerd[1479]: time="2025-03-21T13:24:16.053129769Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 2.090303557s" Mar 21 13:24:16.053623 containerd[1479]: time="2025-03-21T13:24:16.053227393Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 21 13:24:16.054330 containerd[1479]: time="2025-03-21T13:24:16.054123525Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 21 13:24:16.720243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097711123.mount: Deactivated successfully. Mar 21 13:24:18.169651 containerd[1479]: time="2025-03-21T13:24:18.169429541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:18.170940 containerd[1479]: time="2025-03-21T13:24:18.170698594Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Mar 21 13:24:18.172161 containerd[1479]: time="2025-03-21T13:24:18.172090361Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:18.175113 containerd[1479]: time="2025-03-21T13:24:18.175058950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:18.176418 containerd[1479]: time="2025-03-21T13:24:18.176293248Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.122140037s" Mar 21 13:24:18.176418 containerd[1479]: time="2025-03-21T13:24:18.176343463Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference 
\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 21 13:24:18.177396 containerd[1479]: time="2025-03-21T13:24:18.177291862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 21 13:24:18.731302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount227229323.mount: Deactivated successfully. Mar 21 13:24:18.741976 containerd[1479]: time="2025-03-21T13:24:18.741897630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 13:24:18.743933 containerd[1479]: time="2025-03-21T13:24:18.743822471Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Mar 21 13:24:18.745802 containerd[1479]: time="2025-03-21T13:24:18.745655539Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 13:24:18.751010 containerd[1479]: time="2025-03-21T13:24:18.750856931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 13:24:18.753657 containerd[1479]: time="2025-03-21T13:24:18.752869508Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 575.534545ms" Mar 21 13:24:18.753657 containerd[1479]: time="2025-03-21T13:24:18.752939009Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns 
image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 21 13:24:18.754341 containerd[1479]: time="2025-03-21T13:24:18.754140425Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 21 13:24:18.875759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 21 13:24:18.879143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 13:24:19.298904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 13:24:19.309359 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 13:24:19.352973 kubelet[2081]: E0321 13:24:19.352896 2081 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 13:24:19.357886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 13:24:19.358406 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 13:24:19.359480 systemd[1]: kubelet.service: Consumed 200ms CPU time, 95.6M memory peak. Mar 21 13:24:19.659124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2599396757.mount: Deactivated successfully. 
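[Editor's note] The kubelet crash-loop above (restart counter at 3, exit status 1) is the normal pre-bootstrap state: the kubelet refuses to start because /var/lib/kubelet/config.yaml does not exist yet, and that file is ordinarily written by `kubeadm init`/`kubeadm join` during cluster bootstrap. As a hedged illustration only (the real file is generated, and every value below is an assumption, not taken from this log), a minimal KubeletConfiguration looks like:

```yaml
# /var/lib/kubelet/config.yaml — illustrative sketch; normally generated by kubeadm.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # must match the CRI runtime's cgroup driver
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
```

Once kubeadm writes the real file, the scheduled systemd restart picks it up and the unit stays up, as seen later in this log.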
Mar 21 13:24:22.203149 containerd[1479]: time="2025-03-21T13:24:22.203105078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:22.204688 containerd[1479]: time="2025-03-21T13:24:22.204648525Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Mar 21 13:24:22.205529 containerd[1479]: time="2025-03-21T13:24:22.205505930Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:22.208922 containerd[1479]: time="2025-03-21T13:24:22.208879556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:22.210186 containerd[1479]: time="2025-03-21T13:24:22.210158004Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.455845874s" Mar 21 13:24:22.210274 containerd[1479]: time="2025-03-21T13:24:22.210257612Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Mar 21 13:24:25.043548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 13:24:25.044037 systemd[1]: kubelet.service: Consumed 200ms CPU time, 95.6M memory peak. Mar 21 13:24:25.048803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 13:24:25.106191 systemd[1]: Reload requested from client PID 2169 ('systemctl') (unit session-9.scope)... 
Mar 21 13:24:25.106208 systemd[1]: Reloading... Mar 21 13:24:25.199122 zram_generator::config[2215]: No configuration found. Mar 21 13:24:25.357425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 21 13:24:25.472784 systemd[1]: Reloading finished in 366 ms. Mar 21 13:24:25.524870 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 21 13:24:25.524950 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 21 13:24:25.525349 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 13:24:25.525410 systemd[1]: kubelet.service: Consumed 101ms CPU time, 83.4M memory peak. Mar 21 13:24:25.527687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 13:24:25.668740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 13:24:25.677532 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 21 13:24:25.721200 kubelet[2281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 13:24:25.721514 kubelet[2281]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 21 13:24:25.721564 kubelet[2281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
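[Editor's note] The docker.socket warning above flags a ListenStream= path under the legacy /var/run/ directory; systemd rewrites it to /run/docker.sock at load time but asks that the unit file be updated. A sketch of a drop-in that silences the warning (path taken from the log; the drop-in filename is an assumption):

```ini
# /etc/systemd/system/docker.socket.d/10-socket-path.conf — sketch.
[Socket]
# An empty assignment clears the inherited list before re-adding the path.
ListenStream=
ListenStream=/run/docker.sock
```

After adding the drop-in, `systemctl daemon-reload` (as triggered above) re-reads the unit without the legacy-path warning.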
Mar 21 13:24:25.874003 kubelet[2281]: I0321 13:24:25.872931 2281 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 21 13:24:26.855093 kubelet[2281]: I0321 13:24:26.854425 2281 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 21 13:24:26.855093 kubelet[2281]: I0321 13:24:26.854461 2281 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 21 13:24:26.855093 kubelet[2281]: I0321 13:24:26.854954 2281 server.go:929] "Client rotation is on, will bootstrap in background" Mar 21 13:24:26.885383 kubelet[2281]: I0321 13:24:26.885065 2281 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 21 13:24:26.888846 kubelet[2281]: E0321 13:24:26.888731 2281 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.24.4.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.24.4.44:6443: connect: connection refused" logger="UnhandledError" Mar 21 13:24:26.908318 kubelet[2281]: I0321 13:24:26.908259 2281 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 21 13:24:26.918173 kubelet[2281]: I0321 13:24:26.918096 2281 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 21 13:24:26.918386 kubelet[2281]: I0321 13:24:26.918361 2281 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 21 13:24:26.918711 kubelet[2281]: I0321 13:24:26.918649 2281 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 21 13:24:26.919161 kubelet[2281]: I0321 13:24:26.918716 2281 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-0-3-0-e42165490f.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 21 13:24:26.919409 kubelet[2281]: I0321 13:24:26.919162 2281 topology_manager.go:138] "Creating topology manager with none policy" Mar 21 13:24:26.919409 kubelet[2281]: I0321 13:24:26.919185 2281 container_manager_linux.go:300] "Creating device plugin manager" Mar 21 13:24:26.919409 kubelet[2281]: I0321 13:24:26.919377 2281 state_mem.go:36] "Initialized new in-memory state store" Mar 21 13:24:26.925032 kubelet[2281]: I0321 13:24:26.924956 2281 kubelet.go:408] "Attempting to sync node with API server" Mar 21 13:24:26.925032 kubelet[2281]: I0321 13:24:26.925014 2281 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 21 13:24:26.925237 kubelet[2281]: I0321 13:24:26.925105 2281 kubelet.go:314] "Adding apiserver pod source" Mar 21 13:24:26.925237 kubelet[2281]: I0321 13:24:26.925164 2281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 21 13:24:26.931917 kubelet[2281]: W0321 13:24:26.931832 2281 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-0-3-0-e42165490f.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Mar 21 13:24:26.932539 kubelet[2281]: E0321 13:24:26.932169 2281 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.24.4.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-9999-0-3-0-e42165490f.novalocal&limit=500&resourceVersion=0\": dial tcp 172.24.4.44:6443: connect: connection refused" logger="UnhandledError" Mar 21 13:24:26.937218 kubelet[2281]: I0321 13:24:26.937168 2281 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 21 13:24:26.941342 kubelet[2281]: I0321 13:24:26.941179 2281 kubelet.go:837] "Not 
starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 21 13:24:26.944708 kubelet[2281]: W0321 13:24:26.943022 2281 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 21 13:24:26.944708 kubelet[2281]: I0321 13:24:26.944303 2281 server.go:1269] "Started kubelet" Mar 21 13:24:26.949219 kubelet[2281]: W0321 13:24:26.949133 2281 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Mar 21 13:24:26.949462 kubelet[2281]: E0321 13:24:26.949416 2281 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.44:6443: connect: connection refused" logger="UnhandledError" Mar 21 13:24:26.953463 kubelet[2281]: I0321 13:24:26.953381 2281 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 21 13:24:26.955747 kubelet[2281]: I0321 13:24:26.955712 2281 server.go:460] "Adding debug handlers to kubelet server" Mar 21 13:24:26.963983 kubelet[2281]: I0321 13:24:26.963884 2281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 21 13:24:26.964560 kubelet[2281]: I0321 13:24:26.964486 2281 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 21 13:24:26.968272 kubelet[2281]: I0321 13:24:26.968136 2281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 21 13:24:26.970817 kubelet[2281]: E0321 13:24:26.965171 2281 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://172.24.4.44:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.44:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-9999-0-3-0-e42165490f.novalocal.182ed43d9dc94735 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-9999-0-3-0-e42165490f.novalocal,UID:ci-9999-0-3-0-e42165490f.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-9999-0-3-0-e42165490f.novalocal,},FirstTimestamp:2025-03-21 13:24:26.944268085 +0000 UTC m=+1.263254811,LastTimestamp:2025-03-21 13:24:26.944268085 +0000 UTC m=+1.263254811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-9999-0-3-0-e42165490f.novalocal,}" Mar 21 13:24:26.972950 kubelet[2281]: I0321 13:24:26.972905 2281 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 21 13:24:26.974828 kubelet[2281]: I0321 13:24:26.974544 2281 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 21 13:24:26.975831 kubelet[2281]: E0321 13:24:26.975812 2281 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-9999-0-3-0-e42165490f.novalocal\" not found" Mar 21 13:24:26.976939 kubelet[2281]: I0321 13:24:26.976557 2281 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 21 13:24:26.976939 kubelet[2281]: I0321 13:24:26.976661 2281 reconciler.go:26] "Reconciler: start to sync state" Mar 21 13:24:26.977154 kubelet[2281]: W0321 13:24:26.977118 2281 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Mar 21 13:24:26.977242 
kubelet[2281]: E0321 13:24:26.977225 2281 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.24.4.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.24.4.44:6443: connect: connection refused" logger="UnhandledError" Mar 21 13:24:26.977360 kubelet[2281]: E0321 13:24:26.977336 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-0-3-0-e42165490f.novalocal?timeout=10s\": dial tcp 172.24.4.44:6443: connect: connection refused" interval="200ms" Mar 21 13:24:26.980495 kubelet[2281]: I0321 13:24:26.980479 2281 factory.go:221] Registration of the containerd container factory successfully Mar 21 13:24:26.980697 kubelet[2281]: I0321 13:24:26.980687 2281 factory.go:221] Registration of the systemd container factory successfully Mar 21 13:24:26.980813 kubelet[2281]: I0321 13:24:26.980797 2281 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 21 13:24:26.995364 kubelet[2281]: I0321 13:24:26.995329 2281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 21 13:24:26.996616 kubelet[2281]: I0321 13:24:26.996377 2281 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 21 13:24:26.996616 kubelet[2281]: I0321 13:24:26.996400 2281 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 21 13:24:26.996616 kubelet[2281]: I0321 13:24:26.996421 2281 kubelet.go:2321] "Starting kubelet main sync loop" Mar 21 13:24:26.996616 kubelet[2281]: E0321 13:24:26.996455 2281 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 21 13:24:27.005682 kubelet[2281]: E0321 13:24:27.005414 2281 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 21 13:24:27.005682 kubelet[2281]: W0321 13:24:27.005590 2281 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused Mar 21 13:24:27.005682 kubelet[2281]: E0321 13:24:27.005636 2281 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.24.4.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.24.4.44:6443: connect: connection refused" logger="UnhandledError" Mar 21 13:24:27.010112 kubelet[2281]: I0321 13:24:27.009940 2281 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 21 13:24:27.010112 kubelet[2281]: I0321 13:24:27.010024 2281 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 21 13:24:27.010112 kubelet[2281]: I0321 13:24:27.010039 2281 state_mem.go:36] "Initialized new in-memory state store" Mar 21 13:24:27.014776 kubelet[2281]: I0321 13:24:27.014759 2281 policy_none.go:49] "None policy: Start" Mar 21 13:24:27.015357 kubelet[2281]: I0321 13:24:27.015340 2281 memory_manager.go:170] "Starting memorymanager" 
policy="None" Mar 21 13:24:27.015411 kubelet[2281]: I0321 13:24:27.015373 2281 state_mem.go:35] "Initializing new in-memory state store" Mar 21 13:24:27.023607 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 21 13:24:27.040642 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 21 13:24:27.044360 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 21 13:24:27.053814 kubelet[2281]: I0321 13:24:27.053795 2281 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 21 13:24:27.054421 kubelet[2281]: I0321 13:24:27.054038 2281 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 21 13:24:27.054421 kubelet[2281]: I0321 13:24:27.054069 2281 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 21 13:24:27.054421 kubelet[2281]: I0321 13:24:27.054282 2281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 21 13:24:27.056374 kubelet[2281]: E0321 13:24:27.056292 2281 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-9999-0-3-0-e42165490f.novalocal\" not found" Mar 21 13:24:27.118384 systemd[1]: Created slice kubepods-burstable-podc1ddd42b184270b40973840634dddd66.slice - libcontainer container kubepods-burstable-podc1ddd42b184270b40973840634dddd66.slice. Mar 21 13:24:27.153550 systemd[1]: Created slice kubepods-burstable-pod9da1014257e0b2ef005613b77cdfda9e.slice - libcontainer container kubepods-burstable-pod9da1014257e0b2ef005613b77cdfda9e.slice. 
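[Editor's note] The kubepods.slice / kubepods-burstable.slice / kubepods-besteffort.slice units created above are the cgroup-v2 QoS hierarchy the kubelet's container manager builds when cgroupDriver is systemd (the driver value was received from the CRI runtime, per the earlier server.go:1426 entry). For containerd v2.0.1 as reported in this log, the matching runtime-side setting is sketched below; the section path follows containerd 2.x's renamed CRI plugin and should be treated as an assumption, not a value read from this host:

```toml
# /etc/containerd/config.toml — sketch for containerd 2.x; verify the plugin
# path against `containerd config default` on the actual host.
[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
  SystemdCgroup = true   # keeps containerd's cgroup driver in sync with the kubelet
```

A mismatch between the two drivers is a common cause of pods being killed under memory pressure accounting errors, which is why the kubelet logs the negotiated driver explicitly.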
Mar 21 13:24:27.159490 kubelet[2281]: I0321 13:24:27.159391 2281 kubelet_node_status.go:72] "Attempting to register node" node="ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.160672 kubelet[2281]: E0321 13:24:27.160624 2281 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.44:6443/api/v1/nodes\": dial tcp 172.24.4.44:6443: connect: connection refused" node="ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.165246 systemd[1]: Created slice kubepods-burstable-podbd96602be02a991eea712b982c58359c.slice - libcontainer container kubepods-burstable-podbd96602be02a991eea712b982c58359c.slice. Mar 21 13:24:27.178436 kubelet[2281]: I0321 13:24:27.178368 2281 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1ddd42b184270b40973840634dddd66-k8s-certs\") pod \"kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"c1ddd42b184270b40973840634dddd66\") " pod="kube-system/kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.178577 kubelet[2281]: I0321 13:24:27.178453 2281 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1ddd42b184270b40973840634dddd66-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"c1ddd42b184270b40973840634dddd66\") " pod="kube-system/kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.178577 kubelet[2281]: I0321 13:24:27.178508 2281 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9da1014257e0b2ef005613b77cdfda9e-k8s-certs\") pod \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"9da1014257e0b2ef005613b77cdfda9e\") " pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal" Mar 21 
13:24:27.178577 kubelet[2281]: I0321 13:24:27.178561 2281 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9da1014257e0b2ef005613b77cdfda9e-kubeconfig\") pod \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"9da1014257e0b2ef005613b77cdfda9e\") " pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.178770 kubelet[2281]: I0321 13:24:27.178605 2281 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd96602be02a991eea712b982c58359c-kubeconfig\") pod \"kube-scheduler-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"bd96602be02a991eea712b982c58359c\") " pod="kube-system/kube-scheduler-ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.178770 kubelet[2281]: I0321 13:24:27.178648 2281 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1ddd42b184270b40973840634dddd66-ca-certs\") pod \"kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"c1ddd42b184270b40973840634dddd66\") " pod="kube-system/kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.178770 kubelet[2281]: I0321 13:24:27.178693 2281 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9da1014257e0b2ef005613b77cdfda9e-ca-certs\") pod \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"9da1014257e0b2ef005613b77cdfda9e\") " pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.178770 kubelet[2281]: I0321 13:24:27.178740 2281 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/9da1014257e0b2ef005613b77cdfda9e-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"9da1014257e0b2ef005613b77cdfda9e\") " pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.179139 kubelet[2281]: I0321 13:24:27.178785 2281 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9da1014257e0b2ef005613b77cdfda9e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"9da1014257e0b2ef005613b77cdfda9e\") " pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.179770 kubelet[2281]: E0321 13:24:27.179622 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-0-3-0-e42165490f.novalocal?timeout=10s\": dial tcp 172.24.4.44:6443: connect: connection refused" interval="400ms" Mar 21 13:24:27.363728 kubelet[2281]: I0321 13:24:27.363646 2281 kubelet_node_status.go:72] "Attempting to register node" node="ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.364304 kubelet[2281]: E0321 13:24:27.364223 2281 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.44:6443/api/v1/nodes\": dial tcp 172.24.4.44:6443: connect: connection refused" node="ci-9999-0-3-0-e42165490f.novalocal" Mar 21 13:24:27.446186 containerd[1479]: time="2025-03-21T13:24:27.446023735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal,Uid:c1ddd42b184270b40973840634dddd66,Namespace:kube-system,Attempt:0,}" Mar 21 13:24:27.462649 containerd[1479]: time="2025-03-21T13:24:27.462197797Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal,Uid:9da1014257e0b2ef005613b77cdfda9e,Namespace:kube-system,Attempt:0,}" Mar 21 13:24:27.471907 containerd[1479]: time="2025-03-21T13:24:27.471817266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-0-3-0-e42165490f.novalocal,Uid:bd96602be02a991eea712b982c58359c,Namespace:kube-system,Attempt:0,}" Mar 21 13:24:27.510758 containerd[1479]: time="2025-03-21T13:24:27.510631709Z" level=info msg="connecting to shim b145fcd47dfd895b2fb173654a27ba49ee23b1f1eb307befa0c4dcb40c615a60" address="unix:///run/containerd/s/ce4e8b9294d8befffce71506da3283e1434739468df0959b9befa65b92078e36" namespace=k8s.io protocol=ttrpc version=3 Mar 21 13:24:27.567420 containerd[1479]: time="2025-03-21T13:24:27.566927204Z" level=info msg="connecting to shim 57294da8e7694de7c7546c72d4444d2e784d543f14034e94213fcd7e96a4c986" address="unix:///run/containerd/s/7ac995ddd0189b779cc59d7a60a330d6ce91a44d4dc4e120de33febfa6e68cf3" namespace=k8s.io protocol=ttrpc version=3 Mar 21 13:24:27.580255 systemd[1]: Started cri-containerd-b145fcd47dfd895b2fb173654a27ba49ee23b1f1eb307befa0c4dcb40c615a60.scope - libcontainer container b145fcd47dfd895b2fb173654a27ba49ee23b1f1eb307befa0c4dcb40c615a60. 
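[Editor's note] The three RunPodSandbox calls above (kube-apiserver, kube-controller-manager, kube-scheduler) correspond to static pods the kubelet found under the staticPodPath logged earlier ("Adding static pod path" path="/etc/kubernetes/manifests"); each manifest file becomes one sandbox. An abbreviated, purely illustrative skeleton of such a manifest (the real files are kubeadm-generated; the image tag and flags here are assumptions):

```yaml
# /etc/kubernetes/manifests/kube-scheduler.yaml — abbreviated sketch.
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: registry.k8s.io/kube-scheduler:v1.31.0  # assumed to match kubeletVersion v1.31.0 above
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubernetes/scheduler.conf
      readOnly: true
  volumes:
  - name: kubeconfig
    hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
```

This also explains the "Unable to register node" / "connection refused" errors interleaved in this log: the kubelet cannot reach the API server at 172.24.4.44:6443 until the kube-apiserver static pod it is creating here comes up.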
Mar 21 13:24:27.581472 containerd[1479]: time="2025-03-21T13:24:27.581444219Z" level=info msg="connecting to shim 96875b92844968d15eadb2861e277bd0abee2bb35f1d71f2d6e3cfe7ed0bd278" address="unix:///run/containerd/s/2345c99f90b568d688bef03f2959b846d4ca90bd5f8f5365d7ed79209f247e40" namespace=k8s.io protocol=ttrpc version=3 Mar 21 13:24:27.582277 kubelet[2281]: E0321 13:24:27.582247 2281 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-9999-0-3-0-e42165490f.novalocal?timeout=10s\": dial tcp 172.24.4.44:6443: connect: connection refused" interval="800ms" Mar 21 13:24:27.614330 systemd[1]: Started cri-containerd-57294da8e7694de7c7546c72d4444d2e784d543f14034e94213fcd7e96a4c986.scope - libcontainer container 57294da8e7694de7c7546c72d4444d2e784d543f14034e94213fcd7e96a4c986. Mar 21 13:24:27.620000 systemd[1]: Started cri-containerd-96875b92844968d15eadb2861e277bd0abee2bb35f1d71f2d6e3cfe7ed0bd278.scope - libcontainer container 96875b92844968d15eadb2861e277bd0abee2bb35f1d71f2d6e3cfe7ed0bd278. 
Mar 21 13:24:27.673770 containerd[1479]: time="2025-03-21T13:24:27.673595178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal,Uid:c1ddd42b184270b40973840634dddd66,Namespace:kube-system,Attempt:0,} returns sandbox id \"b145fcd47dfd895b2fb173654a27ba49ee23b1f1eb307befa0c4dcb40c615a60\""
Mar 21 13:24:27.682833 containerd[1479]: time="2025-03-21T13:24:27.682115460Z" level=info msg="CreateContainer within sandbox \"b145fcd47dfd895b2fb173654a27ba49ee23b1f1eb307befa0c4dcb40c615a60\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 21 13:24:27.697539 containerd[1479]: time="2025-03-21T13:24:27.696549799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal,Uid:9da1014257e0b2ef005613b77cdfda9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"57294da8e7694de7c7546c72d4444d2e784d543f14034e94213fcd7e96a4c986\""
Mar 21 13:24:27.700208 containerd[1479]: time="2025-03-21T13:24:27.700164230Z" level=info msg="CreateContainer within sandbox \"57294da8e7694de7c7546c72d4444d2e784d543f14034e94213fcd7e96a4c986\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 21 13:24:27.701605 containerd[1479]: time="2025-03-21T13:24:27.701575716Z" level=info msg="Container 66906434fc01a233943bedcc7cf92f74535d7b48ded4314fb87d933bf2e73dbf: CDI devices from CRI Config.CDIDevices: []"
Mar 21 13:24:27.704359 containerd[1479]: time="2025-03-21T13:24:27.704332434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-9999-0-3-0-e42165490f.novalocal,Uid:bd96602be02a991eea712b982c58359c,Namespace:kube-system,Attempt:0,} returns sandbox id \"96875b92844968d15eadb2861e277bd0abee2bb35f1d71f2d6e3cfe7ed0bd278\""
Mar 21 13:24:27.707835 containerd[1479]: time="2025-03-21T13:24:27.707806330Z" level=info msg="CreateContainer within sandbox \"96875b92844968d15eadb2861e277bd0abee2bb35f1d71f2d6e3cfe7ed0bd278\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 21 13:24:27.714451 containerd[1479]: time="2025-03-21T13:24:27.714323743Z" level=info msg="CreateContainer within sandbox \"b145fcd47dfd895b2fb173654a27ba49ee23b1f1eb307befa0c4dcb40c615a60\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"66906434fc01a233943bedcc7cf92f74535d7b48ded4314fb87d933bf2e73dbf\""
Mar 21 13:24:27.715105 containerd[1479]: time="2025-03-21T13:24:27.715073533Z" level=info msg="StartContainer for \"66906434fc01a233943bedcc7cf92f74535d7b48ded4314fb87d933bf2e73dbf\""
Mar 21 13:24:27.716177 containerd[1479]: time="2025-03-21T13:24:27.716143427Z" level=info msg="connecting to shim 66906434fc01a233943bedcc7cf92f74535d7b48ded4314fb87d933bf2e73dbf" address="unix:///run/containerd/s/ce4e8b9294d8befffce71506da3283e1434739468df0959b9befa65b92078e36" protocol=ttrpc version=3
Mar 21 13:24:27.717987 containerd[1479]: time="2025-03-21T13:24:27.717964944Z" level=info msg="Container 464eb23c28d4ea5da355c11e31098c41bd2a7e4ab6a79cdbf900ebb78aa9675e: CDI devices from CRI Config.CDIDevices: []"
Mar 21 13:24:27.724541 containerd[1479]: time="2025-03-21T13:24:27.724467569Z" level=info msg="Container 0e5d567353bf8e7dffb9869da02d5a89123485c73307716048228e9f4659a461: CDI devices from CRI Config.CDIDevices: []"
Mar 21 13:24:27.736289 containerd[1479]: time="2025-03-21T13:24:27.736246943Z" level=info msg="CreateContainer within sandbox \"57294da8e7694de7c7546c72d4444d2e784d543f14034e94213fcd7e96a4c986\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"464eb23c28d4ea5da355c11e31098c41bd2a7e4ab6a79cdbf900ebb78aa9675e\""
Mar 21 13:24:27.736916 containerd[1479]: time="2025-03-21T13:24:27.736889231Z" level=info msg="StartContainer for \"464eb23c28d4ea5da355c11e31098c41bd2a7e4ab6a79cdbf900ebb78aa9675e\""
Mar 21 13:24:27.739615 containerd[1479]: time="2025-03-21T13:24:27.739572150Z" level=info msg="connecting to shim 464eb23c28d4ea5da355c11e31098c41bd2a7e4ab6a79cdbf900ebb78aa9675e" address="unix:///run/containerd/s/7ac995ddd0189b779cc59d7a60a330d6ce91a44d4dc4e120de33febfa6e68cf3" protocol=ttrpc version=3
Mar 21 13:24:27.740755 systemd[1]: Started cri-containerd-66906434fc01a233943bedcc7cf92f74535d7b48ded4314fb87d933bf2e73dbf.scope - libcontainer container 66906434fc01a233943bedcc7cf92f74535d7b48ded4314fb87d933bf2e73dbf.
Mar 21 13:24:27.759397 containerd[1479]: time="2025-03-21T13:24:27.758235476Z" level=info msg="CreateContainer within sandbox \"96875b92844968d15eadb2861e277bd0abee2bb35f1d71f2d6e3cfe7ed0bd278\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0e5d567353bf8e7dffb9869da02d5a89123485c73307716048228e9f4659a461\""
Mar 21 13:24:27.760390 containerd[1479]: time="2025-03-21T13:24:27.760362528Z" level=info msg="StartContainer for \"0e5d567353bf8e7dffb9869da02d5a89123485c73307716048228e9f4659a461\""
Mar 21 13:24:27.761578 containerd[1479]: time="2025-03-21T13:24:27.761555283Z" level=info msg="connecting to shim 0e5d567353bf8e7dffb9869da02d5a89123485c73307716048228e9f4659a461" address="unix:///run/containerd/s/2345c99f90b568d688bef03f2959b846d4ca90bd5f8f5365d7ed79209f247e40" protocol=ttrpc version=3
Mar 21 13:24:27.767068 kubelet[2281]: I0321 13:24:27.766966 2281 kubelet_node_status.go:72] "Attempting to register node" node="ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:27.768095 kubelet[2281]: E0321 13:24:27.767369 2281 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.24.4.44:6443/api/v1/nodes\": dial tcp 172.24.4.44:6443: connect: connection refused" node="ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:27.777226 systemd[1]: Started cri-containerd-464eb23c28d4ea5da355c11e31098c41bd2a7e4ab6a79cdbf900ebb78aa9675e.scope - libcontainer container 464eb23c28d4ea5da355c11e31098c41bd2a7e4ab6a79cdbf900ebb78aa9675e.
Mar 21 13:24:27.787221 systemd[1]: Started cri-containerd-0e5d567353bf8e7dffb9869da02d5a89123485c73307716048228e9f4659a461.scope - libcontainer container 0e5d567353bf8e7dffb9869da02d5a89123485c73307716048228e9f4659a461.
Mar 21 13:24:27.824794 containerd[1479]: time="2025-03-21T13:24:27.824764466Z" level=info msg="StartContainer for \"66906434fc01a233943bedcc7cf92f74535d7b48ded4314fb87d933bf2e73dbf\" returns successfully"
Mar 21 13:24:27.878291 containerd[1479]: time="2025-03-21T13:24:27.878248391Z" level=info msg="StartContainer for \"0e5d567353bf8e7dffb9869da02d5a89123485c73307716048228e9f4659a461\" returns successfully"
Mar 21 13:24:27.889530 kubelet[2281]: W0321 13:24:27.889434 2281 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.24.4.44:6443: connect: connection refused
Mar 21 13:24:27.889530 kubelet[2281]: E0321 13:24:27.889500 2281 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.24.4.44:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.24.4.44:6443: connect: connection refused" logger="UnhandledError"
Mar 21 13:24:27.890298 containerd[1479]: time="2025-03-21T13:24:27.890199027Z" level=info msg="StartContainer for \"464eb23c28d4ea5da355c11e31098c41bd2a7e4ab6a79cdbf900ebb78aa9675e\" returns successfully"
Mar 21 13:24:28.422649 update_engine[1459]: I20250321 13:24:28.422080 1459 update_attempter.cc:509] Updating boot flags...
Mar 21 13:24:28.474068 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2556)
Mar 21 13:24:28.577111 kubelet[2281]: I0321 13:24:28.577080 2281 kubelet_node_status.go:72] "Attempting to register node" node="ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:28.599374 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2559)
Mar 21 13:24:28.697104 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2559)
Mar 21 13:24:30.072314 kubelet[2281]: E0321 13:24:30.072280 2281 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-9999-0-3-0-e42165490f.novalocal\" not found" node="ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:30.160147 kubelet[2281]: I0321 13:24:30.160123 2281 kubelet_node_status.go:75] "Successfully registered node" node="ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:30.160298 kubelet[2281]: E0321 13:24:30.160283 2281 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-9999-0-3-0-e42165490f.novalocal\": node \"ci-9999-0-3-0-e42165490f.novalocal\" not found"
Mar 21 13:24:30.611602 kubelet[2281]: E0321 13:24:30.611153 2281 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:30.940650 kubelet[2281]: I0321 13:24:30.940235 2281 apiserver.go:52] "Watching apiserver"
Mar 21 13:24:30.975311 kubelet[2281]: I0321 13:24:30.975191 2281 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 21 13:24:32.330669 kubelet[2281]: W0321 13:24:32.329150 2281 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 21 13:24:32.375944 systemd[1]: Reload requested from client PID 2567 ('systemctl') (unit session-9.scope)...
Mar 21 13:24:32.375976 systemd[1]: Reloading...
Mar 21 13:24:32.498080 zram_generator::config[2613]: No configuration found.
Mar 21 13:24:32.639034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 13:24:32.775837 systemd[1]: Reloading finished in 399 ms.
Mar 21 13:24:32.802610 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 13:24:32.814395 systemd[1]: kubelet.service: Deactivated successfully.
Mar 21 13:24:32.814637 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 13:24:32.814708 systemd[1]: kubelet.service: Consumed 1.647s CPU time, 117.2M memory peak.
Mar 21 13:24:32.817123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 13:24:33.022868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 13:24:33.033716 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 21 13:24:33.086701 kubelet[2676]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 13:24:33.088184 kubelet[2676]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 21 13:24:33.088184 kubelet[2676]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 13:24:33.088184 kubelet[2676]: I0321 13:24:33.087158 2676 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 21 13:24:33.097111 kubelet[2676]: I0321 13:24:33.096278 2676 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 21 13:24:33.097111 kubelet[2676]: I0321 13:24:33.096299 2676 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 21 13:24:33.097111 kubelet[2676]: I0321 13:24:33.096690 2676 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 21 13:24:33.103191 kubelet[2676]: I0321 13:24:33.099587 2676 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 21 13:24:33.103383 kubelet[2676]: I0321 13:24:33.103205 2676 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 21 13:24:33.108621 kubelet[2676]: I0321 13:24:33.108583 2676 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 21 13:24:33.111087 kubelet[2676]: I0321 13:24:33.111021 2676 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 21 13:24:33.111204 kubelet[2676]: I0321 13:24:33.111166 2676 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 21 13:24:33.111299 kubelet[2676]: I0321 13:24:33.111252 2676 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 21 13:24:33.111449 kubelet[2676]: I0321 13:24:33.111275 2676 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-9999-0-3-0-e42165490f.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 21 13:24:33.111649 kubelet[2676]: I0321 13:24:33.111441 2676 topology_manager.go:138] "Creating topology manager with none policy"
Mar 21 13:24:33.111649 kubelet[2676]: I0321 13:24:33.111466 2676 container_manager_linux.go:300] "Creating device plugin manager"
Mar 21 13:24:33.111649 kubelet[2676]: I0321 13:24:33.111492 2676 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 13:24:33.111649 kubelet[2676]: I0321 13:24:33.111587 2676 kubelet.go:408] "Attempting to sync node with API server"
Mar 21 13:24:33.111649 kubelet[2676]: I0321 13:24:33.111601 2676 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 21 13:24:33.114121 kubelet[2676]: I0321 13:24:33.114087 2676 kubelet.go:314] "Adding apiserver pod source"
Mar 21 13:24:33.114121 kubelet[2676]: I0321 13:24:33.114114 2676 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 21 13:24:33.117143 kubelet[2676]: I0321 13:24:33.115426 2676 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 21 13:24:33.117143 kubelet[2676]: I0321 13:24:33.115834 2676 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 21 13:24:33.117143 kubelet[2676]: I0321 13:24:33.116213 2676 server.go:1269] "Started kubelet"
Mar 21 13:24:33.120884 kubelet[2676]: I0321 13:24:33.119813 2676 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 21 13:24:33.122835 kubelet[2676]: I0321 13:24:33.122792 2676 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 21 13:24:33.124784 kubelet[2676]: I0321 13:24:33.124753 2676 server.go:460] "Adding debug handlers to kubelet server"
Mar 21 13:24:33.125611 kubelet[2676]: I0321 13:24:33.125500 2676 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 21 13:24:33.125728 kubelet[2676]: I0321 13:24:33.125678 2676 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 21 13:24:33.126273 kubelet[2676]: I0321 13:24:33.125937 2676 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 21 13:24:33.129654 kubelet[2676]: I0321 13:24:33.129626 2676 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 21 13:24:33.129850 kubelet[2676]: E0321 13:24:33.129820 2676 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-9999-0-3-0-e42165490f.novalocal\" not found"
Mar 21 13:24:33.131021 kubelet[2676]: I0321 13:24:33.130902 2676 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 21 13:24:33.131021 kubelet[2676]: I0321 13:24:33.131012 2676 reconciler.go:26] "Reconciler: start to sync state"
Mar 21 13:24:33.132791 kubelet[2676]: I0321 13:24:33.132771 2676 factory.go:221] Registration of the systemd container factory successfully
Mar 21 13:24:33.132922 kubelet[2676]: I0321 13:24:33.132851 2676 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 21 13:24:33.144735 kubelet[2676]: I0321 13:24:33.144702 2676 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 21 13:24:33.145107 kubelet[2676]: E0321 13:24:33.145086 2676 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 21 13:24:33.151569 kubelet[2676]: I0321 13:24:33.151534 2676 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 21 13:24:33.151569 kubelet[2676]: I0321 13:24:33.151569 2676 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 21 13:24:33.151657 kubelet[2676]: I0321 13:24:33.151586 2676 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 21 13:24:33.151657 kubelet[2676]: E0321 13:24:33.151628 2676 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 21 13:24:33.155433 kubelet[2676]: I0321 13:24:33.155400 2676 factory.go:221] Registration of the containerd container factory successfully
Mar 21 13:24:33.212463 kubelet[2676]: I0321 13:24:33.212194 2676 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 21 13:24:33.212463 kubelet[2676]: I0321 13:24:33.212212 2676 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 21 13:24:33.212463 kubelet[2676]: I0321 13:24:33.212227 2676 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 13:24:33.212463 kubelet[2676]: I0321 13:24:33.212362 2676 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 21 13:24:33.212463 kubelet[2676]: I0321 13:24:33.212372 2676 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 21 13:24:33.212463 kubelet[2676]: I0321 13:24:33.212389 2676 policy_none.go:49] "None policy: Start"
Mar 21 13:24:33.213266 kubelet[2676]: I0321 13:24:33.212940 2676 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 21 13:24:33.213266 kubelet[2676]: I0321 13:24:33.212960 2676 state_mem.go:35] "Initializing new in-memory state store"
Mar 21 13:24:33.213266 kubelet[2676]: I0321 13:24:33.213117 2676 state_mem.go:75] "Updated machine memory state"
Mar 21 13:24:33.217694 kubelet[2676]: I0321 13:24:33.217074 2676 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 21 13:24:33.217694 kubelet[2676]: I0321 13:24:33.217209 2676 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 21 13:24:33.217694 kubelet[2676]: I0321 13:24:33.217220 2676 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 21 13:24:33.217694 kubelet[2676]: I0321 13:24:33.217457 2676 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 21 13:24:33.261894 kubelet[2676]: W0321 13:24:33.261862 2676 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 21 13:24:33.262030 kubelet[2676]: W0321 13:24:33.261865 2676 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 21 13:24:33.262297 kubelet[2676]: E0321 13:24:33.262159 2676 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-9999-0-3-0-e42165490f.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.262297 kubelet[2676]: W0321 13:24:33.262065 2676 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Mar 21 13:24:33.323147 kubelet[2676]: I0321 13:24:33.322959 2676 kubelet_node_status.go:72] "Attempting to register node" node="ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.343939 kubelet[2676]: I0321 13:24:33.343855 2676 kubelet_node_status.go:111] "Node was previously registered" node="ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.344197 kubelet[2676]: I0321 13:24:33.344003 2676 kubelet_node_status.go:75] "Successfully registered node" node="ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.381607 sudo[2706]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 21 13:24:33.382406 sudo[2706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 21 13:24:33.433631 kubelet[2676]: I0321 13:24:33.432918 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9da1014257e0b2ef005613b77cdfda9e-ca-certs\") pod \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"9da1014257e0b2ef005613b77cdfda9e\") " pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.433631 kubelet[2676]: I0321 13:24:33.433008 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9da1014257e0b2ef005613b77cdfda9e-flexvolume-dir\") pod \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"9da1014257e0b2ef005613b77cdfda9e\") " pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.433631 kubelet[2676]: I0321 13:24:33.433099 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9da1014257e0b2ef005613b77cdfda9e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"9da1014257e0b2ef005613b77cdfda9e\") " pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.433631 kubelet[2676]: I0321 13:24:33.433151 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd96602be02a991eea712b982c58359c-kubeconfig\") pod \"kube-scheduler-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"bd96602be02a991eea712b982c58359c\") " pod="kube-system/kube-scheduler-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.434130 kubelet[2676]: I0321 13:24:33.433195 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9da1014257e0b2ef005613b77cdfda9e-k8s-certs\") pod \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"9da1014257e0b2ef005613b77cdfda9e\") " pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.434130 kubelet[2676]: I0321 13:24:33.433237 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9da1014257e0b2ef005613b77cdfda9e-kubeconfig\") pod \"kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"9da1014257e0b2ef005613b77cdfda9e\") " pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.434130 kubelet[2676]: I0321 13:24:33.433280 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1ddd42b184270b40973840634dddd66-ca-certs\") pod \"kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"c1ddd42b184270b40973840634dddd66\") " pod="kube-system/kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.434130 kubelet[2676]: I0321 13:24:33.433322 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1ddd42b184270b40973840634dddd66-k8s-certs\") pod \"kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"c1ddd42b184270b40973840634dddd66\") " pod="kube-system/kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.434798 kubelet[2676]: I0321 13:24:33.434461 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1ddd42b184270b40973840634dddd66-usr-share-ca-certificates\") pod \"kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal\" (UID: \"c1ddd42b184270b40973840634dddd66\") " pod="kube-system/kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal"
Mar 21 13:24:33.963024 sudo[2706]: pam_unix(sudo:session): session closed for user root
Mar 21 13:24:34.114959 kubelet[2676]: I0321 13:24:34.114934 2676 apiserver.go:52] "Watching apiserver"
Mar 21 13:24:34.131805 kubelet[2676]: I0321 13:24:34.131781 2676 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 21 13:24:34.279140 kubelet[2676]: I0321 13:24:34.278984 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-9999-0-3-0-e42165490f.novalocal" podStartSLOduration=2.278966642 podStartE2EDuration="2.278966642s" podCreationTimestamp="2025-03-21 13:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 13:24:34.266693385 +0000 UTC m=+1.222516162" watchObservedRunningTime="2025-03-21 13:24:34.278966642 +0000 UTC m=+1.234789428"
Mar 21 13:24:34.291207 kubelet[2676]: I0321 13:24:34.291156 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-9999-0-3-0-e42165490f.novalocal" podStartSLOduration=1.291138407 podStartE2EDuration="1.291138407s" podCreationTimestamp="2025-03-21 13:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 13:24:34.280040099 +0000 UTC m=+1.235862885" watchObservedRunningTime="2025-03-21 13:24:34.291138407 +0000 UTC m=+1.246961193"
Mar 21 13:24:34.306627 kubelet[2676]: I0321 13:24:34.306571 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-9999-0-3-0-e42165490f.novalocal" podStartSLOduration=1.3065370170000001 podStartE2EDuration="1.306537017s" podCreationTimestamp="2025-03-21 13:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 13:24:34.291590186 +0000 UTC m=+1.247412972" watchObservedRunningTime="2025-03-21 13:24:34.306537017 +0000 UTC m=+1.262359823"
Mar 21 13:24:36.062023 sudo[1723]: pam_unix(sudo:session): session closed for user root
Mar 21 13:24:36.209705 sshd[1722]: Connection closed by 172.24.4.1 port 34786
Mar 21 13:24:36.212438 sshd-session[1719]: pam_unix(sshd:session): session closed for user core
Mar 21 13:24:36.218506 systemd[1]: sshd@6-172.24.4.44:22-172.24.4.1:34786.service: Deactivated successfully.
Mar 21 13:24:36.223966 systemd[1]: session-9.scope: Deactivated successfully.
Mar 21 13:24:36.224879 systemd[1]: session-9.scope: Consumed 5.914s CPU time, 263.4M memory peak.
Mar 21 13:24:36.231452 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit.
Mar 21 13:24:36.233602 systemd-logind[1458]: Removed session 9.
Mar 21 13:24:38.419735 kubelet[2676]: I0321 13:24:38.419678 2676 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 21 13:24:38.420216 containerd[1479]: time="2025-03-21T13:24:38.420070415Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 21 13:24:38.420573 kubelet[2676]: I0321 13:24:38.420221 2676 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 21 13:24:39.277870 systemd[1]: Created slice kubepods-besteffort-podca5e7b2b_2cc6_49ca_8e9a_b7a4c9f42e94.slice - libcontainer container kubepods-besteffort-podca5e7b2b_2cc6_49ca_8e9a_b7a4c9f42e94.slice.
Mar 21 13:24:39.280792 kubelet[2676]: W0321 13:24:39.280754 2676 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-9999-0-3-0-e42165490f.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-9999-0-3-0-e42165490f.novalocal' and this object
Mar 21 13:24:39.280926 kubelet[2676]: E0321 13:24:39.280799 2676 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-9999-0-3-0-e42165490f.novalocal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-9999-0-3-0-e42165490f.novalocal' and this object" logger="UnhandledError"
Mar 21 13:24:39.299017 systemd[1]: Created slice kubepods-burstable-podfda36d26_036b_4460_9f20_1cf0beea2104.slice - libcontainer container kubepods-burstable-podfda36d26_036b_4460_9f20_1cf0beea2104.slice.
Mar 21 13:24:39.374904 kubelet[2676]: I0321 13:24:39.374862 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-host-proc-sys-kernel\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375084 kubelet[2676]: I0321 13:24:39.374944 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fda36d26-036b-4460-9f20-1cf0beea2104-clustermesh-secrets\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375084 kubelet[2676]: I0321 13:24:39.374968 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h6pl\" (UniqueName: \"kubernetes.io/projected/fda36d26-036b-4460-9f20-1cf0beea2104-kube-api-access-4h6pl\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375084 kubelet[2676]: I0321 13:24:39.374992 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-hostproc\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375084 kubelet[2676]: I0321 13:24:39.375009 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-xtables-lock\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375084 kubelet[2676]: I0321 13:24:39.375027 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs6s5\" (UniqueName: \"kubernetes.io/projected/ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94-kube-api-access-cs6s5\") pod \"kube-proxy-vlnz2\" (UID: \"ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94\") " pod="kube-system/kube-proxy-vlnz2"
Mar 21 13:24:39.375236 kubelet[2676]: I0321 13:24:39.375118 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fda36d26-036b-4460-9f20-1cf0beea2104-hubble-tls\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375236 kubelet[2676]: I0321 13:24:39.375140 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cni-path\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375236 kubelet[2676]: I0321 13:24:39.375195 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-lib-modules\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375236 kubelet[2676]: I0321 13:24:39.375217 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-cgroup\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375236 kubelet[2676]: I0321 13:24:39.375234 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-etc-cni-netd\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375366 kubelet[2676]: I0321 13:24:39.375281 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94-kube-proxy\") pod \"kube-proxy-vlnz2\" (UID: \"ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94\") " pod="kube-system/kube-proxy-vlnz2"
Mar 21 13:24:39.375366 kubelet[2676]: I0321 13:24:39.375308 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94-xtables-lock\") pod \"kube-proxy-vlnz2\" (UID: \"ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94\") " pod="kube-system/kube-proxy-vlnz2"
Mar 21 13:24:39.375366 kubelet[2676]: I0321 13:24:39.375361 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94-lib-modules\") pod \"kube-proxy-vlnz2\" (UID: \"ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94\") " pod="kube-system/kube-proxy-vlnz2"
Mar 21 13:24:39.375449 kubelet[2676]: I0321 13:24:39.375381 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-host-proc-sys-net\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375449 kubelet[2676]: I0321 13:24:39.375400 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-run\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375449 kubelet[2676]: I0321 13:24:39.375430 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-bpf-maps\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.375531 kubelet[2676]: I0321 13:24:39.375447 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-config-path\") pod \"cilium-ksqrz\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " pod="kube-system/cilium-ksqrz"
Mar 21 13:24:39.593584 systemd[1]: Created slice kubepods-besteffort-podfe53d1c1_d35c_46d5_b696_0ff0ce0dea00.slice - libcontainer container kubepods-besteffort-podfe53d1c1_d35c_46d5_b696_0ff0ce0dea00.slice.
Mar 21 13:24:39.603936 containerd[1479]: time="2025-03-21T13:24:39.603890107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ksqrz,Uid:fda36d26-036b-4460-9f20-1cf0beea2104,Namespace:kube-system,Attempt:0,}"
Mar 21 13:24:39.635969 containerd[1479]: time="2025-03-21T13:24:39.635691190Z" level=info msg="connecting to shim dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02" address="unix:///run/containerd/s/d169d39394993d55917b45f3725758bc5f3fcbd8dd20dc33e43b6393616894ca" namespace=k8s.io protocol=ttrpc version=3
Mar 21 13:24:39.658232 systemd[1]: Started cri-containerd-dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02.scope - libcontainer container dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02.
Mar 21 13:24:39.677623 kubelet[2676]: I0321 13:24:39.677526 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe53d1c1-d35c-46d5-b696-0ff0ce0dea00-cilium-config-path\") pod \"cilium-operator-5d85765b45-skcnv\" (UID: \"fe53d1c1-d35c-46d5-b696-0ff0ce0dea00\") " pod="kube-system/cilium-operator-5d85765b45-skcnv" Mar 21 13:24:39.677623 kubelet[2676]: I0321 13:24:39.677575 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64ms\" (UniqueName: \"kubernetes.io/projected/fe53d1c1-d35c-46d5-b696-0ff0ce0dea00-kube-api-access-r64ms\") pod \"cilium-operator-5d85765b45-skcnv\" (UID: \"fe53d1c1-d35c-46d5-b696-0ff0ce0dea00\") " pod="kube-system/cilium-operator-5d85765b45-skcnv" Mar 21 13:24:39.684916 containerd[1479]: time="2025-03-21T13:24:39.684759946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ksqrz,Uid:fda36d26-036b-4460-9f20-1cf0beea2104,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\"" Mar 21 13:24:39.686542 containerd[1479]: time="2025-03-21T13:24:39.686515884Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 21 13:24:39.899265 containerd[1479]: time="2025-03-21T13:24:39.899176175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-skcnv,Uid:fe53d1c1-d35c-46d5-b696-0ff0ce0dea00,Namespace:kube-system,Attempt:0,}" Mar 21 13:24:39.936701 containerd[1479]: time="2025-03-21T13:24:39.936574700Z" level=info msg="connecting to shim 87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c" address="unix:///run/containerd/s/578e8459a8023fd54b6754253540062b49caa6c72ed70b6bdbf7aa6738da5a11" namespace=k8s.io protocol=ttrpc version=3 Mar 21 13:24:39.984418 systemd[1]: Started 
cri-containerd-87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c.scope - libcontainer container 87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c. Mar 21 13:24:40.051541 containerd[1479]: time="2025-03-21T13:24:40.051439543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-skcnv,Uid:fe53d1c1-d35c-46d5-b696-0ff0ce0dea00,Namespace:kube-system,Attempt:0,} returns sandbox id \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\"" Mar 21 13:24:40.476709 kubelet[2676]: E0321 13:24:40.476596 2676 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 21 13:24:40.476709 kubelet[2676]: E0321 13:24:40.476680 2676 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94-kube-proxy podName:ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94 nodeName:}" failed. No retries permitted until 2025-03-21 13:24:40.976659471 +0000 UTC m=+7.932482257 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94-kube-proxy") pod "kube-proxy-vlnz2" (UID: "ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94") : failed to sync configmap cache: timed out waiting for the condition Mar 21 13:24:41.088141 containerd[1479]: time="2025-03-21T13:24:41.087556190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vlnz2,Uid:ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94,Namespace:kube-system,Attempt:0,}" Mar 21 13:24:41.138981 containerd[1479]: time="2025-03-21T13:24:41.138875877Z" level=info msg="connecting to shim faff261b5adea22585f4f02f5b1b9cfd7fea923cd07345c6915238b276d08e2d" address="unix:///run/containerd/s/d3b0d713f8cf6ea40dc5d26ee4255c32512a070760e0979f3c4af6183d5c8e82" namespace=k8s.io protocol=ttrpc version=3 Mar 21 13:24:41.201397 systemd[1]: Started cri-containerd-faff261b5adea22585f4f02f5b1b9cfd7fea923cd07345c6915238b276d08e2d.scope - libcontainer container faff261b5adea22585f4f02f5b1b9cfd7fea923cd07345c6915238b276d08e2d. 
Mar 21 13:24:41.233565 containerd[1479]: time="2025-03-21T13:24:41.233530645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vlnz2,Uid:ca5e7b2b-2cc6-49ca-8e9a-b7a4c9f42e94,Namespace:kube-system,Attempt:0,} returns sandbox id \"faff261b5adea22585f4f02f5b1b9cfd7fea923cd07345c6915238b276d08e2d\"" Mar 21 13:24:41.237189 containerd[1479]: time="2025-03-21T13:24:41.237151194Z" level=info msg="CreateContainer within sandbox \"faff261b5adea22585f4f02f5b1b9cfd7fea923cd07345c6915238b276d08e2d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 21 13:24:41.251523 containerd[1479]: time="2025-03-21T13:24:41.251492787Z" level=info msg="Container fbe09dcdaab9cf504cfe23bcf516252a148b7b303ca601dd32f54e5659fb4f3f: CDI devices from CRI Config.CDIDevices: []" Mar 21 13:24:41.271058 containerd[1479]: time="2025-03-21T13:24:41.269864799Z" level=info msg="CreateContainer within sandbox \"faff261b5adea22585f4f02f5b1b9cfd7fea923cd07345c6915238b276d08e2d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fbe09dcdaab9cf504cfe23bcf516252a148b7b303ca601dd32f54e5659fb4f3f\"" Mar 21 13:24:41.272790 containerd[1479]: time="2025-03-21T13:24:41.272764004Z" level=info msg="StartContainer for \"fbe09dcdaab9cf504cfe23bcf516252a148b7b303ca601dd32f54e5659fb4f3f\"" Mar 21 13:24:41.275891 containerd[1479]: time="2025-03-21T13:24:41.275858054Z" level=info msg="connecting to shim fbe09dcdaab9cf504cfe23bcf516252a148b7b303ca601dd32f54e5659fb4f3f" address="unix:///run/containerd/s/d3b0d713f8cf6ea40dc5d26ee4255c32512a070760e0979f3c4af6183d5c8e82" protocol=ttrpc version=3 Mar 21 13:24:41.299204 systemd[1]: Started cri-containerd-fbe09dcdaab9cf504cfe23bcf516252a148b7b303ca601dd32f54e5659fb4f3f.scope - libcontainer container fbe09dcdaab9cf504cfe23bcf516252a148b7b303ca601dd32f54e5659fb4f3f. 
Mar 21 13:24:41.343192 containerd[1479]: time="2025-03-21T13:24:41.343080005Z" level=info msg="StartContainer for \"fbe09dcdaab9cf504cfe23bcf516252a148b7b303ca601dd32f54e5659fb4f3f\" returns successfully" Mar 21 13:24:42.252519 kubelet[2676]: I0321 13:24:42.252253 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vlnz2" podStartSLOduration=3.252188399 podStartE2EDuration="3.252188399s" podCreationTimestamp="2025-03-21 13:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 13:24:42.252034259 +0000 UTC m=+9.207857115" watchObservedRunningTime="2025-03-21 13:24:42.252188399 +0000 UTC m=+9.208011235" Mar 21 13:24:48.300300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760906944.mount: Deactivated successfully. Mar 21 13:24:50.503231 containerd[1479]: time="2025-03-21T13:24:50.503168983Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:50.504760 containerd[1479]: time="2025-03-21T13:24:50.504704375Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 21 13:24:50.506343 containerd[1479]: time="2025-03-21T13:24:50.506242882Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:50.513653 containerd[1479]: time="2025-03-21T13:24:50.513588649Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.827026137s" Mar 21 13:24:50.514117 containerd[1479]: time="2025-03-21T13:24:50.513860480Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 21 13:24:50.517170 containerd[1479]: time="2025-03-21T13:24:50.516735025Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 21 13:24:50.520479 containerd[1479]: time="2025-03-21T13:24:50.519469867Z" level=info msg="CreateContainer within sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 21 13:24:50.538651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount817774516.mount: Deactivated successfully. 
Mar 21 13:24:50.540619 containerd[1479]: time="2025-03-21T13:24:50.539727278Z" level=info msg="Container 81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e: CDI devices from CRI Config.CDIDevices: []" Mar 21 13:24:50.550968 containerd[1479]: time="2025-03-21T13:24:50.550913183Z" level=info msg="CreateContainer within sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\"" Mar 21 13:24:50.551380 containerd[1479]: time="2025-03-21T13:24:50.551331588Z" level=info msg="StartContainer for \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\"" Mar 21 13:24:50.552145 containerd[1479]: time="2025-03-21T13:24:50.552016313Z" level=info msg="connecting to shim 81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e" address="unix:///run/containerd/s/d169d39394993d55917b45f3725758bc5f3fcbd8dd20dc33e43b6393616894ca" protocol=ttrpc version=3 Mar 21 13:24:50.584191 systemd[1]: Started cri-containerd-81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e.scope - libcontainer container 81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e. Mar 21 13:24:50.626910 containerd[1479]: time="2025-03-21T13:24:50.626816264Z" level=info msg="StartContainer for \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\" returns successfully" Mar 21 13:24:50.635565 systemd[1]: cri-containerd-81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e.scope: Deactivated successfully. 
Mar 21 13:24:50.637461 containerd[1479]: time="2025-03-21T13:24:50.637416990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\" id:\"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\" pid:3085 exited_at:{seconds:1742563490 nanos:636814490}" Mar 21 13:24:50.637683 containerd[1479]: time="2025-03-21T13:24:50.637552876Z" level=info msg="received exit event container_id:\"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\" id:\"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\" pid:3085 exited_at:{seconds:1742563490 nanos:636814490}" Mar 21 13:24:50.657925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e-rootfs.mount: Deactivated successfully. Mar 21 13:24:52.283888 containerd[1479]: time="2025-03-21T13:24:52.280510925Z" level=info msg="CreateContainer within sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 21 13:24:52.309089 containerd[1479]: time="2025-03-21T13:24:52.305390336Z" level=info msg="Container accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48: CDI devices from CRI Config.CDIDevices: []" Mar 21 13:24:52.330268 containerd[1479]: time="2025-03-21T13:24:52.329617063Z" level=info msg="CreateContainer within sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\"" Mar 21 13:24:52.332255 containerd[1479]: time="2025-03-21T13:24:52.332205962Z" level=info msg="StartContainer for \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\"" Mar 21 13:24:52.336731 containerd[1479]: time="2025-03-21T13:24:52.336635404Z" level=info msg="connecting to shim 
accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48" address="unix:///run/containerd/s/d169d39394993d55917b45f3725758bc5f3fcbd8dd20dc33e43b6393616894ca" protocol=ttrpc version=3 Mar 21 13:24:52.374207 systemd[1]: Started cri-containerd-accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48.scope - libcontainer container accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48. Mar 21 13:24:52.415759 containerd[1479]: time="2025-03-21T13:24:52.415720214Z" level=info msg="StartContainer for \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\" returns successfully" Mar 21 13:24:52.424550 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 21 13:24:52.424813 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 21 13:24:52.424958 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 21 13:24:52.428362 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 21 13:24:52.431610 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 21 13:24:52.432151 systemd[1]: cri-containerd-accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48.scope: Deactivated successfully. 
Mar 21 13:24:52.433104 containerd[1479]: time="2025-03-21T13:24:52.432945360Z" level=info msg="TaskExit event in podsandbox handler container_id:\"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\" id:\"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\" pid:3132 exited_at:{seconds:1742563492 nanos:432647852}" Mar 21 13:24:52.433232 containerd[1479]: time="2025-03-21T13:24:52.433214035Z" level=info msg="received exit event container_id:\"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\" id:\"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\" pid:3132 exited_at:{seconds:1742563492 nanos:432647852}" Mar 21 13:24:52.448730 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 21 13:24:53.295760 containerd[1479]: time="2025-03-21T13:24:53.294896341Z" level=info msg="CreateContainer within sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 21 13:24:53.309451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48-rootfs.mount: Deactivated successfully. Mar 21 13:24:53.354531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount731655841.mount: Deactivated successfully. 
Mar 21 13:24:53.359351 containerd[1479]: time="2025-03-21T13:24:53.359119396Z" level=info msg="Container 21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009: CDI devices from CRI Config.CDIDevices: []" Mar 21 13:24:53.376257 containerd[1479]: time="2025-03-21T13:24:53.376214677Z" level=info msg="CreateContainer within sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\"" Mar 21 13:24:53.376955 containerd[1479]: time="2025-03-21T13:24:53.376892880Z" level=info msg="StartContainer for \"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\"" Mar 21 13:24:53.381286 containerd[1479]: time="2025-03-21T13:24:53.381221964Z" level=info msg="connecting to shim 21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009" address="unix:///run/containerd/s/d169d39394993d55917b45f3725758bc5f3fcbd8dd20dc33e43b6393616894ca" protocol=ttrpc version=3 Mar 21 13:24:53.412189 systemd[1]: Started cri-containerd-21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009.scope - libcontainer container 21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009. Mar 21 13:24:53.451035 systemd[1]: cri-containerd-21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009.scope: Deactivated successfully. 
Mar 21 13:24:53.452851 containerd[1479]: time="2025-03-21T13:24:53.452809257Z" level=info msg="received exit event container_id:\"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\" id:\"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\" pid:3181 exited_at:{seconds:1742563493 nanos:451644301}" Mar 21 13:24:53.453284 containerd[1479]: time="2025-03-21T13:24:53.453263660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\" id:\"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\" pid:3181 exited_at:{seconds:1742563493 nanos:451644301}" Mar 21 13:24:53.456482 containerd[1479]: time="2025-03-21T13:24:53.456464205Z" level=info msg="StartContainer for \"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\" returns successfully" Mar 21 13:24:54.312786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009-rootfs.mount: Deactivated successfully. Mar 21 13:24:54.325741 containerd[1479]: time="2025-03-21T13:24:54.325375392Z" level=info msg="CreateContainer within sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 21 13:24:54.362037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2130960282.mount: Deactivated successfully. 
Mar 21 13:24:54.371546 containerd[1479]: time="2025-03-21T13:24:54.371469712Z" level=info msg="Container 354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47: CDI devices from CRI Config.CDIDevices: []" Mar 21 13:24:54.386202 containerd[1479]: time="2025-03-21T13:24:54.384714897Z" level=info msg="CreateContainer within sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\"" Mar 21 13:24:54.386648 containerd[1479]: time="2025-03-21T13:24:54.386620904Z" level=info msg="StartContainer for \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\"" Mar 21 13:24:54.387891 containerd[1479]: time="2025-03-21T13:24:54.387858136Z" level=info msg="connecting to shim 354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47" address="unix:///run/containerd/s/d169d39394993d55917b45f3725758bc5f3fcbd8dd20dc33e43b6393616894ca" protocol=ttrpc version=3 Mar 21 13:24:54.415208 systemd[1]: Started cri-containerd-354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47.scope - libcontainer container 354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47. Mar 21 13:24:54.446770 systemd[1]: cri-containerd-354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47.scope: Deactivated successfully. 
Mar 21 13:24:54.448797 containerd[1479]: time="2025-03-21T13:24:54.447605506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\" id:\"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\" pid:3223 exited_at:{seconds:1742563494 nanos:447023453}" Mar 21 13:24:54.451937 containerd[1479]: time="2025-03-21T13:24:54.451907398Z" level=info msg="received exit event container_id:\"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\" id:\"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\" pid:3223 exited_at:{seconds:1742563494 nanos:447023453}" Mar 21 13:24:54.455211 containerd[1479]: time="2025-03-21T13:24:54.455190109Z" level=info msg="StartContainer for \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\" returns successfully" Mar 21 13:24:55.164099 containerd[1479]: time="2025-03-21T13:24:55.163915477Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:55.165378 containerd[1479]: time="2025-03-21T13:24:55.165210247Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 21 13:24:55.166605 containerd[1479]: time="2025-03-21T13:24:55.166545382Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 13:24:55.168225 containerd[1479]: time="2025-03-21T13:24:55.167864777Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag 
\"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.651088825s" Mar 21 13:24:55.168225 containerd[1479]: time="2025-03-21T13:24:55.167902588Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 21 13:24:55.169990 containerd[1479]: time="2025-03-21T13:24:55.169959289Z" level=info msg="CreateContainer within sandbox \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 21 13:24:55.183114 containerd[1479]: time="2025-03-21T13:24:55.182566986Z" level=info msg="Container 550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7: CDI devices from CRI Config.CDIDevices: []" Mar 21 13:24:55.196474 containerd[1479]: time="2025-03-21T13:24:55.194748675Z" level=info msg="CreateContainer within sandbox \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\"" Mar 21 13:24:55.196474 containerd[1479]: time="2025-03-21T13:24:55.196288034Z" level=info msg="StartContainer for \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\"" Mar 21 13:24:55.199446 containerd[1479]: time="2025-03-21T13:24:55.199412337Z" level=info msg="connecting to shim 550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7" address="unix:///run/containerd/s/578e8459a8023fd54b6754253540062b49caa6c72ed70b6bdbf7aa6738da5a11" protocol=ttrpc version=3 Mar 21 13:24:55.220190 systemd[1]: Started cri-containerd-550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7.scope - libcontainer container 
550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7. Mar 21 13:24:55.253251 containerd[1479]: time="2025-03-21T13:24:55.253204197Z" level=info msg="StartContainer for \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" returns successfully" Mar 21 13:24:55.321781 containerd[1479]: time="2025-03-21T13:24:55.321688808Z" level=info msg="CreateContainer within sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 21 13:24:55.327890 kubelet[2676]: I0321 13:24:55.327060 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-skcnv" podStartSLOduration=1.212585549 podStartE2EDuration="16.327001368s" podCreationTimestamp="2025-03-21 13:24:39 +0000 UTC" firstStartedPulling="2025-03-21 13:24:40.054160524 +0000 UTC m=+7.009983300" lastFinishedPulling="2025-03-21 13:24:55.168576343 +0000 UTC m=+22.124399119" observedRunningTime="2025-03-21 13:24:55.326309529 +0000 UTC m=+22.282132315" watchObservedRunningTime="2025-03-21 13:24:55.327001368 +0000 UTC m=+22.282824144" Mar 21 13:24:55.342694 containerd[1479]: time="2025-03-21T13:24:55.342649681Z" level=info msg="Container 02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8: CDI devices from CRI Config.CDIDevices: []" Mar 21 13:24:55.358806 containerd[1479]: time="2025-03-21T13:24:55.358759972Z" level=info msg="CreateContainer within sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\"" Mar 21 13:24:55.360439 containerd[1479]: time="2025-03-21T13:24:55.360408987Z" level=info msg="StartContainer for \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\"" Mar 21 13:24:55.361915 containerd[1479]: time="2025-03-21T13:24:55.361880869Z" level=info msg="connecting to 
shim 02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8" address="unix:///run/containerd/s/d169d39394993d55917b45f3725758bc5f3fcbd8dd20dc33e43b6393616894ca" protocol=ttrpc version=3 Mar 21 13:24:55.398719 systemd[1]: Started cri-containerd-02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8.scope - libcontainer container 02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8. Mar 21 13:24:55.510176 containerd[1479]: time="2025-03-21T13:24:55.509612366Z" level=info msg="StartContainer for \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" returns successfully" Mar 21 13:24:55.667937 containerd[1479]: time="2025-03-21T13:24:55.667888221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" id:\"3506329193ace1f7aaa525fb7b4a9c7583d707ecb34307ea253a01f26757322a\" pid:3328 exited_at:{seconds:1742563495 nanos:667568340}" Mar 21 13:24:55.727673 kubelet[2676]: I0321 13:24:55.727612 2676 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 21 13:24:55.774727 systemd[1]: Created slice kubepods-burstable-pod3e427c2c_85d8_4c40_8412_3a03669f20fd.slice - libcontainer container kubepods-burstable-pod3e427c2c_85d8_4c40_8412_3a03669f20fd.slice. 
Mar 21 13:24:55.787743 kubelet[2676]: I0321 13:24:55.787702 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e427c2c-85d8-4c40-8412-3a03669f20fd-config-volume\") pod \"coredns-6f6b679f8f-cfx6b\" (UID: \"3e427c2c-85d8-4c40-8412-3a03669f20fd\") " pod="kube-system/coredns-6f6b679f8f-cfx6b" Mar 21 13:24:55.787743 kubelet[2676]: I0321 13:24:55.787748 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lzpr\" (UniqueName: \"kubernetes.io/projected/3e427c2c-85d8-4c40-8412-3a03669f20fd-kube-api-access-7lzpr\") pod \"coredns-6f6b679f8f-cfx6b\" (UID: \"3e427c2c-85d8-4c40-8412-3a03669f20fd\") " pod="kube-system/coredns-6f6b679f8f-cfx6b" Mar 21 13:24:55.816092 systemd[1]: Created slice kubepods-burstable-podadd17029_8b81_4c78_a99d_c3074ea12388.slice - libcontainer container kubepods-burstable-podadd17029_8b81_4c78_a99d_c3074ea12388.slice. 
Mar 21 13:24:55.889239 kubelet[2676]: I0321 13:24:55.887922 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/add17029-8b81-4c78-a99d-c3074ea12388-config-volume\") pod \"coredns-6f6b679f8f-v4klj\" (UID: \"add17029-8b81-4c78-a99d-c3074ea12388\") " pod="kube-system/coredns-6f6b679f8f-v4klj"
Mar 21 13:24:55.889239 kubelet[2676]: I0321 13:24:55.887973 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpdh\" (UniqueName: \"kubernetes.io/projected/add17029-8b81-4c78-a99d-c3074ea12388-kube-api-access-wrpdh\") pod \"coredns-6f6b679f8f-v4klj\" (UID: \"add17029-8b81-4c78-a99d-c3074ea12388\") " pod="kube-system/coredns-6f6b679f8f-v4klj"
Mar 21 13:24:56.080555 containerd[1479]: time="2025-03-21T13:24:56.080431051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cfx6b,Uid:3e427c2c-85d8-4c40-8412-3a03669f20fd,Namespace:kube-system,Attempt:0,}"
Mar 21 13:24:56.122103 containerd[1479]: time="2025-03-21T13:24:56.120379032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v4klj,Uid:add17029-8b81-4c78-a99d-c3074ea12388,Namespace:kube-system,Attempt:0,}"
Mar 21 13:24:59.017387 systemd-networkd[1388]: cilium_host: Link UP
Mar 21 13:24:59.019811 systemd-networkd[1388]: cilium_net: Link UP
Mar 21 13:24:59.025021 systemd-networkd[1388]: cilium_net: Gained carrier
Mar 21 13:24:59.026541 systemd-networkd[1388]: cilium_host: Gained carrier
Mar 21 13:24:59.124307 systemd-networkd[1388]: cilium_vxlan: Link UP
Mar 21 13:24:59.125200 systemd-networkd[1388]: cilium_vxlan: Gained carrier
Mar 21 13:24:59.223294 systemd-networkd[1388]: cilium_host: Gained IPv6LL
Mar 21 13:24:59.392178 systemd-networkd[1388]: cilium_net: Gained IPv6LL
Mar 21 13:24:59.439105 kernel: NET: Registered PF_ALG protocol family
Mar 21 13:25:00.200810 systemd-networkd[1388]: lxc_health: Link UP
Mar 21 13:25:00.218430 systemd-networkd[1388]: lxc_health: Gained carrier
Mar 21 13:25:00.511336 systemd-networkd[1388]: cilium_vxlan: Gained IPv6LL
Mar 21 13:25:00.624547 kernel: eth0: renamed from tmp4cdd8
Mar 21 13:25:00.624316 systemd-networkd[1388]: lxc1fb78fee5ade: Link UP
Mar 21 13:25:00.631464 systemd-networkd[1388]: lxc1fb78fee5ade: Gained carrier
Mar 21 13:25:00.668185 kernel: eth0: renamed from tmp7bf2c
Mar 21 13:25:00.675421 systemd-networkd[1388]: lxc0b836916d7ba: Link UP
Mar 21 13:25:00.678885 systemd-networkd[1388]: lxc0b836916d7ba: Gained carrier
Mar 21 13:25:01.639334 kubelet[2676]: I0321 13:25:01.639245 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ksqrz" podStartSLOduration=11.809319066 podStartE2EDuration="22.639226021s" podCreationTimestamp="2025-03-21 13:24:39 +0000 UTC" firstStartedPulling="2025-03-21 13:24:39.686130982 +0000 UTC m=+6.641953768" lastFinishedPulling="2025-03-21 13:24:50.516037897 +0000 UTC m=+17.471860723" observedRunningTime="2025-03-21 13:24:56.378183916 +0000 UTC m=+23.334006702" watchObservedRunningTime="2025-03-21 13:25:01.639226021 +0000 UTC m=+28.595048807"
Mar 21 13:25:01.793237 systemd-networkd[1388]: lxc1fb78fee5ade: Gained IPv6LL
Mar 21 13:25:02.111186 systemd-networkd[1388]: lxc_health: Gained IPv6LL
Mar 21 13:25:02.239170 systemd-networkd[1388]: lxc0b836916d7ba: Gained IPv6LL
Mar 21 13:25:05.113076 containerd[1479]: time="2025-03-21T13:25:05.112575581Z" level=info msg="connecting to shim 7bf2c52986cade2b609885d4b75a0eaa2e9c5c82bdd79651449df091e47562dc" address="unix:///run/containerd/s/0db40e0339c278a8074a5f2329f36b7b9ed460aaddd849c2c14631e1a0eb5337" namespace=k8s.io protocol=ttrpc version=3
Mar 21 13:25:05.149189 containerd[1479]: time="2025-03-21T13:25:05.149128855Z" level=info msg="connecting to shim 4cdd89e61f0bcff2c3ef48680111b50351938532e6859b7d188b45b2e38b9b61" address="unix:///run/containerd/s/dc8ff17cccd23a2eb9a97a0f3fce36ca5112a38077024bd617119af241d59b3e" namespace=k8s.io protocol=ttrpc version=3
Mar 21 13:25:05.175224 systemd[1]: Started cri-containerd-7bf2c52986cade2b609885d4b75a0eaa2e9c5c82bdd79651449df091e47562dc.scope - libcontainer container 7bf2c52986cade2b609885d4b75a0eaa2e9c5c82bdd79651449df091e47562dc.
Mar 21 13:25:05.187210 systemd[1]: Started cri-containerd-4cdd89e61f0bcff2c3ef48680111b50351938532e6859b7d188b45b2e38b9b61.scope - libcontainer container 4cdd89e61f0bcff2c3ef48680111b50351938532e6859b7d188b45b2e38b9b61.
Mar 21 13:25:05.247903 containerd[1479]: time="2025-03-21T13:25:05.247802380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v4klj,Uid:add17029-8b81-4c78-a99d-c3074ea12388,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bf2c52986cade2b609885d4b75a0eaa2e9c5c82bdd79651449df091e47562dc\""
Mar 21 13:25:05.253728 containerd[1479]: time="2025-03-21T13:25:05.252407108Z" level=info msg="CreateContainer within sandbox \"7bf2c52986cade2b609885d4b75a0eaa2e9c5c82bdd79651449df091e47562dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 21 13:25:05.271162 containerd[1479]: time="2025-03-21T13:25:05.271128590Z" level=info msg="Container 5743b2c5c955425d659f7850a071b5519b5caaa25c21c2334116716a0b514ed7: CDI devices from CRI Config.CDIDevices: []"
Mar 21 13:25:05.284374 containerd[1479]: time="2025-03-21T13:25:05.284345475Z" level=info msg="CreateContainer within sandbox \"7bf2c52986cade2b609885d4b75a0eaa2e9c5c82bdd79651449df091e47562dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5743b2c5c955425d659f7850a071b5519b5caaa25c21c2334116716a0b514ed7\""
Mar 21 13:25:05.286227 containerd[1479]: time="2025-03-21T13:25:05.286172252Z" level=info msg="StartContainer for \"5743b2c5c955425d659f7850a071b5519b5caaa25c21c2334116716a0b514ed7\""
Mar 21 13:25:05.288316 containerd[1479]: time="2025-03-21T13:25:05.288281579Z" level=info msg="connecting to shim 5743b2c5c955425d659f7850a071b5519b5caaa25c21c2334116716a0b514ed7" address="unix:///run/containerd/s/0db40e0339c278a8074a5f2329f36b7b9ed460aaddd849c2c14631e1a0eb5337" protocol=ttrpc version=3
Mar 21 13:25:05.299440 containerd[1479]: time="2025-03-21T13:25:05.299399166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-cfx6b,Uid:3e427c2c-85d8-4c40-8412-3a03669f20fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cdd89e61f0bcff2c3ef48680111b50351938532e6859b7d188b45b2e38b9b61\""
Mar 21 13:25:05.304852 containerd[1479]: time="2025-03-21T13:25:05.304823212Z" level=info msg="CreateContainer within sandbox \"4cdd89e61f0bcff2c3ef48680111b50351938532e6859b7d188b45b2e38b9b61\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 21 13:25:05.324006 containerd[1479]: time="2025-03-21T13:25:05.323951988Z" level=info msg="Container fd380be3c9f897c6c90fbe5a6b937e9e372a17d67b0d2c9f606bd9eda4caae5a: CDI devices from CRI Config.CDIDevices: []"
Mar 21 13:25:05.324788 systemd[1]: Started cri-containerd-5743b2c5c955425d659f7850a071b5519b5caaa25c21c2334116716a0b514ed7.scope - libcontainer container 5743b2c5c955425d659f7850a071b5519b5caaa25c21c2334116716a0b514ed7.
Mar 21 13:25:05.332484 containerd[1479]: time="2025-03-21T13:25:05.332443949Z" level=info msg="CreateContainer within sandbox \"4cdd89e61f0bcff2c3ef48680111b50351938532e6859b7d188b45b2e38b9b61\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd380be3c9f897c6c90fbe5a6b937e9e372a17d67b0d2c9f606bd9eda4caae5a\""
Mar 21 13:25:05.334472 containerd[1479]: time="2025-03-21T13:25:05.333148972Z" level=info msg="StartContainer for \"fd380be3c9f897c6c90fbe5a6b937e9e372a17d67b0d2c9f606bd9eda4caae5a\""
Mar 21 13:25:05.334472 containerd[1479]: time="2025-03-21T13:25:05.333998495Z" level=info msg="connecting to shim fd380be3c9f897c6c90fbe5a6b937e9e372a17d67b0d2c9f606bd9eda4caae5a" address="unix:///run/containerd/s/dc8ff17cccd23a2eb9a97a0f3fce36ca5112a38077024bd617119af241d59b3e" protocol=ttrpc version=3
Mar 21 13:25:05.363280 systemd[1]: Started cri-containerd-fd380be3c9f897c6c90fbe5a6b937e9e372a17d67b0d2c9f606bd9eda4caae5a.scope - libcontainer container fd380be3c9f897c6c90fbe5a6b937e9e372a17d67b0d2c9f606bd9eda4caae5a.
Mar 21 13:25:05.372763 containerd[1479]: time="2025-03-21T13:25:05.372073725Z" level=info msg="StartContainer for \"5743b2c5c955425d659f7850a071b5519b5caaa25c21c2334116716a0b514ed7\" returns successfully"
Mar 21 13:25:05.408147 containerd[1479]: time="2025-03-21T13:25:05.407909234Z" level=info msg="StartContainer for \"fd380be3c9f897c6c90fbe5a6b937e9e372a17d67b0d2c9f606bd9eda4caae5a\" returns successfully"
Mar 21 13:25:06.402099 kubelet[2676]: I0321 13:25:06.401949 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-cfx6b" podStartSLOduration=27.401917563 podStartE2EDuration="27.401917563s" podCreationTimestamp="2025-03-21 13:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 13:25:06.39795555 +0000 UTC m=+33.353778387" watchObservedRunningTime="2025-03-21 13:25:06.401917563 +0000 UTC m=+33.357740389"
Mar 21 13:26:20.767564 systemd[1]: Started sshd@7-172.24.4.44:22-172.24.4.1:41014.service - OpenSSH per-connection server daemon (172.24.4.1:41014).
Mar 21 13:26:21.972107 sshd[3989]: Accepted publickey for core from 172.24.4.1 port 41014 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:26:21.974386 sshd-session[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:26:21.985350 systemd-logind[1458]: New session 10 of user core.
Mar 21 13:26:21.996640 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 21 13:26:22.731136 sshd[3991]: Connection closed by 172.24.4.1 port 41014
Mar 21 13:26:22.732274 sshd-session[3989]: pam_unix(sshd:session): session closed for user core
Mar 21 13:26:22.742167 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit.
Mar 21 13:26:22.744267 systemd[1]: sshd@7-172.24.4.44:22-172.24.4.1:41014.service: Deactivated successfully.
Mar 21 13:26:22.748907 systemd[1]: session-10.scope: Deactivated successfully.
Mar 21 13:26:22.752164 systemd-logind[1458]: Removed session 10.
Mar 21 13:26:27.423462 update_engine[1459]: I20250321 13:26:27.423329 1459 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 21 13:26:27.423462 update_engine[1459]: I20250321 13:26:27.423419 1459 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 21 13:26:27.424423 update_engine[1459]: I20250321 13:26:27.423812 1459 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 21 13:26:27.425911 update_engine[1459]: I20250321 13:26:27.425021 1459 omaha_request_params.cc:62] Current group set to developer
Mar 21 13:26:27.425911 update_engine[1459]: I20250321 13:26:27.425293 1459 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 21 13:26:27.425911 update_engine[1459]: I20250321 13:26:27.425329 1459 update_attempter.cc:643] Scheduling an action processor start.
Mar 21 13:26:27.425911 update_engine[1459]: I20250321 13:26:27.425366 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 21 13:26:27.425911 update_engine[1459]: I20250321 13:26:27.425441 1459 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 21 13:26:27.425911 update_engine[1459]: I20250321 13:26:27.425599 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 21 13:26:27.425911 update_engine[1459]: I20250321 13:26:27.425634 1459 omaha_request_action.cc:272] Request:
Mar 21 13:26:27.425911 update_engine[1459]:
Mar 21 13:26:27.425911 update_engine[1459]:
Mar 21 13:26:27.425911 update_engine[1459]:
Mar 21 13:26:27.425911 update_engine[1459]:
Mar 21 13:26:27.425911 update_engine[1459]:
Mar 21 13:26:27.425911 update_engine[1459]:
Mar 21 13:26:27.425911 update_engine[1459]:
Mar 21 13:26:27.425911 update_engine[1459]:
Mar 21 13:26:27.425911 update_engine[1459]: I20250321 13:26:27.425654 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 21 13:26:27.427622 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 21 13:26:27.429537 update_engine[1459]: I20250321 13:26:27.429451 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 21 13:26:27.430439 update_engine[1459]: I20250321 13:26:27.430348 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 21 13:26:27.437606 update_engine[1459]: E20250321 13:26:27.437403 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 21 13:26:27.437606 update_engine[1459]: I20250321 13:26:27.437552 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 21 13:26:27.755354 systemd[1]: Started sshd@8-172.24.4.44:22-172.24.4.1:37254.service - OpenSSH per-connection server daemon (172.24.4.1:37254).
Mar 21 13:26:29.233801 sshd[4005]: Accepted publickey for core from 172.24.4.1 port 37254 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:26:29.236431 sshd-session[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:26:29.249929 systemd-logind[1458]: New session 11 of user core.
Mar 21 13:26:29.253360 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 21 13:26:30.150746 sshd[4007]: Connection closed by 172.24.4.1 port 37254
Mar 21 13:26:30.152729 sshd-session[4005]: pam_unix(sshd:session): session closed for user core
Mar 21 13:26:30.159483 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit.
Mar 21 13:26:30.161013 systemd[1]: sshd@8-172.24.4.44:22-172.24.4.1:37254.service: Deactivated successfully.
Mar 21 13:26:30.164811 systemd[1]: session-11.scope: Deactivated successfully.
Mar 21 13:26:30.167758 systemd-logind[1458]: Removed session 11.
Mar 21 13:26:35.176727 systemd[1]: Started sshd@9-172.24.4.44:22-172.24.4.1:38802.service - OpenSSH per-connection server daemon (172.24.4.1:38802).
Mar 21 13:26:36.355886 sshd[4022]: Accepted publickey for core from 172.24.4.1 port 38802 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:26:36.358892 sshd-session[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:26:36.370578 systemd-logind[1458]: New session 12 of user core.
Mar 21 13:26:36.376541 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 21 13:26:37.197078 sshd[4024]: Connection closed by 172.24.4.1 port 38802
Mar 21 13:26:37.200927 sshd-session[4022]: pam_unix(sshd:session): session closed for user core
Mar 21 13:26:37.220772 systemd[1]: sshd@9-172.24.4.44:22-172.24.4.1:38802.service: Deactivated successfully.
Mar 21 13:26:37.232786 systemd[1]: session-12.scope: Deactivated successfully.
Mar 21 13:26:37.236264 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit.
Mar 21 13:26:37.240879 systemd[1]: Started sshd@10-172.24.4.44:22-172.24.4.1:38814.service - OpenSSH per-connection server daemon (172.24.4.1:38814).
Mar 21 13:26:37.244497 systemd-logind[1458]: Removed session 12.
Mar 21 13:26:37.423191 update_engine[1459]: I20250321 13:26:37.422382 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 21 13:26:37.423191 update_engine[1459]: I20250321 13:26:37.422767 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 21 13:26:37.423974 update_engine[1459]: I20250321 13:26:37.423244 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 21 13:26:37.428756 update_engine[1459]: E20250321 13:26:37.428675 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 21 13:26:37.428883 update_engine[1459]: I20250321 13:26:37.428807 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 21 13:26:38.425551 sshd[4035]: Accepted publickey for core from 172.24.4.1 port 38814 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:26:38.430717 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:26:38.446311 systemd-logind[1458]: New session 13 of user core.
Mar 21 13:26:38.457445 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 21 13:26:39.308093 sshd[4038]: Connection closed by 172.24.4.1 port 38814
Mar 21 13:26:39.309228 sshd-session[4035]: pam_unix(sshd:session): session closed for user core
Mar 21 13:26:39.325558 systemd[1]: sshd@10-172.24.4.44:22-172.24.4.1:38814.service: Deactivated successfully.
Mar 21 13:26:39.330335 systemd[1]: session-13.scope: Deactivated successfully.
Mar 21 13:26:39.332627 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit.
Mar 21 13:26:39.338613 systemd[1]: Started sshd@11-172.24.4.44:22-172.24.4.1:38824.service - OpenSSH per-connection server daemon (172.24.4.1:38824).
Mar 21 13:26:39.341347 systemd-logind[1458]: Removed session 13.
Mar 21 13:26:40.698230 sshd[4047]: Accepted publickey for core from 172.24.4.1 port 38824 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:26:40.700368 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:26:40.709515 systemd-logind[1458]: New session 14 of user core.
Mar 21 13:26:40.721355 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 21 13:26:41.660822 sshd[4050]: Connection closed by 172.24.4.1 port 38824
Mar 21 13:26:41.661969 sshd-session[4047]: pam_unix(sshd:session): session closed for user core
Mar 21 13:26:41.669367 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit.
Mar 21 13:26:41.669971 systemd[1]: sshd@11-172.24.4.44:22-172.24.4.1:38824.service: Deactivated successfully.
Mar 21 13:26:41.674012 systemd[1]: session-14.scope: Deactivated successfully.
Mar 21 13:26:41.678522 systemd-logind[1458]: Removed session 14.
Mar 21 13:26:46.686264 systemd[1]: Started sshd@12-172.24.4.44:22-172.24.4.1:46430.service - OpenSSH per-connection server daemon (172.24.4.1:46430).
Mar 21 13:26:47.428205 update_engine[1459]: I20250321 13:26:47.427937 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 21 13:26:47.429220 update_engine[1459]: I20250321 13:26:47.428426 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 21 13:26:47.429220 update_engine[1459]: I20250321 13:26:47.428886 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 21 13:26:47.434596 update_engine[1459]: E20250321 13:26:47.434515 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 21 13:26:47.434738 update_engine[1459]: I20250321 13:26:47.434653 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 21 13:26:47.904096 sshd[4064]: Accepted publickey for core from 172.24.4.1 port 46430 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:26:47.907001 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:26:47.917271 systemd-logind[1458]: New session 15 of user core.
Mar 21 13:26:47.928463 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 21 13:26:48.602097 sshd[4066]: Connection closed by 172.24.4.1 port 46430
Mar 21 13:26:48.601165 sshd-session[4064]: pam_unix(sshd:session): session closed for user core
Mar 21 13:26:48.608508 systemd[1]: sshd@12-172.24.4.44:22-172.24.4.1:46430.service: Deactivated successfully.
Mar 21 13:26:48.613667 systemd[1]: session-15.scope: Deactivated successfully.
Mar 21 13:26:48.618110 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit.
Mar 21 13:26:48.620423 systemd-logind[1458]: Removed session 15.
Mar 21 13:26:53.625742 systemd[1]: Started sshd@13-172.24.4.44:22-172.24.4.1:48138.service - OpenSSH per-connection server daemon (172.24.4.1:48138).
Mar 21 13:26:54.855613 sshd[4077]: Accepted publickey for core from 172.24.4.1 port 48138 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:26:54.859028 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:26:54.871745 systemd-logind[1458]: New session 16 of user core.
Mar 21 13:26:54.879398 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 21 13:26:55.743175 sshd[4079]: Connection closed by 172.24.4.1 port 48138
Mar 21 13:26:55.742503 sshd-session[4077]: pam_unix(sshd:session): session closed for user core
Mar 21 13:26:55.762859 systemd[1]: sshd@13-172.24.4.44:22-172.24.4.1:48138.service: Deactivated successfully.
Mar 21 13:26:55.769542 systemd[1]: session-16.scope: Deactivated successfully.
Mar 21 13:26:55.772034 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit.
Mar 21 13:26:55.777244 systemd[1]: Started sshd@14-172.24.4.44:22-172.24.4.1:48154.service - OpenSSH per-connection server daemon (172.24.4.1:48154).
Mar 21 13:26:55.780869 systemd-logind[1458]: Removed session 16.
Mar 21 13:26:57.106944 sshd[4090]: Accepted publickey for core from 172.24.4.1 port 48154 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:26:57.109621 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:26:57.122138 systemd-logind[1458]: New session 17 of user core.
Mar 21 13:26:57.129375 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 21 13:26:57.422633 update_engine[1459]: I20250321 13:26:57.422463 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 21 13:26:57.423297 update_engine[1459]: I20250321 13:26:57.422920 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 21 13:26:57.423464 update_engine[1459]: I20250321 13:26:57.423385 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 21 13:26:57.428753 update_engine[1459]: E20250321 13:26:57.428674 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 21 13:26:57.428900 update_engine[1459]: I20250321 13:26:57.428778 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 21 13:26:57.428900 update_engine[1459]: I20250321 13:26:57.428798 1459 omaha_request_action.cc:617] Omaha request response:
Mar 21 13:26:57.429022 update_engine[1459]: E20250321 13:26:57.428925 1459 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 21 13:26:57.429022 update_engine[1459]: I20250321 13:26:57.428987 1459 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 21 13:26:57.429022 update_engine[1459]: I20250321 13:26:57.429000 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 21 13:26:57.429022 update_engine[1459]: I20250321 13:26:57.429013 1459 update_attempter.cc:306] Processing Done.
Mar 21 13:26:57.429889 update_engine[1459]: E20250321 13:26:57.429035 1459 update_attempter.cc:619] Update failed.
Mar 21 13:26:57.429889 update_engine[1459]: I20250321 13:26:57.429092 1459 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 21 13:26:57.429889 update_engine[1459]: I20250321 13:26:57.429107 1459 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 21 13:26:57.429889 update_engine[1459]: I20250321 13:26:57.429119 1459 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 21 13:26:57.429889 update_engine[1459]: I20250321 13:26:57.429256 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 21 13:26:57.429889 update_engine[1459]: I20250321 13:26:57.429301 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 21 13:26:57.429889 update_engine[1459]: I20250321 13:26:57.429315 1459 omaha_request_action.cc:272] Request:
Mar 21 13:26:57.429889 update_engine[1459]:
Mar 21 13:26:57.429889 update_engine[1459]:
Mar 21 13:26:57.429889 update_engine[1459]:
Mar 21 13:26:57.429889 update_engine[1459]:
Mar 21 13:26:57.429889 update_engine[1459]:
Mar 21 13:26:57.429889 update_engine[1459]:
Mar 21 13:26:57.429889 update_engine[1459]: I20250321 13:26:57.429328 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 21 13:26:57.429889 update_engine[1459]: I20250321 13:26:57.429735 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 21 13:26:57.430815 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 21 13:26:57.431370 update_engine[1459]: I20250321 13:26:57.430168 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 21 13:26:57.435596 update_engine[1459]: E20250321 13:26:57.435503 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 21 13:26:57.435769 update_engine[1459]: I20250321 13:26:57.435657 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 21 13:26:57.435769 update_engine[1459]: I20250321 13:26:57.435681 1459 omaha_request_action.cc:617] Omaha request response:
Mar 21 13:26:57.435769 update_engine[1459]: I20250321 13:26:57.435696 1459 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 21 13:26:57.435769 update_engine[1459]: I20250321 13:26:57.435708 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 21 13:26:57.435769 update_engine[1459]: I20250321 13:26:57.435720 1459 update_attempter.cc:306] Processing Done.
Mar 21 13:26:57.435769 update_engine[1459]: I20250321 13:26:57.435734 1459 update_attempter.cc:310] Error event sent.
Mar 21 13:26:57.435769 update_engine[1459]: I20250321 13:26:57.435753 1459 update_check_scheduler.cc:74] Next update check in 44m8s
Mar 21 13:26:57.436518 locksmithd[1483]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 21 13:26:57.945655 sshd[4094]: Connection closed by 172.24.4.1 port 48154
Mar 21 13:26:57.946453 sshd-session[4090]: pam_unix(sshd:session): session closed for user core
Mar 21 13:26:57.962292 systemd[1]: sshd@14-172.24.4.44:22-172.24.4.1:48154.service: Deactivated successfully.
Mar 21 13:26:57.966758 systemd[1]: session-17.scope: Deactivated successfully.
Mar 21 13:26:57.970405 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit.
Mar 21 13:26:57.974198 systemd[1]: Started sshd@15-172.24.4.44:22-172.24.4.1:48160.service - OpenSSH per-connection server daemon (172.24.4.1:48160).
Mar 21 13:26:57.977929 systemd-logind[1458]: Removed session 17.
Mar 21 13:26:59.102235 sshd[4102]: Accepted publickey for core from 172.24.4.1 port 48160 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:26:59.104959 sshd-session[4102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:26:59.117827 systemd-logind[1458]: New session 18 of user core.
Mar 21 13:26:59.125414 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 21 13:27:02.168091 sshd[4105]: Connection closed by 172.24.4.1 port 48160
Mar 21 13:27:02.169030 sshd-session[4102]: pam_unix(sshd:session): session closed for user core
Mar 21 13:27:02.181490 systemd[1]: sshd@15-172.24.4.44:22-172.24.4.1:48160.service: Deactivated successfully.
Mar 21 13:27:02.185169 systemd[1]: session-18.scope: Deactivated successfully.
Mar 21 13:27:02.187723 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit.
Mar 21 13:27:02.192869 systemd[1]: Started sshd@16-172.24.4.44:22-172.24.4.1:48164.service - OpenSSH per-connection server daemon (172.24.4.1:48164).
Mar 21 13:27:02.200674 systemd-logind[1458]: Removed session 18.
Mar 21 13:27:03.671799 sshd[4121]: Accepted publickey for core from 172.24.4.1 port 48164 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:27:03.674881 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:27:03.686364 systemd-logind[1458]: New session 19 of user core.
Mar 21 13:27:03.695452 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 21 13:27:04.702031 sshd[4124]: Connection closed by 172.24.4.1 port 48164
Mar 21 13:27:04.701674 sshd-session[4121]: pam_unix(sshd:session): session closed for user core
Mar 21 13:27:04.720320 systemd[1]: sshd@16-172.24.4.44:22-172.24.4.1:48164.service: Deactivated successfully.
Mar 21 13:27:04.723802 systemd[1]: session-19.scope: Deactivated successfully.
Mar 21 13:27:04.726246 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit.
Mar 21 13:27:04.730922 systemd[1]: Started sshd@17-172.24.4.44:22-172.24.4.1:36386.service - OpenSSH per-connection server daemon (172.24.4.1:36386).
Mar 21 13:27:04.734410 systemd-logind[1458]: Removed session 19.
Mar 21 13:27:05.933771 sshd[4133]: Accepted publickey for core from 172.24.4.1 port 36386 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:27:05.936983 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:27:05.951168 systemd-logind[1458]: New session 20 of user core.
Mar 21 13:27:05.962480 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 21 13:27:06.681411 sshd[4136]: Connection closed by 172.24.4.1 port 36386
Mar 21 13:27:06.680715 sshd-session[4133]: pam_unix(sshd:session): session closed for user core
Mar 21 13:27:06.685090 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit.
Mar 21 13:27:06.686167 systemd[1]: sshd@17-172.24.4.44:22-172.24.4.1:36386.service: Deactivated successfully.
Mar 21 13:27:06.689227 systemd[1]: session-20.scope: Deactivated successfully.
Mar 21 13:27:06.692495 systemd-logind[1458]: Removed session 20.
Mar 21 13:27:11.705579 systemd[1]: Started sshd@18-172.24.4.44:22-172.24.4.1:36388.service - OpenSSH per-connection server daemon (172.24.4.1:36388).
Mar 21 13:27:12.865325 sshd[4152]: Accepted publickey for core from 172.24.4.1 port 36388 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:27:12.868468 sshd-session[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:27:12.881163 systemd-logind[1458]: New session 21 of user core.
Mar 21 13:27:12.886375 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 21 13:27:13.668827 sshd[4154]: Connection closed by 172.24.4.1 port 36388
Mar 21 13:27:13.670163 sshd-session[4152]: pam_unix(sshd:session): session closed for user core
Mar 21 13:27:13.678779 systemd[1]: sshd@18-172.24.4.44:22-172.24.4.1:36388.service: Deactivated successfully.
Mar 21 13:27:13.687281 systemd[1]: session-21.scope: Deactivated successfully.
Mar 21 13:27:13.690502 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit.
Mar 21 13:27:13.693932 systemd-logind[1458]: Removed session 21.
Mar 21 13:27:18.691166 systemd[1]: Started sshd@19-172.24.4.44:22-172.24.4.1:51450.service - OpenSSH per-connection server daemon (172.24.4.1:51450).
Mar 21 13:27:20.047808 sshd[4166]: Accepted publickey for core from 172.24.4.1 port 51450 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:27:20.050323 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:27:20.063705 systemd-logind[1458]: New session 22 of user core.
Mar 21 13:27:20.069363 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 21 13:27:20.823870 sshd[4168]: Connection closed by 172.24.4.1 port 51450
Mar 21 13:27:20.824962 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Mar 21 13:27:20.832033 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit.
Mar 21 13:27:20.833367 systemd[1]: sshd@19-172.24.4.44:22-172.24.4.1:51450.service: Deactivated successfully.
Mar 21 13:27:20.837882 systemd[1]: session-22.scope: Deactivated successfully.
Mar 21 13:27:20.842941 systemd-logind[1458]: Removed session 22.
Mar 21 13:27:25.846413 systemd[1]: Started sshd@20-172.24.4.44:22-172.24.4.1:45514.service - OpenSSH per-connection server daemon (172.24.4.1:45514).
Mar 21 13:27:27.054614 sshd[4180]: Accepted publickey for core from 172.24.4.1 port 45514 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:27:27.056690 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:27:27.068171 systemd-logind[1458]: New session 23 of user core.
Mar 21 13:27:27.073724 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 21 13:27:27.800768 sshd[4182]: Connection closed by 172.24.4.1 port 45514
Mar 21 13:27:27.803090 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Mar 21 13:27:27.819275 systemd[1]: sshd@20-172.24.4.44:22-172.24.4.1:45514.service: Deactivated successfully.
Mar 21 13:27:27.822992 systemd[1]: session-23.scope: Deactivated successfully.
Mar 21 13:27:27.829471 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit.
Mar 21 13:27:27.831694 systemd[1]: Started sshd@21-172.24.4.44:22-172.24.4.1:45522.service - OpenSSH per-connection server daemon (172.24.4.1:45522).
Mar 21 13:27:27.835679 systemd-logind[1458]: Removed session 23.
Mar 21 13:27:29.133957 sshd[4192]: Accepted publickey for core from 172.24.4.1 port 45522 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:27:29.136809 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:27:29.148788 systemd-logind[1458]: New session 24 of user core.
Mar 21 13:27:29.158348 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 21 13:27:31.137117 kubelet[2676]: I0321 13:27:31.136452 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v4klj" podStartSLOduration=172.136339109 podStartE2EDuration="2m52.136339109s" podCreationTimestamp="2025-03-21 13:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 13:25:06.475012178 +0000 UTC m=+33.430834974" watchObservedRunningTime="2025-03-21 13:27:31.136339109 +0000 UTC m=+178.092161935"
Mar 21 13:27:31.178292 containerd[1479]: time="2025-03-21T13:27:31.177924412Z" level=info msg="StopContainer for \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" with timeout 30 (s)"
Mar 21 13:27:31.179169 containerd[1479]: time="2025-03-21T13:27:31.178654607Z" level=info msg="Stop container \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" with signal terminated"
Mar 21 13:27:31.190076 containerd[1479]: time="2025-03-21T13:27:31.189917797Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 21 13:27:31.197514 containerd[1479]: time="2025-03-21T13:27:31.197467039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" id:\"e84e0db6c3b5c66ee3ed353e6e120d6c1154ae7ac537800f4c3a9b3c612b5261\" pid:4214 exited_at:{seconds:1742563651 nanos:197100374}"
Mar 21 13:27:31.200228 systemd[1]: cri-containerd-550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7.scope: Deactivated successfully.
Mar 21 13:27:31.202287 containerd[1479]: time="2025-03-21T13:27:31.201418802Z" level=info msg="StopContainer for \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" with timeout 2 (s)"
Mar 21 13:27:31.202979 containerd[1479]: time="2025-03-21T13:27:31.202952069Z" level=info msg="Stop container \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" with signal terminated"
Mar 21 13:27:31.205783 containerd[1479]: time="2025-03-21T13:27:31.205747980Z" level=info msg="received exit event container_id:\"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" id:\"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" pid:3268 exited_at:{seconds:1742563651 nanos:205120647}"
Mar 21 13:27:31.206220 containerd[1479]: time="2025-03-21T13:27:31.206141757Z" level=info msg="TaskExit event in podsandbox handler container_id:\"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" id:\"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" pid:3268 exited_at:{seconds:1742563651 nanos:205120647}"
Mar 21 13:27:31.216350 systemd-networkd[1388]: lxc_health: Link DOWN
Mar 21 13:27:31.216357 systemd-networkd[1388]: lxc_health: Lost carrier
Mar 21 13:27:31.239293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7-rootfs.mount: Deactivated successfully.
Mar 21 13:27:31.242882 containerd[1479]: time="2025-03-21T13:27:31.240730039Z" level=info msg="received exit event container_id:\"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" id:\"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" pid:3300 exited_at:{seconds:1742563651 nanos:240079011}" Mar 21 13:27:31.242882 containerd[1479]: time="2025-03-21T13:27:31.241076187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" id:\"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" pid:3300 exited_at:{seconds:1742563651 nanos:240079011}" Mar 21 13:27:31.241401 systemd[1]: cri-containerd-02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8.scope: Deactivated successfully. Mar 21 13:27:31.242394 systemd[1]: cri-containerd-02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8.scope: Consumed 8.742s CPU time, 125.6M memory peak, 144K read from disk, 13.3M written to disk. Mar 21 13:27:31.253532 containerd[1479]: time="2025-03-21T13:27:31.253489488Z" level=info msg="StopContainer for \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" returns successfully" Mar 21 13:27:31.254827 containerd[1479]: time="2025-03-21T13:27:31.254786474Z" level=info msg="StopPodSandbox for \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\"" Mar 21 13:27:31.255087 containerd[1479]: time="2025-03-21T13:27:31.255011365Z" level=info msg="Container to stop \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 13:27:31.268744 systemd[1]: cri-containerd-87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c.scope: Deactivated successfully. 
Mar 21 13:27:31.277075 containerd[1479]: time="2025-03-21T13:27:31.276828457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" id:\"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" pid:2824 exit_status:137 exited_at:{seconds:1742563651 nanos:274272315}" Mar 21 13:27:31.281647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8-rootfs.mount: Deactivated successfully. Mar 21 13:27:31.300072 containerd[1479]: time="2025-03-21T13:27:31.300014410Z" level=info msg="StopContainer for \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" returns successfully" Mar 21 13:27:31.301802 containerd[1479]: time="2025-03-21T13:27:31.301767359Z" level=info msg="StopPodSandbox for \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\"" Mar 21 13:27:31.301901 containerd[1479]: time="2025-03-21T13:27:31.301836860Z" level=info msg="Container to stop \"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 13:27:31.301901 containerd[1479]: time="2025-03-21T13:27:31.301852669Z" level=info msg="Container to stop \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 13:27:31.301901 containerd[1479]: time="2025-03-21T13:27:31.301864671Z" level=info msg="Container to stop \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 13:27:31.301901 containerd[1479]: time="2025-03-21T13:27:31.301875311Z" level=info msg="Container to stop \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 13:27:31.301901 containerd[1479]: 
time="2025-03-21T13:27:31.301885390Z" level=info msg="Container to stop \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 21 13:27:31.318477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c-rootfs.mount: Deactivated successfully. Mar 21 13:27:31.320394 systemd[1]: cri-containerd-dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02.scope: Deactivated successfully. Mar 21 13:27:31.339067 containerd[1479]: time="2025-03-21T13:27:31.338951889Z" level=info msg="shim disconnected" id=87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c namespace=k8s.io Mar 21 13:27:31.339500 containerd[1479]: time="2025-03-21T13:27:31.339038902Z" level=warning msg="cleaning up after shim disconnected" id=87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c namespace=k8s.io Mar 21 13:27:31.339500 containerd[1479]: time="2025-03-21T13:27:31.339414474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 21 13:27:31.352647 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02-rootfs.mount: Deactivated successfully. 
Mar 21 13:27:31.367089 containerd[1479]: time="2025-03-21T13:27:31.365321257Z" level=info msg="TearDown network for sandbox \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" successfully" Mar 21 13:27:31.367089 containerd[1479]: time="2025-03-21T13:27:31.365363986Z" level=info msg="StopPodSandbox for \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" returns successfully" Mar 21 13:27:31.368917 containerd[1479]: time="2025-03-21T13:27:31.367242280Z" level=info msg="received exit event sandbox_id:\"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" exit_status:137 exited_at:{seconds:1742563651 nanos:274272315}" Mar 21 13:27:31.368917 containerd[1479]: time="2025-03-21T13:27:31.367725795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" id:\"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" pid:2781 exit_status:137 exited_at:{seconds:1742563651 nanos:319714172}" Mar 21 13:27:31.369687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c-shm.mount: Deactivated successfully. 
Mar 21 13:27:31.375350 containerd[1479]: time="2025-03-21T13:27:31.375089650Z" level=info msg="shim disconnected" id=dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02 namespace=k8s.io Mar 21 13:27:31.375350 containerd[1479]: time="2025-03-21T13:27:31.375201891Z" level=warning msg="cleaning up after shim disconnected" id=dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02 namespace=k8s.io Mar 21 13:27:31.375350 containerd[1479]: time="2025-03-21T13:27:31.375221678Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 21 13:27:31.376942 containerd[1479]: time="2025-03-21T13:27:31.375685114Z" level=info msg="received exit event sandbox_id:\"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" exit_status:137 exited_at:{seconds:1742563651 nanos:319714172}" Mar 21 13:27:31.378245 containerd[1479]: time="2025-03-21T13:27:31.378162318Z" level=info msg="TearDown network for sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" successfully" Mar 21 13:27:31.378245 containerd[1479]: time="2025-03-21T13:27:31.378191272Z" level=info msg="StopPodSandbox for \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" returns successfully" Mar 21 13:27:31.425457 kubelet[2676]: I0321 13:27:31.425329 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cni-path\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425457 kubelet[2676]: I0321 13:27:31.425386 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-hostproc\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425457 kubelet[2676]: I0321 13:27:31.425416 2676 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fda36d26-036b-4460-9f20-1cf0beea2104-hubble-tls\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425643 kubelet[2676]: I0321 13:27:31.425467 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-cgroup\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425643 kubelet[2676]: I0321 13:27:31.425527 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-config-path\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425643 kubelet[2676]: I0321 13:27:31.425549 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-host-proc-sys-net\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425643 kubelet[2676]: I0321 13:27:31.425569 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-run\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425643 kubelet[2676]: I0321 13:27:31.425586 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-bpf-maps\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " 
Mar 21 13:27:31.425643 kubelet[2676]: I0321 13:27:31.425608 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fda36d26-036b-4460-9f20-1cf0beea2104-clustermesh-secrets\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425873 kubelet[2676]: I0321 13:27:31.425625 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-lib-modules\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425873 kubelet[2676]: I0321 13:27:31.425643 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-host-proc-sys-kernel\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425873 kubelet[2676]: I0321 13:27:31.425664 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-etc-cni-netd\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.425873 kubelet[2676]: I0321 13:27:31.425688 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe53d1c1-d35c-46d5-b696-0ff0ce0dea00-cilium-config-path\") pod \"fe53d1c1-d35c-46d5-b696-0ff0ce0dea00\" (UID: \"fe53d1c1-d35c-46d5-b696-0ff0ce0dea00\") " Mar 21 13:27:31.425873 kubelet[2676]: I0321 13:27:31.425709 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r64ms\" (UniqueName: 
\"kubernetes.io/projected/fe53d1c1-d35c-46d5-b696-0ff0ce0dea00-kube-api-access-r64ms\") pod \"fe53d1c1-d35c-46d5-b696-0ff0ce0dea00\" (UID: \"fe53d1c1-d35c-46d5-b696-0ff0ce0dea00\") " Mar 21 13:27:31.425873 kubelet[2676]: I0321 13:27:31.425736 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h6pl\" (UniqueName: \"kubernetes.io/projected/fda36d26-036b-4460-9f20-1cf0beea2104-kube-api-access-4h6pl\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.426037 kubelet[2676]: I0321 13:27:31.425756 2676 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-xtables-lock\") pod \"fda36d26-036b-4460-9f20-1cf0beea2104\" (UID: \"fda36d26-036b-4460-9f20-1cf0beea2104\") " Mar 21 13:27:31.426037 kubelet[2676]: I0321 13:27:31.425841 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 13:27:31.426037 kubelet[2676]: I0321 13:27:31.425884 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cni-path" (OuterVolumeSpecName: "cni-path") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 13:27:31.426037 kubelet[2676]: I0321 13:27:31.425905 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-hostproc" (OuterVolumeSpecName: "hostproc") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 13:27:31.431922 kubelet[2676]: I0321 13:27:31.430631 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 13:27:31.431922 kubelet[2676]: I0321 13:27:31.431411 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 13:27:31.431922 kubelet[2676]: I0321 13:27:31.431460 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 13:27:31.431922 kubelet[2676]: I0321 13:27:31.431481 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 13:27:31.431922 kubelet[2676]: I0321 13:27:31.431497 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 13:27:31.433081 kubelet[2676]: I0321 13:27:31.432474 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 13:27:31.433081 kubelet[2676]: I0321 13:27:31.432508 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 21 13:27:31.435323 kubelet[2676]: I0321 13:27:31.435275 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 13:27:31.436767 kubelet[2676]: I0321 13:27:31.436745 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda36d26-036b-4460-9f20-1cf0beea2104-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 13:27:31.436961 kubelet[2676]: I0321 13:27:31.436944 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda36d26-036b-4460-9f20-1cf0beea2104-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 21 13:27:31.438533 kubelet[2676]: I0321 13:27:31.438495 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe53d1c1-d35c-46d5-b696-0ff0ce0dea00-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fe53d1c1-d35c-46d5-b696-0ff0ce0dea00" (UID: "fe53d1c1-d35c-46d5-b696-0ff0ce0dea00"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 21 13:27:31.439119 kubelet[2676]: I0321 13:27:31.439061 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda36d26-036b-4460-9f20-1cf0beea2104-kube-api-access-4h6pl" (OuterVolumeSpecName: "kube-api-access-4h6pl") pod "fda36d26-036b-4460-9f20-1cf0beea2104" (UID: "fda36d26-036b-4460-9f20-1cf0beea2104"). InnerVolumeSpecName "kube-api-access-4h6pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 13:27:31.439644 kubelet[2676]: I0321 13:27:31.439604 2676 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe53d1c1-d35c-46d5-b696-0ff0ce0dea00-kube-api-access-r64ms" (OuterVolumeSpecName: "kube-api-access-r64ms") pod "fe53d1c1-d35c-46d5-b696-0ff0ce0dea00" (UID: "fe53d1c1-d35c-46d5-b696-0ff0ce0dea00"). InnerVolumeSpecName "kube-api-access-r64ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 21 13:27:31.526478 kubelet[2676]: I0321 13:27:31.526349 2676 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cni-path\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.526478 kubelet[2676]: I0321 13:27:31.526481 2676 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-hostproc\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.526851 kubelet[2676]: I0321 13:27:31.526559 2676 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fda36d26-036b-4460-9f20-1cf0beea2104-hubble-tls\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.526851 kubelet[2676]: I0321 13:27:31.526590 2676 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-cgroup\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.526851 kubelet[2676]: I0321 13:27:31.526662 2676 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-host-proc-sys-net\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.526851 kubelet[2676]: I0321 13:27:31.526690 2676 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-run\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.526851 kubelet[2676]: I0321 13:27:31.526713 2676 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-bpf-maps\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.526851 kubelet[2676]: I0321 13:27:31.526776 2676 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fda36d26-036b-4460-9f20-1cf0beea2104-cilium-config-path\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.526851 kubelet[2676]: I0321 13:27:31.526800 2676 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-lib-modules\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.527335 kubelet[2676]: I0321 13:27:31.526862 2676 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fda36d26-036b-4460-9f20-1cf0beea2104-clustermesh-secrets\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.527335 kubelet[2676]: I0321 13:27:31.526889 2676 
reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-host-proc-sys-kernel\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.527335 kubelet[2676]: I0321 13:27:31.526951 2676 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe53d1c1-d35c-46d5-b696-0ff0ce0dea00-cilium-config-path\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.527335 kubelet[2676]: I0321 13:27:31.526978 2676 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r64ms\" (UniqueName: \"kubernetes.io/projected/fe53d1c1-d35c-46d5-b696-0ff0ce0dea00-kube-api-access-r64ms\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.527335 kubelet[2676]: I0321 13:27:31.527002 2676 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-etc-cni-netd\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.527335 kubelet[2676]: I0321 13:27:31.527073 2676 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4h6pl\" (UniqueName: \"kubernetes.io/projected/fda36d26-036b-4460-9f20-1cf0beea2104-kube-api-access-4h6pl\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.527335 kubelet[2676]: I0321 13:27:31.527099 2676 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fda36d26-036b-4460-9f20-1cf0beea2104-xtables-lock\") on node \"ci-9999-0-3-0-e42165490f.novalocal\" DevicePath \"\"" Mar 21 13:27:31.880995 kubelet[2676]: I0321 13:27:31.880929 2676 scope.go:117] "RemoveContainer" containerID="02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8" Mar 21 13:27:31.891781 containerd[1479]: 
time="2025-03-21T13:27:31.891597510Z" level=info msg="RemoveContainer for \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\"" Mar 21 13:27:31.907189 systemd[1]: Removed slice kubepods-burstable-podfda36d26_036b_4460_9f20_1cf0beea2104.slice - libcontainer container kubepods-burstable-podfda36d26_036b_4460_9f20_1cf0beea2104.slice. Mar 21 13:27:31.907501 systemd[1]: kubepods-burstable-podfda36d26_036b_4460_9f20_1cf0beea2104.slice: Consumed 8.835s CPU time, 126M memory peak, 144K read from disk, 13.3M written to disk. Mar 21 13:27:31.914758 systemd[1]: Removed slice kubepods-besteffort-podfe53d1c1_d35c_46d5_b696_0ff0ce0dea00.slice - libcontainer container kubepods-besteffort-podfe53d1c1_d35c_46d5_b696_0ff0ce0dea00.slice. Mar 21 13:27:31.921254 containerd[1479]: time="2025-03-21T13:27:31.921194847Z" level=info msg="RemoveContainer for \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" returns successfully" Mar 21 13:27:31.922649 kubelet[2676]: I0321 13:27:31.922561 2676 scope.go:117] "RemoveContainer" containerID="354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47" Mar 21 13:27:31.928211 containerd[1479]: time="2025-03-21T13:27:31.928106246Z" level=info msg="RemoveContainer for \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\"" Mar 21 13:27:31.941884 containerd[1479]: time="2025-03-21T13:27:31.941814760Z" level=info msg="RemoveContainer for \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\" returns successfully" Mar 21 13:27:31.942462 kubelet[2676]: I0321 13:27:31.942193 2676 scope.go:117] "RemoveContainer" containerID="21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009" Mar 21 13:27:31.949253 containerd[1479]: time="2025-03-21T13:27:31.949142748Z" level=info msg="RemoveContainer for \"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\"" Mar 21 13:27:31.957354 containerd[1479]: time="2025-03-21T13:27:31.957215119Z" level=info msg="RemoveContainer for 
\"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\" returns successfully" Mar 21 13:27:31.962121 kubelet[2676]: I0321 13:27:31.958117 2676 scope.go:117] "RemoveContainer" containerID="accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48" Mar 21 13:27:31.965914 containerd[1479]: time="2025-03-21T13:27:31.965874648Z" level=info msg="RemoveContainer for \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\"" Mar 21 13:27:31.972519 containerd[1479]: time="2025-03-21T13:27:31.972460978Z" level=info msg="RemoveContainer for \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\" returns successfully" Mar 21 13:27:31.973357 kubelet[2676]: I0321 13:27:31.972788 2676 scope.go:117] "RemoveContainer" containerID="81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e" Mar 21 13:27:31.975643 containerd[1479]: time="2025-03-21T13:27:31.975600532Z" level=info msg="RemoveContainer for \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\"" Mar 21 13:27:31.980736 containerd[1479]: time="2025-03-21T13:27:31.980350768Z" level=info msg="RemoveContainer for \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\" returns successfully" Mar 21 13:27:31.981131 kubelet[2676]: I0321 13:27:31.981115 2676 scope.go:117] "RemoveContainer" containerID="02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8" Mar 21 13:27:31.982438 containerd[1479]: time="2025-03-21T13:27:31.982398228Z" level=error msg="ContainerStatus for \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\": not found" Mar 21 13:27:31.983099 kubelet[2676]: E0321 13:27:31.982767 2676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\": not found" containerID="02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8"
Mar 21 13:27:31.983099 kubelet[2676]: I0321 13:27:31.982797 2676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8"} err="failed to get container status \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"02e9991790eefa683ec81e265b77656740f82f4345072db1c8148ebaa38072a8\": not found"
Mar 21 13:27:31.983099 kubelet[2676]: I0321 13:27:31.982872 2676 scope.go:117] "RemoveContainer" containerID="354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47"
Mar 21 13:27:31.983613 kubelet[2676]: E0321 13:27:31.983428 2676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\": not found" containerID="354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47"
Mar 21 13:27:31.983613 kubelet[2676]: I0321 13:27:31.983455 2676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47"} err="failed to get container status \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\": rpc error: code = NotFound desc = an error occurred when try to find container \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\": not found"
Mar 21 13:27:31.983695 containerd[1479]: time="2025-03-21T13:27:31.983221177Z" level=error msg="ContainerStatus for \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"354ac8e9ef58d36779288cafc2de0caf74d74d042850730c0af2b86ecd300c47\": not found"
Mar 21 13:27:31.986061 kubelet[2676]: I0321 13:27:31.983471 2676 scope.go:117] "RemoveContainer" containerID="21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009"
Mar 21 13:27:31.986061 kubelet[2676]: E0321 13:27:31.985150 2676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\": not found" containerID="21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009"
Mar 21 13:27:31.986061 kubelet[2676]: I0321 13:27:31.985176 2676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009"} err="failed to get container status \"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\": rpc error: code = NotFound desc = an error occurred when try to find container \"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\": not found"
Mar 21 13:27:31.986061 kubelet[2676]: I0321 13:27:31.985197 2676 scope.go:117] "RemoveContainer" containerID="accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48"
Mar 21 13:27:31.986061 kubelet[2676]: E0321 13:27:31.985659 2676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\": not found" containerID="accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48"
Mar 21 13:27:31.986061 kubelet[2676]: I0321 13:27:31.985720 2676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48"} err="failed to get container status \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\": rpc error: code = NotFound desc = an error occurred when try to find container \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\": not found"
Mar 21 13:27:31.986061 kubelet[2676]: I0321 13:27:31.985738 2676 scope.go:117] "RemoveContainer" containerID="81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e"
Mar 21 13:27:31.986278 containerd[1479]: time="2025-03-21T13:27:31.984856476Z" level=error msg="ContainerStatus for \"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21b01d41bbda9c073c222df7e1cec66793acf633016f02d8b0100e3bd80f9009\": not found"
Mar 21 13:27:31.986278 containerd[1479]: time="2025-03-21T13:27:31.985560043Z" level=error msg="ContainerStatus for \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"accf5fdfbd7bd7b98352d5c1a292205f06d0c54539869c96f27cde2fee969b48\": not found"
Mar 21 13:27:31.986334 containerd[1479]: time="2025-03-21T13:27:31.986288976Z" level=error msg="ContainerStatus for \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\": not found"
Mar 21 13:27:31.986672 kubelet[2676]: E0321 13:27:31.986499 2676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\": not found" containerID="81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e"
Mar 21 13:27:31.986717 kubelet[2676]: I0321 13:27:31.986686 2676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e"} err="failed to get container status \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\": rpc error: code = NotFound desc = an error occurred when try to find container \"81c1fb7e72798441375c410f1ce4f4a1424fb4989d04d5f6edfcaed5432d653e\": not found"
Mar 21 13:27:31.986879 kubelet[2676]: I0321 13:27:31.986846 2676 scope.go:117] "RemoveContainer" containerID="550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7"
Mar 21 13:27:31.989814 containerd[1479]: time="2025-03-21T13:27:31.989742676Z" level=info msg="RemoveContainer for \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\""
Mar 21 13:27:31.995036 containerd[1479]: time="2025-03-21T13:27:31.994990463Z" level=info msg="RemoveContainer for \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" returns successfully"
Mar 21 13:27:31.995312 kubelet[2676]: I0321 13:27:31.995271 2676 scope.go:117] "RemoveContainer" containerID="550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7"
Mar 21 13:27:31.995777 containerd[1479]: time="2025-03-21T13:27:31.995722452Z" level=error msg="ContainerStatus for \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\": not found"
Mar 21 13:27:31.995975 kubelet[2676]: E0321 13:27:31.995910 2676 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\": not found" containerID="550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7"
Mar 21 13:27:31.995975 kubelet[2676]: I0321 13:27:31.995950 2676 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7"} err="failed to get container status \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"550a662354eb19e82923c07593c774788f431429d6bc2ecc1cf3ec0bae30c9e7\": not found"
Mar 21 13:27:32.236753 systemd[1]: var-lib-kubelet-pods-fe53d1c1\x2dd35c\x2d46d5\x2db696\x2d0ff0ce0dea00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr64ms.mount: Deactivated successfully.
Mar 21 13:27:32.237035 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02-shm.mount: Deactivated successfully.
Mar 21 13:27:32.237280 systemd[1]: var-lib-kubelet-pods-fda36d26\x2d036b\x2d4460\x2d9f20\x2d1cf0beea2104-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4h6pl.mount: Deactivated successfully.
Mar 21 13:27:32.237465 systemd[1]: var-lib-kubelet-pods-fda36d26\x2d036b\x2d4460\x2d9f20\x2d1cf0beea2104-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 21 13:27:32.239193 systemd[1]: var-lib-kubelet-pods-fda36d26\x2d036b\x2d4460\x2d9f20\x2d1cf0beea2104-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 21 13:27:33.157560 kubelet[2676]: I0321 13:27:33.157463 2676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda36d26-036b-4460-9f20-1cf0beea2104" path="/var/lib/kubelet/pods/fda36d26-036b-4460-9f20-1cf0beea2104/volumes"
Mar 21 13:27:33.159500 kubelet[2676]: I0321 13:27:33.159305 2676 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe53d1c1-d35c-46d5-b696-0ff0ce0dea00" path="/var/lib/kubelet/pods/fe53d1c1-d35c-46d5-b696-0ff0ce0dea00/volumes"
Mar 21 13:27:33.164809 containerd[1479]: time="2025-03-21T13:27:33.164138333Z" level=info msg="StopPodSandbox for \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\""
Mar 21 13:27:33.164809 containerd[1479]: time="2025-03-21T13:27:33.164392819Z" level=info msg="TearDown network for sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" successfully"
Mar 21 13:27:33.164809 containerd[1479]: time="2025-03-21T13:27:33.164425941Z" level=info msg="StopPodSandbox for \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" returns successfully"
Mar 21 13:27:33.166521 containerd[1479]: time="2025-03-21T13:27:33.165414079Z" level=info msg="RemovePodSandbox for \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\""
Mar 21 13:27:33.166521 containerd[1479]: time="2025-03-21T13:27:33.165463322Z" level=info msg="Forcibly stopping sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\""
Mar 21 13:27:33.166521 containerd[1479]: time="2025-03-21T13:27:33.165602902Z" level=info msg="TearDown network for sandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" successfully"
Mar 21 13:27:33.169578 containerd[1479]: time="2025-03-21T13:27:33.168984729Z" level=info msg="Ensure that sandbox dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02 in task-service has been cleanup successfully"
Mar 21 13:27:33.174495 containerd[1479]: time="2025-03-21T13:27:33.174238347Z" level=info msg="RemovePodSandbox \"dc45670fa21f0976073b6463aa621b80002ca536931075b715899810ecb6ef02\" returns successfully"
Mar 21 13:27:33.175093 containerd[1479]: time="2025-03-21T13:27:33.174998349Z" level=info msg="StopPodSandbox for \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\""
Mar 21 13:27:33.175717 containerd[1479]: time="2025-03-21T13:27:33.175564047Z" level=info msg="TearDown network for sandbox \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" successfully"
Mar 21 13:27:33.175717 containerd[1479]: time="2025-03-21T13:27:33.175607408Z" level=info msg="StopPodSandbox for \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" returns successfully"
Mar 21 13:27:33.178146 containerd[1479]: time="2025-03-21T13:27:33.176801151Z" level=info msg="RemovePodSandbox for \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\""
Mar 21 13:27:33.178146 containerd[1479]: time="2025-03-21T13:27:33.176857737Z" level=info msg="Forcibly stopping sandbox \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\""
Mar 21 13:27:33.178146 containerd[1479]: time="2025-03-21T13:27:33.176997969Z" level=info msg="TearDown network for sandbox \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" successfully"
Mar 21 13:27:33.179450 containerd[1479]: time="2025-03-21T13:27:33.179402558Z" level=info msg="Ensure that sandbox 87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c in task-service has been cleanup successfully"
Mar 21 13:27:33.184384 containerd[1479]: time="2025-03-21T13:27:33.184334544Z" level=info msg="RemovePodSandbox \"87cc456671b42fbdd61f08f609cb584f56212341f402759525047196bf1a493c\" returns successfully"
Mar 21 13:27:33.262100 sshd[4195]: Connection closed by 172.24.4.1 port 45522
Mar 21 13:27:33.265253 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
Mar 21 13:27:33.287220 systemd[1]: sshd@21-172.24.4.44:22-172.24.4.1:45522.service: Deactivated successfully.
Mar 21 13:27:33.291793 kubelet[2676]: E0321 13:27:33.291701 2676 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 21 13:27:33.293011 systemd[1]: session-24.scope: Deactivated successfully.
Mar 21 13:27:33.293744 systemd[1]: session-24.scope: Consumed 1.105s CPU time, 22.1M memory peak.
Mar 21 13:27:33.297683 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit.
Mar 21 13:27:33.301269 systemd[1]: Started sshd@22-172.24.4.44:22-172.24.4.1:45528.service - OpenSSH per-connection server daemon (172.24.4.1:45528).
Mar 21 13:27:33.304545 systemd-logind[1458]: Removed session 24.
Mar 21 13:27:34.443806 sshd[4347]: Accepted publickey for core from 172.24.4.1 port 45528 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:27:34.446502 sshd-session[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:27:34.459139 systemd-logind[1458]: New session 25 of user core.
Mar 21 13:27:34.464358 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 21 13:27:35.606030 kubelet[2676]: E0321 13:27:35.605985 2676 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fda36d26-036b-4460-9f20-1cf0beea2104" containerName="mount-bpf-fs"
Mar 21 13:27:35.606030 kubelet[2676]: E0321 13:27:35.606014 2676 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fe53d1c1-d35c-46d5-b696-0ff0ce0dea00" containerName="cilium-operator"
Mar 21 13:27:35.606030 kubelet[2676]: E0321 13:27:35.606022 2676 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fda36d26-036b-4460-9f20-1cf0beea2104" containerName="cilium-agent"
Mar 21 13:27:35.606030 kubelet[2676]: E0321 13:27:35.606032 2676 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fda36d26-036b-4460-9f20-1cf0beea2104" containerName="mount-cgroup"
Mar 21 13:27:35.606030 kubelet[2676]: E0321 13:27:35.606038 2676 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fda36d26-036b-4460-9f20-1cf0beea2104" containerName="clean-cilium-state"
Mar 21 13:27:35.606579 kubelet[2676]: E0321 13:27:35.606086 2676 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fda36d26-036b-4460-9f20-1cf0beea2104" containerName="apply-sysctl-overwrites"
Mar 21 13:27:35.606579 kubelet[2676]: I0321 13:27:35.606119 2676 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe53d1c1-d35c-46d5-b696-0ff0ce0dea00" containerName="cilium-operator"
Mar 21 13:27:35.606579 kubelet[2676]: I0321 13:27:35.606132 2676 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda36d26-036b-4460-9f20-1cf0beea2104" containerName="cilium-agent"
Mar 21 13:27:35.616982 systemd[1]: Created slice kubepods-burstable-pod2eb11b5d_0e9a_484c_908d_e6c2d1d57aaf.slice - libcontainer container kubepods-burstable-pod2eb11b5d_0e9a_484c_908d_e6c2d1d57aaf.slice.
Mar 21 13:27:35.656927 kubelet[2676]: I0321 13:27:35.656884 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xwrz\" (UniqueName: \"kubernetes.io/projected/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-kube-api-access-4xwrz\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657176 kubelet[2676]: I0321 13:27:35.656944 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-hubble-tls\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657176 kubelet[2676]: I0321 13:27:35.656981 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-xtables-lock\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657176 kubelet[2676]: I0321 13:27:35.657010 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-cilium-ipsec-secrets\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657176 kubelet[2676]: I0321 13:27:35.657063 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-cilium-cgroup\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657176 kubelet[2676]: I0321 13:27:35.657093 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-lib-modules\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657176 kubelet[2676]: I0321 13:27:35.657129 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-host-proc-sys-kernel\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657400 kubelet[2676]: I0321 13:27:35.657158 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-bpf-maps\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657400 kubelet[2676]: I0321 13:27:35.657184 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-clustermesh-secrets\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657400 kubelet[2676]: I0321 13:27:35.657218 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-etc-cni-netd\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657400 kubelet[2676]: I0321 13:27:35.657256 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-cilium-config-path\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657400 kubelet[2676]: I0321 13:27:35.657295 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-host-proc-sys-net\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657400 kubelet[2676]: I0321 13:27:35.657325 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-cni-path\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657556 kubelet[2676]: I0321 13:27:35.657350 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-cilium-run\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.657556 kubelet[2676]: I0321 13:27:35.657380 2676 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf-hostproc\") pod \"cilium-hxq2d\" (UID: \"2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf\") " pod="kube-system/cilium-hxq2d"
Mar 21 13:27:35.903336 sshd[4350]: Connection closed by 172.24.4.1 port 45528
Mar 21 13:27:35.903260 sshd-session[4347]: pam_unix(sshd:session): session closed for user core
Mar 21 13:27:35.912895 systemd[1]: sshd@22-172.24.4.44:22-172.24.4.1:45528.service: Deactivated successfully.
Mar 21 13:27:35.914584 systemd[1]: session-25.scope: Deactivated successfully.
Mar 21 13:27:35.915569 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit.
Mar 21 13:27:35.917588 systemd[1]: Started sshd@23-172.24.4.44:22-172.24.4.1:39692.service - OpenSSH per-connection server daemon (172.24.4.1:39692).
Mar 21 13:27:35.919402 systemd-logind[1458]: Removed session 25.
Mar 21 13:27:35.923213 containerd[1479]: time="2025-03-21T13:27:35.923183370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hxq2d,Uid:2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf,Namespace:kube-system,Attempt:0,}"
Mar 21 13:27:35.954156 containerd[1479]: time="2025-03-21T13:27:35.954105486Z" level=info msg="connecting to shim 20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00" address="unix:///run/containerd/s/c111bd6cf9b56970c6d1533073ac14b6d3a6ac9109c1a0211d24d78172e4f94d" namespace=k8s.io protocol=ttrpc version=3
Mar 21 13:27:35.984210 systemd[1]: Started cri-containerd-20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00.scope - libcontainer container 20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00.
Mar 21 13:27:36.008383 containerd[1479]: time="2025-03-21T13:27:36.008350394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hxq2d,Uid:2eb11b5d-0e9a-484c-908d-e6c2d1d57aaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\""
Mar 21 13:27:36.011521 containerd[1479]: time="2025-03-21T13:27:36.011495939Z" level=info msg="CreateContainer within sandbox \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 21 13:27:36.018961 containerd[1479]: time="2025-03-21T13:27:36.018926961Z" level=info msg="Container e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b: CDI devices from CRI Config.CDIDevices: []"
Mar 21 13:27:36.032479 containerd[1479]: time="2025-03-21T13:27:36.032375599Z" level=info msg="CreateContainer within sandbox \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b\""
Mar 21 13:27:36.033996 containerd[1479]: time="2025-03-21T13:27:36.033892737Z" level=info msg="StartContainer for \"e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b\""
Mar 21 13:27:36.035095 containerd[1479]: time="2025-03-21T13:27:36.034991082Z" level=info msg="connecting to shim e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b" address="unix:///run/containerd/s/c111bd6cf9b56970c6d1533073ac14b6d3a6ac9109c1a0211d24d78172e4f94d" protocol=ttrpc version=3
Mar 21 13:27:36.061209 systemd[1]: Started cri-containerd-e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b.scope - libcontainer container e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b.
Mar 21 13:27:36.103529 containerd[1479]: time="2025-03-21T13:27:36.102441484Z" level=info msg="StartContainer for \"e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b\" returns successfully"
Mar 21 13:27:36.110675 systemd[1]: cri-containerd-e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b.scope: Deactivated successfully.
Mar 21 13:27:36.114434 containerd[1479]: time="2025-03-21T13:27:36.114395316Z" level=info msg="received exit event container_id:\"e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b\" id:\"e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b\" pid:4423 exited_at:{seconds:1742563656 nanos:113429380}"
Mar 21 13:27:36.114647 containerd[1479]: time="2025-03-21T13:27:36.114486457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b\" id:\"e45029ca22cbaa16c8ae035c2879ee925113fdc78284055cea4cca182ae3d45b\" pid:4423 exited_at:{seconds:1742563656 nanos:113429380}"
Mar 21 13:27:36.922392 containerd[1479]: time="2025-03-21T13:27:36.921585918Z" level=info msg="CreateContainer within sandbox \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 21 13:27:36.939389 containerd[1479]: time="2025-03-21T13:27:36.939302412Z" level=info msg="Container 850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f: CDI devices from CRI Config.CDIDevices: []"
Mar 21 13:27:36.963408 containerd[1479]: time="2025-03-21T13:27:36.963321204Z" level=info msg="CreateContainer within sandbox \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f\""
Mar 21 13:27:36.968122 containerd[1479]: time="2025-03-21T13:27:36.966335774Z" level=info msg="StartContainer for \"850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f\""
Mar 21 13:27:36.968826 containerd[1479]: time="2025-03-21T13:27:36.968693083Z" level=info msg="connecting to shim 850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f" address="unix:///run/containerd/s/c111bd6cf9b56970c6d1533073ac14b6d3a6ac9109c1a0211d24d78172e4f94d" protocol=ttrpc version=3
Mar 21 13:27:36.999162 kubelet[2676]: I0321 13:27:36.999019 2676 setters.go:600] "Node became not ready" node="ci-9999-0-3-0-e42165490f.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-21T13:27:36Z","lastTransitionTime":"2025-03-21T13:27:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 21 13:27:37.014292 systemd[1]: Started cri-containerd-850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f.scope - libcontainer container 850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f.
Mar 21 13:27:37.052900 containerd[1479]: time="2025-03-21T13:27:37.052859077Z" level=info msg="StartContainer for \"850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f\" returns successfully"
Mar 21 13:27:37.054455 systemd[1]: cri-containerd-850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f.scope: Deactivated successfully.
Mar 21 13:27:37.057335 containerd[1479]: time="2025-03-21T13:27:37.057040089Z" level=info msg="received exit event container_id:\"850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f\" id:\"850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f\" pid:4467 exited_at:{seconds:1742563657 nanos:56656511}"
Mar 21 13:27:37.057615 containerd[1479]: time="2025-03-21T13:27:37.057483889Z" level=info msg="TaskExit event in podsandbox handler container_id:\"850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f\" id:\"850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f\" pid:4467 exited_at:{seconds:1742563657 nanos:56656511}"
Mar 21 13:27:37.079436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-850e7d6711e89afeffaf8d9a01789eb71c3b0ba42b08a0fe80e1a80dbf25e18f-rootfs.mount: Deactivated successfully.
Mar 21 13:27:37.410275 sshd[4365]: Accepted publickey for core from 172.24.4.1 port 39692 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:27:37.413184 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:27:37.423252 systemd-logind[1458]: New session 26 of user core.
Mar 21 13:27:37.429359 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 21 13:27:37.928113 containerd[1479]: time="2025-03-21T13:27:37.927447085Z" level=info msg="CreateContainer within sandbox \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 21 13:27:37.964095 containerd[1479]: time="2025-03-21T13:27:37.961303061Z" level=info msg="Container bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923: CDI devices from CRI Config.CDIDevices: []"
Mar 21 13:27:37.999001 containerd[1479]: time="2025-03-21T13:27:37.998929641Z" level=info msg="CreateContainer within sandbox \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923\""
Mar 21 13:27:38.003260 containerd[1479]: time="2025-03-21T13:27:38.003128646Z" level=info msg="StartContainer for \"bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923\""
Mar 21 13:27:38.009830 containerd[1479]: time="2025-03-21T13:27:38.009754461Z" level=info msg="connecting to shim bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923" address="unix:///run/containerd/s/c111bd6cf9b56970c6d1533073ac14b6d3a6ac9109c1a0211d24d78172e4f94d" protocol=ttrpc version=3
Mar 21 13:27:38.017151 sshd[4498]: Connection closed by 172.24.4.1 port 39692
Mar 21 13:27:38.017787 sshd-session[4365]: pam_unix(sshd:session): session closed for user core
Mar 21 13:27:38.030641 systemd[1]: sshd@23-172.24.4.44:22-172.24.4.1:39692.service: Deactivated successfully.
Mar 21 13:27:38.034361 systemd[1]: session-26.scope: Deactivated successfully.
Mar 21 13:27:38.035867 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit.
Mar 21 13:27:38.043221 systemd[1]: Started cri-containerd-bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923.scope - libcontainer container bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923.
Mar 21 13:27:38.044829 systemd[1]: Started sshd@24-172.24.4.44:22-172.24.4.1:39704.service - OpenSSH per-connection server daemon (172.24.4.1:39704).
Mar 21 13:27:38.047881 systemd-logind[1458]: Removed session 26.
Mar 21 13:27:38.088931 systemd[1]: cri-containerd-bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923.scope: Deactivated successfully.
Mar 21 13:27:38.093577 containerd[1479]: time="2025-03-21T13:27:38.093533592Z" level=info msg="received exit event container_id:\"bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923\" id:\"bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923\" pid:4518 exited_at:{seconds:1742563658 nanos:93250422}"
Mar 21 13:27:38.094305 containerd[1479]: time="2025-03-21T13:27:38.094272614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923\" id:\"bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923\" pid:4518 exited_at:{seconds:1742563658 nanos:93250422}"
Mar 21 13:27:38.096334 containerd[1479]: time="2025-03-21T13:27:38.096308514Z" level=info msg="StartContainer for \"bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923\" returns successfully"
Mar 21 13:27:38.122765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc51f12aeb1e0151601a020da383a0c49e423665334fd6bbf8f349af62511923-rootfs.mount: Deactivated successfully.
Mar 21 13:27:38.152732 kubelet[2676]: E0321 13:27:38.152683 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-v4klj" podUID="add17029-8b81-4c78-a99d-c3074ea12388"
Mar 21 13:27:38.294405 kubelet[2676]: E0321 13:27:38.293987 2676 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 21 13:27:38.946680 containerd[1479]: time="2025-03-21T13:27:38.946353550Z" level=info msg="CreateContainer within sandbox \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 21 13:27:38.968610 containerd[1479]: time="2025-03-21T13:27:38.968292442Z" level=info msg="Container 1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a: CDI devices from CRI Config.CDIDevices: []"
Mar 21 13:27:38.999703 containerd[1479]: time="2025-03-21T13:27:38.999525161Z" level=info msg="CreateContainer within sandbox \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a\""
Mar 21 13:27:39.002756 containerd[1479]: time="2025-03-21T13:27:39.002677899Z" level=info msg="StartContainer for \"1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a\""
Mar 21 13:27:39.005062 containerd[1479]: time="2025-03-21T13:27:39.004988312Z" level=info msg="connecting to shim 1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a" address="unix:///run/containerd/s/c111bd6cf9b56970c6d1533073ac14b6d3a6ac9109c1a0211d24d78172e4f94d" protocol=ttrpc version=3
Mar 21 13:27:39.033196 systemd[1]: Started cri-containerd-1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a.scope - libcontainer container 1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a.
Mar 21 13:27:39.065285 systemd[1]: cri-containerd-1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a.scope: Deactivated successfully.
Mar 21 13:27:39.066946 containerd[1479]: time="2025-03-21T13:27:39.066892601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a\" id:\"1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a\" pid:4558 exited_at:{seconds:1742563659 nanos:66330940}"
Mar 21 13:27:39.069304 containerd[1479]: time="2025-03-21T13:27:39.069182374Z" level=info msg="received exit event container_id:\"1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a\" id:\"1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a\" pid:4558 exited_at:{seconds:1742563659 nanos:66330940}"
Mar 21 13:27:39.078567 containerd[1479]: time="2025-03-21T13:27:39.078367568Z" level=info msg="StartContainer for \"1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a\" returns successfully"
Mar 21 13:27:39.097400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b46814ef02af6994f729e2b06f50024be453213ee032c1fd20b0d7116bbe28a-rootfs.mount: Deactivated successfully.
Mar 21 13:27:39.155562 kubelet[2676]: E0321 13:27:39.153618 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-cfx6b" podUID="3e427c2c-85d8-4c40-8412-3a03669f20fd"
Mar 21 13:27:39.240193 sshd[4516]: Accepted publickey for core from 172.24.4.1 port 39704 ssh2: RSA SHA256:ANmj2OjS2Xp1ZpeGOmqKkJesIrogOd1e6RUrzk4ButI
Mar 21 13:27:39.242577 sshd-session[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 13:27:39.263454 systemd-logind[1458]: New session 27 of user core.
Mar 21 13:27:39.267772 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 21 13:27:39.960470 containerd[1479]: time="2025-03-21T13:27:39.958852231Z" level=info msg="CreateContainer within sandbox \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 21 13:27:39.984648 containerd[1479]: time="2025-03-21T13:27:39.984582115Z" level=info msg="Container 4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c: CDI devices from CRI Config.CDIDevices: []"
Mar 21 13:27:40.019966 containerd[1479]: time="2025-03-21T13:27:40.019706766Z" level=info msg="CreateContainer within sandbox \"20b01f5bc8b9cc3d02f226c24bb77b7455f5e9968cf0d80a03146571e9011e00\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c\""
Mar 21 13:27:40.020554 containerd[1479]: time="2025-03-21T13:27:40.020446170Z" level=info msg="StartContainer for \"4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c\""
Mar 21 13:27:40.022755 containerd[1479]: time="2025-03-21T13:27:40.022700016Z" level=info msg="connecting to shim 4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c" address="unix:///run/containerd/s/c111bd6cf9b56970c6d1533073ac14b6d3a6ac9109c1a0211d24d78172e4f94d" protocol=ttrpc version=3
Mar 21 13:27:40.059281 systemd[1]: Started cri-containerd-4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c.scope - libcontainer container 4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c.
Mar 21 13:27:40.100431 containerd[1479]: time="2025-03-21T13:27:40.100329384Z" level=info msg="StartContainer for \"4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c\" returns successfully"
Mar 21 13:27:40.152466 kubelet[2676]: E0321 13:27:40.152173 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-v4klj" podUID="add17029-8b81-4c78-a99d-c3074ea12388"
Mar 21 13:27:40.172874 containerd[1479]: time="2025-03-21T13:27:40.172741872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c\" id:\"0bd31cc7efc278072a6da9fb3c0e68cd9154289eb25cf5a359953294c076da59\" pid:4630 exited_at:{seconds:1742563660 nanos:172236999}"
Mar 21 13:27:40.532122 kernel: cryptd: max_cpu_qlen set to 1000
Mar 21 13:27:40.583211 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Mar 21 13:27:41.037219 kubelet[2676]: I0321 13:27:41.037061 2676 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hxq2d" podStartSLOduration=6.036966709 podStartE2EDuration="6.036966709s" podCreationTimestamp="2025-03-21 13:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 13:27:41.036009548 +0000 UTC m=+187.991832395" watchObservedRunningTime="2025-03-21 13:27:41.036966709
+0000 UTC m=+187.992789515" Mar 21 13:27:41.154468 kubelet[2676]: E0321 13:27:41.152389 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-cfx6b" podUID="3e427c2c-85d8-4c40-8412-3a03669f20fd" Mar 21 13:27:41.839617 containerd[1479]: time="2025-03-21T13:27:41.839558531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c\" id:\"e86f139724ada412855dcd91aff229c85c92e7363e973e1faf7a7bffa8f24c42\" pid:4763 exit_status:1 exited_at:{seconds:1742563661 nanos:839239445}" Mar 21 13:27:42.152311 kubelet[2676]: E0321 13:27:42.152243 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-v4klj" podUID="add17029-8b81-4c78-a99d-c3074ea12388" Mar 21 13:27:43.152608 kubelet[2676]: E0321 13:27:43.152249 2676 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-cfx6b" podUID="3e427c2c-85d8-4c40-8412-3a03669f20fd" Mar 21 13:27:44.015263 systemd-networkd[1388]: lxc_health: Link UP Mar 21 13:27:44.017531 systemd-networkd[1388]: lxc_health: Gained carrier Mar 21 13:27:44.111090 containerd[1479]: time="2025-03-21T13:27:44.110919687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c\" id:\"4d66436927dac008915ddd6b0a2101142f6793dcf4e2a15bb583ba94b33250ea\" pid:5182 exit_status:1 
exited_at:{seconds:1742563664 nanos:108617809}" Mar 21 13:27:46.015222 systemd-networkd[1388]: lxc_health: Gained IPv6LL Mar 21 13:27:46.260572 containerd[1479]: time="2025-03-21T13:27:46.260501729Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c\" id:\"dde8142b253c9ee9709fb054b97b2da3e6ad32e58c88278f701eda0f5e1d9511\" pid:5238 exited_at:{seconds:1742563666 nanos:256784615}" Mar 21 13:27:48.452350 containerd[1479]: time="2025-03-21T13:27:48.451980285Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c\" id:\"c8c1fd94314bcc5fe7edbc4822c6d09744e83a064e0539a29539b202f2186d08\" pid:5263 exited_at:{seconds:1742563668 nanos:451655476}" Mar 21 13:27:50.655709 containerd[1479]: time="2025-03-21T13:27:50.655573051Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4adfb9de1d90bc92828056892b8597c0e7f50198d594ee24fa527c802ebc187c\" id:\"a9d89705d2b7a6f3501ccb66c96f2cb7cb9678fb3ad64f6cbce80b5cdd4ff4c0\" pid:5293 exited_at:{seconds:1742563670 nanos:655233726}" Mar 21 13:27:50.839193 sshd[4582]: Connection closed by 172.24.4.1 port 39704 Mar 21 13:27:50.840153 sshd-session[4516]: pam_unix(sshd:session): session closed for user core Mar 21 13:27:50.847949 systemd[1]: sshd@24-172.24.4.44:22-172.24.4.1:39704.service: Deactivated successfully. Mar 21 13:27:50.853496 systemd[1]: session-27.scope: Deactivated successfully. Mar 21 13:27:50.855735 systemd-logind[1458]: Session 27 logged out. Waiting for processes to exit. Mar 21 13:27:50.858074 systemd-logind[1458]: Removed session 27.