Mar 25 01:58:35.952134 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025 Mar 25 01:58:35.952165 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:58:35.952178 kernel: BIOS-provided physical RAM map: Mar 25 01:58:35.952187 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 25 01:58:35.952196 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 25 01:58:35.952207 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 25 01:58:35.952218 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdcfff] usable Mar 25 01:58:35.952227 kernel: BIOS-e820: [mem 0x00000000bffdd000-0x00000000bfffffff] reserved Mar 25 01:58:35.952237 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 25 01:58:35.952246 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 25 01:58:35.952255 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000013fffffff] usable Mar 25 01:58:35.952264 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 25 01:58:35.952273 kernel: NX (Execute Disable) protection: active Mar 25 01:58:35.952283 kernel: APIC: Static calls initialized Mar 25 01:58:35.952296 kernel: SMBIOS 3.0.0 present. Mar 25 01:58:35.952306 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.16.3-debian-1.16.3-2 04/01/2014 Mar 25 01:58:35.952315 kernel: Hypervisor detected: KVM Mar 25 01:58:35.952325 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 25 01:58:35.952335 kernel: kvm-clock: using sched offset of 3630522260 cycles Mar 25 01:58:35.952345 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 25 01:58:35.952357 kernel: tsc: Detected 1996.249 MHz processor Mar 25 01:58:35.952367 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 25 01:58:35.952378 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 25 01:58:35.952388 kernel: last_pfn = 0x140000 max_arch_pfn = 0x400000000 Mar 25 01:58:35.952398 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 25 01:58:35.952408 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 25 01:58:35.952418 kernel: last_pfn = 0xbffdd max_arch_pfn = 0x400000000 Mar 25 01:58:35.953588 kernel: ACPI: Early table checksum verification disabled Mar 25 01:58:35.953602 kernel: ACPI: RSDP 0x00000000000F51E0 000014 (v00 BOCHS ) Mar 25 01:58:35.953611 kernel: ACPI: RSDT 0x00000000BFFE1B65 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 25 01:58:35.953621 kernel: ACPI: FACP 0x00000000BFFE1A49 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 25 01:58:35.953630 kernel: ACPI: DSDT 0x00000000BFFE0040 001A09 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 25 01:58:35.953638 kernel: ACPI: FACS 0x00000000BFFE0000 000040 Mar 25 01:58:35.953647 kernel: ACPI: APIC 0x00000000BFFE1ABD 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 25 01:58:35.953656 kernel: ACPI: WAET 0x00000000BFFE1B3D 000028 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Mar 25 01:58:35.953665 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1a49-0xbffe1abc] Mar 25 01:58:35.953675 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffe0040-0xbffe1a48] Mar 25 01:58:35.953686 kernel: ACPI: Reserving FACS table memory at [mem 0xbffe0000-0xbffe003f] Mar 25 01:58:35.953694 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe1abd-0xbffe1b3c] Mar 25 01:58:35.953703 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1b3d-0xbffe1b64] Mar 25 01:58:35.953716 kernel: No NUMA configuration found Mar 25 01:58:35.953725 kernel: Faking a node at [mem 0x0000000000000000-0x000000013fffffff] Mar 25 01:58:35.953735 kernel: NODE_DATA(0) allocated [mem 0x13fffa000-0x13fffffff] Mar 25 01:58:35.953744 kernel: Zone ranges: Mar 25 01:58:35.953755 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 25 01:58:35.953765 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 25 01:58:35.953774 kernel: Normal [mem 0x0000000100000000-0x000000013fffffff] Mar 25 01:58:35.953783 kernel: Movable zone start for each node Mar 25 01:58:35.953792 kernel: Early memory node ranges Mar 25 01:58:35.953801 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 25 01:58:35.953810 kernel: node 0: [mem 0x0000000000100000-0x00000000bffdcfff] Mar 25 01:58:35.953819 kernel: node 0: [mem 0x0000000100000000-0x000000013fffffff] Mar 25 01:58:35.953831 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000013fffffff] Mar 25 01:58:35.953840 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 25 01:58:35.953850 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 25 01:58:35.953859 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Mar 25 01:58:35.953868 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 25 01:58:35.953878 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 25 01:58:35.953887 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 25 01:58:35.953897 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 25 01:58:35.953906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 25 01:58:35.953917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 25 01:58:35.953927 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 25 01:58:35.953936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 25 01:58:35.953945 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 25 01:58:35.953954 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 25 01:58:35.953964 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 25 01:58:35.953973 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Mar 25 01:58:35.953982 kernel: Booting paravirtualized kernel on KVM Mar 25 01:58:35.953992 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 25 01:58:35.954003 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 25 01:58:35.954013 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Mar 25 01:58:35.954022 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Mar 25 01:58:35.954031 kernel: pcpu-alloc: [0] 0 1 Mar 25 01:58:35.954040 kernel: kvm-guest: PV spinlocks disabled, no host support Mar 25 01:58:35.954051 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:58:35.954061 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 25 01:58:35.954070 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 25 01:58:35.954082 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 25 01:58:35.954091 kernel: Fallback order for Node 0: 0 Mar 25 01:58:35.954101 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Mar 25 01:58:35.954110 kernel: Policy zone: Normal Mar 25 01:58:35.954119 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 25 01:58:35.954128 kernel: software IO TLB: area num 2. Mar 25 01:58:35.954138 kernel: Memory: 3962120K/4193772K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 231392K reserved, 0K cma-reserved) Mar 25 01:58:35.954147 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 25 01:58:35.954156 kernel: ftrace: allocating 37985 entries in 149 pages Mar 25 01:58:35.954168 kernel: ftrace: allocated 149 pages with 4 groups Mar 25 01:58:35.954177 kernel: Dynamic Preempt: voluntary Mar 25 01:58:35.954187 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 25 01:58:35.954197 kernel: rcu: RCU event tracing is enabled. Mar 25 01:58:35.954207 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 25 01:58:35.954216 kernel: Trampoline variant of Tasks RCU enabled. Mar 25 01:58:35.954225 kernel: Rude variant of Tasks RCU enabled. Mar 25 01:58:35.954234 kernel: Tracing variant of Tasks RCU enabled. Mar 25 01:58:35.954244 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 25 01:58:35.954255 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 25 01:58:35.954264 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 25 01:58:35.954273 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 25 01:58:35.954283 kernel: Console: colour VGA+ 80x25 Mar 25 01:58:35.954292 kernel: printk: console [tty0] enabled Mar 25 01:58:35.954301 kernel: printk: console [ttyS0] enabled Mar 25 01:58:35.954310 kernel: ACPI: Core revision 20230628 Mar 25 01:58:35.954319 kernel: APIC: Switch to symmetric I/O mode setup Mar 25 01:58:35.954328 kernel: x2apic enabled Mar 25 01:58:35.954340 kernel: APIC: Switched APIC routing to: physical x2apic Mar 25 01:58:35.954349 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 25 01:58:35.954359 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 25 01:58:35.954368 kernel: Calibrating delay loop (skipped) preset value.. 
3992.49 BogoMIPS (lpj=1996249) Mar 25 01:58:35.954377 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Mar 25 01:58:35.954386 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Mar 25 01:58:35.954396 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 25 01:58:35.954405 kernel: Spectre V2 : Mitigation: Retpolines Mar 25 01:58:35.954414 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 25 01:58:35.959155 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 25 01:58:35.959167 kernel: Speculative Store Bypass: Vulnerable Mar 25 01:58:35.959176 kernel: x86/fpu: x87 FPU will use FXSAVE Mar 25 01:58:35.959186 kernel: Freeing SMP alternatives memory: 32K Mar 25 01:58:35.959201 kernel: pid_max: default: 32768 minimum: 301 Mar 25 01:58:35.959212 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 25 01:58:35.959221 kernel: landlock: Up and running. Mar 25 01:58:35.959230 kernel: SELinux: Initializing. Mar 25 01:58:35.959240 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 25 01:58:35.959249 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 25 01:58:35.959258 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3) Mar 25 01:58:35.959268 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:58:35.959279 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:58:35.959289 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:58:35.959298 kernel: Performance Events: AMD PMU driver. Mar 25 01:58:35.959307 kernel: ... version: 0 Mar 25 01:58:35.959316 kernel: ... bit width: 48 Mar 25 01:58:35.959327 kernel: ... generic registers: 4 Mar 25 01:58:35.959336 kernel: ... value mask: 0000ffffffffffff Mar 25 01:58:35.959345 kernel: ... max period: 00007fffffffffff Mar 25 01:58:35.959354 kernel: ... fixed-purpose events: 0 Mar 25 01:58:35.959363 kernel: ... event mask: 000000000000000f Mar 25 01:58:35.959372 kernel: signal: max sigframe size: 1440 Mar 25 01:58:35.959381 kernel: rcu: Hierarchical SRCU implementation. Mar 25 01:58:35.959390 kernel: rcu: Max phase no-delay instances is 400. Mar 25 01:58:35.959399 kernel: smp: Bringing up secondary CPUs ... Mar 25 01:58:35.959410 kernel: smpboot: x86: Booting SMP configuration: Mar 25 01:58:35.959448 kernel: .... 
node #0, CPUs: #1 Mar 25 01:58:35.959458 kernel: smp: Brought up 1 node, 2 CPUs Mar 25 01:58:35.959467 kernel: smpboot: Max logical packages: 2 Mar 25 01:58:35.959476 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS) Mar 25 01:58:35.959485 kernel: devtmpfs: initialized Mar 25 01:58:35.959494 kernel: x86/mm: Memory block size: 128MB Mar 25 01:58:35.959504 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 25 01:58:35.959513 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 25 01:58:35.959525 kernel: pinctrl core: initialized pinctrl subsystem Mar 25 01:58:35.959534 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 25 01:58:35.959543 kernel: audit: initializing netlink subsys (disabled) Mar 25 01:58:35.959552 kernel: audit: type=2000 audit(1742867914.701:1): state=initialized audit_enabled=0 res=1 Mar 25 01:58:35.959561 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 25 01:58:35.959570 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 25 01:58:35.959579 kernel: cpuidle: using governor menu Mar 25 01:58:35.959588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 25 01:58:35.959597 kernel: dca service started, version 1.12.1 Mar 25 01:58:35.959608 kernel: PCI: Using configuration type 1 for base access Mar 25 01:58:35.959617 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 25 01:58:35.959626 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 25 01:58:35.959635 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 25 01:58:35.959644 kernel: ACPI: Added _OSI(Module Device) Mar 25 01:58:35.959653 kernel: ACPI: Added _OSI(Processor Device) Mar 25 01:58:35.959662 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 25 01:58:35.959671 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 25 01:58:35.959680 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 25 01:58:35.959691 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 25 01:58:35.959700 kernel: ACPI: Interpreter enabled Mar 25 01:58:35.959709 kernel: ACPI: PM: (supports S0 S3 S5) Mar 25 01:58:35.959718 kernel: ACPI: Using IOAPIC for interrupt routing Mar 25 01:58:35.959727 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 25 01:58:35.959736 kernel: PCI: Using E820 reservations for host bridge windows Mar 25 01:58:35.959745 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Mar 25 01:58:35.959754 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 25 01:58:35.959899 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 25 01:58:35.960002 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Mar 25 01:58:35.960096 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Mar 25 01:58:35.960110 kernel: acpiphp: Slot [3] registered Mar 25 01:58:35.960119 kernel: acpiphp: Slot [4] registered Mar 25 01:58:35.960129 kernel: acpiphp: Slot [5] registered Mar 25 01:58:35.960138 kernel: acpiphp: Slot [6] registered Mar 25 01:58:35.960147 kernel: acpiphp: Slot [7] registered Mar 25 01:58:35.960159 kernel: acpiphp: Slot [8] registered Mar 25 01:58:35.960168 kernel: acpiphp: Slot [9] registered Mar 25 01:58:35.960177 kernel: acpiphp: Slot [10] registered Mar 25 01:58:35.960186 
kernel: acpiphp: Slot [11] registered Mar 25 01:58:35.960195 kernel: acpiphp: Slot [12] registered Mar 25 01:58:35.960204 kernel: acpiphp: Slot [13] registered Mar 25 01:58:35.960213 kernel: acpiphp: Slot [14] registered Mar 25 01:58:35.960222 kernel: acpiphp: Slot [15] registered Mar 25 01:58:35.960231 kernel: acpiphp: Slot [16] registered Mar 25 01:58:35.960240 kernel: acpiphp: Slot [17] registered Mar 25 01:58:35.960251 kernel: acpiphp: Slot [18] registered Mar 25 01:58:35.960260 kernel: acpiphp: Slot [19] registered Mar 25 01:58:35.960269 kernel: acpiphp: Slot [20] registered Mar 25 01:58:35.960277 kernel: acpiphp: Slot [21] registered Mar 25 01:58:35.960286 kernel: acpiphp: Slot [22] registered Mar 25 01:58:35.960295 kernel: acpiphp: Slot [23] registered Mar 25 01:58:35.960304 kernel: acpiphp: Slot [24] registered Mar 25 01:58:35.960313 kernel: acpiphp: Slot [25] registered Mar 25 01:58:35.960322 kernel: acpiphp: Slot [26] registered Mar 25 01:58:35.960333 kernel: acpiphp: Slot [27] registered Mar 25 01:58:35.960342 kernel: acpiphp: Slot [28] registered Mar 25 01:58:35.960351 kernel: acpiphp: Slot [29] registered Mar 25 01:58:35.960360 kernel: acpiphp: Slot [30] registered Mar 25 01:58:35.960369 kernel: acpiphp: Slot [31] registered Mar 25 01:58:35.960377 kernel: PCI host bridge to bus 0000:00 Mar 25 01:58:35.960522 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 25 01:58:35.960609 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 25 01:58:35.960698 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 25 01:58:35.960780 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 25 01:58:35.960864 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc07fffffff window] Mar 25 01:58:35.960946 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 25 01:58:35.961057 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Mar 25 01:58:35.961167 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Mar 25 01:58:35.961268 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Mar 25 01:58:35.961387 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f] Mar 25 01:58:35.961530 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Mar 25 01:58:35.961627 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Mar 25 01:58:35.961720 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Mar 25 01:58:35.961815 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Mar 25 01:58:35.961916 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Mar 25 01:58:35.962017 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Mar 25 01:58:35.962112 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Mar 25 01:58:35.962214 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Mar 25 01:58:35.962311 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Mar 25 01:58:35.962410 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc000000000-0xc000003fff 64bit pref] Mar 25 01:58:35.963577 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Mar 25 01:58:35.963674 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Mar 25 01:58:35.963773 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 25 01:58:35.963876 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Mar 25 01:58:35.963971 kernel: pci 
0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Mar 25 01:58:35.964064 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Mar 25 01:58:35.964157 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xc000004000-0xc000007fff 64bit pref] Mar 25 01:58:35.964249 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Mar 25 01:58:35.964355 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Mar 25 01:58:35.965019 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Mar 25 01:58:35.965118 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Mar 25 01:58:35.965210 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xc000008000-0xc00000bfff 64bit pref] Mar 25 01:58:35.965310 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Mar 25 01:58:35.965436 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Mar 25 01:58:35.966559 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xc00000c000-0xc00000ffff 64bit pref] Mar 25 01:58:35.966671 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 Mar 25 01:58:35.966779 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f] Mar 25 01:58:35.966880 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfeb93000-0xfeb93fff] Mar 25 01:58:35.966981 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xc000010000-0xc000013fff 64bit pref] Mar 25 01:58:35.966996 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 25 01:58:35.967006 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 25 01:58:35.967016 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 25 01:58:35.967026 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 25 01:58:35.967036 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Mar 25 01:58:35.967049 kernel: iommu: Default domain type: Translated Mar 25 01:58:35.967059 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 25 01:58:35.967070 kernel: PCI: Using ACPI for IRQ routing Mar 25 01:58:35.967079 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 25 01:58:35.967089 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 25 01:58:35.967099 kernel: e820: reserve RAM buffer [mem 0xbffdd000-0xbfffffff] Mar 25 01:58:35.967197 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Mar 25 01:58:35.967306 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Mar 25 01:58:35.967407 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 25 01:58:35.967441 kernel: vgaarb: loaded Mar 25 01:58:35.967467 kernel: clocksource: Switched to clocksource kvm-clock Mar 25 01:58:35.967477 kernel: VFS: Disk quotas dquot_6.6.0 Mar 25 01:58:35.967486 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 25 01:58:35.967496 kernel: pnp: PnP ACPI init Mar 25 01:58:35.967606 kernel: pnp 00:03: [dma 2] Mar 25 01:58:35.967622 kernel: pnp: PnP ACPI: found 5 devices Mar 25 01:58:35.967633 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 25 01:58:35.967646 kernel: NET: Registered PF_INET protocol family Mar 25 01:58:35.967656 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 25 01:58:35.967667 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 25 01:58:35.967677 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 25 01:58:35.967687 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 25 01:58:35.967697 kernel: TCP bind hash table entries: 
32768 (order: 8, 1048576 bytes, linear) Mar 25 01:58:35.967707 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 25 01:58:35.967717 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 25 01:58:35.967726 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 25 01:58:35.967738 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 25 01:58:35.967748 kernel: NET: Registered PF_XDP protocol family Mar 25 01:58:35.967837 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 25 01:58:35.967922 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 25 01:58:35.968004 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 25 01:58:35.968088 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Mar 25 01:58:35.968172 kernel: pci_bus 0000:00: resource 8 [mem 0xc000000000-0xc07fffffff window] Mar 25 01:58:35.968269 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Mar 25 01:58:35.968370 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 25 01:58:35.968385 kernel: PCI: CLS 0 bytes, default 64 Mar 25 01:58:35.968394 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 25 01:58:35.968403 kernel: software IO TLB: mapped [mem 0x00000000bbfdd000-0x00000000bffdd000] (64MB) Mar 25 01:58:35.968413 kernel: Initialise system trusted keyrings Mar 25 01:58:35.969201 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 25 01:58:35.969214 kernel: Key type asymmetric registered Mar 25 01:58:35.969223 kernel: Asymmetric key parser 'x509' registered Mar 25 01:58:35.969232 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 25 01:58:35.969245 kernel: io scheduler mq-deadline registered Mar 25 01:58:35.969255 kernel: io scheduler kyber registered Mar 25 01:58:35.969264 kernel: io scheduler bfq registered Mar 25 01:58:35.969273 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 25 01:58:35.969283 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Mar 25 01:58:35.969292 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Mar 25 01:58:35.969302 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Mar 25 01:58:35.969311 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Mar 25 01:58:35.969334 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 25 01:58:35.969346 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 25 01:58:35.969355 kernel: random: crng init done Mar 25 01:58:35.969364 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 25 01:58:35.969373 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 25 01:58:35.969382 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 25 01:58:35.969514 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 25 01:58:35.969530 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 25 01:58:35.969613 kernel: rtc_cmos 00:04: registered as rtc0 Mar 25 01:58:35.969703 kernel: rtc_cmos 00:04: setting system clock to 2025-03-25T01:58:35 UTC (1742867915) Mar 25 01:58:35.969788 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Mar 25 01:58:35.969801 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 25 01:58:35.969811 kernel: NET: Registered PF_INET6 protocol family Mar 25 01:58:35.969820 kernel: Segment Routing with IPv6 Mar 25 01:58:35.969829 kernel: In-situ OAM (IOAM) with IPv6 Mar 25 01:58:35.969838 kernel: NET: Registered PF_PACKET 
protocol family Mar 25 01:58:35.969847 kernel: Key type dns_resolver registered Mar 25 01:58:35.969856 kernel: IPI shorthand broadcast: enabled Mar 25 01:58:35.969869 kernel: sched_clock: Marking stable (1001103707, 177765842)->(1213084249, -34214700) Mar 25 01:58:35.969878 kernel: registered taskstats version 1 Mar 25 01:58:35.969887 kernel: Loading compiled-in X.509 certificates Mar 25 01:58:35.969896 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386' Mar 25 01:58:35.969905 kernel: Key type .fscrypt registered Mar 25 01:58:35.969914 kernel: Key type fscrypt-provisioning registered Mar 25 01:58:35.969923 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 25 01:58:35.969933 kernel: ima: Allocated hash algorithm: sha1 Mar 25 01:58:35.969943 kernel: ima: No architecture policies found Mar 25 01:58:35.969952 kernel: clk: Disabling unused clocks Mar 25 01:58:35.969961 kernel: Freeing unused kernel image (initmem) memory: 43592K Mar 25 01:58:35.969970 kernel: Write protecting the kernel read-only data: 40960k Mar 25 01:58:35.969980 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K Mar 25 01:58:35.969989 kernel: Run /init as init process Mar 25 01:58:35.969997 kernel: with arguments: Mar 25 01:58:35.970006 kernel: /init Mar 25 01:58:35.970015 kernel: with environment: Mar 25 01:58:35.970026 kernel: HOME=/ Mar 25 01:58:35.970035 kernel: TERM=linux Mar 25 01:58:35.970043 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 25 01:58:35.970054 systemd[1]: Successfully made /usr/ read-only. Mar 25 01:58:35.970067 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 01:58:35.970077 systemd[1]: Detected virtualization kvm. Mar 25 01:58:35.970087 systemd[1]: Detected architecture x86-64. Mar 25 01:58:35.970099 systemd[1]: Running in initrd. Mar 25 01:58:35.970108 systemd[1]: No hostname configured, using default hostname. Mar 25 01:58:35.970118 systemd[1]: Hostname set to . Mar 25 01:58:35.970128 systemd[1]: Initializing machine ID from VM UUID. Mar 25 01:58:35.970138 systemd[1]: Queued start job for default target initrd.target. Mar 25 01:58:35.970147 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:58:35.970158 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:58:35.970176 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 25 01:58:35.970188 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 01:58:35.970198 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 25 01:58:35.970209 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 25 01:58:35.970221 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 25 01:58:35.970233 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Mar 25 01:58:35.970252 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:58:35.970267 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:58:35.970283 systemd[1]: Reached target paths.target - Path Units. Mar 25 01:58:35.970298 systemd[1]: Reached target slices.target - Slice Units. Mar 25 01:58:35.970313 systemd[1]: Reached target swap.target - Swaps. Mar 25 01:58:35.970324 systemd[1]: Reached target timers.target - Timer Units. Mar 25 01:58:35.970335 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 25 01:58:35.970347 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 25 01:58:35.970358 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 25 01:58:35.970373 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 25 01:58:35.970384 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:58:35.970396 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 01:58:35.970407 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:58:35.970418 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 01:58:35.970469 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 25 01:58:35.970481 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 01:58:35.970492 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 25 01:58:35.970508 systemd[1]: Starting systemd-fsck-usr.service... Mar 25 01:58:35.970520 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 01:58:35.970531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 01:58:35.970542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:58:35.970554 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 25 01:58:35.970565 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:58:35.970601 systemd-journald[185]: Collecting audit messages is disabled. Mar 25 01:58:35.970629 systemd[1]: Finished systemd-fsck-usr.service. Mar 25 01:58:35.970644 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 25 01:58:35.970656 systemd-journald[185]: Journal started Mar 25 01:58:35.970681 systemd-journald[185]: Runtime Journal (/run/log/journal/b8e8b9603fbe446eb65bcba93c029853) is 8M, max 78.2M, 70.2M free. Mar 25 01:58:35.974440 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 01:58:35.975525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 01:58:36.019968 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 25 01:58:36.019994 kernel: Bridge firewalling registered Mar 25 01:58:35.982789 systemd-modules-load[187]: Inserted module 'overlay' Mar 25 01:58:36.010416 systemd-modules-load[187]: Inserted module 'br_netfilter' Mar 25 01:58:36.020715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 01:58:36.035567 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:58:36.039496 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Mar 25 01:58:36.041208 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:58:36.042694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:58:36.046798 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 01:58:36.052605 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:58:36.060240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:58:36.062353 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 01:58:36.064832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:58:36.074023 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:58:36.079873 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 25 01:58:36.097438 dracut-cmdline[223]: dracut-dracut-053 Mar 25 01:58:36.098099 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:58:36.108993 systemd-resolved[212]: Positive Trust Anchors: Mar 25 01:58:36.109729 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 01:58:36.110548 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 01:58:36.116044 systemd-resolved[212]: Defaulting to hostname 'linux'. Mar 25 01:58:36.116926 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 01:58:36.117772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:58:36.182483 kernel: SCSI subsystem initialized Mar 25 01:58:36.192522 kernel: Loading iSCSI transport class v2.0-870. Mar 25 01:58:36.205488 kernel: iscsi: registered transport (tcp) Mar 25 01:58:36.227760 kernel: iscsi: registered transport (qla4xxx) Mar 25 01:58:36.227832 kernel: QLogic iSCSI HBA Driver Mar 25 01:58:36.287117 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 25 01:58:36.289837 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 25 01:58:36.357301 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 25 01:58:36.357465 kernel: device-mapper: uevent: version 1.0.3 Mar 25 01:58:36.365525 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 25 01:58:36.426499 kernel: raid6: sse2x4 gen() 5150 MB/s Mar 25 01:58:36.445490 kernel: raid6: sse2x2 gen() 5929 MB/s Mar 25 01:58:36.464014 kernel: raid6: sse2x1 gen() 8233 MB/s Mar 25 01:58:36.464081 kernel: raid6: using algorithm sse2x1 gen() 8233 MB/s Mar 25 01:58:36.482864 kernel: raid6: .... xor() 7167 MB/s, rmw enabled Mar 25 01:58:36.482946 kernel: raid6: using ssse3x2 recovery algorithm Mar 25 01:58:36.504486 kernel: xor: measuring software checksum speed Mar 25 01:58:36.504570 kernel: prefetch64-sse : 15947 MB/sec Mar 25 01:58:36.507069 kernel: generic_sse : 16862 MB/sec Mar 25 01:58:36.507127 kernel: xor: using function: generic_sse (16862 MB/sec) Mar 25 01:58:36.682496 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 25 01:58:36.700909 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 25 01:58:36.707612 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:58:36.733754 systemd-udevd[406]: Using default interface naming scheme 'v255'. Mar 25 01:58:36.749805 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:58:36.760638 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 25 01:58:36.809885 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Mar 25 01:58:36.877172 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 01:58:36.881736 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 01:58:36.972132 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:58:36.981159 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 25 01:58:37.033137 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 25 01:58:37.037383 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 01:58:37.041305 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:58:37.043042 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 25 01:58:37.047556 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 25 01:58:37.067036 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 25 01:58:37.079293 kernel: virtio_blk virtio2: 2/0/0 default/read/poll queues Mar 25 01:58:37.103307 kernel: virtio_blk virtio2: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB) Mar 25 01:58:37.103455 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 25 01:58:37.103472 kernel: GPT:17805311 != 20971519 Mar 25 01:58:37.103484 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 25 01:58:37.103495 kernel: GPT:17805311 != 20971519 Mar 25 01:58:37.103512 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 25 01:58:37.103524 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 25 01:58:37.084199 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 01:58:37.084323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:58:37.086375 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Mar 25 01:58:37.087025 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 01:58:37.111313 kernel: libata version 3.00 loaded. Mar 25 01:58:37.087153 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:58:37.087940 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:58:37.114652 kernel: ata_piix 0000:00:01.1: version 2.13 Mar 25 01:58:37.129576 kernel: scsi host0: ata_piix Mar 25 01:58:37.129722 kernel: scsi host1: ata_piix Mar 25 01:58:37.129846 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14 Mar 25 01:58:37.129868 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15 Mar 25 01:58:37.099856 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:58:37.188449 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (474) Mar 25 01:58:37.188476 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (463) Mar 25 01:58:37.100791 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:58:37.165053 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 25 01:58:37.190794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:58:37.218131 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 25 01:58:37.226973 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 25 01:58:37.227634 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 25 01:58:37.240145 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 25 01:58:37.242735 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 25 01:58:37.251134 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 25 01:58:37.260153 disk-uuid[514]: Primary Header is updated. Mar 25 01:58:37.260153 disk-uuid[514]: Secondary Entries is updated. Mar 25 01:58:37.260153 disk-uuid[514]: Secondary Header is updated. Mar 25 01:58:37.270494 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 25 01:58:37.300496 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:58:38.286508 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 25 01:58:38.288473 disk-uuid[515]: The operation has completed successfully. Mar 25 01:58:38.363949 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 25 01:58:38.364057 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 25 01:58:38.412521 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 25 01:58:38.434706 sh[534]: Success Mar 25 01:58:38.454455 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3" Mar 25 01:58:38.514794 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 25 01:58:38.519513 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 25 01:58:38.528678 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 25 01:58:38.545607 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2 Mar 25 01:58:38.545644 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:58:38.545657 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 25 01:58:38.549260 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 25 01:58:38.549288 kernel: BTRFS info (device dm-0): using free space tree Mar 25 01:58:38.563960 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 25 01:58:38.564967 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 25 01:58:38.567529 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 25 01:58:38.570104 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 25 01:58:38.612253 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:58:38.612350 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:58:38.613722 kernel: BTRFS info (device vda6): using free space tree Mar 25 01:58:38.623498 kernel: BTRFS info (device vda6): auto enabling async discard Mar 25 01:58:38.630473 kernel: BTRFS info (device vda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:58:38.639026 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 25 01:58:38.642535 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 25 01:58:38.712062 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 01:58:38.715597 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 01:58:38.751876 systemd-networkd[715]: lo: Link UP Mar 25 01:58:38.751885 systemd-networkd[715]: lo: Gained carrier Mar 25 01:58:38.753054 systemd-networkd[715]: Enumeration completed Mar 25 01:58:38.753284 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 25 01:58:38.753728 systemd-networkd[715]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:58:38.753732 systemd-networkd[715]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:58:38.753900 systemd[1]: Reached target network.target - Network. Mar 25 01:58:38.755668 systemd-networkd[715]: eth0: Link UP Mar 25 01:58:38.755672 systemd-networkd[715]: eth0: Gained carrier Mar 25 01:58:38.755679 systemd-networkd[715]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:58:38.769470 systemd-networkd[715]: eth0: DHCPv4 address 172.24.4.226/24, gateway 172.24.4.1 acquired from 172.24.4.1 Mar 25 01:58:38.809857 ignition[639]: Ignition 2.20.0 Mar 25 01:58:38.809874 ignition[639]: Stage: fetch-offline Mar 25 01:58:38.811295 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 25 01:58:38.809923 ignition[639]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:58:38.809939 ignition[639]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 01:58:38.810033 ignition[639]: parsed url from cmdline: "" Mar 25 01:58:38.813564 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 25 01:58:38.810037 ignition[639]: no config URL provided Mar 25 01:58:38.810042 ignition[639]: reading system config file "/usr/lib/ignition/user.ign" Mar 25 01:58:38.810050 ignition[639]: no config at "/usr/lib/ignition/user.ign" Mar 25 01:58:38.810054 ignition[639]: failed to fetch config: resource requires networking Mar 25 01:58:38.810247 ignition[639]: Ignition finished successfully Mar 25 01:58:38.835779 ignition[726]: Ignition 2.20.0 Mar 25 01:58:38.835791 ignition[726]: Stage: fetch Mar 25 01:58:38.835950 ignition[726]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:58:38.835962 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 01:58:38.836080 ignition[726]: parsed url from cmdline: "" Mar 25 01:58:38.836084 ignition[726]: no config URL provided Mar 25 01:58:38.836089 ignition[726]: reading system config file "/usr/lib/ignition/user.ign" Mar 25 01:58:38.836097 ignition[726]: no config at "/usr/lib/ignition/user.ign" Mar 25 01:58:38.836210 ignition[726]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 25 01:58:38.836565 ignition[726]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 25 01:58:38.836591 ignition[726]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Mar 25 01:58:39.066510 ignition[726]: GET result: OK Mar 25 01:58:39.066674 ignition[726]: parsing config with SHA512: 9a7c18215109adb7e578de9cf73215fd139f9777d83239b11d36e39fc290c0ca4636eaf2c84bc22bc621582338507dda8c16acbd6fb5c5e30d51c170b6a561a9 Mar 25 01:58:39.078242 unknown[726]: fetched base config from "system" Mar 25 01:58:39.078269 unknown[726]: fetched base config from "system" Mar 25 01:58:39.081133 ignition[726]: fetch: fetch complete Mar 25 01:58:39.078283 unknown[726]: fetched user config from "openstack" Mar 25 01:58:39.081148 ignition[726]: fetch: fetch passed Mar 25 01:58:39.086597 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 25 01:58:39.081244 ignition[726]: Ignition finished successfully Mar 25 01:58:39.092627 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 25 01:58:39.125807 ignition[733]: Ignition 2.20.0 Mar 25 01:58:39.125832 ignition[733]: Stage: kargs Mar 25 01:58:39.126134 ignition[733]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:58:39.126155 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 01:58:39.130955 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 25 01:58:39.127965 ignition[733]: kargs: kargs passed Mar 25 01:58:39.128041 ignition[733]: Ignition finished successfully Mar 25 01:58:39.136631 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 25 01:58:39.174524 ignition[740]: Ignition 2.20.0 Mar 25 01:58:39.174547 ignition[740]: Stage: disks Mar 25 01:58:39.174944 ignition[740]: no configs at "/usr/lib/ignition/base.d" Mar 25 01:58:39.180601 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 25 01:58:39.174971 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 01:58:39.183393 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 25 01:58:39.178956 ignition[740]: disks: disks passed Mar 25 01:58:39.185029 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 25 01:58:39.179055 ignition[740]: Ignition finished successfully Mar 25 01:58:39.187747 systemd[1]: Reached target local-fs.target - Local File Systems. 
Mar 25 01:58:39.190922 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 01:58:39.193229 systemd[1]: Reached target basic.target - Basic System. Mar 25 01:58:39.199612 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 25 01:58:39.241597 systemd-fsck[749]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 25 01:58:39.259376 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 25 01:58:39.264546 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 25 01:58:39.435474 kernel: EXT4-fs (vda9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none. Mar 25 01:58:39.436074 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 25 01:58:39.437596 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 25 01:58:39.440478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 25 01:58:39.442516 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 25 01:58:39.443210 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 25 01:58:39.446863 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Mar 25 01:58:39.449250 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 25 01:58:39.450361 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 01:58:39.459571 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 25 01:58:39.462593 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 25 01:58:39.475056 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (757) Mar 25 01:58:39.486250 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:58:39.486277 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:58:39.486289 kernel: BTRFS info (device vda6): using free space tree Mar 25 01:58:39.491444 kernel: BTRFS info (device vda6): auto enabling async discard Mar 25 01:58:39.495994 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 25 01:58:39.610483 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Mar 25 01:58:39.617152 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Mar 25 01:58:39.622809 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory Mar 25 01:58:39.627462 initrd-setup-root[806]: cut: /sysroot/etc/gshadow: No such file or directory Mar 25 01:58:39.710097 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 25 01:58:39.712059 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 25 01:58:39.713590 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 25 01:58:39.726350 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 25 01:58:39.729158 kernel: BTRFS info (device vda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:58:39.753846 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 25 01:58:39.755217 ignition[873]: INFO : Ignition 2.20.0 Mar 25 01:58:39.755217 ignition[873]: INFO : Stage: mount Mar 25 01:58:39.755217 ignition[873]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:58:39.755217 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 01:58:39.757995 ignition[873]: INFO : mount: mount passed Mar 25 01:58:39.757995 ignition[873]: INFO : Ignition finished successfully Mar 25 01:58:39.758080 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 25 01:58:40.120852 systemd-networkd[715]: eth0: Gained IPv6LL Mar 25 01:58:46.669248 coreos-metadata[759]: Mar 25 01:58:46.669 WARN failed to locate config-drive, using the metadata service API instead Mar 25 01:58:46.712459 coreos-metadata[759]: Mar 25 01:58:46.712 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 25 01:58:46.727754 coreos-metadata[759]: Mar 25 01:58:46.727 INFO Fetch successful Mar 25 01:58:46.729295 coreos-metadata[759]: Mar 25 01:58:46.728 INFO wrote hostname ci-4284-0-0-7-d93044f3e4.novalocal to /sysroot/etc/hostname Mar 25 01:58:46.731908 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Mar 25 01:58:46.732134 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Mar 25 01:58:46.740004 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 25 01:58:46.768000 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 25 01:58:46.800519 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (890) Mar 25 01:58:46.807965 kernel: BTRFS info (device vda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67 Mar 25 01:58:46.808031 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 25 01:58:46.812352 kernel: BTRFS info (device vda6): using free space tree Mar 25 01:58:46.823507 kernel: BTRFS info (device vda6): auto enabling async discard Mar 25 01:58:46.828869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 25 01:58:46.875022 ignition[908]: INFO : Ignition 2.20.0 Mar 25 01:58:46.875022 ignition[908]: INFO : Stage: files Mar 25 01:58:46.878068 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:58:46.878068 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 01:58:46.878068 ignition[908]: DEBUG : files: compiled without relabeling support, skipping Mar 25 01:58:46.883547 ignition[908]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 25 01:58:46.883547 ignition[908]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 25 01:58:46.888283 ignition[908]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 25 01:58:46.888283 ignition[908]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 25 01:58:46.892350 ignition[908]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 25 01:58:46.888507 unknown[908]: wrote ssh authorized keys file for user: core Mar 25 01:58:46.896322 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 25 01:58:46.896322 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Mar 25 01:58:46.955501 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 25 01:58:47.254717 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Mar 25 01:58:47.254717 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 25 01:58:47.259773 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Mar 25 01:58:48.033895 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 25 01:58:48.639674 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 25 01:58:48.639674 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 25 01:58:48.644182 
ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 25 01:58:48.644182 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 25 01:58:49.189741 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 25 01:58:51.493608 ignition[908]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 25 01:58:51.493608 ignition[908]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 25 01:58:51.499296 ignition[908]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 01:58:51.499296 ignition[908]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 25 01:58:51.499296 ignition[908]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 25 01:58:51.499296 ignition[908]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Mar 25 01:58:51.499296 ignition[908]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Mar 25 01:58:51.499296 ignition[908]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 25 01:58:51.499296 ignition[908]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 25 01:58:51.499296 ignition[908]: INFO : files: files passed Mar 25 01:58:51.499296 ignition[908]: INFO : Ignition finished successfully Mar 25 01:58:51.500176 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 25 01:58:51.510669 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 25 01:58:51.516641 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 25 01:58:51.540132 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 25 01:58:51.540326 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
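The files stage above is driven by the Ignition config supplied at first boot; the config itself is not reproduced in this log. A rough Butane-style sketch that would yield operations of this shape (an SSH key for core, a fetched helm archive, the kubernetes sysext symlink, and prepare-helm.service preset to enabled) is shown below; the key material is a placeholder and the unit body is omitted because the log does not show it:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... placeholder-key
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          # contents omitted; the log only shows the unit being written and preset to enabled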
Mar 25 01:58:51.555346 initrd-setup-root-after-ignition[937]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 01:58:51.555346 initrd-setup-root-after-ignition[937]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 25 01:58:51.560548 initrd-setup-root-after-ignition[941]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 25 01:58:51.562933 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 01:58:51.565126 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 25 01:58:51.579729 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 25 01:58:51.628157 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 25 01:58:51.628357 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 25 01:58:51.630559 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 25 01:58:51.632562 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 25 01:58:51.634335 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 25 01:58:51.636067 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 25 01:58:51.661998 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 01:58:51.666705 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 25 01:58:51.696221 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:58:51.697868 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:58:51.699957 systemd[1]: Stopped target timers.target - Timer Units. Mar 25 01:58:51.701877 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 25 01:58:51.702253 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 25 01:58:51.704777 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 25 01:58:51.706769 systemd[1]: Stopped target basic.target - Basic System. Mar 25 01:58:51.708642 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 25 01:58:51.710794 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 25 01:58:51.712937 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 25 01:58:51.715132 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 25 01:58:51.717213 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 25 01:58:51.719542 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 25 01:58:51.721722 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 25 01:58:51.723678 systemd[1]: Stopped target swap.target - Swaps. Mar 25 01:58:51.725417 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 25 01:58:51.725808 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 25 01:58:51.727990 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:58:51.730186 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:58:51.731895 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 25 01:58:51.732519 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 25 01:58:51.733971 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 25 01:58:51.734091 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 25 01:58:51.735560 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 25 01:58:51.735684 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 25 01:58:51.736952 systemd[1]: ignition-files.service: Deactivated successfully. Mar 25 01:58:51.737059 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 25 01:58:51.740627 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 25 01:58:51.744645 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 25 01:58:51.745161 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 25 01:58:51.745355 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:58:51.747580 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 25 01:58:51.747737 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 25 01:58:51.762164 ignition[961]: INFO : Ignition 2.20.0 Mar 25 01:58:51.762164 ignition[961]: INFO : Stage: umount Mar 25 01:58:51.762164 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 25 01:58:51.762164 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 25 01:58:51.767684 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 25 01:58:51.775566 ignition[961]: INFO : umount: umount passed Mar 25 01:58:51.775566 ignition[961]: INFO : Ignition finished successfully Mar 25 01:58:51.767810 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 25 01:58:51.772700 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 25 01:58:51.772775 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 25 01:58:51.774119 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 25 01:58:51.774185 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 25 01:58:51.774710 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 25 01:58:51.774752 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 25 01:58:51.775270 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 25 01:58:51.775312 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 25 01:58:51.778595 systemd[1]: Stopped target network.target - Network. Mar 25 01:58:51.779303 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 25 01:58:51.779348 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 25 01:58:51.779871 systemd[1]: Stopped target paths.target - Path Units. Mar 25 01:58:51.780291 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 25 01:58:51.782049 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:58:51.783190 systemd[1]: Stopped target slices.target - Slice Units. Mar 25 01:58:51.783692 systemd[1]: Stopped target sockets.target - Socket Units. Mar 25 01:58:51.784702 systemd[1]: iscsid.socket: Deactivated successfully. Mar 25 01:58:51.784737 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 25 01:58:51.785763 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 25 01:58:51.785794 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Mar 25 01:58:51.786789 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 25 01:58:51.786840 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 25 01:58:51.788006 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 25 01:58:51.788045 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 25 01:58:51.788902 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 25 01:58:51.790873 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 25 01:58:51.792332 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 25 01:58:51.797533 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 25 01:58:51.797623 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 25 01:58:51.802314 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 25 01:58:51.802844 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 25 01:58:51.803021 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 25 01:58:51.807131 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 25 01:58:51.807333 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 25 01:58:51.807512 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 25 01:58:51.809055 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 25 01:58:51.809224 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:58:51.810438 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 25 01:58:51.810524 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 25 01:58:51.812532 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 25 01:58:51.813724 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 25 01:58:51.813772 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 25 01:58:51.818692 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 01:58:51.818737 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:58:51.819930 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 25 01:58:51.819972 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 25 01:58:51.820856 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 25 01:58:51.820898 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:58:51.822325 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:58:51.824072 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 01:58:51.824136 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:58:51.834692 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 25 01:58:51.835320 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:58:51.836745 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 25 01:58:51.836822 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 25 01:58:51.838765 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 25 01:58:51.838822 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Mar 25 01:58:51.839365 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 25 01:58:51.839395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:58:51.840583 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 25 01:58:51.840633 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 25 01:58:51.842242 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 25 01:58:51.842285 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 25 01:58:51.843394 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 25 01:58:51.843456 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 25 01:58:51.846541 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 25 01:58:51.847404 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 25 01:58:51.847488 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:58:51.849731 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 25 01:58:51.849775 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 25 01:58:51.851348 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 25 01:58:51.851390 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:58:51.852531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 25 01:58:51.852571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:58:51.856912 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 25 01:58:51.856967 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 25 01:58:51.860238 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 25 01:58:51.860330 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 25 01:58:51.861934 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 25 01:58:51.865530 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 25 01:58:51.885029 systemd[1]: Switching root. Mar 25 01:58:51.920175 systemd-journald[185]: Journal stopped Mar 25 01:58:54.031987 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Mar 25 01:58:54.032038 kernel: SELinux: policy capability network_peer_controls=1 Mar 25 01:58:54.032057 kernel: SELinux: policy capability open_perms=1 Mar 25 01:58:54.032069 kernel: SELinux: policy capability extended_socket_class=1 Mar 25 01:58:54.032081 kernel: SELinux: policy capability always_check_network=0 Mar 25 01:58:54.032095 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 25 01:58:54.032107 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 25 01:58:54.032122 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 25 01:58:54.032137 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 25 01:58:54.032148 kernel: audit: type=1403 audit(1742867932.787:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 25 01:58:54.032163 systemd[1]: Successfully loaded SELinux policy in 63.483ms. Mar 25 01:58:54.032183 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.645ms. 
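The journal restart above marks the pivot out of the initramfs into the real root, where the SELinux policy is loaded before services come up. Whether the policy is enforcing is not stated in these lines; if the usual SELinux tools are present in the image, it can be checked after boot (illustrative only):

    getenforce   # prints Enforcing, Permissive, or Disabled
    sestatus     # fuller summary: loaded policy name and current mode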
Mar 25 01:58:54.032197 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 01:58:54.032210 systemd[1]: Detected virtualization kvm. Mar 25 01:58:54.032222 systemd[1]: Detected architecture x86-64. Mar 25 01:58:54.032237 systemd[1]: Detected first boot. Mar 25 01:58:54.032250 systemd[1]: Hostname set to . Mar 25 01:58:54.032263 systemd[1]: Initializing machine ID from VM UUID. Mar 25 01:58:54.032275 zram_generator::config[1005]: No configuration found. Mar 25 01:58:54.032289 kernel: Guest personality initialized and is inactive Mar 25 01:58:54.032300 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 25 01:58:54.032312 kernel: Initialized host personality Mar 25 01:58:54.032323 kernel: NET: Registered PF_VSOCK protocol family Mar 25 01:58:54.032335 systemd[1]: Populated /etc with preset unit settings. Mar 25 01:58:54.032350 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 25 01:58:54.032363 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 25 01:58:54.032376 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 25 01:58:54.032392 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 25 01:58:54.032405 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 25 01:58:54.032417 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 25 01:58:54.034293 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 25 01:58:54.034313 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 25 01:58:54.034331 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 25 01:58:54.034346 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 25 01:58:54.034359 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 25 01:58:54.034372 systemd[1]: Created slice user.slice - User and Session Slice. Mar 25 01:58:54.034386 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:58:54.034400 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:58:54.034413 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 25 01:58:54.034474 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 25 01:58:54.034494 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 25 01:58:54.034508 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 01:58:54.034523 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 25 01:58:54.034536 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:58:54.034549 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 25 01:58:54.034563 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Mar 25 01:58:54.034577 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 25 01:58:54.034592 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 25 01:58:54.034606 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 25 01:58:54.034619 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 25 01:58:54.034633 systemd[1]: Reached target slices.target - Slice Units. Mar 25 01:58:54.034646 systemd[1]: Reached target swap.target - Swaps. Mar 25 01:58:54.034659 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 25 01:58:54.034673 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 25 01:58:54.034687 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 25 01:58:54.034700 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:58:54.034716 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 01:58:54.034730 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:58:54.034743 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 25 01:58:54.034756 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 25 01:58:54.034769 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 25 01:58:54.034782 systemd[1]: Mounting media.mount - External Media Directory... Mar 25 01:58:54.034795 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:58:54.034809 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 25 01:58:54.034822 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 25 01:58:54.034839 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 25 01:58:54.034853 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 25 01:58:54.034868 systemd[1]: Reached target machines.target - Containers. Mar 25 01:58:54.034884 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 25 01:58:54.034896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:58:54.034909 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 01:58:54.034922 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 25 01:58:54.034934 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:58:54.034949 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 01:58:54.034961 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:58:54.034974 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 25 01:58:54.034986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:58:54.034999 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 25 01:58:54.035012 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Mar 25 01:58:54.035024 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 25 01:58:54.035036 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 25 01:58:54.035049 systemd[1]: Stopped systemd-fsck-usr.service. Mar 25 01:58:54.035064 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:58:54.035077 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 01:58:54.035089 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 01:58:54.035102 kernel: loop: module loaded Mar 25 01:58:54.035114 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 25 01:58:54.035126 kernel: fuse: init (API version 7.39) Mar 25 01:58:54.035139 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 25 01:58:54.035152 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 25 01:58:54.035167 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 25 01:58:54.035180 systemd[1]: verity-setup.service: Deactivated successfully. Mar 25 01:58:54.035195 systemd[1]: Stopped verity-setup.service. Mar 25 01:58:54.035208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:58:54.035222 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 25 01:58:54.035234 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 25 01:58:54.035247 kernel: ACPI: bus type drm_connector registered Mar 25 01:58:54.035258 systemd[1]: Mounted media.mount - External Media Directory. Mar 25 01:58:54.035271 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 25 01:58:54.035283 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 25 01:58:54.035313 systemd-journald[1093]: Collecting audit messages is disabled. Mar 25 01:58:54.035349 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 25 01:58:54.035362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 25 01:58:54.035375 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 25 01:58:54.035388 systemd-journald[1093]: Journal started Mar 25 01:58:54.035413 systemd-journald[1093]: Runtime Journal (/run/log/journal/b8e8b9603fbe446eb65bcba93c029853) is 8M, max 78.2M, 70.2M free. Mar 25 01:58:53.682523 systemd[1]: Queued start job for default target multi-user.target. Mar 25 01:58:53.690555 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 25 01:58:53.690957 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 25 01:58:54.041512 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 25 01:58:54.041553 systemd[1]: Started systemd-journald.service - Journal Service. Mar 25 01:58:54.045416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:58:54.045646 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:58:54.046412 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:58:54.046669 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 25 01:58:54.047561 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:58:54.047739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:58:54.048519 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 25 01:58:54.048658 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 25 01:58:54.049721 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:58:54.049860 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:58:54.052781 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 25 01:58:54.053567 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 25 01:58:54.054287 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 25 01:58:54.055129 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 25 01:58:54.059297 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 25 01:58:54.065586 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 25 01:58:54.068572 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 25 01:58:54.073576 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 25 01:58:54.074191 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 25 01:58:54.074225 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 25 01:58:54.078315 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 25 01:58:54.082367 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 25 01:58:54.088567 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 25 01:58:54.095339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:58:54.097632 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 25 01:58:54.100559 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 25 01:58:54.101538 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 01:58:54.103622 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 25 01:58:54.104519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:58:54.106527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 01:58:54.109706 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 25 01:58:54.114960 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 25 01:58:54.121843 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 25 01:58:54.122552 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 25 01:58:54.123328 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
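systemd-journal-flush.service, started above, asks journald to move the runtime journal out of /run/log/journal and into persistent storage under /var/log/journal. The same request and the resulting footprint can be inspected by hand (illustrative only):

    journalctl --flush        # ask journald to flush the runtime journal to /var/log/journal
    journalctl --disk-usage   # report space used by archived and active journal files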
Mar 25 01:58:54.146451 kernel: loop0: detected capacity change from 0 to 210664 Mar 25 01:58:54.146978 systemd-journald[1093]: Time spent on flushing to /var/log/journal/b8e8b9603fbe446eb65bcba93c029853 is 67.040ms for 965 entries. Mar 25 01:58:54.146978 systemd-journald[1093]: System Journal (/var/log/journal/b8e8b9603fbe446eb65bcba93c029853) is 8M, max 584.8M, 576.8M free. Mar 25 01:58:54.251480 systemd-journald[1093]: Received client request to flush runtime journal. Mar 25 01:58:54.145897 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 25 01:58:54.149680 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 25 01:58:54.159054 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 25 01:58:54.159941 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 25 01:58:54.163541 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 25 01:58:54.202901 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 01:58:54.206243 udevadm[1152]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 25 01:58:54.217268 systemd-tmpfiles[1146]: ACLs are not supported, ignoring. Mar 25 01:58:54.217282 systemd-tmpfiles[1146]: ACLs are not supported, ignoring. Mar 25 01:58:54.226083 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 25 01:58:54.228528 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 25 01:58:54.255487 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 25 01:58:54.303779 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 25 01:58:54.309464 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 25 01:58:54.345408 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 25 01:58:54.349243 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 25 01:58:54.360449 kernel: loop1: detected capacity change from 0 to 8 Mar 25 01:58:54.381863 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Mar 25 01:58:54.382215 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Mar 25 01:58:54.385461 kernel: loop2: detected capacity change from 0 to 151640 Mar 25 01:58:54.389362 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 25 01:58:54.476473 kernel: loop3: detected capacity change from 0 to 109808 Mar 25 01:58:54.557939 kernel: loop4: detected capacity change from 0 to 210664 Mar 25 01:58:54.654813 kernel: loop5: detected capacity change from 0 to 8 Mar 25 01:58:54.660451 kernel: loop6: detected capacity change from 0 to 151640 Mar 25 01:58:54.694890 kernel: loop7: detected capacity change from 0 to 109808 Mar 25 01:58:54.692875 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 25 01:58:54.758699 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Mar 25 01:58:54.759155 (sd-merge)[1173]: Merged extensions into '/usr'. Mar 25 01:58:54.770846 systemd[1]: Reload requested from client PID 1145 ('systemd-sysext') (unit systemd-sysext.service)... Mar 25 01:58:54.770875 systemd[1]: Reloading... Mar 25 01:58:54.886529 zram_generator::config[1197]: No configuration found. 
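The sd-merge lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-openstack extension images onto /usr, followed by a service manager reload. Once the system is up, the same merge can be inspected with the sysext tool (illustrative only):

    systemd-sysext status   # which hierarchies currently have extensions merged
    systemd-sysext list     # installed extension images, e.g. kubernetes, oem-openstack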
Mar 25 01:58:55.038377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:58:55.120395 systemd[1]: Reloading finished in 348 ms. Mar 25 01:58:55.137683 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 25 01:58:55.139163 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 25 01:58:55.146899 systemd[1]: Starting ensure-sysext.service... Mar 25 01:58:55.150701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 25 01:58:55.158661 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 25 01:58:55.175206 systemd[1]: Reload requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Mar 25 01:58:55.175224 systemd[1]: Reloading... Mar 25 01:58:55.205597 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 25 01:58:55.206503 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 25 01:58:55.207289 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 25 01:58:55.207613 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Mar 25 01:58:55.207671 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Mar 25 01:58:55.214222 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 01:58:55.214234 systemd-tmpfiles[1258]: Skipping /boot Mar 25 01:58:55.231404 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Mar 25 01:58:55.231418 systemd-tmpfiles[1258]: Skipping /boot Mar 25 01:58:55.268072 systemd-udevd[1259]: Using default interface naming scheme 'v255'. Mar 25 01:58:55.269705 ldconfig[1140]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 25 01:58:55.280444 zram_generator::config[1288]: No configuration found. Mar 25 01:58:55.439735 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1322) Mar 25 01:58:55.463195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:58:55.476465 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Mar 25 01:58:55.516626 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 25 01:58:55.519438 kernel: ACPI: button: Power Button [PWRF] Mar 25 01:58:55.577201 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 25 01:58:55.585244 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 25 01:58:55.585499 systemd[1]: Reloading finished in 408 ms. Mar 25 01:58:55.592298 kernel: mousedev: PS/2 mouse device common for all mice Mar 25 01:58:55.593555 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 25 01:58:55.594489 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 25 01:58:55.602447 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Mar 25 01:58:55.602511 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Mar 25 01:58:55.602703 kernel: Console: switching to colour dummy device 80x25 Mar 25 01:58:55.603494 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 25 01:58:55.603531 kernel: [drm] features: -context_init Mar 25 01:58:55.607476 kernel: [drm] number of scanouts: 1 Mar 25 01:58:55.605875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 25 01:58:55.612441 kernel: [drm] number of cap sets: 0 Mar 25 01:58:55.619485 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Mar 25 01:58:55.621444 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Mar 25 01:58:55.621481 kernel: Console: switching to colour frame buffer device 160x50 Mar 25 01:58:55.633480 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 25 01:58:55.644144 systemd[1]: Finished ensure-sysext.service. Mar 25 01:58:55.656072 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 25 01:58:55.670135 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 25 01:58:55.673543 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:58:55.674798 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:58:55.680551 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 25 01:58:55.680796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 25 01:58:55.682273 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 25 01:58:55.684955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 25 01:58:55.690509 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 25 01:58:55.694646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 25 01:58:55.698549 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 25 01:58:55.698757 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 25 01:58:55.701118 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 25 01:58:55.701205 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 25 01:58:55.702689 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 25 01:58:55.707516 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 25 01:58:55.713389 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 25 01:58:55.716531 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 25 01:58:55.719725 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 25 01:58:55.726124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 25 01:58:55.730449 lvm[1381]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Mar 25 01:58:55.728519 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 25 01:58:55.729280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 25 01:58:55.729489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 25 01:58:55.729790 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 25 01:58:55.729933 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 25 01:58:55.740313 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 25 01:58:55.740518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 25 01:58:55.741287 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 25 01:58:55.745291 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 25 01:58:55.745702 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 25 01:58:55.747924 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 25 01:58:55.759996 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 25 01:58:55.768857 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 25 01:58:55.775119 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:58:55.780950 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 25 01:58:55.782059 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 25 01:58:55.803948 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 25 01:58:55.805098 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 25 01:58:55.845375 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 25 01:58:55.849814 augenrules[1424]: No rules Mar 25 01:58:55.853203 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 25 01:58:55.854967 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 25 01:58:55.857834 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:58:55.858046 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:58:55.858776 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 25 01:58:55.878824 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 25 01:58:55.886388 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 25 01:58:55.891764 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 25 01:58:55.909540 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 25 01:58:55.958741 systemd-networkd[1389]: lo: Link UP Mar 25 01:58:55.958752 systemd-networkd[1389]: lo: Gained carrier Mar 25 01:58:55.960001 systemd-networkd[1389]: Enumeration completed Mar 25 01:58:55.960084 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Mar 25 01:58:55.966068 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:58:55.966080 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 25 01:58:55.966613 systemd-networkd[1389]: eth0: Link UP Mar 25 01:58:55.966622 systemd-networkd[1389]: eth0: Gained carrier Mar 25 01:58:55.966636 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 25 01:58:55.966916 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 25 01:58:55.973650 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 25 01:58:55.982475 systemd-networkd[1389]: eth0: DHCPv4 address 172.24.4.226/24, gateway 172.24.4.1 acquired from 172.24.4.1 Mar 25 01:58:55.996687 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 25 01:58:56.007671 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 25 01:58:56.008354 systemd[1]: Reached target time-set.target - System Time Set. Mar 25 01:58:56.014808 systemd-resolved[1395]: Positive Trust Anchors: Mar 25 01:58:56.015097 systemd-resolved[1395]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 25 01:58:56.015190 systemd-resolved[1395]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 25 01:58:56.019982 systemd-resolved[1395]: Using system hostname 'ci-4284-0-0-7-d93044f3e4.novalocal'. Mar 25 01:58:56.021573 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 25 01:58:56.022136 systemd[1]: Reached target network.target - Network. Mar 25 01:58:56.022592 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 25 01:58:56.023025 systemd[1]: Reached target sysinit.target - System Initialization. Mar 25 01:58:56.024611 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 25 01:58:56.026171 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 25 01:58:56.027778 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 25 01:58:56.029262 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 25 01:58:56.030887 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 25 01:58:56.032304 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 25 01:58:56.032339 systemd[1]: Reached target paths.target - Path Units. Mar 25 01:58:56.033686 systemd[1]: Reached target timers.target - Timer Units. 
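eth0 above is matched by the stock catch-all unit /usr/lib/systemd/network/zz-default.network rather than a machine-specific one, which is why networkd notes the potentially unpredictable interface name before taking its DHCPv4 lease. The unit's exact contents are not reproduced in the log; a catch-all DHCP unit of this kind typically looks roughly like:

    [Match]
    Name=*

    [Network]
    DHCP=yes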
Mar 25 01:58:56.037189 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 25 01:58:56.039363 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 25 01:58:56.045825 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 25 01:58:56.049687 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 25 01:58:56.051711 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 25 01:58:56.054443 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 25 01:58:56.060662 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 25 01:58:56.062048 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 25 01:58:56.064449 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 01:58:56.064977 systemd[1]: Reached target basic.target - Basic System. Mar 25 01:58:56.065544 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:58:56.065578 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 25 01:58:56.068624 systemd[1]: Starting containerd.service - containerd container runtime... Mar 25 01:58:56.077570 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 25 01:58:56.087569 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 25 01:58:56.091542 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 25 01:58:56.096586 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 25 01:58:56.097152 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 25 01:58:56.102129 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 25 01:58:56.110294 jq[1457]: false Mar 25 01:58:56.107323 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 25 01:58:56.110986 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 25 01:58:56.123620 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 25 01:58:56.131642 extend-filesystems[1458]: Found loop4 Mar 25 01:58:56.142118 extend-filesystems[1458]: Found loop5 Mar 25 01:58:56.142118 extend-filesystems[1458]: Found loop6 Mar 25 01:58:56.142118 extend-filesystems[1458]: Found loop7 Mar 25 01:58:56.142118 extend-filesystems[1458]: Found vda Mar 25 01:58:56.142118 extend-filesystems[1458]: Found vda1 Mar 25 01:58:56.142118 extend-filesystems[1458]: Found vda2 Mar 25 01:58:56.142118 extend-filesystems[1458]: Found vda3 Mar 25 01:58:56.142118 extend-filesystems[1458]: Found usr Mar 25 01:58:56.142118 extend-filesystems[1458]: Found vda4 Mar 25 01:58:56.142118 extend-filesystems[1458]: Found vda6 Mar 25 01:58:56.142118 extend-filesystems[1458]: Found vda7 Mar 25 01:58:56.142118 extend-filesystems[1458]: Found vda9 Mar 25 01:58:56.142118 extend-filesystems[1458]: Checking size of /dev/vda9 Mar 25 01:58:56.133549 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 25 01:58:56.211088 extend-filesystems[1458]: Resized partition /dev/vda9 Mar 25 01:58:56.133661 systemd-timesyncd[1396]: Contacted time server 74.208.117.38:123 (0.flatcar.pool.ntp.org). 
Mar 25 01:58:56.223979 extend-filesystems[1488]: resize2fs 1.47.2 (1-Jan-2025) Mar 25 01:58:56.133741 systemd-timesyncd[1396]: Initial clock synchronization to Tue 2025-03-25 01:58:56.260220 UTC. Mar 25 01:58:56.140287 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 25 01:58:56.227267 jq[1470]: true Mar 25 01:58:56.141759 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 25 01:58:56.227459 update_engine[1467]: I20250325 01:58:56.206416 1467 main.cc:92] Flatcar Update Engine starting Mar 25 01:58:56.236231 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 2014203 blocks Mar 25 01:58:56.143534 systemd[1]: Starting update-engine.service - Update Engine... Mar 25 01:58:56.230851 dbus-daemon[1454]: [system] SELinux support is enabled Mar 25 01:58:56.151637 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 25 01:58:56.170342 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 25 01:58:56.170681 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 25 01:58:56.173318 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 25 01:58:56.174413 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 25 01:58:56.218872 systemd[1]: motdgen.service: Deactivated successfully. Mar 25 01:58:56.219115 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 25 01:58:56.231652 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 25 01:58:56.238812 (ntainerd)[1487]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 25 01:58:56.242909 update_engine[1467]: I20250325 01:58:56.242559 1467 update_check_scheduler.cc:74] Next update check in 11m57s Mar 25 01:58:56.246664 systemd[1]: Started update-engine.service - Update Engine. Mar 25 01:58:56.261912 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 25 01:58:56.261969 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 25 01:58:56.263464 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 25 01:58:56.263487 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 25 01:58:56.274451 kernel: EXT4-fs (vda9): resized filesystem to 2014203 Mar 25 01:58:56.316015 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1330) Mar 25 01:58:56.316078 tar[1478]: linux-amd64/helm Mar 25 01:58:56.276671 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 25 01:58:56.316368 jq[1483]: true Mar 25 01:58:56.320501 extend-filesystems[1488]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 25 01:58:56.320501 extend-filesystems[1488]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 25 01:58:56.320501 extend-filesystems[1488]: The filesystem on /dev/vda9 is now 2014203 (4k) blocks long. 
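The resize output above is in 4 KiB blocks; converting to bytes shows the root filesystem growing from about 6.2 GiB to about 7.7 GiB to fill the partition (a quick check, not part of the log itself):

    echo $(( 1617920 * 4096 ))   # 6627000320 bytes, ~6.2 GiB before the resize
    echo $(( 2014203 * 4096 ))   # 8250175488 bytes, ~7.7 GiB after the resize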
Mar 25 01:58:56.318483 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 25 01:58:56.338652 extend-filesystems[1458]: Resized filesystem in /dev/vda9 Mar 25 01:58:56.318692 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 25 01:58:56.403734 systemd-logind[1465]: New seat seat0. Mar 25 01:58:56.413230 systemd-logind[1465]: Watching system buttons on /dev/input/event1 (Power Button) Mar 25 01:58:56.413255 systemd-logind[1465]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 25 01:58:56.414970 systemd[1]: Started systemd-logind.service - User Login Management. Mar 25 01:58:56.440412 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Mar 25 01:58:56.438526 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 25 01:58:56.452548 systemd[1]: Starting sshkeys.service... Mar 25 01:58:56.491343 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 25 01:58:56.497875 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 25 01:58:56.503459 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 25 01:58:56.735492 containerd[1487]: time="2025-03-25T01:58:56Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 25 01:58:56.738446 containerd[1487]: time="2025-03-25T01:58:56.738147690Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.762812139Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.973µs" Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.762850711Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.762871149Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.763038994Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.763063440Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.763091683Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.763151585Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.763167004Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.763407094Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.763447099Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.763460474Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 25 01:58:56.764300 containerd[1487]: time="2025-03-25T01:58:56.763470103Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 25 01:58:56.764622 containerd[1487]: time="2025-03-25T01:58:56.763549531Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 25 01:58:56.764622 containerd[1487]: time="2025-03-25T01:58:56.763760187Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:58:56.764622 containerd[1487]: time="2025-03-25T01:58:56.763790253Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 25 01:58:56.764622 containerd[1487]: time="2025-03-25T01:58:56.763803157Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 25 01:58:56.764622 containerd[1487]: time="2025-03-25T01:58:56.763833143Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 25 01:58:56.764622 containerd[1487]: time="2025-03-25T01:58:56.764065900Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 25 01:58:56.764622 containerd[1487]: time="2025-03-25T01:58:56.764125221Z" level=info msg="metadata content store policy set" policy=shared Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775509052Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775550710Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775568713Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775656899Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775680012Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775695241Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775710640Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775725067Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775741938Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775755464Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775767456Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775783627Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775925783Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 25 01:58:56.776184 containerd[1487]: time="2025-03-25T01:58:56.775951862Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.775969485Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.775981538Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.775992979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.776005332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.776018868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.776031772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.776045197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.776057360Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.776069302Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.776138492Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.776153951Z" level=info msg="Start snapshots syncer" Mar 25 01:58:56.776495 containerd[1487]: time="2025-03-25T01:58:56.776184158Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 25 01:58:56.778439 containerd[1487]: time="2025-03-25T01:58:56.777714137Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 25 01:58:56.778439 containerd[1487]: time="2025-03-25T01:58:56.777779440Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.777850603Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.777943167Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.777965829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.777978864Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.777989924Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.778004412Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.778016194Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.778027054Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.778049035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 01:58:56.778597 containerd[1487]: 
time="2025-03-25T01:58:56.778062060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.778072439Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.778100742Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.778115370Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:58:56.778597 containerd[1487]: time="2025-03-25T01:58:56.778126651Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:58:56.778892 containerd[1487]: time="2025-03-25T01:58:56.778137551Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:58:56.778892 containerd[1487]: time="2025-03-25T01:58:56.778147139Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 01:58:56.778892 containerd[1487]: time="2025-03-25T01:58:56.778164081Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 01:58:56.778892 containerd[1487]: time="2025-03-25T01:58:56.778176424Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 01:58:56.778892 containerd[1487]: time="2025-03-25T01:58:56.778195510Z" level=info msg="runtime interface created" Mar 25 01:58:56.778892 containerd[1487]: time="2025-03-25T01:58:56.778201662Z" level=info msg="created NRI interface" Mar 25 01:58:56.778892 containerd[1487]: time="2025-03-25T01:58:56.778210258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 01:58:56.778892 containerd[1487]: time="2025-03-25T01:58:56.778222200Z" level=info msg="Connect containerd service" Mar 25 01:58:56.778892 containerd[1487]: time="2025-03-25T01:58:56.778248880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 01:58:56.781032 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 01:58:56.781232 containerd[1487]: time="2025-03-25T01:58:56.779743032Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:58:56.809502 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 01:58:56.814507 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 25 01:58:56.833845 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 01:58:56.834049 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 25 01:58:56.842738 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 25 01:58:56.871539 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 01:58:56.880903 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 25 01:58:56.884850 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Mar 25 01:58:56.885646 systemd[1]: Reached target getty.target - Login Prompts. Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968243836Z" level=info msg="Start subscribing containerd event" Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968311803Z" level=info msg="Start recovering state" Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968397344Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968452868Z" level=info msg="Start event monitor" Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968471763Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968483104Z" level=info msg="Start streaming server" Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968497882Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968506428Z" level=info msg="runtime interface starting up..." Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968513030Z" level=info msg="starting plugins..." Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968529361Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:58:56.969106 containerd[1487]: time="2025-03-25T01:58:56.968476111Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 01:58:56.968848 systemd[1]: Started containerd.service - containerd container runtime. Mar 25 01:58:56.972639 containerd[1487]: time="2025-03-25T01:58:56.972597972Z" level=info msg="containerd successfully booted in 0.237815s" Mar 25 01:58:56.986955 tar[1478]: linux-amd64/LICENSE Mar 25 01:58:56.987027 tar[1478]: linux-amd64/README.md Mar 25 01:58:57.001884 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 01:58:57.080923 systemd-networkd[1389]: eth0: Gained IPv6LL Mar 25 01:58:57.085584 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 25 01:58:57.092271 systemd[1]: Reached target network-online.target - Network is Online. Mar 25 01:58:57.104096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:58:57.114095 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 25 01:58:57.181295 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 25 01:58:59.006826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:58:59.026046 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:59:00.356588 kubelet[1578]: E0325 01:59:00.356490 1578 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:59:00.361209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:59:00.361583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:59:00.362702 systemd[1]: kubelet.service: Consumed 2.183s CPU time, 244M memory peak. Mar 25 01:59:01.346079 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
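The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a node like this the file is normally generated by kubeadm during init/join, which is why systemd keeps scheduling restarts of the unit. Purely as a sketch, a hand-written placeholder that would get past the load step could be created as follows; the file contents are an assumption, not what kubeadm would produce:

```python
from pathlib import Path

# Hypothetical minimal KubeletConfiguration; normally kubeadm writes this file.
# cgroupDriver: systemd mirrors SystemdCgroup=true in the containerd CRI config
# dumped earlier in this log.
MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)   # requires root
path.write_text(MINIMAL_CONFIG)
print(f"wrote {path} ({len(MINIMAL_CONFIG)} bytes)")
```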
Mar 25 01:59:01.351001 systemd[1]: Started sshd@0-172.24.4.226:22-172.24.4.1:48934.service - OpenSSH per-connection server daemon (172.24.4.1:48934). Mar 25 01:59:01.975075 login[1549]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Mar 25 01:59:01.980012 login[1550]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 25 01:59:02.010928 systemd-logind[1465]: New session 1 of user core. Mar 25 01:59:02.014496 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:59:02.017351 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 01:59:02.053871 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 25 01:59:02.059359 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 25 01:59:02.078675 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:59:02.083419 systemd-logind[1465]: New session c1 of user core. Mar 25 01:59:02.318021 systemd[1597]: Queued start job for default target default.target. Mar 25 01:59:02.324141 systemd[1597]: Created slice app.slice - User Application Slice. Mar 25 01:59:02.324195 systemd[1597]: Reached target paths.target - Paths. Mar 25 01:59:02.324269 systemd[1597]: Reached target timers.target - Timers. Mar 25 01:59:02.326480 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:59:02.358497 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:59:02.358993 systemd[1597]: Reached target sockets.target - Sockets. Mar 25 01:59:02.359290 systemd[1597]: Reached target basic.target - Basic System. Mar 25 01:59:02.359640 systemd[1597]: Reached target default.target - Main User Target. Mar 25 01:59:02.359706 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:59:02.360131 systemd[1597]: Startup finished in 266ms. Mar 25 01:59:02.373959 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 01:59:02.770723 sshd[1589]: Accepted publickey for core from 172.24.4.1 port 48934 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 01:59:02.773409 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:59:02.786676 systemd-logind[1465]: New session 3 of user core. Mar 25 01:59:02.796880 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 01:59:02.975938 login[1549]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 25 01:59:02.986403 systemd-logind[1465]: New session 2 of user core. Mar 25 01:59:02.999880 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 25 01:59:03.152763 coreos-metadata[1453]: Mar 25 01:59:03.152 WARN failed to locate config-drive, using the metadata service API instead Mar 25 01:59:03.203077 coreos-metadata[1453]: Mar 25 01:59:03.202 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 25 01:59:03.399965 systemd[1]: Started sshd@1-172.24.4.226:22-172.24.4.1:48944.service - OpenSSH per-connection server daemon (172.24.4.1:48944). 
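Having found no config drive, coreos-metadata falls back to the OpenStack metadata service and, in the lines that follow, fetches the instance's hostname, instance-id, instance-type and addresses from 169.254.169.254. A small sketch of the same queries using only the standard library; it only works when run from inside the instance:

```python
import urllib.request

# The EC2-compatible endpoints the agent fetches below; reachable only from
# within the instance itself.
BASE = "http://169.254.169.254/latest/meta-data"
for item in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
    with urllib.request.urlopen(f"{BASE}/{item}", timeout=2) as resp:
        print(f"{item:13s} {resp.read().decode().strip()}")
```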
Mar 25 01:59:03.504288 coreos-metadata[1453]: Mar 25 01:59:03.504 INFO Fetch successful Mar 25 01:59:03.504288 coreos-metadata[1453]: Mar 25 01:59:03.504 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 25 01:59:03.520420 coreos-metadata[1453]: Mar 25 01:59:03.520 INFO Fetch successful Mar 25 01:59:03.520420 coreos-metadata[1453]: Mar 25 01:59:03.520 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 25 01:59:03.536884 coreos-metadata[1453]: Mar 25 01:59:03.536 INFO Fetch successful Mar 25 01:59:03.537258 coreos-metadata[1453]: Mar 25 01:59:03.537 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 25 01:59:03.550142 coreos-metadata[1453]: Mar 25 01:59:03.550 INFO Fetch successful Mar 25 01:59:03.550142 coreos-metadata[1453]: Mar 25 01:59:03.550 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 25 01:59:03.562082 coreos-metadata[1453]: Mar 25 01:59:03.562 INFO Fetch successful Mar 25 01:59:03.562082 coreos-metadata[1453]: Mar 25 01:59:03.562 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 25 01:59:03.573722 coreos-metadata[1453]: Mar 25 01:59:03.573 INFO Fetch successful Mar 25 01:59:03.615509 coreos-metadata[1519]: Mar 25 01:59:03.614 WARN failed to locate config-drive, using the metadata service API instead Mar 25 01:59:03.632404 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 25 01:59:03.634821 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 25 01:59:03.663692 coreos-metadata[1519]: Mar 25 01:59:03.663 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 25 01:59:03.678892 coreos-metadata[1519]: Mar 25 01:59:03.678 INFO Fetch successful Mar 25 01:59:03.678892 coreos-metadata[1519]: Mar 25 01:59:03.678 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 25 01:59:03.691995 coreos-metadata[1519]: Mar 25 01:59:03.691 INFO Fetch successful Mar 25 01:59:03.697973 unknown[1519]: wrote ssh authorized keys file for user: core Mar 25 01:59:03.741921 update-ssh-keys[1640]: Updated "/home/core/.ssh/authorized_keys" Mar 25 01:59:03.743078 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 25 01:59:03.746287 systemd[1]: Finished sshkeys.service. Mar 25 01:59:03.752287 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:59:03.752642 systemd[1]: Startup finished in 1.141s (kernel) + 17.060s (initrd) + 11.027s (userspace) = 29.230s. Mar 25 01:59:04.602972 sshd[1630]: Accepted publickey for core from 172.24.4.1 port 48944 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 01:59:04.605718 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:59:04.617873 systemd-logind[1465]: New session 4 of user core. Mar 25 01:59:04.634781 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:59:05.250247 sshd[1643]: Connection closed by 172.24.4.1 port 48944 Mar 25 01:59:05.248944 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Mar 25 01:59:05.265689 systemd[1]: sshd@1-172.24.4.226:22-172.24.4.1:48944.service: Deactivated successfully. Mar 25 01:59:05.269207 systemd[1]: session-4.scope: Deactivated successfully. Mar 25 01:59:05.272950 systemd-logind[1465]: Session 4 logged out. 
Waiting for processes to exit. Mar 25 01:59:05.275977 systemd[1]: Started sshd@2-172.24.4.226:22-172.24.4.1:45780.service - OpenSSH per-connection server daemon (172.24.4.1:45780). Mar 25 01:59:05.279193 systemd-logind[1465]: Removed session 4. Mar 25 01:59:06.475293 sshd[1648]: Accepted publickey for core from 172.24.4.1 port 45780 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 01:59:06.477882 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:59:06.488724 systemd-logind[1465]: New session 5 of user core. Mar 25 01:59:06.497727 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 25 01:59:07.268231 sshd[1651]: Connection closed by 172.24.4.1 port 45780 Mar 25 01:59:07.268935 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Mar 25 01:59:07.282296 systemd[1]: sshd@2-172.24.4.226:22-172.24.4.1:45780.service: Deactivated successfully. Mar 25 01:59:07.285729 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 01:59:07.287840 systemd-logind[1465]: Session 5 logged out. Waiting for processes to exit. Mar 25 01:59:07.293031 systemd[1]: Started sshd@3-172.24.4.226:22-172.24.4.1:45788.service - OpenSSH per-connection server daemon (172.24.4.1:45788). Mar 25 01:59:07.296392 systemd-logind[1465]: Removed session 5. Mar 25 01:59:08.491936 sshd[1656]: Accepted publickey for core from 172.24.4.1 port 45788 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 01:59:08.494517 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:59:08.506219 systemd-logind[1465]: New session 6 of user core. Mar 25 01:59:08.518757 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 25 01:59:09.278487 sshd[1659]: Connection closed by 172.24.4.1 port 45788 Mar 25 01:59:09.279678 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Mar 25 01:59:09.296714 systemd[1]: sshd@3-172.24.4.226:22-172.24.4.1:45788.service: Deactivated successfully. Mar 25 01:59:09.300469 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 01:59:09.304710 systemd-logind[1465]: Session 6 logged out. Waiting for processes to exit. Mar 25 01:59:09.307288 systemd[1]: Started sshd@4-172.24.4.226:22-172.24.4.1:45798.service - OpenSSH per-connection server daemon (172.24.4.1:45798). Mar 25 01:59:09.309375 systemd-logind[1465]: Removed session 6. Mar 25 01:59:10.411700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 25 01:59:10.415178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:59:10.501005 sshd[1664]: Accepted publickey for core from 172.24.4.1 port 45798 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 01:59:10.505611 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:59:10.522549 systemd-logind[1465]: New session 7 of user core. Mar 25 01:59:10.529933 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 01:59:10.592721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:59:10.606769 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:59:10.830699 kubelet[1676]: E0325 01:59:10.830494 1676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:59:10.838685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:59:10.839259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:59:10.840073 systemd[1]: kubelet.service: Consumed 243ms CPU time, 98.1M memory peak. Mar 25 01:59:10.978230 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 01:59:10.979801 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:59:11.001276 sudo[1685]: pam_unix(sudo:session): session closed for user root Mar 25 01:59:11.223497 sshd[1670]: Connection closed by 172.24.4.1 port 45798 Mar 25 01:59:11.223995 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Mar 25 01:59:11.243084 systemd[1]: sshd@4-172.24.4.226:22-172.24.4.1:45798.service: Deactivated successfully. Mar 25 01:59:11.246753 systemd[1]: session-7.scope: Deactivated successfully. Mar 25 01:59:11.249093 systemd-logind[1465]: Session 7 logged out. Waiting for processes to exit. Mar 25 01:59:11.254479 systemd[1]: Started sshd@5-172.24.4.226:22-172.24.4.1:45808.service - OpenSSH per-connection server daemon (172.24.4.1:45808). Mar 25 01:59:11.259292 systemd-logind[1465]: Removed session 7. Mar 25 01:59:12.749665 sshd[1690]: Accepted publickey for core from 172.24.4.1 port 45808 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 01:59:12.752308 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:59:12.763969 systemd-logind[1465]: New session 8 of user core. Mar 25 01:59:12.772730 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 25 01:59:13.215687 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 01:59:13.216300 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:59:13.224096 sudo[1695]: pam_unix(sudo:session): session closed for user root Mar 25 01:59:13.241196 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 01:59:13.242999 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:59:13.266722 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:59:13.341015 augenrules[1717]: No rules Mar 25 01:59:13.342571 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:59:13.343000 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:59:13.345509 sudo[1694]: pam_unix(sudo:session): session closed for user root Mar 25 01:59:13.540564 sshd[1693]: Connection closed by 172.24.4.1 port 45808 Mar 25 01:59:13.541909 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Mar 25 01:59:13.558558 systemd[1]: sshd@5-172.24.4.226:22-172.24.4.1:45808.service: Deactivated successfully. 
Mar 25 01:59:13.561669 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 01:59:13.566723 systemd-logind[1465]: Session 8 logged out. Waiting for processes to exit. Mar 25 01:59:13.568069 systemd[1]: Started sshd@6-172.24.4.226:22-172.24.4.1:43112.service - OpenSSH per-connection server daemon (172.24.4.1:43112). Mar 25 01:59:13.571882 systemd-logind[1465]: Removed session 8. Mar 25 01:59:14.762131 sshd[1725]: Accepted publickey for core from 172.24.4.1 port 43112 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 01:59:14.764782 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:59:14.775101 systemd-logind[1465]: New session 9 of user core. Mar 25 01:59:14.787744 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 25 01:59:15.275763 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 01:59:15.277084 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:59:15.982916 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 25 01:59:15.998115 (dockerd)[1747]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 01:59:16.503668 dockerd[1747]: time="2025-03-25T01:59:16.503574187Z" level=info msg="Starting up" Mar 25 01:59:16.506004 dockerd[1747]: time="2025-03-25T01:59:16.505942103Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 01:59:16.539773 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport683848914-merged.mount: Deactivated successfully. Mar 25 01:59:16.580904 systemd[1]: var-lib-docker-metacopy\x2dcheck3089250073-merged.mount: Deactivated successfully. Mar 25 01:59:16.624724 dockerd[1747]: time="2025-03-25T01:59:16.624674687Z" level=info msg="Loading containers: start." Mar 25 01:59:16.817514 kernel: Initializing XFRM netlink socket Mar 25 01:59:16.933647 systemd-networkd[1389]: docker0: Link UP Mar 25 01:59:16.998897 dockerd[1747]: time="2025-03-25T01:59:16.998833953Z" level=info msg="Loading containers: done." Mar 25 01:59:17.025666 dockerd[1747]: time="2025-03-25T01:59:17.025245099Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 01:59:17.026310 dockerd[1747]: time="2025-03-25T01:59:17.025622067Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 01:59:17.026670 dockerd[1747]: time="2025-03-25T01:59:17.026591675Z" level=info msg="Daemon has completed initialization" Mar 25 01:59:17.092907 dockerd[1747]: time="2025-03-25T01:59:17.092708730Z" level=info msg="API listen on /run/docker.sock" Mar 25 01:59:17.093282 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 01:59:17.537258 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1097061153-merged.mount: Deactivated successfully. Mar 25 01:59:18.861157 containerd[1487]: time="2025-03-25T01:59:18.861005787Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 25 01:59:19.671419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401007514.mount: Deactivated successfully. Mar 25 01:59:20.911851 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
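The Docker daemon finishes initialization and reports that its API is listening on /run/docker.sock. A minimal stdlib-only sketch that talks to that socket and asks the engine for its version, assuming the caller is allowed to read the socket (root or the docker group):

```python
import json
import socket

# One HTTP/1.0 request over the Unix socket the daemon reported above; the
# server closes the connection after responding, so we read until EOF.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/run/docker.sock")
sock.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
raw = b""
while chunk := sock.recv(4096):
    raw += chunk
sock.close()

_, _, body = raw.partition(b"\r\n\r\n")
info = json.loads(body)
print("engine", info["Version"], "api", info["ApiVersion"])
```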
Mar 25 01:59:20.914143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:59:21.032545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:59:21.041690 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:59:21.270650 kubelet[2016]: E0325 01:59:21.270456 2016 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:59:21.273033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:59:21.273170 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:59:21.273548 systemd[1]: kubelet.service: Consumed 138ms CPU time, 94.3M memory peak. Mar 25 01:59:21.786285 containerd[1487]: time="2025-03-25T01:59:21.786151540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:21.787457 containerd[1487]: time="2025-03-25T01:59:21.787373624Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674581" Mar 25 01:59:21.788835 containerd[1487]: time="2025-03-25T01:59:21.788790770Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:21.791760 containerd[1487]: time="2025-03-25T01:59:21.791720672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:21.792919 containerd[1487]: time="2025-03-25T01:59:21.792781872Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 2.931687477s" Mar 25 01:59:21.792919 containerd[1487]: time="2025-03-25T01:59:21.792813932Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 25 01:59:21.812040 containerd[1487]: time="2025-03-25T01:59:21.812009338Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 25 01:59:24.520709 containerd[1487]: time="2025-03-25T01:59:24.520557253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:24.523190 containerd[1487]: time="2025-03-25T01:59:24.523072871Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619780" Mar 25 01:59:24.524971 containerd[1487]: time="2025-03-25T01:59:24.524839291Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 
01:59:24.533754 containerd[1487]: time="2025-03-25T01:59:24.533669081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:24.542487 containerd[1487]: time="2025-03-25T01:59:24.540744658Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 2.72855854s" Mar 25 01:59:24.542487 containerd[1487]: time="2025-03-25T01:59:24.540835207Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 25 01:59:24.588962 containerd[1487]: time="2025-03-25T01:59:24.588855399Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 25 01:59:26.154002 containerd[1487]: time="2025-03-25T01:59:26.153946148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:26.155194 containerd[1487]: time="2025-03-25T01:59:26.155138837Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903317" Mar 25 01:59:26.156823 containerd[1487]: time="2025-03-25T01:59:26.156777067Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:26.159806 containerd[1487]: time="2025-03-25T01:59:26.159766947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:26.160949 containerd[1487]: time="2025-03-25T01:59:26.160730411Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.571806296s" Mar 25 01:59:26.160949 containerd[1487]: time="2025-03-25T01:59:26.160778096Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 25 01:59:26.179571 containerd[1487]: time="2025-03-25T01:59:26.179544389Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 25 01:59:27.547152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4056253983.mount: Deactivated successfully. 
Mar 25 01:59:28.016523 containerd[1487]: time="2025-03-25T01:59:28.016466734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:28.018171 containerd[1487]: time="2025-03-25T01:59:28.017994715Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185380" Mar 25 01:59:28.019509 containerd[1487]: time="2025-03-25T01:59:28.019479493Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:28.022010 containerd[1487]: time="2025-03-25T01:59:28.021953926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:28.023136 containerd[1487]: time="2025-03-25T01:59:28.022477260Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.842700839s" Mar 25 01:59:28.023136 containerd[1487]: time="2025-03-25T01:59:28.022507924Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 25 01:59:28.041022 containerd[1487]: time="2025-03-25T01:59:28.040962560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 25 01:59:28.669064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442877049.mount: Deactivated successfully. 
Mar 25 01:59:29.954639 containerd[1487]: time="2025-03-25T01:59:29.954559269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:29.957531 containerd[1487]: time="2025-03-25T01:59:29.957290863Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Mar 25 01:59:29.960363 containerd[1487]: time="2025-03-25T01:59:29.959523719Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:29.975991 containerd[1487]: time="2025-03-25T01:59:29.975912698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:29.979128 containerd[1487]: time="2025-03-25T01:59:29.979051135Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.937874251s" Mar 25 01:59:29.979333 containerd[1487]: time="2025-03-25T01:59:29.979291676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 25 01:59:30.022026 containerd[1487]: time="2025-03-25T01:59:30.021880902Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 25 01:59:30.587548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount561872502.mount: Deactivated successfully. 
Mar 25 01:59:30.598521 containerd[1487]: time="2025-03-25T01:59:30.598285887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:30.601071 containerd[1487]: time="2025-03-25T01:59:30.600834284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Mar 25 01:59:30.602774 containerd[1487]: time="2025-03-25T01:59:30.602700712Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:30.607661 containerd[1487]: time="2025-03-25T01:59:30.607556024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:30.609915 containerd[1487]: time="2025-03-25T01:59:30.609335235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 587.236029ms" Mar 25 01:59:30.609915 containerd[1487]: time="2025-03-25T01:59:30.609403254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 25 01:59:30.649484 containerd[1487]: time="2025-03-25T01:59:30.649378000Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 25 01:59:31.276055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3476990644.mount: Deactivated successfully. Mar 25 01:59:31.411520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 25 01:59:31.418106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:59:31.551545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:59:31.561776 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:59:31.722666 kubelet[2150]: E0325 01:59:31.722607 2150 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:59:31.724196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:59:31.724342 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:59:31.724705 systemd[1]: kubelet.service: Consumed 144ms CPU time, 99.7M memory peak. 
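Each completed pull above ends with a "Pulled image ... size ... in ..." record. A small sketch that extracts the image reference, byte count, and elapsed time from such a line; the SAMPLE string is an abbreviated copy of the pause-image record above, with the digests shortened for readability:

```python
import re

# Abbreviated copy of the pause pull record above; digests are shortened here.
SAMPLE = ('Pulled image "registry.k8s.io/pause:3.9" with image id "sha256:e6f18168...", '
          'repo tag "registry.k8s.io/pause:3.9", repo digest "registry.k8s.io/pause@sha256:7031c1b2...", '
          'size "321520" in 587.236029ms')

PATTERN = re.compile(r'Pulled image "(?P<ref>[^"]+)".*size "(?P<size>\d+)" in (?P<elapsed>[\d.]+m?s)')
match = PATTERN.search(SAMPLE)
print(match.group("ref"), int(match.group("size")), "bytes in", match.group("elapsed"))
```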
Mar 25 01:59:34.580787 containerd[1487]: time="2025-03-25T01:59:34.580670405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:34.586138 containerd[1487]: time="2025-03-25T01:59:34.585911674Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Mar 25 01:59:34.591650 containerd[1487]: time="2025-03-25T01:59:34.591402082Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:34.686499 containerd[1487]: time="2025-03-25T01:59:34.684385008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:59:34.688273 containerd[1487]: time="2025-03-25T01:59:34.688181793Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.038750518s" Mar 25 01:59:34.688396 containerd[1487]: time="2025-03-25T01:59:34.688271283Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 25 01:59:39.362275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:59:39.362492 systemd[1]: kubelet.service: Consumed 144ms CPU time, 99.7M memory peak. Mar 25 01:59:39.366116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:59:39.385573 systemd[1]: Reload requested from client PID 2274 ('systemctl') (unit session-9.scope)... Mar 25 01:59:39.385582 systemd[1]: Reloading... Mar 25 01:59:39.486450 zram_generator::config[2320]: No configuration found. Mar 25 01:59:39.624443 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:59:39.741210 systemd[1]: Reloading finished in 354 ms. Mar 25 01:59:39.793836 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 25 01:59:39.793926 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 25 01:59:39.794241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:59:39.796245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:59:39.964318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:59:39.971887 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:59:40.212626 kubelet[2384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:59:40.213335 kubelet[2384]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 25 01:59:40.213531 kubelet[2384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:59:40.214089 kubelet[2384]: I0325 01:59:40.213825 2384 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:59:40.986685 kubelet[2384]: I0325 01:59:40.986654 2384 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 25 01:59:40.986840 kubelet[2384]: I0325 01:59:40.986829 2384 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:59:40.987127 kubelet[2384]: I0325 01:59:40.987112 2384 server.go:927] "Client rotation is on, will bootstrap in background" Mar 25 01:59:41.264249 kubelet[2384]: I0325 01:59:41.263575 2384 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:59:41.266502 kubelet[2384]: E0325 01:59:41.266029 2384 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.226:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:41.289848 kubelet[2384]: I0325 01:59:41.289799 2384 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 25 01:59:41.290282 kubelet[2384]: I0325 01:59:41.290216 2384 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:59:41.290740 kubelet[2384]: I0325 01:59:41.290276 2384 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-7-d93044f3e4.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 25 01:59:41.290740 kubelet[2384]: I0325 01:59:41.290737 2384 topology_manager.go:138] 
"Creating topology manager with none policy" Mar 25 01:59:41.291091 kubelet[2384]: I0325 01:59:41.290785 2384 container_manager_linux.go:301] "Creating device plugin manager" Mar 25 01:59:41.291091 kubelet[2384]: I0325 01:59:41.291004 2384 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:59:41.293392 kubelet[2384]: I0325 01:59:41.292963 2384 kubelet.go:400] "Attempting to sync node with API server" Mar 25 01:59:41.293392 kubelet[2384]: I0325 01:59:41.293006 2384 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:59:41.293392 kubelet[2384]: I0325 01:59:41.293051 2384 kubelet.go:312] "Adding apiserver pod source" Mar 25 01:59:41.293392 kubelet[2384]: I0325 01:59:41.293083 2384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:59:41.305314 kubelet[2384]: W0325 01:59:41.304676 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.226:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-7-d93044f3e4.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:41.305314 kubelet[2384]: E0325 01:59:41.304900 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.226:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-7-d93044f3e4.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:41.310351 kubelet[2384]: W0325 01:59:41.309588 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.226:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:41.310351 kubelet[2384]: E0325 01:59:41.309692 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.226:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:41.311039 kubelet[2384]: I0325 01:59:41.311000 2384 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:59:41.315172 kubelet[2384]: I0325 01:59:41.315123 2384 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:59:41.315514 kubelet[2384]: W0325 01:59:41.315486 2384 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 25 01:59:41.320291 kubelet[2384]: I0325 01:59:41.320257 2384 server.go:1264] "Started kubelet" Mar 25 01:59:41.326187 kubelet[2384]: I0325 01:59:41.326150 2384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:59:41.338666 kubelet[2384]: I0325 01:59:41.338607 2384 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:59:41.340492 kubelet[2384]: I0325 01:59:41.339136 2384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:59:41.340492 kubelet[2384]: I0325 01:59:41.339712 2384 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:59:41.343167 kubelet[2384]: I0325 01:59:41.343127 2384 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 25 01:59:41.347529 kubelet[2384]: E0325 01:59:41.347244 2384 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.24.4.226:6443/api/v1/namespaces/default/events\": dial tcp 172.24.4.226:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-7-d93044f3e4.novalocal.182fe931e4fe363f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-7-d93044f3e4.novalocal,UID:ci-4284-0-0-7-d93044f3e4.novalocal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-7-d93044f3e4.novalocal,},FirstTimestamp:2025-03-25 01:59:41.320205887 +0000 UTC m=+1.344432266,LastTimestamp:2025-03-25 01:59:41.320205887 +0000 UTC m=+1.344432266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-7-d93044f3e4.novalocal,}" Mar 25 01:59:41.348121 kubelet[2384]: E0325 01:59:41.348044 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-7-d93044f3e4.novalocal?timeout=10s\": dial tcp 172.24.4.226:6443: connect: connection refused" interval="200ms" Mar 25 01:59:41.348227 kubelet[2384]: I0325 01:59:41.348149 2384 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:59:41.349554 kubelet[2384]: I0325 01:59:41.349522 2384 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:59:41.351776 kubelet[2384]: I0325 01:59:41.351721 2384 server.go:455] "Adding debug handlers to kubelet server" Mar 25 01:59:41.354593 kubelet[2384]: I0325 01:59:41.354385 2384 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:59:41.354593 kubelet[2384]: I0325 01:59:41.354549 2384 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:59:41.356792 kubelet[2384]: I0325 01:59:41.356755 2384 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:59:41.362665 kubelet[2384]: W0325 01:59:41.360952 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.226:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:41.362665 kubelet[2384]: E0325 01:59:41.361009 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.226:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:41.362665 kubelet[2384]: E0325 01:59:41.362662 2384 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:59:41.369783 kubelet[2384]: I0325 01:59:41.369747 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:59:41.372382 kubelet[2384]: I0325 01:59:41.372339 2384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 25 01:59:41.372382 kubelet[2384]: I0325 01:59:41.372385 2384 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:59:41.372513 kubelet[2384]: I0325 01:59:41.372401 2384 kubelet.go:2337] "Starting kubelet main sync loop" Mar 25 01:59:41.372513 kubelet[2384]: E0325 01:59:41.372446 2384 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:59:41.373181 kubelet[2384]: W0325 01:59:41.373137 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.226:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:41.373232 kubelet[2384]: E0325 01:59:41.373184 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.226:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:41.373266 kubelet[2384]: I0325 01:59:41.373246 2384 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:59:41.373266 kubelet[2384]: I0325 01:59:41.373255 2384 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:59:41.373312 kubelet[2384]: I0325 01:59:41.373269 2384 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:59:41.380808 kubelet[2384]: I0325 01:59:41.380784 2384 policy_none.go:49] "None policy: Start" Mar 25 01:59:41.381408 kubelet[2384]: I0325 01:59:41.381381 2384 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:59:41.381408 kubelet[2384]: I0325 01:59:41.381401 2384 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:59:41.389526 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 25 01:59:41.403258 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 25 01:59:41.406318 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
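
The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created above are the parent cgroups for every pod on this node; which parent a pod lands under is decided by its QoS class. The control-plane static pods admitted below end up under the burstable slice, and kube-proxy later under besteffort. A sketch of the standard classification rule (the container specs in the example are illustrative, not taken from the log):

    # Sketch: simplified Kubernetes QoS classification, which decides whether a
    # pod's cgroup is created under kubepods.slice (Guaranteed),
    # kubepods-burstable.slice or kubepods-besteffort.slice.
    def qos_class(containers):
        reqs = [c.get("resources", {}).get("requests", {}) for c in containers]
        lims = [c.get("resources", {}).get("limits", {}) for c in containers]
        if all(not req and not lim for req, lim in zip(reqs, lims)):
            return "BestEffort"
        guaranteed = all(
            lim and {"cpu", "memory"} <= set(lim)
            # requests default to the limit when unset, so compare with that default
            and all(req.get(k, v) == v for k, v in lim.items())
            for req, lim in zip(reqs, lims)
        )
        return "Guaranteed" if guaranteed else "Burstable"

    # Illustrative specs (not from the log): static control-plane pods typically
    # set CPU requests only, which makes them Burstable.
    print(qos_class([{"resources": {"requests": {"cpu": "250m"}}}]))   # Burstable
    print(qos_class([{}]))                                             # BestEffort
    print(qos_class([{"resources": {"requests": {"cpu": "1", "memory": "1Gi"},
                                    "limits":   {"cpu": "1", "memory": "1Gi"}}}]))  # Guaranteed
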
Mar 25 01:59:41.414011 kubelet[2384]: I0325 01:59:41.413980 2384 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:59:41.414175 kubelet[2384]: I0325 01:59:41.414133 2384 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:59:41.414262 kubelet[2384]: I0325 01:59:41.414242 2384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:59:41.415980 kubelet[2384]: E0325 01:59:41.415943 2384 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-7-d93044f3e4.novalocal\" not found" Mar 25 01:59:41.447331 kubelet[2384]: I0325 01:59:41.447259 2384 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.448054 kubelet[2384]: E0325 01:59:41.447973 2384 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.226:6443/api/v1/nodes\": dial tcp 172.24.4.226:6443: connect: connection refused" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.472762 kubelet[2384]: I0325 01:59:41.472588 2384 topology_manager.go:215] "Topology Admit Handler" podUID="70d5e75e2846328c9d372a5187f62dd2" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.476222 kubelet[2384]: I0325 01:59:41.476143 2384 topology_manager.go:215] "Topology Admit Handler" podUID="42a5accdbc6566d2c8dd73652fc31d81" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.480969 kubelet[2384]: I0325 01:59:41.480830 2384 topology_manager.go:215] "Topology Admit Handler" podUID="b4c953fe277545c0a8ad45406641927d" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.495337 systemd[1]: Created slice kubepods-burstable-pod70d5e75e2846328c9d372a5187f62dd2.slice - libcontainer container kubepods-burstable-pod70d5e75e2846328c9d372a5187f62dd2.slice. Mar 25 01:59:41.522690 update_engine[1467]: I20250325 01:59:41.522509 1467 update_attempter.cc:509] Updating boot flags... Mar 25 01:59:41.525302 systemd[1]: Created slice kubepods-burstable-pod42a5accdbc6566d2c8dd73652fc31d81.slice - libcontainer container kubepods-burstable-pod42a5accdbc6566d2c8dd73652fc31d81.slice. Mar 25 01:59:41.543086 systemd[1]: Created slice kubepods-burstable-podb4c953fe277545c0a8ad45406641927d.slice - libcontainer container kubepods-burstable-podb4c953fe277545c0a8ad45406641927d.slice. 
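
Under those QoS parents, each admitted pod gets its own transient slice named from its UID, as the three kubepods-burstable-pod….slice units above show; when a UID contains dashes, they are escaped to underscores (see kubepods-burstable-pod74ed71b2_ca19_400a_9e9a_6e2eb015a91a.slice for the cilium pod further down). A sketch of that mapping:

    # Sketch: the slice name the kubelet's systemd cgroup driver uses for a pod,
    # matching the "Created slice kubepods-*" entries in this log.
    def pod_slice(pod_uid: str, qos: str) -> str:
        qos_part = "" if qos == "Guaranteed" else f"-{qos.lower()}"
        escaped_uid = pod_uid.replace("-", "_")   # systemd-safe escaping of the UID
        return f"kubepods{qos_part}-pod{escaped_uid}.slice"

    print(pod_slice("70d5e75e2846328c9d372a5187f62dd2", "Burstable"))
    # -> kubepods-burstable-pod70d5e75e2846328c9d372a5187f62dd2.slice
    print(pod_slice("74ed71b2-ca19-400a-9e9a-6e2eb015a91a", "Burstable"))
    # -> kubepods-burstable-pod74ed71b2_ca19_400a_9e9a_6e2eb015a91a.slice
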
Mar 25 01:59:41.550665 kubelet[2384]: E0325 01:59:41.550378 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-7-d93044f3e4.novalocal?timeout=10s\": dial tcp 172.24.4.226:6443: connect: connection refused" interval="400ms" Mar 25 01:59:41.551288 kubelet[2384]: I0325 01:59:41.551121 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70d5e75e2846328c9d372a5187f62dd2-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"70d5e75e2846328c9d372a5187f62dd2\") " pod="kube-system/kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.551580 kubelet[2384]: I0325 01:59:41.551385 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70d5e75e2846328c9d372a5187f62dd2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"70d5e75e2846328c9d372a5187f62dd2\") " pod="kube-system/kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.552006 kubelet[2384]: I0325 01:59:41.551823 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42a5accdbc6566d2c8dd73652fc31d81-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"42a5accdbc6566d2c8dd73652fc31d81\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.552006 kubelet[2384]: I0325 01:59:41.551953 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4c953fe277545c0a8ad45406641927d-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"b4c953fe277545c0a8ad45406641927d\") " pod="kube-system/kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.552600 kubelet[2384]: I0325 01:59:41.552341 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70d5e75e2846328c9d372a5187f62dd2-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"70d5e75e2846328c9d372a5187f62dd2\") " pod="kube-system/kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.553008 kubelet[2384]: I0325 01:59:41.552558 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42a5accdbc6566d2c8dd73652fc31d81-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"42a5accdbc6566d2c8dd73652fc31d81\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.553008 kubelet[2384]: I0325 01:59:41.552930 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42a5accdbc6566d2c8dd73652fc31d81-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"42a5accdbc6566d2c8dd73652fc31d81\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.553498 kubelet[2384]: I0325 01:59:41.553244 2384 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42a5accdbc6566d2c8dd73652fc31d81-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"42a5accdbc6566d2c8dd73652fc31d81\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.553813 kubelet[2384]: I0325 01:59:41.553661 2384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42a5accdbc6566d2c8dd73652fc31d81-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"42a5accdbc6566d2c8dd73652fc31d81\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.584634 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2423) Mar 25 01:59:41.653182 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2424) Mar 25 01:59:41.660579 kubelet[2384]: I0325 01:59:41.659630 2384 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.663670 kubelet[2384]: E0325 01:59:41.663646 2384 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.226:6443/api/v1/nodes\": dial tcp 172.24.4.226:6443: connect: connection refused" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:41.817660 containerd[1487]: time="2025-03-25T01:59:41.817479401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal,Uid:70d5e75e2846328c9d372a5187f62dd2,Namespace:kube-system,Attempt:0,}" Mar 25 01:59:41.842385 containerd[1487]: time="2025-03-25T01:59:41.842031871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal,Uid:42a5accdbc6566d2c8dd73652fc31d81,Namespace:kube-system,Attempt:0,}" Mar 25 01:59:41.851571 containerd[1487]: time="2025-03-25T01:59:41.851487608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal,Uid:b4c953fe277545c0a8ad45406641927d,Namespace:kube-system,Attempt:0,}" Mar 25 01:59:41.952305 kubelet[2384]: E0325 01:59:41.952215 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-7-d93044f3e4.novalocal?timeout=10s\": dial tcp 172.24.4.226:6443: connect: connection refused" interval="800ms" Mar 25 01:59:42.068008 kubelet[2384]: I0325 01:59:42.067808 2384 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:42.068557 kubelet[2384]: E0325 01:59:42.068479 2384 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.226:6443/api/v1/nodes\": dial tcp 172.24.4.226:6443: connect: connection refused" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:42.231926 kubelet[2384]: W0325 01:59:42.231809 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.24.4.226:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:42.231926 kubelet[2384]: E0325 01:59:42.231933 2384 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.226:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:42.412591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1052675704.mount: Deactivated successfully. Mar 25 01:59:42.427380 containerd[1487]: time="2025-03-25T01:59:42.427263974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:59:42.430754 containerd[1487]: time="2025-03-25T01:59:42.430633415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Mar 25 01:59:42.435077 containerd[1487]: time="2025-03-25T01:59:42.434978418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:59:42.436657 containerd[1487]: time="2025-03-25T01:59:42.436580827Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:59:42.439937 containerd[1487]: time="2025-03-25T01:59:42.439705851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 25 01:59:42.442128 containerd[1487]: time="2025-03-25T01:59:42.441618999Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 25 01:59:42.442128 containerd[1487]: time="2025-03-25T01:59:42.441778652Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:59:42.446736 containerd[1487]: time="2025-03-25T01:59:42.446642652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 25 01:59:42.449646 containerd[1487]: time="2025-03-25T01:59:42.448526312Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 624.616866ms" Mar 25 01:59:42.452880 containerd[1487]: time="2025-03-25T01:59:42.452802540Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 592.824672ms" Mar 25 01:59:42.472267 containerd[1487]: time="2025-03-25T01:59:42.472200275Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 625.892419ms" Mar 25 01:59:42.510642 kubelet[2384]: W0325 01:59:42.510103 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.24.4.226:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:42.510642 kubelet[2384]: E0325 01:59:42.510233 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.226:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:42.515656 containerd[1487]: time="2025-03-25T01:59:42.513826512Z" level=info msg="connecting to shim d6c0a7f60b1741b5024f94237386a1638e15784a206e3aa8b116215af70ba43e" address="unix:///run/containerd/s/d230c41952178e6bc8652968536ff964ac22560d81256ba34e24eb2966a175f0" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:59:42.515805 kubelet[2384]: W0325 01:59:42.515191 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.24.4.226:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:42.515805 kubelet[2384]: E0325 01:59:42.515307 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.226:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:42.539841 containerd[1487]: time="2025-03-25T01:59:42.539782405Z" level=info msg="connecting to shim d8ae22f0c86b012139ee481c62586dd277db9337b2a169f658230f985da25a7d" address="unix:///run/containerd/s/a67b624a73a230dcf7328a09e0606065cbcbb41fd3ca5ec5fdf9064502a067b1" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:59:42.542781 containerd[1487]: time="2025-03-25T01:59:42.542748947Z" level=info msg="connecting to shim 675a03b4c8f6d7fe65d9703ae54d1994a5422fd6e60d591a5a022b2af3f67f68" address="unix:///run/containerd/s/5d2f3b92055dc8eb33339c8113c32905bc3df89f1675bdf6204767583debd233" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:59:42.561195 kubelet[2384]: W0325 01:59:42.561110 2384 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.24.4.226:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-7-d93044f3e4.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:42.561195 kubelet[2384]: E0325 01:59:42.561200 2384 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.226:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-7-d93044f3e4.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.226:6443: connect: connection refused Mar 25 01:59:42.570665 systemd[1]: Started cri-containerd-d6c0a7f60b1741b5024f94237386a1638e15784a206e3aa8b116215af70ba43e.scope - libcontainer container d6c0a7f60b1741b5024f94237386a1638e15784a206e3aa8b116215af70ba43e. Mar 25 01:59:42.585563 systemd[1]: Started cri-containerd-d8ae22f0c86b012139ee481c62586dd277db9337b2a169f658230f985da25a7d.scope - libcontainer container d8ae22f0c86b012139ee481c62586dd277db9337b2a169f658230f985da25a7d. 
Mar 25 01:59:42.589254 systemd[1]: Started cri-containerd-675a03b4c8f6d7fe65d9703ae54d1994a5422fd6e60d591a5a022b2af3f67f68.scope - libcontainer container 675a03b4c8f6d7fe65d9703ae54d1994a5422fd6e60d591a5a022b2af3f67f68. Mar 25 01:59:42.655009 containerd[1487]: time="2025-03-25T01:59:42.654944636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal,Uid:b4c953fe277545c0a8ad45406641927d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8ae22f0c86b012139ee481c62586dd277db9337b2a169f658230f985da25a7d\"" Mar 25 01:59:42.661542 containerd[1487]: time="2025-03-25T01:59:42.661504128Z" level=info msg="CreateContainer within sandbox \"d8ae22f0c86b012139ee481c62586dd277db9337b2a169f658230f985da25a7d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 25 01:59:42.664405 containerd[1487]: time="2025-03-25T01:59:42.664213846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal,Uid:70d5e75e2846328c9d372a5187f62dd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6c0a7f60b1741b5024f94237386a1638e15784a206e3aa8b116215af70ba43e\"" Mar 25 01:59:42.668711 containerd[1487]: time="2025-03-25T01:59:42.668677061Z" level=info msg="CreateContainer within sandbox \"d6c0a7f60b1741b5024f94237386a1638e15784a206e3aa8b116215af70ba43e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 25 01:59:42.678983 containerd[1487]: time="2025-03-25T01:59:42.678889761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal,Uid:42a5accdbc6566d2c8dd73652fc31d81,Namespace:kube-system,Attempt:0,} returns sandbox id \"675a03b4c8f6d7fe65d9703ae54d1994a5422fd6e60d591a5a022b2af3f67f68\"" Mar 25 01:59:42.682808 containerd[1487]: time="2025-03-25T01:59:42.682123757Z" level=info msg="CreateContainer within sandbox \"675a03b4c8f6d7fe65d9703ae54d1994a5422fd6e60d591a5a022b2af3f67f68\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 25 01:59:42.684509 containerd[1487]: time="2025-03-25T01:59:42.684489462Z" level=info msg="Container 5568d445dae4bc0857114afa18b883c74671a7a0bae1203ae6df1ac489f519e4: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:59:42.691573 containerd[1487]: time="2025-03-25T01:59:42.691541618Z" level=info msg="Container a466ff0dab7a45318dbde4006e08df0601e9ce6e10be78e94daefaea37d0e886: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:59:42.697981 containerd[1487]: time="2025-03-25T01:59:42.697939463Z" level=info msg="CreateContainer within sandbox \"d6c0a7f60b1741b5024f94237386a1638e15784a206e3aa8b116215af70ba43e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5568d445dae4bc0857114afa18b883c74671a7a0bae1203ae6df1ac489f519e4\"" Mar 25 01:59:42.698762 containerd[1487]: time="2025-03-25T01:59:42.698475153Z" level=info msg="StartContainer for \"5568d445dae4bc0857114afa18b883c74671a7a0bae1203ae6df1ac489f519e4\"" Mar 25 01:59:42.699480 containerd[1487]: time="2025-03-25T01:59:42.699457769Z" level=info msg="Container cb728fcd925fe8c07e296473fbe119d5ab830b5a3c37c24c621601afdd22753b: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:59:42.699587 containerd[1487]: time="2025-03-25T01:59:42.699559898Z" level=info msg="connecting to shim 5568d445dae4bc0857114afa18b883c74671a7a0bae1203ae6df1ac489f519e4" address="unix:///run/containerd/s/d230c41952178e6bc8652968536ff964ac22560d81256ba34e24eb2966a175f0" protocol=ttrpc version=3 Mar 25 
01:59:42.712412 containerd[1487]: time="2025-03-25T01:59:42.712258869Z" level=info msg="CreateContainer within sandbox \"d8ae22f0c86b012139ee481c62586dd277db9337b2a169f658230f985da25a7d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a466ff0dab7a45318dbde4006e08df0601e9ce6e10be78e94daefaea37d0e886\"" Mar 25 01:59:42.713010 containerd[1487]: time="2025-03-25T01:59:42.712977016Z" level=info msg="StartContainer for \"a466ff0dab7a45318dbde4006e08df0601e9ce6e10be78e94daefaea37d0e886\"" Mar 25 01:59:42.715035 containerd[1487]: time="2025-03-25T01:59:42.715012334Z" level=info msg="connecting to shim a466ff0dab7a45318dbde4006e08df0601e9ce6e10be78e94daefaea37d0e886" address="unix:///run/containerd/s/a67b624a73a230dcf7328a09e0606065cbcbb41fd3ca5ec5fdf9064502a067b1" protocol=ttrpc version=3 Mar 25 01:59:42.720706 containerd[1487]: time="2025-03-25T01:59:42.720678765Z" level=info msg="CreateContainer within sandbox \"675a03b4c8f6d7fe65d9703ae54d1994a5422fd6e60d591a5a022b2af3f67f68\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cb728fcd925fe8c07e296473fbe119d5ab830b5a3c37c24c621601afdd22753b\"" Mar 25 01:59:42.721233 systemd[1]: Started cri-containerd-5568d445dae4bc0857114afa18b883c74671a7a0bae1203ae6df1ac489f519e4.scope - libcontainer container 5568d445dae4bc0857114afa18b883c74671a7a0bae1203ae6df1ac489f519e4. Mar 25 01:59:42.721522 containerd[1487]: time="2025-03-25T01:59:42.721483513Z" level=info msg="StartContainer for \"cb728fcd925fe8c07e296473fbe119d5ab830b5a3c37c24c621601afdd22753b\"" Mar 25 01:59:42.723447 containerd[1487]: time="2025-03-25T01:59:42.723383655Z" level=info msg="connecting to shim cb728fcd925fe8c07e296473fbe119d5ab830b5a3c37c24c621601afdd22753b" address="unix:///run/containerd/s/5d2f3b92055dc8eb33339c8113c32905bc3df89f1675bdf6204767583debd233" protocol=ttrpc version=3 Mar 25 01:59:42.741182 systemd[1]: Started cri-containerd-a466ff0dab7a45318dbde4006e08df0601e9ce6e10be78e94daefaea37d0e886.scope - libcontainer container a466ff0dab7a45318dbde4006e08df0601e9ce6e10be78e94daefaea37d0e886. Mar 25 01:59:42.753328 kubelet[2384]: E0325 01:59:42.753289 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.226:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-7-d93044f3e4.novalocal?timeout=10s\": dial tcp 172.24.4.226:6443: connect: connection refused" interval="1.6s" Mar 25 01:59:42.758657 systemd[1]: Started cri-containerd-cb728fcd925fe8c07e296473fbe119d5ab830b5a3c37c24c621601afdd22753b.scope - libcontainer container cb728fcd925fe8c07e296473fbe119d5ab830b5a3c37c24c621601afdd22753b. 
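
The "Failed to ensure lease exists, will retry" errors back off from interval="200ms" at 01:59:41 through 400ms and 800ms to the 1.6s seen above: the retry interval doubles after each consecutive failure. A sketch that reproduces the observed sequence (the starting value and the doubling are read off the log; the cap is an assumption and is not visible in these entries):

    # Sketch: the node-lease retry intervals observed above. Starts at 200ms and
    # doubles per consecutive failure; CAP is assumed only to keep the sketch bounded.
    def lease_retry_intervals(start=0.2, factor=2.0, cap=7.0):
        interval = start
        while True:
            yield min(interval, cap)
            interval *= factor

    gen = lease_retry_intervals()
    print([next(gen) for _ in range(5)])   # [0.2, 0.4, 0.8, 1.6, 3.2]
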
Mar 25 01:59:42.805548 containerd[1487]: time="2025-03-25T01:59:42.805512546Z" level=info msg="StartContainer for \"5568d445dae4bc0857114afa18b883c74671a7a0bae1203ae6df1ac489f519e4\" returns successfully" Mar 25 01:59:42.851456 containerd[1487]: time="2025-03-25T01:59:42.849900132Z" level=info msg="StartContainer for \"a466ff0dab7a45318dbde4006e08df0601e9ce6e10be78e94daefaea37d0e886\" returns successfully" Mar 25 01:59:42.852264 containerd[1487]: time="2025-03-25T01:59:42.852222752Z" level=info msg="StartContainer for \"cb728fcd925fe8c07e296473fbe119d5ab830b5a3c37c24c621601afdd22753b\" returns successfully" Mar 25 01:59:42.871457 kubelet[2384]: I0325 01:59:42.871396 2384 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:42.872086 kubelet[2384]: E0325 01:59:42.872015 2384 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.24.4.226:6443/api/v1/nodes\": dial tcp 172.24.4.226:6443: connect: connection refused" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:44.476317 kubelet[2384]: I0325 01:59:44.476287 2384 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:45.106817 kubelet[2384]: E0325 01:59:45.106754 2384 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284-0-0-7-d93044f3e4.novalocal\" not found" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:45.201661 kubelet[2384]: I0325 01:59:45.201447 2384 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:45.296186 kubelet[2384]: I0325 01:59:45.294580 2384 apiserver.go:52] "Watching apiserver" Mar 25 01:59:45.349042 kubelet[2384]: I0325 01:59:45.348701 2384 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:59:45.408556 kubelet[2384]: E0325 01:59:45.407023 2384 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:47.703293 systemd[1]: Reload requested from client PID 2673 ('systemctl') (unit session-9.scope)... Mar 25 01:59:47.703327 systemd[1]: Reloading... Mar 25 01:59:47.827477 zram_generator::config[2719]: No configuration found. Mar 25 01:59:47.976625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:59:48.112485 systemd[1]: Reloading finished in 408 ms. Mar 25 01:59:48.141568 kubelet[2384]: I0325 01:59:48.141538 2384 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:59:48.141939 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:59:48.153984 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:59:48.154169 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:59:48.154213 systemd[1]: kubelet.service: Consumed 1.510s CPU time, 116M memory peak. Mar 25 01:59:48.156626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:59:48.283604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
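
Two things stand out above once the control-plane containers are running: the node registration that had been failing finally succeeds, and one mirror-pod creation is still rejected because the built-in PriorityClass system-node-critical does not exist yet; the apiserver creates its default priority classes shortly after it starts, so this clears on a later sync. A sketch that queries the apiserver for the class directly (the address is from the log; the token path and the skipped TLS verification are assumptions made for brevity):

    # Sketch: ask the apiserver whether the built-in PriorityClass exists yet.
    import json
    import ssl
    import urllib.error
    import urllib.request

    URL = ("https://172.24.4.226:6443"
           "/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical")
    TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"  # assumed

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with open(TOKEN_PATH, encoding="utf-8") as fh:
        token = fh.read().strip()

    req = urllib.request.Request(URL, headers={"Authorization": f"Bearer {token}"})
    try:
        with urllib.request.urlopen(req, timeout=5, context=ctx) as resp:
            pc = json.load(resp)
            print("found:", pc["metadata"]["name"], "value:", pc["value"])
    except urllib.error.HTTPError as exc:
        print("not found yet" if exc.code == 404 else f"HTTP {exc.code}")
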
Mar 25 01:59:48.292010 (kubelet)[2782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:59:48.339638 kubelet[2782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:59:48.339638 kubelet[2782]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 01:59:48.339638 kubelet[2782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:59:48.340002 kubelet[2782]: I0325 01:59:48.339687 2782 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:59:48.344237 kubelet[2782]: I0325 01:59:48.344206 2782 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 25 01:59:48.344237 kubelet[2782]: I0325 01:59:48.344231 2782 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:59:48.344475 kubelet[2782]: I0325 01:59:48.344460 2782 server.go:927] "Client rotation is on, will bootstrap in background" Mar 25 01:59:48.345840 kubelet[2782]: I0325 01:59:48.345816 2782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 25 01:59:48.347649 kubelet[2782]: I0325 01:59:48.347188 2782 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:59:48.354012 kubelet[2782]: I0325 01:59:48.353992 2782 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 01:59:48.354335 kubelet[2782]: I0325 01:59:48.354300 2782 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:59:48.354694 kubelet[2782]: I0325 01:59:48.354400 2782 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-7-d93044f3e4.novalocal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 25 01:59:48.354834 kubelet[2782]: I0325 01:59:48.354823 2782 topology_manager.go:138] "Creating topology manager with none policy" Mar 25 01:59:48.354898 kubelet[2782]: I0325 01:59:48.354890 2782 container_manager_linux.go:301] "Creating device plugin manager" Mar 25 01:59:48.354979 kubelet[2782]: I0325 01:59:48.354971 2782 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:59:48.355114 kubelet[2782]: I0325 01:59:48.355103 2782 kubelet.go:400] "Attempting to sync node with API server" Mar 25 01:59:48.356139 kubelet[2782]: I0325 01:59:48.355176 2782 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:59:48.356139 kubelet[2782]: I0325 01:59:48.355217 2782 kubelet.go:312] "Adding apiserver pod source" Mar 25 01:59:48.356139 kubelet[2782]: I0325 01:59:48.355232 2782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:59:48.358449 kubelet[2782]: I0325 01:59:48.357948 2782 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:59:48.358449 kubelet[2782]: I0325 01:59:48.358095 2782 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:59:48.358593 kubelet[2782]: I0325 01:59:48.358582 2782 server.go:1264] "Started kubelet" Mar 25 01:59:48.360651 kubelet[2782]: I0325 01:59:48.360635 2782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:59:48.365136 kubelet[2782]: I0325 01:59:48.365108 2782 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:59:48.366157 kubelet[2782]: I0325 01:59:48.366143 2782 server.go:455] 
"Adding debug handlers to kubelet server" Mar 25 01:59:48.367054 kubelet[2782]: I0325 01:59:48.367018 2782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:59:48.367325 kubelet[2782]: I0325 01:59:48.367311 2782 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:59:48.368993 kubelet[2782]: I0325 01:59:48.368980 2782 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 25 01:59:48.370851 kubelet[2782]: I0325 01:59:48.370835 2782 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:59:48.371034 kubelet[2782]: I0325 01:59:48.371023 2782 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:59:48.372582 kubelet[2782]: I0325 01:59:48.372560 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:59:48.373610 kubelet[2782]: I0325 01:59:48.373595 2782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 25 01:59:48.373697 kubelet[2782]: I0325 01:59:48.373688 2782 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:59:48.373769 kubelet[2782]: I0325 01:59:48.373760 2782 kubelet.go:2337] "Starting kubelet main sync loop" Mar 25 01:59:48.373899 kubelet[2782]: E0325 01:59:48.373868 2782 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:59:48.380871 kubelet[2782]: I0325 01:59:48.380850 2782 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:59:48.381043 kubelet[2782]: I0325 01:59:48.381025 2782 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:59:48.382992 kubelet[2782]: E0325 01:59:48.382976 2782 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 25 01:59:48.383449 kubelet[2782]: I0325 01:59:48.383169 2782 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:59:48.444067 kubelet[2782]: I0325 01:59:48.444025 2782 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:59:48.444067 kubelet[2782]: I0325 01:59:48.444044 2782 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:59:48.444231 kubelet[2782]: I0325 01:59:48.444105 2782 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:59:48.444282 kubelet[2782]: I0325 01:59:48.444262 2782 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 25 01:59:48.444315 kubelet[2782]: I0325 01:59:48.444275 2782 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 25 01:59:48.444315 kubelet[2782]: I0325 01:59:48.444292 2782 policy_none.go:49] "None policy: Start" Mar 25 01:59:48.445118 kubelet[2782]: I0325 01:59:48.445097 2782 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:59:48.445118 kubelet[2782]: I0325 01:59:48.445117 2782 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:59:48.445278 kubelet[2782]: I0325 01:59:48.445258 2782 state_mem.go:75] "Updated machine memory state" Mar 25 01:59:48.450039 kubelet[2782]: I0325 01:59:48.450010 2782 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:59:48.450195 kubelet[2782]: I0325 01:59:48.450164 2782 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:59:48.450454 kubelet[2782]: I0325 01:59:48.450251 2782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:59:48.475215 kubelet[2782]: I0325 01:59:48.473386 2782 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.475215 kubelet[2782]: I0325 01:59:48.474116 2782 topology_manager.go:215] "Topology Admit Handler" podUID="70d5e75e2846328c9d372a5187f62dd2" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.475215 kubelet[2782]: I0325 01:59:48.474202 2782 topology_manager.go:215] "Topology Admit Handler" podUID="42a5accdbc6566d2c8dd73652fc31d81" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.475215 kubelet[2782]: I0325 01:59:48.474289 2782 topology_manager.go:215] "Topology Admit Handler" podUID="b4c953fe277545c0a8ad45406641927d" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.515720 kubelet[2782]: W0325 01:59:48.514992 2782 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:59:48.515720 kubelet[2782]: W0325 01:59:48.515084 2782 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:59:48.515720 kubelet[2782]: W0325 01:59:48.515164 2782 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:59:48.526598 kubelet[2782]: I0325 01:59:48.526496 2782 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 
01:59:48.526598 kubelet[2782]: I0325 01:59:48.526586 2782 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.571865 kubelet[2782]: I0325 01:59:48.571702 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42a5accdbc6566d2c8dd73652fc31d81-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"42a5accdbc6566d2c8dd73652fc31d81\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.571865 kubelet[2782]: I0325 01:59:48.571743 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70d5e75e2846328c9d372a5187f62dd2-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"70d5e75e2846328c9d372a5187f62dd2\") " pod="kube-system/kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.571865 kubelet[2782]: I0325 01:59:48.571765 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70d5e75e2846328c9d372a5187f62dd2-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"70d5e75e2846328c9d372a5187f62dd2\") " pod="kube-system/kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.571865 kubelet[2782]: I0325 01:59:48.571783 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42a5accdbc6566d2c8dd73652fc31d81-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"42a5accdbc6566d2c8dd73652fc31d81\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.572213 kubelet[2782]: I0325 01:59:48.571801 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42a5accdbc6566d2c8dd73652fc31d81-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"42a5accdbc6566d2c8dd73652fc31d81\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.572213 kubelet[2782]: I0325 01:59:48.571820 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42a5accdbc6566d2c8dd73652fc31d81-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"42a5accdbc6566d2c8dd73652fc31d81\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.573564 kubelet[2782]: I0325 01:59:48.571837 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b4c953fe277545c0a8ad45406641927d-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"b4c953fe277545c0a8ad45406641927d\") " pod="kube-system/kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.573564 kubelet[2782]: I0325 01:59:48.573297 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/70d5e75e2846328c9d372a5187f62dd2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"70d5e75e2846328c9d372a5187f62dd2\") " pod="kube-system/kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.573564 kubelet[2782]: I0325 01:59:48.573321 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42a5accdbc6566d2c8dd73652fc31d81-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal\" (UID: \"42a5accdbc6566d2c8dd73652fc31d81\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:48.674916 sudo[2813]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 25 01:59:48.675195 sudo[2813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 25 01:59:49.246926 sudo[2813]: pam_unix(sudo:session): session closed for user root Mar 25 01:59:49.358705 kubelet[2782]: I0325 01:59:49.358186 2782 apiserver.go:52] "Watching apiserver" Mar 25 01:59:49.371322 kubelet[2782]: I0325 01:59:49.371261 2782 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:59:49.437251 kubelet[2782]: W0325 01:59:49.436714 2782 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Mar 25 01:59:49.437251 kubelet[2782]: E0325 01:59:49.436808 2782 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal\" already exists" pod="kube-system/kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal" Mar 25 01:59:49.450403 kubelet[2782]: I0325 01:59:49.450347 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-7-d93044f3e4.novalocal" podStartSLOduration=1.45033458 podStartE2EDuration="1.45033458s" podCreationTimestamp="2025-03-25 01:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:59:49.450134291 +0000 UTC m=+1.153177229" watchObservedRunningTime="2025-03-25 01:59:49.45033458 +0000 UTC m=+1.153377528" Mar 25 01:59:49.461258 kubelet[2782]: I0325 01:59:49.461211 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-7-d93044f3e4.novalocal" podStartSLOduration=1.4611967940000001 podStartE2EDuration="1.461196794s" podCreationTimestamp="2025-03-25 01:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:59:49.460776301 +0000 UTC m=+1.163819239" watchObservedRunningTime="2025-03-25 01:59:49.461196794 +0000 UTC m=+1.164239743" Mar 25 01:59:49.488240 kubelet[2782]: I0325 01:59:49.488123 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284-0-0-7-d93044f3e4.novalocal" podStartSLOduration=1.488109126 podStartE2EDuration="1.488109126s" podCreationTimestamp="2025-03-25 01:59:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:59:49.470778551 +0000 UTC m=+1.173821489" watchObservedRunningTime="2025-03-25 01:59:49.488109126 +0000 UTC 
m=+1.191152075" Mar 25 01:59:51.505074 sudo[1729]: pam_unix(sudo:session): session closed for user root Mar 25 01:59:51.785905 sshd[1728]: Connection closed by 172.24.4.1 port 43112 Mar 25 01:59:51.786217 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Mar 25 01:59:51.796987 systemd[1]: sshd@6-172.24.4.226:22-172.24.4.1:43112.service: Deactivated successfully. Mar 25 01:59:51.802165 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 01:59:51.802762 systemd[1]: session-9.scope: Consumed 7.893s CPU time, 286.3M memory peak. Mar 25 01:59:51.807876 systemd-logind[1465]: Session 9 logged out. Waiting for processes to exit. Mar 25 01:59:51.810204 systemd-logind[1465]: Removed session 9. Mar 25 02:00:02.188410 kubelet[2782]: I0325 02:00:02.188300 2782 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 25 02:00:02.194031 kubelet[2782]: I0325 02:00:02.190628 2782 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 25 02:00:02.194292 containerd[1487]: time="2025-03-25T02:00:02.188674815Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 25 02:00:03.041739 kubelet[2782]: I0325 02:00:03.041685 2782 topology_manager.go:215] "Topology Admit Handler" podUID="dbd3d8c0-4091-4437-8826-318bbb6570c1" podNamespace="kube-system" podName="kube-proxy-sg79x" Mar 25 02:00:03.053697 systemd[1]: Created slice kubepods-besteffort-poddbd3d8c0_4091_4437_8826_318bbb6570c1.slice - libcontainer container kubepods-besteffort-poddbd3d8c0_4091_4437_8826_318bbb6570c1.slice. Mar 25 02:00:03.072380 kubelet[2782]: I0325 02:00:03.072347 2782 topology_manager.go:215] "Topology Admit Handler" podUID="74ed71b2-ca19-400a-9e9a-6e2eb015a91a" podNamespace="kube-system" podName="cilium-krnhj" Mar 25 02:00:03.073395 kubelet[2782]: I0325 02:00:03.072984 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dbd3d8c0-4091-4437-8826-318bbb6570c1-kube-proxy\") pod \"kube-proxy-sg79x\" (UID: \"dbd3d8c0-4091-4437-8826-318bbb6570c1\") " pod="kube-system/kube-proxy-sg79x" Mar 25 02:00:03.073395 kubelet[2782]: I0325 02:00:03.073235 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbd3d8c0-4091-4437-8826-318bbb6570c1-lib-modules\") pod \"kube-proxy-sg79x\" (UID: \"dbd3d8c0-4091-4437-8826-318bbb6570c1\") " pod="kube-system/kube-proxy-sg79x" Mar 25 02:00:03.073395 kubelet[2782]: I0325 02:00:03.073262 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbd3d8c0-4091-4437-8826-318bbb6570c1-xtables-lock\") pod \"kube-proxy-sg79x\" (UID: \"dbd3d8c0-4091-4437-8826-318bbb6570c1\") " pod="kube-system/kube-proxy-sg79x" Mar 25 02:00:03.074754 kubelet[2782]: I0325 02:00:03.073647 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82znf\" (UniqueName: \"kubernetes.io/projected/dbd3d8c0-4091-4437-8826-318bbb6570c1-kube-api-access-82znf\") pod \"kube-proxy-sg79x\" (UID: \"dbd3d8c0-4091-4437-8826-318bbb6570c1\") " pod="kube-system/kube-proxy-sg79x" Mar 25 02:00:03.084079 systemd[1]: Created slice kubepods-burstable-pod74ed71b2_ca19_400a_9e9a_6e2eb015a91a.slice - libcontainer container 
kubepods-burstable-pod74ed71b2_ca19_400a_9e9a_6e2eb015a91a.slice. Mar 25 02:00:03.174636 kubelet[2782]: I0325 02:00:03.174581 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-hubble-tls\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174636 kubelet[2782]: I0325 02:00:03.174635 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-host-proc-sys-kernel\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174785 kubelet[2782]: I0325 02:00:03.174658 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-run\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174785 kubelet[2782]: I0325 02:00:03.174675 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-cgroup\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174785 kubelet[2782]: I0325 02:00:03.174695 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-lib-modules\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174785 kubelet[2782]: I0325 02:00:03.174714 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjz82\" (UniqueName: \"kubernetes.io/projected/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-kube-api-access-cjz82\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174785 kubelet[2782]: I0325 02:00:03.174744 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cni-path\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174785 kubelet[2782]: I0325 02:00:03.174760 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-xtables-lock\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174938 kubelet[2782]: I0325 02:00:03.174778 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-config-path\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174938 kubelet[2782]: I0325 02:00:03.174795 2782 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-host-proc-sys-net\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174938 kubelet[2782]: I0325 02:00:03.174812 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-bpf-maps\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174938 kubelet[2782]: I0325 02:00:03.174828 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-clustermesh-secrets\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174938 kubelet[2782]: I0325 02:00:03.174854 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-hostproc\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.174938 kubelet[2782]: I0325 02:00:03.174883 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-etc-cni-netd\") pod \"cilium-krnhj\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " pod="kube-system/cilium-krnhj" Mar 25 02:00:03.233877 kubelet[2782]: I0325 02:00:03.233793 2782 topology_manager.go:215] "Topology Admit Handler" podUID="48b8a7e2-8e2c-4852-95da-f83640820ac1" podNamespace="kube-system" podName="cilium-operator-599987898-ldz2n" Mar 25 02:00:03.244369 systemd[1]: Created slice kubepods-besteffort-pod48b8a7e2_8e2c_4852_95da_f83640820ac1.slice - libcontainer container kubepods-besteffort-pod48b8a7e2_8e2c_4852_95da_f83640820ac1.slice. 
Mar 25 02:00:03.275249 kubelet[2782]: I0325 02:00:03.275181 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48b8a7e2-8e2c-4852-95da-f83640820ac1-cilium-config-path\") pod \"cilium-operator-599987898-ldz2n\" (UID: \"48b8a7e2-8e2c-4852-95da-f83640820ac1\") " pod="kube-system/cilium-operator-599987898-ldz2n" Mar 25 02:00:03.275249 kubelet[2782]: I0325 02:00:03.275226 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spbkv\" (UniqueName: \"kubernetes.io/projected/48b8a7e2-8e2c-4852-95da-f83640820ac1-kube-api-access-spbkv\") pod \"cilium-operator-599987898-ldz2n\" (UID: \"48b8a7e2-8e2c-4852-95da-f83640820ac1\") " pod="kube-system/cilium-operator-599987898-ldz2n" Mar 25 02:00:03.366850 containerd[1487]: time="2025-03-25T02:00:03.366747638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sg79x,Uid:dbd3d8c0-4091-4437-8826-318bbb6570c1,Namespace:kube-system,Attempt:0,}" Mar 25 02:00:03.395625 containerd[1487]: time="2025-03-25T02:00:03.395354677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-krnhj,Uid:74ed71b2-ca19-400a-9e9a-6e2eb015a91a,Namespace:kube-system,Attempt:0,}" Mar 25 02:00:03.428881 containerd[1487]: time="2025-03-25T02:00:03.428813399Z" level=info msg="connecting to shim 8f6395dd1b07a693f01608d791728ef3fb3fc6f28c3e84563f3af4c948657e30" address="unix:///run/containerd/s/8384be4b4d61f03ce15bb3d6afcb77dad146b894a303b356db30137d9bef0f58" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:00:03.442953 containerd[1487]: time="2025-03-25T02:00:03.442683525Z" level=info msg="connecting to shim 16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95" address="unix:///run/containerd/s/c09710e91bc046679718b956cdc48972e19b8562532b2a4824c17f2ac5dfda18" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:00:03.471118 systemd[1]: Started cri-containerd-8f6395dd1b07a693f01608d791728ef3fb3fc6f28c3e84563f3af4c948657e30.scope - libcontainer container 8f6395dd1b07a693f01608d791728ef3fb3fc6f28c3e84563f3af4c948657e30. Mar 25 02:00:03.478200 systemd[1]: Started cri-containerd-16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95.scope - libcontainer container 16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95. 
Mar 25 02:00:03.513237 containerd[1487]: time="2025-03-25T02:00:03.513196818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-krnhj,Uid:74ed71b2-ca19-400a-9e9a-6e2eb015a91a,Namespace:kube-system,Attempt:0,} returns sandbox id \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\"" Mar 25 02:00:03.516906 containerd[1487]: time="2025-03-25T02:00:03.516667160Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 25 02:00:03.523268 containerd[1487]: time="2025-03-25T02:00:03.523182632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sg79x,Uid:dbd3d8c0-4091-4437-8826-318bbb6570c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f6395dd1b07a693f01608d791728ef3fb3fc6f28c3e84563f3af4c948657e30\"" Mar 25 02:00:03.526571 containerd[1487]: time="2025-03-25T02:00:03.526212352Z" level=info msg="CreateContainer within sandbox \"8f6395dd1b07a693f01608d791728ef3fb3fc6f28c3e84563f3af4c948657e30\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 25 02:00:03.544531 containerd[1487]: time="2025-03-25T02:00:03.543770045Z" level=info msg="Container 52e21742a56b4e6f88bfb5a7143adc8ec13e022d4e514a2e7cfaac36bcad8814: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:00:03.550512 containerd[1487]: time="2025-03-25T02:00:03.550411517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ldz2n,Uid:48b8a7e2-8e2c-4852-95da-f83640820ac1,Namespace:kube-system,Attempt:0,}" Mar 25 02:00:03.558787 containerd[1487]: time="2025-03-25T02:00:03.558753298Z" level=info msg="CreateContainer within sandbox \"8f6395dd1b07a693f01608d791728ef3fb3fc6f28c3e84563f3af4c948657e30\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"52e21742a56b4e6f88bfb5a7143adc8ec13e022d4e514a2e7cfaac36bcad8814\"" Mar 25 02:00:03.560571 containerd[1487]: time="2025-03-25T02:00:03.559338026Z" level=info msg="StartContainer for \"52e21742a56b4e6f88bfb5a7143adc8ec13e022d4e514a2e7cfaac36bcad8814\"" Mar 25 02:00:03.561602 containerd[1487]: time="2025-03-25T02:00:03.561569130Z" level=info msg="connecting to shim 52e21742a56b4e6f88bfb5a7143adc8ec13e022d4e514a2e7cfaac36bcad8814" address="unix:///run/containerd/s/8384be4b4d61f03ce15bb3d6afcb77dad146b894a303b356db30137d9bef0f58" protocol=ttrpc version=3 Mar 25 02:00:03.584896 containerd[1487]: time="2025-03-25T02:00:03.584599572Z" level=info msg="connecting to shim 2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4" address="unix:///run/containerd/s/da4452f6a398eac4d7821ec37fa54ae0f46c93cb7ae2a4ed3851ba5127f2d657" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:00:03.584667 systemd[1]: Started cri-containerd-52e21742a56b4e6f88bfb5a7143adc8ec13e022d4e514a2e7cfaac36bcad8814.scope - libcontainer container 52e21742a56b4e6f88bfb5a7143adc8ec13e022d4e514a2e7cfaac36bcad8814. Mar 25 02:00:03.617561 systemd[1]: Started cri-containerd-2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4.scope - libcontainer container 2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4. 
Mar 25 02:00:03.647789 containerd[1487]: time="2025-03-25T02:00:03.647670293Z" level=info msg="StartContainer for \"52e21742a56b4e6f88bfb5a7143adc8ec13e022d4e514a2e7cfaac36bcad8814\" returns successfully" Mar 25 02:00:03.679010 containerd[1487]: time="2025-03-25T02:00:03.678926654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-ldz2n,Uid:48b8a7e2-8e2c-4852-95da-f83640820ac1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\"" Mar 25 02:00:04.484460 kubelet[2782]: I0325 02:00:04.484233 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sg79x" podStartSLOduration=1.484217843 podStartE2EDuration="1.484217843s" podCreationTimestamp="2025-03-25 02:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:00:04.483910315 +0000 UTC m=+16.186953263" watchObservedRunningTime="2025-03-25 02:00:04.484217843 +0000 UTC m=+16.187260791" Mar 25 02:00:12.056597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2003586923.mount: Deactivated successfully. Mar 25 02:00:14.373518 containerd[1487]: time="2025-03-25T02:00:14.372707514Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:00:14.376495 containerd[1487]: time="2025-03-25T02:00:14.376455025Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 25 02:00:14.377809 containerd[1487]: time="2025-03-25T02:00:14.377789574Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:00:14.379434 containerd[1487]: time="2025-03-25T02:00:14.379394727Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.862689013s" Mar 25 02:00:14.379528 containerd[1487]: time="2025-03-25T02:00:14.379511300Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 25 02:00:14.381055 containerd[1487]: time="2025-03-25T02:00:14.381023776Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 25 02:00:14.382318 containerd[1487]: time="2025-03-25T02:00:14.382295085Z" level=info msg="CreateContainer within sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 02:00:14.411741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405775880.mount: Deactivated successfully. 
Mar 25 02:00:14.412544 containerd[1487]: time="2025-03-25T02:00:14.412084951Z" level=info msg="Container 4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:00:14.422761 containerd[1487]: time="2025-03-25T02:00:14.422671663Z" level=info msg="CreateContainer within sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\"" Mar 25 02:00:14.423535 containerd[1487]: time="2025-03-25T02:00:14.423508205Z" level=info msg="StartContainer for \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\"" Mar 25 02:00:14.424455 containerd[1487]: time="2025-03-25T02:00:14.424313327Z" level=info msg="connecting to shim 4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3" address="unix:///run/containerd/s/c09710e91bc046679718b956cdc48972e19b8562532b2a4824c17f2ac5dfda18" protocol=ttrpc version=3 Mar 25 02:00:14.447564 systemd[1]: Started cri-containerd-4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3.scope - libcontainer container 4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3. Mar 25 02:00:14.484962 containerd[1487]: time="2025-03-25T02:00:14.484919782Z" level=info msg="StartContainer for \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\" returns successfully" Mar 25 02:00:14.493298 systemd[1]: cri-containerd-4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3.scope: Deactivated successfully. Mar 25 02:00:14.497015 containerd[1487]: time="2025-03-25T02:00:14.496827045Z" level=info msg="received exit event container_id:\"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\" id:\"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\" pid:3188 exited_at:{seconds:1742868014 nanos:495755226}" Mar 25 02:00:14.498209 containerd[1487]: time="2025-03-25T02:00:14.498139052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\" id:\"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\" pid:3188 exited_at:{seconds:1742868014 nanos:495755226}" Mar 25 02:00:14.526093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3-rootfs.mount: Deactivated successfully. Mar 25 02:00:16.523036 containerd[1487]: time="2025-03-25T02:00:16.520400632Z" level=info msg="CreateContainer within sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 02:00:16.560760 containerd[1487]: time="2025-03-25T02:00:16.557406952Z" level=info msg="Container 40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:00:16.577515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount375682721.mount: Deactivated successfully. 
Mar 25 02:00:16.595179 containerd[1487]: time="2025-03-25T02:00:16.595132818Z" level=info msg="CreateContainer within sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\"" Mar 25 02:00:16.595776 containerd[1487]: time="2025-03-25T02:00:16.595669789Z" level=info msg="StartContainer for \"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\"" Mar 25 02:00:16.597316 containerd[1487]: time="2025-03-25T02:00:16.597280011Z" level=info msg="connecting to shim 40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d" address="unix:///run/containerd/s/c09710e91bc046679718b956cdc48972e19b8562532b2a4824c17f2ac5dfda18" protocol=ttrpc version=3 Mar 25 02:00:16.619617 systemd[1]: Started cri-containerd-40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d.scope - libcontainer container 40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d. Mar 25 02:00:16.651161 containerd[1487]: time="2025-03-25T02:00:16.651084922Z" level=info msg="StartContainer for \"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\" returns successfully" Mar 25 02:00:16.663058 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 25 02:00:16.663781 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 25 02:00:16.663926 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 25 02:00:16.666180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 25 02:00:16.669326 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 25 02:00:16.673989 systemd[1]: cri-containerd-40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d.scope: Deactivated successfully. Mar 25 02:00:16.675704 containerd[1487]: time="2025-03-25T02:00:16.675378247Z" level=info msg="received exit event container_id:\"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\" id:\"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\" pid:3231 exited_at:{seconds:1742868016 nanos:675136438}" Mar 25 02:00:16.675704 containerd[1487]: time="2025-03-25T02:00:16.675661626Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\" id:\"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\" pid:3231 exited_at:{seconds:1742868016 nanos:675136438}" Mar 25 02:00:16.698375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 25 02:00:17.522220 containerd[1487]: time="2025-03-25T02:00:17.522181321Z" level=info msg="CreateContainer within sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 25 02:00:17.542007 containerd[1487]: time="2025-03-25T02:00:17.541785210Z" level=info msg="Container ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:00:17.554338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d-rootfs.mount: Deactivated successfully. 
Mar 25 02:00:17.560909 containerd[1487]: time="2025-03-25T02:00:17.560882105Z" level=info msg="CreateContainer within sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\"" Mar 25 02:00:17.561866 containerd[1487]: time="2025-03-25T02:00:17.561795410Z" level=info msg="StartContainer for \"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\"" Mar 25 02:00:17.563808 containerd[1487]: time="2025-03-25T02:00:17.563759886Z" level=info msg="connecting to shim ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4" address="unix:///run/containerd/s/c09710e91bc046679718b956cdc48972e19b8562532b2a4824c17f2ac5dfda18" protocol=ttrpc version=3 Mar 25 02:00:17.598612 systemd[1]: Started cri-containerd-ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4.scope - libcontainer container ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4. Mar 25 02:00:17.643338 systemd[1]: cri-containerd-ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4.scope: Deactivated successfully. Mar 25 02:00:17.646553 containerd[1487]: time="2025-03-25T02:00:17.646524867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\" id:\"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\" pid:3291 exited_at:{seconds:1742868017 nanos:645490482}" Mar 25 02:00:17.649312 containerd[1487]: time="2025-03-25T02:00:17.649208469Z" level=info msg="received exit event container_id:\"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\" id:\"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\" pid:3291 exited_at:{seconds:1742868017 nanos:645490482}" Mar 25 02:00:17.651110 containerd[1487]: time="2025-03-25T02:00:17.651090948Z" level=info msg="StartContainer for \"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\" returns successfully" Mar 25 02:00:17.682205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4-rootfs.mount: Deactivated successfully. 
Mar 25 02:00:18.137094 containerd[1487]: time="2025-03-25T02:00:18.137012464Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:00:18.138641 containerd[1487]: time="2025-03-25T02:00:18.138446830Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 25 02:00:18.140026 containerd[1487]: time="2025-03-25T02:00:18.139943766Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 02:00:18.141585 containerd[1487]: time="2025-03-25T02:00:18.141442575Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.760362912s" Mar 25 02:00:18.141585 containerd[1487]: time="2025-03-25T02:00:18.141485536Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 25 02:00:18.145071 containerd[1487]: time="2025-03-25T02:00:18.145026667Z" level=info msg="CreateContainer within sandbox \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 25 02:00:18.159581 containerd[1487]: time="2025-03-25T02:00:18.157479064Z" level=info msg="Container 6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:00:18.178502 containerd[1487]: time="2025-03-25T02:00:18.177727319Z" level=info msg="CreateContainer within sandbox \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\"" Mar 25 02:00:18.179376 containerd[1487]: time="2025-03-25T02:00:18.178812992Z" level=info msg="StartContainer for \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\"" Mar 25 02:00:18.179941 containerd[1487]: time="2025-03-25T02:00:18.179826269Z" level=info msg="connecting to shim 6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b" address="unix:///run/containerd/s/da4452f6a398eac4d7821ec37fa54ae0f46c93cb7ae2a4ed3851ba5127f2d657" protocol=ttrpc version=3 Mar 25 02:00:18.201575 systemd[1]: Started cri-containerd-6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b.scope - libcontainer container 6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b. 
Mar 25 02:00:18.230524 containerd[1487]: time="2025-03-25T02:00:18.230472469Z" level=info msg="StartContainer for \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" returns successfully" Mar 25 02:00:18.535140 containerd[1487]: time="2025-03-25T02:00:18.534487496Z" level=info msg="CreateContainer within sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 25 02:00:18.550178 kubelet[2782]: I0325 02:00:18.550119 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-ldz2n" podStartSLOduration=1.0886205310000001 podStartE2EDuration="15.550102816s" podCreationTimestamp="2025-03-25 02:00:03 +0000 UTC" firstStartedPulling="2025-03-25 02:00:03.680841274 +0000 UTC m=+15.383884222" lastFinishedPulling="2025-03-25 02:00:18.142323559 +0000 UTC m=+29.845366507" observedRunningTime="2025-03-25 02:00:18.549769992 +0000 UTC m=+30.252812931" watchObservedRunningTime="2025-03-25 02:00:18.550102816 +0000 UTC m=+30.253145754" Mar 25 02:00:18.556018 containerd[1487]: time="2025-03-25T02:00:18.555984214Z" level=info msg="Container 50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:00:18.574452 containerd[1487]: time="2025-03-25T02:00:18.574068926Z" level=info msg="CreateContainer within sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\"" Mar 25 02:00:18.575759 containerd[1487]: time="2025-03-25T02:00:18.574776741Z" level=info msg="StartContainer for \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\"" Mar 25 02:00:18.576722 containerd[1487]: time="2025-03-25T02:00:18.576674769Z" level=info msg="connecting to shim 50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322" address="unix:///run/containerd/s/c09710e91bc046679718b956cdc48972e19b8562532b2a4824c17f2ac5dfda18" protocol=ttrpc version=3 Mar 25 02:00:18.614615 systemd[1]: Started cri-containerd-50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322.scope - libcontainer container 50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322. Mar 25 02:00:18.670541 systemd[1]: cri-containerd-50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322.scope: Deactivated successfully. 
Mar 25 02:00:18.672076 containerd[1487]: time="2025-03-25T02:00:18.672031900Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\" id:\"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\" pid:3364 exited_at:{seconds:1742868018 nanos:671668089}" Mar 25 02:00:18.673824 containerd[1487]: time="2025-03-25T02:00:18.673724327Z" level=info msg="received exit event container_id:\"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\" id:\"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\" pid:3364 exited_at:{seconds:1742868018 nanos:671668089}" Mar 25 02:00:18.681897 containerd[1487]: time="2025-03-25T02:00:18.681865460Z" level=info msg="StartContainer for \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\" returns successfully" Mar 25 02:00:18.699230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322-rootfs.mount: Deactivated successfully. Mar 25 02:00:19.556606 containerd[1487]: time="2025-03-25T02:00:19.556371280Z" level=info msg="CreateContainer within sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 25 02:00:19.597500 containerd[1487]: time="2025-03-25T02:00:19.590000734Z" level=info msg="Container b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:00:19.617390 containerd[1487]: time="2025-03-25T02:00:19.617354935Z" level=info msg="CreateContainer within sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\"" Mar 25 02:00:19.618971 containerd[1487]: time="2025-03-25T02:00:19.618073671Z" level=info msg="StartContainer for \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\"" Mar 25 02:00:19.618971 containerd[1487]: time="2025-03-25T02:00:19.618909969Z" level=info msg="connecting to shim b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d" address="unix:///run/containerd/s/c09710e91bc046679718b956cdc48972e19b8562532b2a4824c17f2ac5dfda18" protocol=ttrpc version=3 Mar 25 02:00:19.642561 systemd[1]: Started cri-containerd-b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d.scope - libcontainer container b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d. 
Mar 25 02:00:19.681167 containerd[1487]: time="2025-03-25T02:00:19.681123370Z" level=info msg="StartContainer for \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" returns successfully" Mar 25 02:00:19.777309 containerd[1487]: time="2025-03-25T02:00:19.777200682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" id:\"722ee7afe51491e1dfb1aef12f26b739a356bc5ba1cb21586e9abea82ade6837\" pid:3431 exited_at:{seconds:1742868019 nanos:776674552}" Mar 25 02:00:19.854106 kubelet[2782]: I0325 02:00:19.853844 2782 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 25 02:00:19.884375 kubelet[2782]: I0325 02:00:19.884335 2782 topology_manager.go:215] "Topology Admit Handler" podUID="ef8e73f7-f48d-47e6-bec9-a845d5704981" podNamespace="kube-system" podName="coredns-7db6d8ff4d-ts8q9" Mar 25 02:00:19.889233 kubelet[2782]: I0325 02:00:19.888232 2782 topology_manager.go:215] "Topology Admit Handler" podUID="f360efc5-c603-47d4-8afb-40a70f85e34c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lglth" Mar 25 02:00:19.890172 kubelet[2782]: I0325 02:00:19.890154 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef8e73f7-f48d-47e6-bec9-a845d5704981-config-volume\") pod \"coredns-7db6d8ff4d-ts8q9\" (UID: \"ef8e73f7-f48d-47e6-bec9-a845d5704981\") " pod="kube-system/coredns-7db6d8ff4d-ts8q9" Mar 25 02:00:19.890480 kubelet[2782]: I0325 02:00:19.890463 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw9cx\" (UniqueName: \"kubernetes.io/projected/ef8e73f7-f48d-47e6-bec9-a845d5704981-kube-api-access-rw9cx\") pod \"coredns-7db6d8ff4d-ts8q9\" (UID: \"ef8e73f7-f48d-47e6-bec9-a845d5704981\") " pod="kube-system/coredns-7db6d8ff4d-ts8q9" Mar 25 02:00:19.895200 systemd[1]: Created slice kubepods-burstable-podef8e73f7_f48d_47e6_bec9_a845d5704981.slice - libcontainer container kubepods-burstable-podef8e73f7_f48d_47e6_bec9_a845d5704981.slice. Mar 25 02:00:19.905705 systemd[1]: Created slice kubepods-burstable-podf360efc5_c603_47d4_8afb_40a70f85e34c.slice - libcontainer container kubepods-burstable-podf360efc5_c603_47d4_8afb_40a70f85e34c.slice. 
Mar 25 02:00:19.991733 kubelet[2782]: I0325 02:00:19.990941 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f360efc5-c603-47d4-8afb-40a70f85e34c-config-volume\") pod \"coredns-7db6d8ff4d-lglth\" (UID: \"f360efc5-c603-47d4-8afb-40a70f85e34c\") " pod="kube-system/coredns-7db6d8ff4d-lglth" Mar 25 02:00:19.991733 kubelet[2782]: I0325 02:00:19.990982 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngthr\" (UniqueName: \"kubernetes.io/projected/f360efc5-c603-47d4-8afb-40a70f85e34c-kube-api-access-ngthr\") pod \"coredns-7db6d8ff4d-lglth\" (UID: \"f360efc5-c603-47d4-8afb-40a70f85e34c\") " pod="kube-system/coredns-7db6d8ff4d-lglth" Mar 25 02:00:20.199884 containerd[1487]: time="2025-03-25T02:00:20.199844940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ts8q9,Uid:ef8e73f7-f48d-47e6-bec9-a845d5704981,Namespace:kube-system,Attempt:0,}" Mar 25 02:00:20.210915 containerd[1487]: time="2025-03-25T02:00:20.210842032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lglth,Uid:f360efc5-c603-47d4-8afb-40a70f85e34c,Namespace:kube-system,Attempt:0,}" Mar 25 02:00:21.900090 systemd-networkd[1389]: cilium_host: Link UP Mar 25 02:00:21.900586 systemd-networkd[1389]: cilium_net: Link UP Mar 25 02:00:21.900986 systemd-networkd[1389]: cilium_net: Gained carrier Mar 25 02:00:21.901364 systemd-networkd[1389]: cilium_host: Gained carrier Mar 25 02:00:22.005404 systemd-networkd[1389]: cilium_vxlan: Link UP Mar 25 02:00:22.005640 systemd-networkd[1389]: cilium_vxlan: Gained carrier Mar 25 02:00:22.040549 systemd-networkd[1389]: cilium_net: Gained IPv6LL Mar 25 02:00:22.248478 kernel: NET: Registered PF_ALG protocol family Mar 25 02:00:22.776780 systemd-networkd[1389]: cilium_host: Gained IPv6LL Mar 25 02:00:23.043152 systemd-networkd[1389]: lxc_health: Link UP Mar 25 02:00:23.055448 systemd-networkd[1389]: lxc_health: Gained carrier Mar 25 02:00:23.235032 systemd-networkd[1389]: lxc7393c256719b: Link UP Mar 25 02:00:23.237448 kernel: eth0: renamed from tmp913dd Mar 25 02:00:23.246745 systemd-networkd[1389]: lxc7393c256719b: Gained carrier Mar 25 02:00:23.259770 kernel: eth0: renamed from tmp5eefb Mar 25 02:00:23.265819 systemd-networkd[1389]: lxc0507c5be67ed: Link UP Mar 25 02:00:23.266106 systemd-networkd[1389]: lxc0507c5be67ed: Gained carrier Mar 25 02:00:23.423615 kubelet[2782]: I0325 02:00:23.423264 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-krnhj" podStartSLOduration=9.557949047 podStartE2EDuration="20.423244243s" podCreationTimestamp="2025-03-25 02:00:03 +0000 UTC" firstStartedPulling="2025-03-25 02:00:03.515034841 +0000 UTC m=+15.218077789" lastFinishedPulling="2025-03-25 02:00:14.380330037 +0000 UTC m=+26.083372985" observedRunningTime="2025-03-25 02:00:20.582683897 +0000 UTC m=+32.285726906" watchObservedRunningTime="2025-03-25 02:00:23.423244243 +0000 UTC m=+35.126287191" Mar 25 02:00:23.544561 systemd-networkd[1389]: cilium_vxlan: Gained IPv6LL Mar 25 02:00:24.248681 systemd-networkd[1389]: lxc_health: Gained IPv6LL Mar 25 02:00:24.632665 systemd-networkd[1389]: lxc7393c256719b: Gained IPv6LL Mar 25 02:00:24.888722 systemd-networkd[1389]: lxc0507c5be67ed: Gained IPv6LL Mar 25 02:00:27.733335 containerd[1487]: time="2025-03-25T02:00:27.732185646Z" level=info msg="connecting to shim 
913ddeb9b03a166492b0ddde9139336362824b6e90c85ae2117b96322db27214" address="unix:///run/containerd/s/f4c79507bb18221c96994a348deb078eb1fe1dfc864d61f089667ca5ea084a3b" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:00:27.763600 systemd[1]: Started cri-containerd-913ddeb9b03a166492b0ddde9139336362824b6e90c85ae2117b96322db27214.scope - libcontainer container 913ddeb9b03a166492b0ddde9139336362824b6e90c85ae2117b96322db27214. Mar 25 02:00:27.800102 containerd[1487]: time="2025-03-25T02:00:27.800016308Z" level=info msg="connecting to shim 5eefb033674f62d131d56c178ec5d54eccc14efc7ecbaecf56cfa0d2afaf49f6" address="unix:///run/containerd/s/83f93a544e3f108510b4aeb139cf613b33b9f4719fa132395ae8b9f698d251a9" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:00:27.841736 systemd[1]: Started cri-containerd-5eefb033674f62d131d56c178ec5d54eccc14efc7ecbaecf56cfa0d2afaf49f6.scope - libcontainer container 5eefb033674f62d131d56c178ec5d54eccc14efc7ecbaecf56cfa0d2afaf49f6. Mar 25 02:00:27.853160 containerd[1487]: time="2025-03-25T02:00:27.852979485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ts8q9,Uid:ef8e73f7-f48d-47e6-bec9-a845d5704981,Namespace:kube-system,Attempt:0,} returns sandbox id \"913ddeb9b03a166492b0ddde9139336362824b6e90c85ae2117b96322db27214\"" Mar 25 02:00:27.861568 containerd[1487]: time="2025-03-25T02:00:27.860760672Z" level=info msg="CreateContainer within sandbox \"913ddeb9b03a166492b0ddde9139336362824b6e90c85ae2117b96322db27214\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 02:00:27.890982 containerd[1487]: time="2025-03-25T02:00:27.890944857Z" level=info msg="Container a55c9fa3576461976f1d4daefcd26a58a724c84780d970ec2ef02e91dc0d82d2: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:00:27.902512 containerd[1487]: time="2025-03-25T02:00:27.902228823Z" level=info msg="CreateContainer within sandbox \"913ddeb9b03a166492b0ddde9139336362824b6e90c85ae2117b96322db27214\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a55c9fa3576461976f1d4daefcd26a58a724c84780d970ec2ef02e91dc0d82d2\"" Mar 25 02:00:27.903320 containerd[1487]: time="2025-03-25T02:00:27.903261603Z" level=info msg="StartContainer for \"a55c9fa3576461976f1d4daefcd26a58a724c84780d970ec2ef02e91dc0d82d2\"" Mar 25 02:00:27.905564 containerd[1487]: time="2025-03-25T02:00:27.905232804Z" level=info msg="connecting to shim a55c9fa3576461976f1d4daefcd26a58a724c84780d970ec2ef02e91dc0d82d2" address="unix:///run/containerd/s/f4c79507bb18221c96994a348deb078eb1fe1dfc864d61f089667ca5ea084a3b" protocol=ttrpc version=3 Mar 25 02:00:27.936668 systemd[1]: Started cri-containerd-a55c9fa3576461976f1d4daefcd26a58a724c84780d970ec2ef02e91dc0d82d2.scope - libcontainer container a55c9fa3576461976f1d4daefcd26a58a724c84780d970ec2ef02e91dc0d82d2. 
Mar 25 02:00:27.942623 containerd[1487]: time="2025-03-25T02:00:27.942509448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lglth,Uid:f360efc5-c603-47d4-8afb-40a70f85e34c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5eefb033674f62d131d56c178ec5d54eccc14efc7ecbaecf56cfa0d2afaf49f6\"" Mar 25 02:00:27.947963 containerd[1487]: time="2025-03-25T02:00:27.947920839Z" level=info msg="CreateContainer within sandbox \"5eefb033674f62d131d56c178ec5d54eccc14efc7ecbaecf56cfa0d2afaf49f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 02:00:27.964576 containerd[1487]: time="2025-03-25T02:00:27.963796809Z" level=info msg="Container 00e2872aebadc8e993624d26645eb7dcdb72482f195c46259ca3785957012254: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:00:27.977475 containerd[1487]: time="2025-03-25T02:00:27.976959700Z" level=info msg="CreateContainer within sandbox \"5eefb033674f62d131d56c178ec5d54eccc14efc7ecbaecf56cfa0d2afaf49f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00e2872aebadc8e993624d26645eb7dcdb72482f195c46259ca3785957012254\"" Mar 25 02:00:27.978901 containerd[1487]: time="2025-03-25T02:00:27.978872721Z" level=info msg="StartContainer for \"00e2872aebadc8e993624d26645eb7dcdb72482f195c46259ca3785957012254\"" Mar 25 02:00:27.981754 containerd[1487]: time="2025-03-25T02:00:27.981722059Z" level=info msg="connecting to shim 00e2872aebadc8e993624d26645eb7dcdb72482f195c46259ca3785957012254" address="unix:///run/containerd/s/83f93a544e3f108510b4aeb139cf613b33b9f4719fa132395ae8b9f698d251a9" protocol=ttrpc version=3 Mar 25 02:00:27.993938 containerd[1487]: time="2025-03-25T02:00:27.993255477Z" level=info msg="StartContainer for \"a55c9fa3576461976f1d4daefcd26a58a724c84780d970ec2ef02e91dc0d82d2\" returns successfully" Mar 25 02:00:28.023613 systemd[1]: Started cri-containerd-00e2872aebadc8e993624d26645eb7dcdb72482f195c46259ca3785957012254.scope - libcontainer container 00e2872aebadc8e993624d26645eb7dcdb72482f195c46259ca3785957012254. Mar 25 02:00:28.075083 containerd[1487]: time="2025-03-25T02:00:28.075024536Z" level=info msg="StartContainer for \"00e2872aebadc8e993624d26645eb7dcdb72482f195c46259ca3785957012254\" returns successfully" Mar 25 02:00:28.618573 kubelet[2782]: I0325 02:00:28.617609 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lglth" podStartSLOduration=25.617572439 podStartE2EDuration="25.617572439s" podCreationTimestamp="2025-03-25 02:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:00:28.616315344 +0000 UTC m=+40.319358342" watchObservedRunningTime="2025-03-25 02:00:28.617572439 +0000 UTC m=+40.320615427" Mar 25 02:00:28.646850 kubelet[2782]: I0325 02:00:28.644165 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ts8q9" podStartSLOduration=25.64413253 podStartE2EDuration="25.64413253s" podCreationTimestamp="2025-03-25 02:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:00:28.643226491 +0000 UTC m=+40.346269489" watchObservedRunningTime="2025-03-25 02:00:28.64413253 +0000 UTC m=+40.347175518" Mar 25 02:02:33.149208 systemd[1]: Started sshd@7-172.24.4.226:22-172.24.4.1:36416.service - OpenSSH per-connection server daemon (172.24.4.1:36416). 
Mar 25 02:02:34.511700 sshd[4090]: Accepted publickey for core from 172.24.4.1 port 36416 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:02:34.514660 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:02:34.525543 systemd-logind[1465]: New session 10 of user core. Mar 25 02:02:34.534749 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 02:02:35.372492 sshd[4094]: Connection closed by 172.24.4.1 port 36416 Mar 25 02:02:35.373028 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Mar 25 02:02:35.378742 systemd[1]: sshd@7-172.24.4.226:22-172.24.4.1:36416.service: Deactivated successfully. Mar 25 02:02:35.380898 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 02:02:35.382179 systemd-logind[1465]: Session 10 logged out. Waiting for processes to exit. Mar 25 02:02:35.383707 systemd-logind[1465]: Removed session 10. Mar 25 02:02:40.394022 systemd[1]: Started sshd@8-172.24.4.226:22-172.24.4.1:44928.service - OpenSSH per-connection server daemon (172.24.4.1:44928). Mar 25 02:02:41.687797 sshd[4108]: Accepted publickey for core from 172.24.4.1 port 44928 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:02:41.692620 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:02:41.703633 systemd-logind[1465]: New session 11 of user core. Mar 25 02:02:41.710711 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 25 02:02:42.432323 sshd[4110]: Connection closed by 172.24.4.1 port 44928 Mar 25 02:02:42.433388 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Mar 25 02:02:42.439082 systemd[1]: sshd@8-172.24.4.226:22-172.24.4.1:44928.service: Deactivated successfully. Mar 25 02:02:42.442712 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 02:02:42.445940 systemd-logind[1465]: Session 11 logged out. Waiting for processes to exit. Mar 25 02:02:42.447440 systemd-logind[1465]: Removed session 11. Mar 25 02:02:47.454937 systemd[1]: Started sshd@9-172.24.4.226:22-172.24.4.1:33562.service - OpenSSH per-connection server daemon (172.24.4.1:33562). Mar 25 02:02:48.771688 sshd[4123]: Accepted publickey for core from 172.24.4.1 port 33562 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:02:48.775061 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:02:48.788031 systemd-logind[1465]: New session 12 of user core. Mar 25 02:02:48.796754 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 25 02:02:49.564543 sshd[4127]: Connection closed by 172.24.4.1 port 33562 Mar 25 02:02:49.565608 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Mar 25 02:02:49.572977 systemd[1]: sshd@9-172.24.4.226:22-172.24.4.1:33562.service: Deactivated successfully. Mar 25 02:02:49.577299 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 02:02:49.579561 systemd-logind[1465]: Session 12 logged out. Waiting for processes to exit. Mar 25 02:02:49.582802 systemd-logind[1465]: Removed session 12. Mar 25 02:02:54.590576 systemd[1]: Started sshd@10-172.24.4.226:22-172.24.4.1:45262.service - OpenSSH per-connection server daemon (172.24.4.1:45262). 
Mar 25 02:02:55.879770 sshd[4139]: Accepted publickey for core from 172.24.4.1 port 45262 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:02:55.882551 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:02:55.893890 systemd-logind[1465]: New session 13 of user core. Mar 25 02:02:55.902008 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 25 02:02:56.621547 sshd[4141]: Connection closed by 172.24.4.1 port 45262 Mar 25 02:02:56.623992 sshd-session[4139]: pam_unix(sshd:session): session closed for user core Mar 25 02:02:56.640085 systemd[1]: sshd@10-172.24.4.226:22-172.24.4.1:45262.service: Deactivated successfully. Mar 25 02:02:56.645741 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 02:02:56.648769 systemd-logind[1465]: Session 13 logged out. Waiting for processes to exit. Mar 25 02:02:56.654954 systemd[1]: Started sshd@11-172.24.4.226:22-172.24.4.1:45274.service - OpenSSH per-connection server daemon (172.24.4.1:45274). Mar 25 02:02:56.658242 systemd-logind[1465]: Removed session 13. Mar 25 02:02:57.989150 sshd[4153]: Accepted publickey for core from 172.24.4.1 port 45274 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:02:57.992180 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:02:58.004551 systemd-logind[1465]: New session 14 of user core. Mar 25 02:02:58.008749 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 02:02:58.828474 sshd[4157]: Connection closed by 172.24.4.1 port 45274 Mar 25 02:02:58.831333 sshd-session[4153]: pam_unix(sshd:session): session closed for user core Mar 25 02:02:58.849734 systemd[1]: sshd@11-172.24.4.226:22-172.24.4.1:45274.service: Deactivated successfully. Mar 25 02:02:58.854562 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 02:02:58.856940 systemd-logind[1465]: Session 14 logged out. Waiting for processes to exit. Mar 25 02:02:58.862782 systemd[1]: Started sshd@12-172.24.4.226:22-172.24.4.1:45290.service - OpenSSH per-connection server daemon (172.24.4.1:45290). Mar 25 02:02:58.867266 systemd-logind[1465]: Removed session 14. Mar 25 02:02:59.951112 sshd[4166]: Accepted publickey for core from 172.24.4.1 port 45290 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:02:59.954954 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:02:59.966561 systemd-logind[1465]: New session 15 of user core. Mar 25 02:02:59.977856 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 02:03:00.655226 sshd[4169]: Connection closed by 172.24.4.1 port 45290 Mar 25 02:03:00.656324 sshd-session[4166]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:00.663615 systemd[1]: sshd@12-172.24.4.226:22-172.24.4.1:45290.service: Deactivated successfully. Mar 25 02:03:00.668943 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 02:03:00.671370 systemd-logind[1465]: Session 15 logged out. Waiting for processes to exit. Mar 25 02:03:00.674053 systemd-logind[1465]: Removed session 15. Mar 25 02:03:05.681029 systemd[1]: Started sshd@13-172.24.4.226:22-172.24.4.1:43974.service - OpenSSH per-connection server daemon (172.24.4.1:43974). 
Mar 25 02:03:07.032310 sshd[4182]: Accepted publickey for core from 172.24.4.1 port 43974 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:03:07.035903 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:03:07.049988 systemd-logind[1465]: New session 16 of user core. Mar 25 02:03:07.060499 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 25 02:03:07.674945 sshd[4184]: Connection closed by 172.24.4.1 port 43974 Mar 25 02:03:07.675631 sshd-session[4182]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:07.695418 systemd[1]: sshd@13-172.24.4.226:22-172.24.4.1:43974.service: Deactivated successfully. Mar 25 02:03:07.699106 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 02:03:07.701397 systemd-logind[1465]: Session 16 logged out. Waiting for processes to exit. Mar 25 02:03:07.706174 systemd[1]: Started sshd@14-172.24.4.226:22-172.24.4.1:43990.service - OpenSSH per-connection server daemon (172.24.4.1:43990). Mar 25 02:03:07.710793 systemd-logind[1465]: Removed session 16. Mar 25 02:03:08.868608 sshd[4195]: Accepted publickey for core from 172.24.4.1 port 43990 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:03:08.871308 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:03:08.883560 systemd-logind[1465]: New session 17 of user core. Mar 25 02:03:08.890742 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 25 02:03:09.676807 sshd[4198]: Connection closed by 172.24.4.1 port 43990 Mar 25 02:03:09.677497 sshd-session[4195]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:09.692537 systemd[1]: sshd@14-172.24.4.226:22-172.24.4.1:43990.service: Deactivated successfully. Mar 25 02:03:09.696550 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 02:03:09.700745 systemd-logind[1465]: Session 17 logged out. Waiting for processes to exit. Mar 25 02:03:09.704090 systemd[1]: Started sshd@15-172.24.4.226:22-172.24.4.1:43992.service - OpenSSH per-connection server daemon (172.24.4.1:43992). Mar 25 02:03:09.708628 systemd-logind[1465]: Removed session 17. Mar 25 02:03:10.873954 sshd[4206]: Accepted publickey for core from 172.24.4.1 port 43992 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:03:10.877033 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:03:10.889157 systemd-logind[1465]: New session 18 of user core. Mar 25 02:03:10.894803 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 25 02:03:13.732917 sshd[4209]: Connection closed by 172.24.4.1 port 43992 Mar 25 02:03:13.733806 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:13.751980 systemd[1]: sshd@15-172.24.4.226:22-172.24.4.1:43992.service: Deactivated successfully. Mar 25 02:03:13.758752 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 02:03:13.765044 systemd-logind[1465]: Session 18 logged out. Waiting for processes to exit. Mar 25 02:03:13.768934 systemd[1]: Started sshd@16-172.24.4.226:22-172.24.4.1:33404.service - OpenSSH per-connection server daemon (172.24.4.1:33404). Mar 25 02:03:13.772894 systemd-logind[1465]: Removed session 18. 
Mar 25 02:03:15.084636 sshd[4225]: Accepted publickey for core from 172.24.4.1 port 33404 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:03:15.087334 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:03:15.100210 systemd-logind[1465]: New session 19 of user core. Mar 25 02:03:15.107773 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 25 02:03:16.184085 sshd[4228]: Connection closed by 172.24.4.1 port 33404 Mar 25 02:03:16.183903 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:16.207245 systemd[1]: sshd@16-172.24.4.226:22-172.24.4.1:33404.service: Deactivated successfully. Mar 25 02:03:16.213187 systemd[1]: session-19.scope: Deactivated successfully. Mar 25 02:03:16.215814 systemd-logind[1465]: Session 19 logged out. Waiting for processes to exit. Mar 25 02:03:16.221926 systemd[1]: Started sshd@17-172.24.4.226:22-172.24.4.1:33410.service - OpenSSH per-connection server daemon (172.24.4.1:33410). Mar 25 02:03:16.226280 systemd-logind[1465]: Removed session 19. Mar 25 02:03:17.540695 sshd[4237]: Accepted publickey for core from 172.24.4.1 port 33410 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:03:17.543737 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:03:17.556590 systemd-logind[1465]: New session 20 of user core. Mar 25 02:03:17.562758 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 25 02:03:18.284826 sshd[4240]: Connection closed by 172.24.4.1 port 33410 Mar 25 02:03:18.284508 sshd-session[4237]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:18.291307 systemd[1]: sshd@17-172.24.4.226:22-172.24.4.1:33410.service: Deactivated successfully. Mar 25 02:03:18.296175 systemd[1]: session-20.scope: Deactivated successfully. Mar 25 02:03:18.300078 systemd-logind[1465]: Session 20 logged out. Waiting for processes to exit. Mar 25 02:03:18.302371 systemd-logind[1465]: Removed session 20. Mar 25 02:03:23.308137 systemd[1]: Started sshd@18-172.24.4.226:22-172.24.4.1:33416.service - OpenSSH per-connection server daemon (172.24.4.1:33416). Mar 25 02:03:24.555248 sshd[4255]: Accepted publickey for core from 172.24.4.1 port 33416 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:03:24.559015 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:03:24.570947 systemd-logind[1465]: New session 21 of user core. Mar 25 02:03:24.578775 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 25 02:03:25.371619 sshd[4257]: Connection closed by 172.24.4.1 port 33416 Mar 25 02:03:25.372829 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:25.380937 systemd[1]: sshd@18-172.24.4.226:22-172.24.4.1:33416.service: Deactivated successfully. Mar 25 02:03:25.386640 systemd[1]: session-21.scope: Deactivated successfully. Mar 25 02:03:25.388676 systemd-logind[1465]: Session 21 logged out. Waiting for processes to exit. Mar 25 02:03:25.391102 systemd-logind[1465]: Removed session 21. Mar 25 02:03:30.394053 systemd[1]: Started sshd@19-172.24.4.226:22-172.24.4.1:60520.service - OpenSSH per-connection server daemon (172.24.4.1:60520). 
Mar 25 02:03:31.598873 sshd[4270]: Accepted publickey for core from 172.24.4.1 port 60520 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:03:31.601581 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:03:31.612920 systemd-logind[1465]: New session 22 of user core. Mar 25 02:03:31.619781 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 25 02:03:32.331047 sshd[4272]: Connection closed by 172.24.4.1 port 60520 Mar 25 02:03:32.332231 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:32.339124 systemd[1]: sshd@19-172.24.4.226:22-172.24.4.1:60520.service: Deactivated successfully. Mar 25 02:03:32.343247 systemd[1]: session-22.scope: Deactivated successfully. Mar 25 02:03:32.347986 systemd-logind[1465]: Session 22 logged out. Waiting for processes to exit. Mar 25 02:03:32.350539 systemd-logind[1465]: Removed session 22. Mar 25 02:03:37.352759 systemd[1]: Started sshd@20-172.24.4.226:22-172.24.4.1:49692.service - OpenSSH per-connection server daemon (172.24.4.1:49692). Mar 25 02:03:38.689881 sshd[4287]: Accepted publickey for core from 172.24.4.1 port 49692 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:03:38.692879 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:03:38.704568 systemd-logind[1465]: New session 23 of user core. Mar 25 02:03:38.712742 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 25 02:03:39.436561 sshd[4289]: Connection closed by 172.24.4.1 port 49692 Mar 25 02:03:39.437377 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:39.449136 systemd[1]: sshd@20-172.24.4.226:22-172.24.4.1:49692.service: Deactivated successfully. Mar 25 02:03:39.451374 systemd[1]: session-23.scope: Deactivated successfully. Mar 25 02:03:39.452714 systemd-logind[1465]: Session 23 logged out. Waiting for processes to exit. Mar 25 02:03:39.454894 systemd[1]: Started sshd@21-172.24.4.226:22-172.24.4.1:49700.service - OpenSSH per-connection server daemon (172.24.4.1:49700). Mar 25 02:03:39.457293 systemd-logind[1465]: Removed session 23. Mar 25 02:03:40.826241 sshd[4300]: Accepted publickey for core from 172.24.4.1 port 49700 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:03:40.829276 sshd-session[4300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:03:40.840748 systemd-logind[1465]: New session 24 of user core. Mar 25 02:03:40.851736 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 25 02:03:43.476535 containerd[1487]: time="2025-03-25T02:03:43.476220548Z" level=info msg="StopContainer for \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" with timeout 30 (s)" Mar 25 02:03:43.478078 containerd[1487]: time="2025-03-25T02:03:43.478051323Z" level=info msg="Stop container \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" with signal terminated" Mar 25 02:03:43.494884 systemd[1]: cri-containerd-6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b.scope: Deactivated successfully. 
Mar 25 02:03:43.498310 containerd[1487]: time="2025-03-25T02:03:43.498259494Z" level=info msg="received exit event container_id:\"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" id:\"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" pid:3334 exited_at:{seconds:1742868223 nanos:497926938}" Mar 25 02:03:43.500040 containerd[1487]: time="2025-03-25T02:03:43.500012373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" id:\"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" pid:3334 exited_at:{seconds:1742868223 nanos:497926938}" Mar 25 02:03:43.501905 containerd[1487]: time="2025-03-25T02:03:43.501875738Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 02:03:43.506668 containerd[1487]: time="2025-03-25T02:03:43.506598876Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" id:\"49282186c0ee374e80ad3c5d19384303105927c3c84f1de475bf72f33954c7ee\" pid:4327 exited_at:{seconds:1742868223 nanos:505606498}" Mar 25 02:03:43.508364 containerd[1487]: time="2025-03-25T02:03:43.508333338Z" level=info msg="StopContainer for \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" with timeout 2 (s)" Mar 25 02:03:43.508662 containerd[1487]: time="2025-03-25T02:03:43.508639785Z" level=info msg="Stop container \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" with signal terminated" Mar 25 02:03:43.518321 systemd-networkd[1389]: lxc_health: Link DOWN Mar 25 02:03:43.518333 systemd-networkd[1389]: lxc_health: Lost carrier Mar 25 02:03:43.528056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b-rootfs.mount: Deactivated successfully. Mar 25 02:03:43.535001 kubelet[2782]: E0325 02:03:43.534688 2782 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 02:03:43.535834 systemd[1]: cri-containerd-b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d.scope: Deactivated successfully. Mar 25 02:03:43.536105 systemd[1]: cri-containerd-b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d.scope: Consumed 8.721s CPU time, 127.1M memory peak, 136K read from disk, 13.3M written to disk. 
Mar 25 02:03:43.537298 containerd[1487]: time="2025-03-25T02:03:43.537251418Z" level=info msg="received exit event container_id:\"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" id:\"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" pid:3401 exited_at:{seconds:1742868223 nanos:536173340}" Mar 25 02:03:43.539119 containerd[1487]: time="2025-03-25T02:03:43.538960835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" id:\"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" pid:3401 exited_at:{seconds:1742868223 nanos:536173340}" Mar 25 02:03:43.558976 containerd[1487]: time="2025-03-25T02:03:43.558937751Z" level=info msg="StopContainer for \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" returns successfully" Mar 25 02:03:43.560976 containerd[1487]: time="2025-03-25T02:03:43.560642879Z" level=info msg="StopPodSandbox for \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\"" Mar 25 02:03:43.561586 containerd[1487]: time="2025-03-25T02:03:43.561565937Z" level=info msg="Container to stop \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 02:03:43.569196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d-rootfs.mount: Deactivated successfully. Mar 25 02:03:43.574301 systemd[1]: cri-containerd-2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4.scope: Deactivated successfully. Mar 25 02:03:43.580173 containerd[1487]: time="2025-03-25T02:03:43.580106430Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" id:\"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" pid:3000 exit_status:137 exited_at:{seconds:1742868223 nanos:579738738}" Mar 25 02:03:43.585215 containerd[1487]: time="2025-03-25T02:03:43.584922131Z" level=info msg="StopContainer for \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" returns successfully" Mar 25 02:03:43.585951 containerd[1487]: time="2025-03-25T02:03:43.585751872Z" level=info msg="StopPodSandbox for \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\"" Mar 25 02:03:43.586100 containerd[1487]: time="2025-03-25T02:03:43.586058379Z" level=info msg="Container to stop \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 02:03:43.586100 containerd[1487]: time="2025-03-25T02:03:43.586080270Z" level=info msg="Container to stop \"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 02:03:43.586100 containerd[1487]: time="2025-03-25T02:03:43.586092964Z" level=info msg="Container to stop \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 02:03:43.586590 containerd[1487]: time="2025-03-25T02:03:43.586386206Z" level=info msg="Container to stop \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 02:03:43.586745 containerd[1487]: time="2025-03-25T02:03:43.586531169Z" level=info msg="Container to stop 
\"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 02:03:43.597271 systemd[1]: cri-containerd-16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95.scope: Deactivated successfully. Mar 25 02:03:43.625405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4-rootfs.mount: Deactivated successfully. Mar 25 02:03:43.625770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95-rootfs.mount: Deactivated successfully. Mar 25 02:03:43.640019 containerd[1487]: time="2025-03-25T02:03:43.639845941Z" level=info msg="shim disconnected" id=16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95 namespace=k8s.io Mar 25 02:03:43.640019 containerd[1487]: time="2025-03-25T02:03:43.639880426Z" level=warning msg="cleaning up after shim disconnected" id=16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95 namespace=k8s.io Mar 25 02:03:43.640019 containerd[1487]: time="2025-03-25T02:03:43.639890014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 02:03:43.640636 containerd[1487]: time="2025-03-25T02:03:43.639851932Z" level=info msg="shim disconnected" id=2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4 namespace=k8s.io Mar 25 02:03:43.640636 containerd[1487]: time="2025-03-25T02:03:43.640333649Z" level=warning msg="cleaning up after shim disconnected" id=2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4 namespace=k8s.io Mar 25 02:03:43.640636 containerd[1487]: time="2025-03-25T02:03:43.640342225Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 02:03:43.655020 containerd[1487]: time="2025-03-25T02:03:43.654958896Z" level=info msg="received exit event sandbox_id:\"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" exit_status:137 exited_at:{seconds:1742868223 nanos:597383006}" Mar 25 02:03:43.657676 containerd[1487]: time="2025-03-25T02:03:43.655774350Z" level=info msg="TearDown network for sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" successfully" Mar 25 02:03:43.657912 containerd[1487]: time="2025-03-25T02:03:43.657876857Z" level=info msg="StopPodSandbox for \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" returns successfully" Mar 25 02:03:43.659979 containerd[1487]: time="2025-03-25T02:03:43.659761713Z" level=info msg="received exit event sandbox_id:\"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" exit_status:137 exited_at:{seconds:1742868223 nanos:579738738}" Mar 25 02:03:43.659979 containerd[1487]: time="2025-03-25T02:03:43.659816415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" id:\"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" pid:2928 exit_status:137 exited_at:{seconds:1742868223 nanos:597383006}" Mar 25 02:03:43.660521 containerd[1487]: time="2025-03-25T02:03:43.660336474Z" level=info msg="TearDown network for sandbox \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" successfully" Mar 25 02:03:43.660521 containerd[1487]: time="2025-03-25T02:03:43.660357544Z" level=info msg="StopPodSandbox for \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" returns successfully" Mar 25 02:03:43.660917 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95-shm.mount: Deactivated successfully. Mar 25 02:03:43.663624 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4-shm.mount: Deactivated successfully. Mar 25 02:03:43.740042 kubelet[2782]: I0325 02:03:43.739954 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 02:03:43.742852 kubelet[2782]: I0325 02:03:43.739977 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-run\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.742852 kubelet[2782]: I0325 02:03:43.740237 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cni-path\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.742852 kubelet[2782]: I0325 02:03:43.740261 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-bpf-maps\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.742852 kubelet[2782]: I0325 02:03:43.740354 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cni-path" (OuterVolumeSpecName: "cni-path") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 02:03:43.742852 kubelet[2782]: I0325 02:03:43.740467 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-config-path\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.742852 kubelet[2782]: I0325 02:03:43.740515 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-hostproc\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.743457 kubelet[2782]: I0325 02:03:43.740555 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-hubble-tls\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.743457 kubelet[2782]: I0325 02:03:43.740590 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjz82\" (UniqueName: \"kubernetes.io/projected/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-kube-api-access-cjz82\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.743457 kubelet[2782]: I0325 02:03:43.740619 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-lib-modules\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.743457 kubelet[2782]: I0325 02:03:43.740648 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-xtables-lock\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.743457 kubelet[2782]: I0325 02:03:43.740681 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spbkv\" (UniqueName: \"kubernetes.io/projected/48b8a7e2-8e2c-4852-95da-f83640820ac1-kube-api-access-spbkv\") pod \"48b8a7e2-8e2c-4852-95da-f83640820ac1\" (UID: \"48b8a7e2-8e2c-4852-95da-f83640820ac1\") " Mar 25 02:03:43.743457 kubelet[2782]: I0325 02:03:43.740715 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48b8a7e2-8e2c-4852-95da-f83640820ac1-cilium-config-path\") pod \"48b8a7e2-8e2c-4852-95da-f83640820ac1\" (UID: \"48b8a7e2-8e2c-4852-95da-f83640820ac1\") " Mar 25 02:03:43.743624 kubelet[2782]: I0325 02:03:43.740744 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-cgroup\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.743624 kubelet[2782]: I0325 02:03:43.740776 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-host-proc-sys-kernel\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: 
\"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.743624 kubelet[2782]: I0325 02:03:43.740804 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-etc-cni-netd\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.743624 kubelet[2782]: I0325 02:03:43.740864 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-clustermesh-secrets\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.743624 kubelet[2782]: I0325 02:03:43.740893 2782 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-host-proc-sys-net\") pod \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\" (UID: \"74ed71b2-ca19-400a-9e9a-6e2eb015a91a\") " Mar 25 02:03:43.743624 kubelet[2782]: I0325 02:03:43.740952 2782 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-run\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.743781 kubelet[2782]: I0325 02:03:43.740971 2782 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cni-path\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.743781 kubelet[2782]: I0325 02:03:43.741005 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 02:03:43.743781 kubelet[2782]: I0325 02:03:43.741036 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-hostproc" (OuterVolumeSpecName: "hostproc") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 02:03:43.743781 kubelet[2782]: I0325 02:03:43.742373 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 02:03:43.743781 kubelet[2782]: I0325 02:03:43.742407 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 02:03:43.744218 kubelet[2782]: I0325 02:03:43.744179 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 02:03:43.744298 kubelet[2782]: I0325 02:03:43.744267 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 02:03:43.744369 kubelet[2782]: I0325 02:03:43.744341 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 02:03:43.748280 kubelet[2782]: I0325 02:03:43.748251 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48b8a7e2-8e2c-4852-95da-f83640820ac1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "48b8a7e2-8e2c-4852-95da-f83640820ac1" (UID: "48b8a7e2-8e2c-4852-95da-f83640820ac1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 02:03:43.748406 kubelet[2782]: I0325 02:03:43.748388 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 02:03:43.749278 kubelet[2782]: I0325 02:03:43.749247 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 02:03:43.749546 kubelet[2782]: I0325 02:03:43.749500 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 02:03:43.752819 kubelet[2782]: I0325 02:03:43.752622 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 25 02:03:43.752913 kubelet[2782]: I0325 02:03:43.752728 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-kube-api-access-cjz82" (OuterVolumeSpecName: "kube-api-access-cjz82") pod "74ed71b2-ca19-400a-9e9a-6e2eb015a91a" (UID: "74ed71b2-ca19-400a-9e9a-6e2eb015a91a"). InnerVolumeSpecName "kube-api-access-cjz82". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 02:03:43.755012 kubelet[2782]: I0325 02:03:43.754966 2782 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48b8a7e2-8e2c-4852-95da-f83640820ac1-kube-api-access-spbkv" (OuterVolumeSpecName: "kube-api-access-spbkv") pod "48b8a7e2-8e2c-4852-95da-f83640820ac1" (UID: "48b8a7e2-8e2c-4852-95da-f83640820ac1"). InnerVolumeSpecName "kube-api-access-spbkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 02:03:43.841600 kubelet[2782]: I0325 02:03:43.841481 2782 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-host-proc-sys-net\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.841600 kubelet[2782]: I0325 02:03:43.841542 2782 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-hostproc\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.841600 kubelet[2782]: I0325 02:03:43.841569 2782 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-hubble-tls\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.841600 kubelet[2782]: I0325 02:03:43.841593 2782 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cjz82\" (UniqueName: \"kubernetes.io/projected/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-kube-api-access-cjz82\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.841600 kubelet[2782]: I0325 02:03:43.841617 2782 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-bpf-maps\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.842068 kubelet[2782]: I0325 02:03:43.841640 2782 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-config-path\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.842068 kubelet[2782]: I0325 02:03:43.841663 2782 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-lib-modules\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.842068 kubelet[2782]: I0325 02:03:43.841686 2782 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-xtables-lock\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.842068 kubelet[2782]: I0325 02:03:43.841711 2782 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-spbkv\" (UniqueName: 
\"kubernetes.io/projected/48b8a7e2-8e2c-4852-95da-f83640820ac1-kube-api-access-spbkv\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.842068 kubelet[2782]: I0325 02:03:43.841734 2782 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-cilium-cgroup\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.842068 kubelet[2782]: I0325 02:03:43.841757 2782 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48b8a7e2-8e2c-4852-95da-f83640820ac1-cilium-config-path\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.842068 kubelet[2782]: I0325 02:03:43.841783 2782 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-etc-cni-netd\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.842587 kubelet[2782]: I0325 02:03:43.841806 2782 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-host-proc-sys-kernel\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:43.842587 kubelet[2782]: I0325 02:03:43.841828 2782 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/74ed71b2-ca19-400a-9e9a-6e2eb015a91a-clustermesh-secrets\") on node \"ci-4284-0-0-7-d93044f3e4.novalocal\" DevicePath \"\"" Mar 25 02:03:44.244991 kubelet[2782]: I0325 02:03:44.244931 2782 scope.go:117] "RemoveContainer" containerID="6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b" Mar 25 02:03:44.255487 containerd[1487]: time="2025-03-25T02:03:44.254001143Z" level=info msg="RemoveContainer for \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\"" Mar 25 02:03:44.263091 systemd[1]: Removed slice kubepods-besteffort-pod48b8a7e2_8e2c_4852_95da_f83640820ac1.slice - libcontainer container kubepods-besteffort-pod48b8a7e2_8e2c_4852_95da_f83640820ac1.slice. 
Mar 25 02:03:44.273744 containerd[1487]: time="2025-03-25T02:03:44.273640386Z" level=info msg="RemoveContainer for \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" returns successfully" Mar 25 02:03:44.276939 kubelet[2782]: I0325 02:03:44.275071 2782 scope.go:117] "RemoveContainer" containerID="6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b" Mar 25 02:03:44.277808 containerd[1487]: time="2025-03-25T02:03:44.277699173Z" level=error msg="ContainerStatus for \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\": not found" Mar 25 02:03:44.280469 kubelet[2782]: E0325 02:03:44.280335 2782 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\": not found" containerID="6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b" Mar 25 02:03:44.280919 kubelet[2782]: I0325 02:03:44.280400 2782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b"} err="failed to get container status \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a962cd6fd92b62b55ed4157b4646cc69d94ab408ac92afac9b49690f81ec62b\": not found" Mar 25 02:03:44.280919 kubelet[2782]: I0325 02:03:44.280856 2782 scope.go:117] "RemoveContainer" containerID="b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d" Mar 25 02:03:44.287382 systemd[1]: Removed slice kubepods-burstable-pod74ed71b2_ca19_400a_9e9a_6e2eb015a91a.slice - libcontainer container kubepods-burstable-pod74ed71b2_ca19_400a_9e9a_6e2eb015a91a.slice. Mar 25 02:03:44.287729 systemd[1]: kubepods-burstable-pod74ed71b2_ca19_400a_9e9a_6e2eb015a91a.slice: Consumed 8.807s CPU time, 127.6M memory peak, 136K read from disk, 13.3M written to disk. 
Mar 25 02:03:44.299106 containerd[1487]: time="2025-03-25T02:03:44.299022917Z" level=info msg="RemoveContainer for \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\"" Mar 25 02:03:44.305941 containerd[1487]: time="2025-03-25T02:03:44.305841837Z" level=info msg="RemoveContainer for \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" returns successfully" Mar 25 02:03:44.307741 kubelet[2782]: I0325 02:03:44.306902 2782 scope.go:117] "RemoveContainer" containerID="50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322" Mar 25 02:03:44.309337 containerd[1487]: time="2025-03-25T02:03:44.309278675Z" level=info msg="RemoveContainer for \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\"" Mar 25 02:03:44.317160 containerd[1487]: time="2025-03-25T02:03:44.317074152Z" level=info msg="RemoveContainer for \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\" returns successfully" Mar 25 02:03:44.317914 kubelet[2782]: I0325 02:03:44.317502 2782 scope.go:117] "RemoveContainer" containerID="ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4" Mar 25 02:03:44.323920 containerd[1487]: time="2025-03-25T02:03:44.322785340Z" level=info msg="RemoveContainer for \"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\"" Mar 25 02:03:44.332537 containerd[1487]: time="2025-03-25T02:03:44.332478638Z" level=info msg="RemoveContainer for \"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\" returns successfully" Mar 25 02:03:44.332858 kubelet[2782]: I0325 02:03:44.332832 2782 scope.go:117] "RemoveContainer" containerID="40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d" Mar 25 02:03:44.334926 containerd[1487]: time="2025-03-25T02:03:44.334775029Z" level=info msg="RemoveContainer for \"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\"" Mar 25 02:03:44.339654 containerd[1487]: time="2025-03-25T02:03:44.339610458Z" level=info msg="RemoveContainer for \"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\" returns successfully" Mar 25 02:03:44.339935 kubelet[2782]: I0325 02:03:44.339855 2782 scope.go:117] "RemoveContainer" containerID="4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3" Mar 25 02:03:44.342257 containerd[1487]: time="2025-03-25T02:03:44.341341506Z" level=info msg="RemoveContainer for \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\"" Mar 25 02:03:44.345333 containerd[1487]: time="2025-03-25T02:03:44.345310133Z" level=info msg="RemoveContainer for \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\" returns successfully" Mar 25 02:03:44.345730 kubelet[2782]: I0325 02:03:44.345714 2782 scope.go:117] "RemoveContainer" containerID="b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d" Mar 25 02:03:44.346291 containerd[1487]: time="2025-03-25T02:03:44.346261743Z" level=error msg="ContainerStatus for \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\": not found" Mar 25 02:03:44.346628 kubelet[2782]: E0325 02:03:44.346607 2782 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\": not found" 
containerID="b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d" Mar 25 02:03:44.346762 kubelet[2782]: I0325 02:03:44.346737 2782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d"} err="failed to get container status \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4bf8df9f3d8800debc9c9b8f6dc816798c09feba16711d11c5827e391dc3f6d\": not found" Mar 25 02:03:44.346853 kubelet[2782]: I0325 02:03:44.346839 2782 scope.go:117] "RemoveContainer" containerID="50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322" Mar 25 02:03:44.347229 containerd[1487]: time="2025-03-25T02:03:44.347204968Z" level=error msg="ContainerStatus for \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\": not found" Mar 25 02:03:44.347473 kubelet[2782]: E0325 02:03:44.347446 2782 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\": not found" containerID="50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322" Mar 25 02:03:44.347539 kubelet[2782]: I0325 02:03:44.347480 2782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322"} err="failed to get container status \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\": rpc error: code = NotFound desc = an error occurred when try to find container \"50bf84b43041a574988b88f1e766702430df191ca497f28b4bd3855f314b9322\": not found" Mar 25 02:03:44.347539 kubelet[2782]: I0325 02:03:44.347508 2782 scope.go:117] "RemoveContainer" containerID="ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4" Mar 25 02:03:44.347964 kubelet[2782]: E0325 02:03:44.347831 2782 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\": not found" containerID="ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4" Mar 25 02:03:44.347964 kubelet[2782]: I0325 02:03:44.347852 2782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4"} err="failed to get container status \"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\": not found" Mar 25 02:03:44.347964 kubelet[2782]: I0325 02:03:44.347868 2782 scope.go:117] "RemoveContainer" containerID="40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d" Mar 25 02:03:44.348058 containerd[1487]: time="2025-03-25T02:03:44.347722532Z" level=error msg="ContainerStatus for \"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ea34ba6394368bf60c757b1cd7ade2fff794db49847da7ac9b316dfd5c6c50a4\": not found" Mar 25 02:03:44.348058 containerd[1487]: time="2025-03-25T02:03:44.348017397Z" level=error msg="ContainerStatus for \"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\": not found" Mar 25 02:03:44.348168 kubelet[2782]: E0325 02:03:44.348116 2782 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\": not found" containerID="40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d" Mar 25 02:03:44.348168 kubelet[2782]: I0325 02:03:44.348135 2782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d"} err="failed to get container status \"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\": rpc error: code = NotFound desc = an error occurred when try to find container \"40d1adb086b31d3b53d9126112e6bd49053a179a639c6034d919bfc23e56227d\": not found" Mar 25 02:03:44.348168 kubelet[2782]: I0325 02:03:44.348149 2782 scope.go:117] "RemoveContainer" containerID="4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3" Mar 25 02:03:44.348493 containerd[1487]: time="2025-03-25T02:03:44.348373237Z" level=error msg="ContainerStatus for \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\": not found" Mar 25 02:03:44.348607 kubelet[2782]: E0325 02:03:44.348582 2782 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\": not found" containerID="4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3" Mar 25 02:03:44.348680 kubelet[2782]: I0325 02:03:44.348609 2782 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3"} err="failed to get container status \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\": rpc error: code = NotFound desc = an error occurred when try to find container \"4bbf524d8b2d4fb08faa1875fa461b1d86c0d5f9d93d6b965f8186601f7cbff3\": not found" Mar 25 02:03:44.377531 kubelet[2782]: I0325 02:03:44.377275 2782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b8a7e2-8e2c-4852-95da-f83640820ac1" path="/var/lib/kubelet/pods/48b8a7e2-8e2c-4852-95da-f83640820ac1/volumes" Mar 25 02:03:44.378132 kubelet[2782]: I0325 02:03:44.378104 2782 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74ed71b2-ca19-400a-9e9a-6e2eb015a91a" path="/var/lib/kubelet/pods/74ed71b2-ca19-400a-9e9a-6e2eb015a91a/volumes" Mar 25 02:03:44.529800 systemd[1]: var-lib-kubelet-pods-48b8a7e2\x2d8e2c\x2d4852\x2d95da\x2df83640820ac1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dspbkv.mount: Deactivated successfully. 
Mar 25 02:03:44.530091 systemd[1]: var-lib-kubelet-pods-74ed71b2\x2dca19\x2d400a\x2d9e9a\x2d6e2eb015a91a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcjz82.mount: Deactivated successfully. Mar 25 02:03:44.530317 systemd[1]: var-lib-kubelet-pods-74ed71b2\x2dca19\x2d400a\x2d9e9a\x2d6e2eb015a91a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 25 02:03:44.531382 systemd[1]: var-lib-kubelet-pods-74ed71b2\x2dca19\x2d400a\x2d9e9a\x2d6e2eb015a91a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 25 02:03:45.578676 sshd[4303]: Connection closed by 172.24.4.1 port 49700 Mar 25 02:03:45.581198 sshd-session[4300]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:45.596332 systemd[1]: sshd@21-172.24.4.226:22-172.24.4.1:49700.service: Deactivated successfully. Mar 25 02:03:45.600872 systemd[1]: session-24.scope: Deactivated successfully. Mar 25 02:03:45.601499 systemd[1]: session-24.scope: Consumed 1.533s CPU time, 23.8M memory peak. Mar 25 02:03:45.604512 systemd-logind[1465]: Session 24 logged out. Waiting for processes to exit. Mar 25 02:03:45.608065 systemd[1]: Started sshd@22-172.24.4.226:22-172.24.4.1:58938.service - OpenSSH per-connection server daemon (172.24.4.1:58938). Mar 25 02:03:45.611730 systemd-logind[1465]: Removed session 24. Mar 25 02:03:46.995792 sshd[4453]: Accepted publickey for core from 172.24.4.1 port 58938 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU Mar 25 02:03:46.998546 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 02:03:47.011126 systemd-logind[1465]: New session 25 of user core. Mar 25 02:03:47.018760 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 25 02:03:48.401037 containerd[1487]: time="2025-03-25T02:03:48.400818201Z" level=info msg="StopPodSandbox for \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\"" Mar 25 02:03:48.401037 containerd[1487]: time="2025-03-25T02:03:48.400962232Z" level=info msg="TearDown network for sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" successfully" Mar 25 02:03:48.401037 containerd[1487]: time="2025-03-25T02:03:48.400976689Z" level=info msg="StopPodSandbox for \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" returns successfully" Mar 25 02:03:48.402280 containerd[1487]: time="2025-03-25T02:03:48.401674932Z" level=info msg="RemovePodSandbox for \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\"" Mar 25 02:03:48.402280 containerd[1487]: time="2025-03-25T02:03:48.401698177Z" level=info msg="Forcibly stopping sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\"" Mar 25 02:03:48.402280 containerd[1487]: time="2025-03-25T02:03:48.401762828Z" level=info msg="TearDown network for sandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" successfully" Mar 25 02:03:48.403858 containerd[1487]: time="2025-03-25T02:03:48.403617368Z" level=info msg="Ensure that sandbox 16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95 in task-service has been cleanup successfully" Mar 25 02:03:48.407795 containerd[1487]: time="2025-03-25T02:03:48.407676268Z" level=info msg="RemovePodSandbox \"16d1eecf73ca0d4f5700cf9fb9bef0f3da51f77fccfbb7eaa15cc118abab0d95\" returns successfully" Mar 25 02:03:48.408465 containerd[1487]: time="2025-03-25T02:03:48.408262772Z" level=info msg="StopPodSandbox for \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\"" Mar 25 02:03:48.408722 containerd[1487]: time="2025-03-25T02:03:48.408656233Z" level=info msg="TearDown network for sandbox \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" successfully" Mar 25 02:03:48.408722 containerd[1487]: time="2025-03-25T02:03:48.408674677Z" level=info msg="StopPodSandbox for \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" returns successfully" Mar 25 02:03:48.409864 containerd[1487]: time="2025-03-25T02:03:48.409631588Z" level=info msg="RemovePodSandbox for \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\"" Mar 25 02:03:48.409864 containerd[1487]: time="2025-03-25T02:03:48.409656875Z" level=info msg="Forcibly stopping sandbox \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\"" Mar 25 02:03:48.409864 containerd[1487]: time="2025-03-25T02:03:48.409723692Z" level=info msg="TearDown network for sandbox \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" successfully" Mar 25 02:03:48.410702 containerd[1487]: time="2025-03-25T02:03:48.410660685Z" level=info msg="Ensure that sandbox 2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4 in task-service has been cleanup successfully" Mar 25 02:03:48.414279 containerd[1487]: time="2025-03-25T02:03:48.414217099Z" level=info msg="RemovePodSandbox \"2dd68b5395bda1977f7a23e22dbb7fd44cdf97f069a01432ca4e731e7329d5f4\" returns successfully" Mar 25 02:03:48.477614 kubelet[2782]: I0325 02:03:48.477228 2782 topology_manager.go:215] "Topology Admit Handler" podUID="2a32b1da-9dd6-4735-8e56-3d509c3334d3" podNamespace="kube-system" podName="cilium-f64jr" Mar 25 02:03:48.477614 kubelet[2782]: E0325 02:03:48.477298 2782 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="74ed71b2-ca19-400a-9e9a-6e2eb015a91a" containerName="mount-cgroup" Mar 25 02:03:48.477614 kubelet[2782]: E0325 02:03:48.477309 2782 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74ed71b2-ca19-400a-9e9a-6e2eb015a91a" containerName="apply-sysctl-overwrites" Mar 25 02:03:48.477614 kubelet[2782]: E0325 02:03:48.477316 2782 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74ed71b2-ca19-400a-9e9a-6e2eb015a91a" containerName="mount-bpf-fs" Mar 25 02:03:48.477614 kubelet[2782]: E0325 02:03:48.477322 2782 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="48b8a7e2-8e2c-4852-95da-f83640820ac1" containerName="cilium-operator" Mar 25 02:03:48.477614 kubelet[2782]: E0325 02:03:48.477330 2782 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74ed71b2-ca19-400a-9e9a-6e2eb015a91a" containerName="cilium-agent" Mar 25 02:03:48.477614 kubelet[2782]: E0325 02:03:48.477338 2782 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="74ed71b2-ca19-400a-9e9a-6e2eb015a91a" containerName="clean-cilium-state" Mar 25 02:03:48.477614 kubelet[2782]: I0325 02:03:48.477362 2782 memory_manager.go:354] "RemoveStaleState removing state" podUID="74ed71b2-ca19-400a-9e9a-6e2eb015a91a" containerName="cilium-agent" Mar 25 02:03:48.477614 kubelet[2782]: I0325 02:03:48.477368 2782 memory_manager.go:354] "RemoveStaleState removing state" podUID="48b8a7e2-8e2c-4852-95da-f83640820ac1" containerName="cilium-operator" Mar 25 02:03:48.491015 systemd[1]: Created slice kubepods-burstable-pod2a32b1da_9dd6_4735_8e56_3d509c3334d3.slice - libcontainer container kubepods-burstable-pod2a32b1da_9dd6_4735_8e56_3d509c3334d3.slice. Mar 25 02:03:48.536227 kubelet[2782]: E0325 02:03:48.536164 2782 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 25 02:03:48.573601 kubelet[2782]: I0325 02:03:48.573568 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a32b1da-9dd6-4735-8e56-3d509c3334d3-cilium-run\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.573776 kubelet[2782]: I0325 02:03:48.573751 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a32b1da-9dd6-4735-8e56-3d509c3334d3-bpf-maps\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.573858 kubelet[2782]: I0325 02:03:48.573807 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a32b1da-9dd6-4735-8e56-3d509c3334d3-etc-cni-netd\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.573915 kubelet[2782]: I0325 02:03:48.573868 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a32b1da-9dd6-4735-8e56-3d509c3334d3-hostproc\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.573915 kubelet[2782]: I0325 02:03:48.573892 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a32b1da-9dd6-4735-8e56-3d509c3334d3-xtables-lock\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.573915 kubelet[2782]: I0325 02:03:48.573909 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a32b1da-9dd6-4735-8e56-3d509c3334d3-hubble-tls\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.574128 kubelet[2782]: I0325 02:03:48.573928 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a32b1da-9dd6-4735-8e56-3d509c3334d3-cilium-cgroup\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.574128 kubelet[2782]: I0325 02:03:48.573949 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a32b1da-9dd6-4735-8e56-3d509c3334d3-clustermesh-secrets\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.574128 kubelet[2782]: I0325 02:03:48.573966 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a32b1da-9dd6-4735-8e56-3d509c3334d3-cilium-config-path\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.574128 kubelet[2782]: I0325 02:03:48.573984 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a32b1da-9dd6-4735-8e56-3d509c3334d3-host-proc-sys-net\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.574128 kubelet[2782]: I0325 02:03:48.574003 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r958d\" (UniqueName: \"kubernetes.io/projected/2a32b1da-9dd6-4735-8e56-3d509c3334d3-kube-api-access-r958d\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.574406 kubelet[2782]: I0325 02:03:48.574021 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a32b1da-9dd6-4735-8e56-3d509c3334d3-cilium-ipsec-secrets\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.574406 kubelet[2782]: I0325 02:03:48.574039 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a32b1da-9dd6-4735-8e56-3d509c3334d3-host-proc-sys-kernel\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.574406 kubelet[2782]: I0325 02:03:48.574056 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a32b1da-9dd6-4735-8e56-3d509c3334d3-cni-path\") pod \"cilium-f64jr\" 
(UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.574406 kubelet[2782]: I0325 02:03:48.574074 2782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a32b1da-9dd6-4735-8e56-3d509c3334d3-lib-modules\") pod \"cilium-f64jr\" (UID: \"2a32b1da-9dd6-4735-8e56-3d509c3334d3\") " pod="kube-system/cilium-f64jr" Mar 25 02:03:48.651735 sshd[4456]: Connection closed by 172.24.4.1 port 58938 Mar 25 02:03:48.652176 sshd-session[4453]: pam_unix(sshd:session): session closed for user core Mar 25 02:03:48.673608 systemd[1]: sshd@22-172.24.4.226:22-172.24.4.1:58938.service: Deactivated successfully. Mar 25 02:03:48.681527 systemd[1]: session-25.scope: Deactivated successfully. Mar 25 02:03:48.682952 systemd[1]: session-25.scope: Consumed 1.006s CPU time, 23.9M memory peak. Mar 25 02:03:48.691861 systemd-logind[1465]: Session 25 logged out. Waiting for processes to exit. Mar 25 02:03:48.716911 systemd[1]: Started sshd@23-172.24.4.226:22-172.24.4.1:58948.service - OpenSSH per-connection server daemon (172.24.4.1:58948). Mar 25 02:03:48.752510 systemd-logind[1465]: Removed session 25. Mar 25 02:03:48.797089 containerd[1487]: time="2025-03-25T02:03:48.797044159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f64jr,Uid:2a32b1da-9dd6-4735-8e56-3d509c3334d3,Namespace:kube-system,Attempt:0,}" Mar 25 02:03:48.820202 containerd[1487]: time="2025-03-25T02:03:48.820164696Z" level=info msg="connecting to shim e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920" address="unix:///run/containerd/s/4d41a5e32d31f506a398dfc86e59a363ca9d5a8211e81714443c01fe1e9324bb" namespace=k8s.io protocol=ttrpc version=3 Mar 25 02:03:48.840580 systemd[1]: Started cri-containerd-e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920.scope - libcontainer container e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920. 
Mar 25 02:03:48.864819 containerd[1487]: time="2025-03-25T02:03:48.864765700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f64jr,Uid:2a32b1da-9dd6-4735-8e56-3d509c3334d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\"" Mar 25 02:03:48.868125 containerd[1487]: time="2025-03-25T02:03:48.867641382Z" level=info msg="CreateContainer within sandbox \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 25 02:03:48.875576 containerd[1487]: time="2025-03-25T02:03:48.875541813Z" level=info msg="Container 1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:03:48.884828 containerd[1487]: time="2025-03-25T02:03:48.884800017Z" level=info msg="CreateContainer within sandbox \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667\"" Mar 25 02:03:48.885442 containerd[1487]: time="2025-03-25T02:03:48.885360643Z" level=info msg="StartContainer for \"1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667\"" Mar 25 02:03:48.886342 containerd[1487]: time="2025-03-25T02:03:48.886313436Z" level=info msg="connecting to shim 1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667" address="unix:///run/containerd/s/4d41a5e32d31f506a398dfc86e59a363ca9d5a8211e81714443c01fe1e9324bb" protocol=ttrpc version=3 Mar 25 02:03:48.905566 systemd[1]: Started cri-containerd-1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667.scope - libcontainer container 1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667. Mar 25 02:03:48.938748 containerd[1487]: time="2025-03-25T02:03:48.938714422Z" level=info msg="StartContainer for \"1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667\" returns successfully" Mar 25 02:03:48.942745 systemd[1]: cri-containerd-1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667.scope: Deactivated successfully. 
Mar 25 02:03:48.945721 containerd[1487]: time="2025-03-25T02:03:48.945637442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667\" id:\"1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667\" pid:4534 exited_at:{seconds:1742868228 nanos:944105257}" Mar 25 02:03:48.946160 containerd[1487]: time="2025-03-25T02:03:48.946027085Z" level=info msg="received exit event container_id:\"1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667\" id:\"1bc365fde9378dceaf3581dba46629dcebd8b8cf9c6a1253c7fadeb205667667\" pid:4534 exited_at:{seconds:1742868228 nanos:944105257}" Mar 25 02:03:49.316671 containerd[1487]: time="2025-03-25T02:03:49.316412666Z" level=info msg="CreateContainer within sandbox \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 25 02:03:49.332702 containerd[1487]: time="2025-03-25T02:03:49.332607029Z" level=info msg="Container 8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6: CDI devices from CRI Config.CDIDevices: []" Mar 25 02:03:49.359136 containerd[1487]: time="2025-03-25T02:03:49.358961404Z" level=info msg="CreateContainer within sandbox \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6\"" Mar 25 02:03:49.363461 containerd[1487]: time="2025-03-25T02:03:49.359949213Z" level=info msg="StartContainer for \"8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6\"" Mar 25 02:03:49.363687 containerd[1487]: time="2025-03-25T02:03:49.363367337Z" level=info msg="connecting to shim 8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6" address="unix:///run/containerd/s/4d41a5e32d31f506a398dfc86e59a363ca9d5a8211e81714443c01fe1e9324bb" protocol=ttrpc version=3 Mar 25 02:03:49.385566 systemd[1]: Started cri-containerd-8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6.scope - libcontainer container 8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6. Mar 25 02:03:49.423909 containerd[1487]: time="2025-03-25T02:03:49.423860190Z" level=info msg="StartContainer for \"8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6\" returns successfully" Mar 25 02:03:49.430830 systemd[1]: cri-containerd-8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6.scope: Deactivated successfully. 
Mar 25 02:03:49.431603 containerd[1487]: time="2025-03-25T02:03:49.431387057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6\" id:\"8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6\" pid:4577 exited_at:{seconds:1742868229 nanos:431030376}"
Mar 25 02:03:49.431867 containerd[1487]: time="2025-03-25T02:03:49.431804353Z" level=info msg="received exit event container_id:\"8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6\" id:\"8aaf820ab5b122eef9c2800fd2480a2249f02b9a56598c8b5a815c1095ebbae6\" pid:4577 exited_at:{seconds:1742868229 nanos:431030376}"
Mar 25 02:03:49.791259 sshd[4474]: Accepted publickey for core from 172.24.4.1 port 58948 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:03:49.793989 sshd-session[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:03:49.806344 systemd-logind[1465]: New session 26 of user core.
Mar 25 02:03:49.810175 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 25 02:03:50.314354 containerd[1487]: time="2025-03-25T02:03:50.313652138Z" level=info msg="CreateContainer within sandbox \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 25 02:03:50.338040 containerd[1487]: time="2025-03-25T02:03:50.337954744Z" level=info msg="Container ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:03:50.362410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147894540.mount: Deactivated successfully.
Mar 25 02:03:50.373762 containerd[1487]: time="2025-03-25T02:03:50.373720308Z" level=info msg="CreateContainer within sandbox \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8\""
Mar 25 02:03:50.375724 containerd[1487]: time="2025-03-25T02:03:50.374832562Z" level=info msg="StartContainer for \"ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8\""
Mar 25 02:03:50.380448 containerd[1487]: time="2025-03-25T02:03:50.380375265Z" level=info msg="connecting to shim ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8" address="unix:///run/containerd/s/4d41a5e32d31f506a398dfc86e59a363ca9d5a8211e81714443c01fe1e9324bb" protocol=ttrpc version=3
Mar 25 02:03:50.406557 systemd[1]: Started cri-containerd-ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8.scope - libcontainer container ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8.
Mar 25 02:03:50.444938 systemd[1]: cri-containerd-ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8.scope: Deactivated successfully.
Mar 25 02:03:50.448122 containerd[1487]: time="2025-03-25T02:03:50.446909638Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8\" id:\"ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8\" pid:4622 exited_at:{seconds:1742868230 nanos:446021957}"
Mar 25 02:03:50.450729 containerd[1487]: time="2025-03-25T02:03:50.450601447Z" level=info msg="received exit event container_id:\"ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8\" id:\"ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8\" pid:4622 exited_at:{seconds:1742868230 nanos:446021957}"
Mar 25 02:03:50.456519 containerd[1487]: time="2025-03-25T02:03:50.455033670Z" level=info msg="StartContainer for \"ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8\" returns successfully"
Mar 25 02:03:50.483946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef6727b3aadcd8bd760bf032fa524cd0a29f073eb31648f7967d8effb09030a8-rootfs.mount: Deactivated successfully.
Mar 25 02:03:50.563378 sshd[4607]: Connection closed by 172.24.4.1 port 58948
Mar 25 02:03:50.563904 sshd-session[4474]: pam_unix(sshd:session): session closed for user core
Mar 25 02:03:50.580495 systemd[1]: sshd@23-172.24.4.226:22-172.24.4.1:58948.service: Deactivated successfully.
Mar 25 02:03:50.583920 systemd[1]: session-26.scope: Deactivated successfully.
Mar 25 02:03:50.585738 systemd-logind[1465]: Session 26 logged out. Waiting for processes to exit.
Mar 25 02:03:50.591328 systemd[1]: Started sshd@24-172.24.4.226:22-172.24.4.1:58950.service - OpenSSH per-connection server daemon (172.24.4.1:58950).
Mar 25 02:03:50.593863 systemd-logind[1465]: Removed session 26.
Mar 25 02:03:51.327442 containerd[1487]: time="2025-03-25T02:03:51.327357100Z" level=info msg="CreateContainer within sandbox \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 25 02:03:51.348886 containerd[1487]: time="2025-03-25T02:03:51.348815094Z" level=info msg="Container 1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:03:51.368599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3160190043.mount: Deactivated successfully.
Mar 25 02:03:51.395920 containerd[1487]: time="2025-03-25T02:03:51.395848634Z" level=info msg="CreateContainer within sandbox \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc\""
Mar 25 02:03:51.399093 containerd[1487]: time="2025-03-25T02:03:51.399018962Z" level=info msg="StartContainer for \"1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc\""
Mar 25 02:03:51.404561 containerd[1487]: time="2025-03-25T02:03:51.404417124Z" level=info msg="connecting to shim 1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc" address="unix:///run/containerd/s/4d41a5e32d31f506a398dfc86e59a363ca9d5a8211e81714443c01fe1e9324bb" protocol=ttrpc version=3
Mar 25 02:03:51.462791 systemd[1]: Started cri-containerd-1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc.scope - libcontainer container 1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc.
Mar 25 02:03:51.518605 systemd[1]: cri-containerd-1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc.scope: Deactivated successfully.
Mar 25 02:03:51.520273 containerd[1487]: time="2025-03-25T02:03:51.519133856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc\" id:\"1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc\" pid:4667 exited_at:{seconds:1742868231 nanos:518818943}"
Mar 25 02:03:51.522271 containerd[1487]: time="2025-03-25T02:03:51.522110169Z" level=info msg="received exit event container_id:\"1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc\" id:\"1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc\" pid:4667 exited_at:{seconds:1742868231 nanos:518818943}"
Mar 25 02:03:51.531472 containerd[1487]: time="2025-03-25T02:03:51.531440663Z" level=info msg="StartContainer for \"1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc\" returns successfully"
Mar 25 02:03:51.548450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b24bc1e073e4bb3c618706d74427859ad0c4bcd0aca2cf797f63cc048bcbadc-rootfs.mount: Deactivated successfully.
Mar 25 02:03:51.705511 sshd[4652]: Accepted publickey for core from 172.24.4.1 port 58950 ssh2: RSA SHA256:2p5KKBBmNEwazQvcAFKs6NISXxKbrLHbHWGQ80PLawU
Mar 25 02:03:51.708328 sshd-session[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 02:03:51.719555 systemd-logind[1465]: New session 27 of user core.
Mar 25 02:03:51.725771 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 25 02:03:52.336657 containerd[1487]: time="2025-03-25T02:03:52.336378192Z" level=info msg="CreateContainer within sandbox \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 25 02:03:52.404452 containerd[1487]: time="2025-03-25T02:03:52.401020086Z" level=info msg="Container 9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541: CDI devices from CRI Config.CDIDevices: []"
Mar 25 02:03:52.417996 containerd[1487]: time="2025-03-25T02:03:52.417935551Z" level=info msg="CreateContainer within sandbox \"e40d20faf81228aad633b3507ff297ba7ea1559cd5a8c206631a4bce81fc7920\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541\""
Mar 25 02:03:52.420395 containerd[1487]: time="2025-03-25T02:03:52.420366848Z" level=info msg="StartContainer for \"9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541\""
Mar 25 02:03:52.421360 containerd[1487]: time="2025-03-25T02:03:52.421332015Z" level=info msg="connecting to shim 9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541" address="unix:///run/containerd/s/4d41a5e32d31f506a398dfc86e59a363ca9d5a8211e81714443c01fe1e9324bb" protocol=ttrpc version=3
Mar 25 02:03:52.455993 systemd[1]: Started cri-containerd-9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541.scope - libcontainer container 9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541.
Mar 25 02:03:52.517017 containerd[1487]: time="2025-03-25T02:03:52.514951553Z" level=info msg="StartContainer for \"9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541\" returns successfully"
Mar 25 02:03:52.675456 containerd[1487]: time="2025-03-25T02:03:52.675396225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541\" id:\"f198709a93ce61ac28ddf4d3e344c29996e206f952161478a5b0ccaf188c9a57\" pid:4742 exited_at:{seconds:1742868232 nanos:675097914}"
Mar 25 02:03:52.974558 kernel: cryptd: max_cpu_qlen set to 1000
Mar 25 02:03:53.022571 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic))))
Mar 25 02:03:53.390708 kubelet[2782]: I0325 02:03:53.390352 2782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f64jr" podStartSLOduration=5.390317055 podStartE2EDuration="5.390317055s" podCreationTimestamp="2025-03-25 02:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 02:03:53.389620774 +0000 UTC m=+245.092663762" watchObservedRunningTime="2025-03-25 02:03:53.390317055 +0000 UTC m=+245.093360043"
Mar 25 02:03:53.514767 kubelet[2782]: I0325 02:03:53.514647 2782 setters.go:580] "Node became not ready" node="ci-4284-0-0-7-d93044f3e4.novalocal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-25T02:03:53Z","lastTransitionTime":"2025-03-25T02:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 25 02:03:54.610788 containerd[1487]: time="2025-03-25T02:03:54.610672071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541\" id:\"feaa46c916a4152cfb50f5a83edfbd1485cd6bf280bb7a38245401cd3861f4f0\" pid:4927 exit_status:1 exited_at:{seconds:1742868234 nanos:609698238}"
Mar 25 02:03:56.014833 systemd-networkd[1389]: lxc_health: Link UP
Mar 25 02:03:56.026119 systemd-networkd[1389]: lxc_health: Gained carrier
Mar 25 02:03:56.773275 containerd[1487]: time="2025-03-25T02:03:56.773229641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541\" id:\"e40cf4f0d69147ee73c7f9ace0e2b50a057d5ecadf7744e1185adf58369d15d5\" pid:5307 exited_at:{seconds:1742868236 nanos:772008823}"
Mar 25 02:03:57.432543 systemd-networkd[1389]: lxc_health: Gained IPv6LL
Mar 25 02:03:58.961855 containerd[1487]: time="2025-03-25T02:03:58.961784107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541\" id:\"bb6f78591fed80255b5a98fbd8ebdbafa61b608950dfe9af76f7de56cee98bed\" pid:5339 exited_at:{seconds:1742868238 nanos:961126890}"
Mar 25 02:04:01.120288 containerd[1487]: time="2025-03-25T02:04:01.120241190Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541\" id:\"679b593042a5a59e874f3caababac4b2ee6d47e7b9df297479eb9dc2a164eb7f\" pid:5375 exited_at:{seconds:1742868241 nanos:119797274}"
Mar 25 02:04:01.125857 kubelet[2782]: E0325 02:04:01.125819 2782 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47132->127.0.0.1:32901: write tcp 127.0.0.1:47132->127.0.0.1:32901: write: broken pipe
Mar 25 02:04:03.310332 containerd[1487]: time="2025-03-25T02:04:03.310278379Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9996947f29b684aea0bf74aa9da3c0b4d7e69a383110d14b67b337db00ea1541\" id:\"70e03aee76437df33cc50dd7f0057c9b749e451adad31f261c043a8c55cdc042\" pid:5406 exited_at:{seconds:1742868243 nanos:309798344}"
Mar 25 02:04:03.672473 sshd[4691]: Connection closed by 172.24.4.1 port 58950
Mar 25 02:04:03.671839 sshd-session[4652]: pam_unix(sshd:session): session closed for user core
Mar 25 02:04:03.678119 systemd[1]: sshd@24-172.24.4.226:22-172.24.4.1:58950.service: Deactivated successfully.
Mar 25 02:04:03.682556 systemd[1]: session-27.scope: Deactivated successfully.
Mar 25 02:04:03.687293 systemd-logind[1465]: Session 27 logged out. Waiting for processes to exit.
Mar 25 02:04:03.690169 systemd-logind[1465]: Removed session 27.